Apologies - the above should read:

"...we don't have a clear idea of whether "P is true" is a sentence in the object language, the meta-language, or a meta-meta-language."

Hi Michael,

I replied to your questions on my blog. There I also briefly touched on the ontological question about the "subject matter" that underlies my view.

In fact I would say "this is a different square" (or a different object) depending on whether an observable symmetry operation O is present or not. We can do completely different things with it.

Your physical example about the Newtonian or relativistic star reminds me of an argument by Hilary Putnam that the reality of an electron did not change just because we have today a totally different theory than many years ago. But my point is not about our theories that describe things.

As you nicely wrote: "whether operational differences should be seen as representative of ontological ones". I would say yes. This has nothing to do with our theories, which can be wrong or inaccurate. But what operations can be performed depends on the laws, and which properties can be assigned to objects also depends on the laws. The separation of objects and laws that is manifest in simplistic realist interpretations is very artificial; they form a unity. Besides, there are conceptual dependencies between the definability of objects and that of laws: they cannot be conceptualized independently.

In my essay I make the bold claim that objects and laws are emergent and changeable, and I think it could be fruitful to take such a view. The decision whether such a view will be accepted will be a pragmatic one: can such a widening of the ontology and conceptual framework help to explain phenomena like consciousness and free will, or help to develop a theory of quantum gravity?

I acknowledge that the fundamental ontology is not knowable, but manifested, emergent and changeable relations between things are, provided we do not hold the things themselves fixed, as we mostly do.

I hope this makes my point a bit clearer.

Luca

Mike,

Thanks for your response. I found Curiel very relevant (but wordy). A recent paper of mine relates, I believe, to Curiel and to my essay: A Primordial Spacetime Metric

I look forward to any comments on my essay.

Warmest regards,

Edwin Eugene Klingman

Dear Michael,

you wrote an extraordinarily informative and deep essay. Having read and enjoyed it, as well as thought about it, I must say that this is a brilliant piece of work that accurately nails the current essay themes. So thanks for that!

The only thing I would *add* as a probable interpretative result of the whole issue would be to state that obviously - very obviously indeed, in my humble opinion - formal systems can't completely capture what you call on page 8 the 'true nature'. As such, the overall lesson of Gödel, and of your entire essay, is in my opinion that the formalizability of the limits expressed therein, at least epistemologically (if not ontologically as well), points to some profound *limits of formalizability*.

The latter is the theme of my own essay. I would be happy if you'd like to check it out and perhaps leave a comment there!

Best wishes,

Stefan

    Dear Michael,

    you start your essay by noting what could be reformulated as a tension between three classical arguments in the philosophy of science---Putnam's 'no miracles' argument, Quine's 'indispensability' argument, and Laudan's 'pessimistic meta-induction'. The 'no miracles' argument tells us that theories must get something about the world right, as otherwise their (predictive) success would be inexplicable (miraculous). The indispensability argument tells us that we must believe in the ontological reality of those elements of our theories which are indispensable to their success---including the mathematical ones. Pessimistic meta-induction, finally, points to the history of scientific theories, and concludes that, no matter how successful they are, our current theories are most likely wrong and will eventually be replaced by better theories.

    Together, this forms a conundrum: we have to believe in entities postulated by theories that we nevertheless have reason to expect will turn out to be wrong!

    You argue for a related conclusion, but by a different route. Unfortunately, I'm not entirely sure I understand your argumentation (or if I do, whether it's correct). You say that the Gödelian phenomenon points to facts true only about our theories; but the common view is rather the opposite---namely, that it tells us things about the models of theories that the theories themselves don't. Thus, the fact that Peano arithmetic proves neither the Gödel sentence nor its negation tells us that there are models of the Peano axioms---structures in which its axioms come out true---in which the Gödel sentence holds, and ones in which it doesn't. Hence, it is independent of the axioms.

    In other words, the Peano axioms don't have a single 'subject matter', but apply to different subject matters, which will not be isomorphic to one another---and the Gödelian proof tells us furthermore that we can't repair this fact: there will always be multiple subject matters for a given theory. For the Peano axioms, one such subject matter is the natural numbers N: as the 'standard model' of the Peano axioms, the Gödel sentence will be true for this structure. But another subject matter is the natural numbers extended by 'transfinite' elements, which can't be written in terms of a finite number of applications of the successor-function, and for which the Gödel sentence may be false (while all the Peano axioms still apply perfectly well).

    This is, in principle, all perfectly ordinary. Consider the field axioms---the common properties of addition and multiplication: associativity, commutativity, existence of an identity element, existence of inverses, and distributivity of multiplication over addition. These axioms leave a number of questions one might have unanswered---such as, does there exist an element e' such that e' * e' = 1 + 1 (i.e., is there a square root of 2)? There are fields where that's the case (the real numbers) and fields where it's not (the rational numbers, where no element exists that yields 2 when multiplied with itself).
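    To make the square-root example concrete, here is a small illustrative sketch (my own, not from the essay or the comment): the field axioms hold for both the rationals and the reals, yet x * x = 2 has a solution only in the latter. The brute-force search below over rationals with small numerators and denominators is of course no proof, just an illustration; the classical parity argument shows that no rational square root of 2 exists at all.

```python
from fractions import Fraction
import math

def rational_sqrt2_candidates(limit=200):
    """Search rationals p/q with 1 <= p, q < limit for an exact solution of x*x == 2."""
    for p in range(1, limit):
        for q in range(1, limit):
            x = Fraction(p, q)
            if x * x == 2:
                yield x

print(list(rational_sqrt2_candidates()))  # [] -- no exact rational square root of 2 found
r = math.sqrt(2)                          # in the reals (approximated here by a float)
print(math.isclose(r * r, 2.0))           # True
```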

    Consequently, there are multiple, inequivalent 'subjects' of the field axioms. But that's no cause for concern. We could imagine extending the axioms such that the matter of square roots, for example, is settled, thus further 'narrowing down' the subject matter.

    But what Gödel taught us is that there's no end to this process, for sufficiently expressive theories: no matter how much we try to narrow down the subject, we always find that we can substitute alternatives to what we originally had in mind, while still fulfilling the axioms we have postulated.

    Or, as another example, take the axioms for a group and for an Abelian group: the latter are just the former, augmented by the axiom that the group operation must be commutative. Hence, both the nonzero real numbers under multiplication and the invertible 2x2 matrices with real entries under matrix multiplication are groups (they fulfill the group axioms), but only the former is an Abelian group---the subject matter has been narrowed down thanks to the addition of the commutativity axiom.
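    A quick sketch of that contrast (again my own illustration, with example matrices A and B chosen by me): invertible 2x2 real matrices satisfy the group axioms but violate the extra Abelian axiom, while the nonzero reals satisfy both.

```python
def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 1]]   # both A and B are invertible (determinant 1)
B = [[1, 0], [1, 1]]

print(matmul(A, B))    # [[2, 1], [1, 1]]
print(matmul(B, A))    # [[1, 1], [1, 2]]  -- A*B != B*A, so the group is not Abelian

x, y = 2.0, 3.0
print(x * y == y * x)  # True -- nonzero reals commute under multiplication
```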

    Robinson and Presburger arithmetic are subtheories of Peano arithmetic, with less expressive power. Consequently, everything provable in either is also provable in Peano arithmetic, and hence true in every model thereof. Additionally, Robinson arithmetic is, in fact, also subject to the incompleteness theorem, while Presburger arithmetic isn't---the latter doesn't include multiplication, and thus one can't carry out a Gödel numbering scheme within it. Presburger arithmetic simply lacks the power to express the Gödel sentence.
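    Since the role of multiplication in Gödel numbering comes up here, a minimal toy sketch may help (my own illustration, not the construction used in the essay): a formula, read as a sequence of symbol codes, gets packed into a single number as a product of prime powers. Both the packing and the unpacking lean essentially on multiplication and divisibility, which is the intuition behind why an addition-only theory like Presburger arithmetic cannot talk about its own syntax this way.

```python
def nth_prime(n):
    """Return the n-th prime (1-indexed) by trial division; fine for a toy example."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

def godel_number(symbol_codes):
    """Encode a sequence of positive symbol codes as a product of prime powers."""
    n = 1
    for i, code in enumerate(symbol_codes):
        n *= nth_prime(i + 1) ** code
    return n

print(godel_number([1, 3, 2]))  # 2**1 * 3**3 * 5**2 = 1350
```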

    So I'm not quite sure what to make of your argumentation that Gödelian phenomena tell us something merely about the theory, and not its subject matter. To me, to the extent that all three theories of arithmetic are about the same subject, they capture it in greater or lesser detail---just as the field axioms are as 'about' the real numbers as they are about the rational numbers, merely not capturing the detail of whether a square root of 2 exists, and just as the group axioms don't capture the detail of whether the group operation is commutative.

    In other words, while Presburger arithmetic is complete as a theory, it doesn't describe all of the properties of the natural numbers fully. Furthermore, all of the models of the Peano axioms are also models of Presburger arithmetic, including the non-standard ones. Robinson arithmetic has a model in the form of the set of integer-coefficient polynomials with positive leading coefficient. So Peano arithmetic will have fewer models, because it characterizes them more fully; but by Gödel's theorem, no complete characterization is possible.

    So I would say that, rather than the Gödel sentence relating to the theory instead of its subject matter, it effectively shows us that there is always more than one possible subject matter, and the theory doesn't suffice to adjudicate between them. But perhaps I'm misunderstanding you?

    Anyway, I wish your essay the best of luck in this contest.

    Cheers

    Jochen

      Dear Jochen,

      Thank you for the detailed comment!

      I think that there are a few points here that you've misunderstood (or that we simply disagree about!) Before I address these though I must make a concession. You pointed out a genuine error on my part. My original draft included Peano, Presburger, Robinson and Skolem arithmetic. When I was editing I meant to cut out Robinson arithmetic but accidentally removed Skolem arithmetic instead. As you may know, Skolem arithmetic includes multiplication but not addition (or the successor function), and to my understanding, is provably complete. I shall contact the contest organizers to see if I might remedy this, but note that this is a correction that doesn't at all influence my argument.

      Putting this aside then, allow me to respond to some of the individual points you've made:

      I'll begin by saying that I'm not sure I agree with your characterization of my introduction - I certainly don't take pessimistic meta-induction to play any role here at all. If I were asked to describe my position in terms of these arguments, I would say that my point is to bring attention to the difficulty of knowing when Quine's argument applies.

      Getting to your main point, you raise that Godel's theorem can be understood as proving that there exist non-standard models of a theory. This is correct, of course, and indeed we know that even the Dedekind-Peano axioms admit non-standard models (even though they don't describe a logic that is vulnerable to Godel's theorem). In these terms, the difficulty that I wish to raise in my paper might then very briefly(!) be framed as follows: We build theories to learn about subject matters. (For example, we build arithmetic to learn about the natural numbers and physics to learn about the physical world.) Godel's theorem tells us that these theories will admit multiple (non-isomorphic) models. The difficulty, then, is that if we don't know in advance what the structure of the subject matter is, we can't be sure about whether one of the admissible models reflects the actual subject matter (let alone which one does), nor whether the availability of multiple models itself reflects something about the subject matter.

      I think that perhaps part of the trouble is that you've conflated what I've called a theory's subject matter with its models. For example, you've said that the "Peano axioms don't have a single 'subject matter', but apply to different subject matters, which will not be isomorphic to one another." But I take the subject matter of Peano arithmetic to be the natural numbers, and the question we might then pose is whether the availability of multiple models reflects something about the natural numbers themselves. In the case of the natural numbers I believe we can answer this quite clearly. However, when it comes to physics, things are more complicated. For example, it might be a feature of spacetime itself that multiple models are admissible, as suggested by the freedoms of relativity theory. On the other hand, it may be that only one of the available models describes how spacetime actually is, and the other models are analogous to the non-standard models of arithmetic admitted by the Dedekind-Peano axioms.

      As a last point, your final few paragraphs seem to suggest the following line of thought: Moving from simpler to more complex theories of arithmetic, we're able to describe more and more properties of the natural numbers. Therefore, we should take the maximally informative theory as best representing the natural numbers. It is this last sentence that I have trouble with, however, as it seems that we have good reason to think that Peano arithmetic allows for proofs of sentences that at least might not be true of the natural numbers. Put in some of the terms that you've used in your comment, it may be that Peano arithmetic must admit non-standard models, but I don't think this means that the natural numbers don't themselves exhibit a unique model, namely the intended(!) model of the Dedekind-Peano axioms.

      I hope this clarifies a little bit. I haven't yet taken a look at your essay, but I'll do so soon.

      Mike

      Dear Michael,

      thanks for your clarifications; I think I understand the point you're making a little better now.

      As for my point regarding pessimistic meta-induction and the like, that was merely me trying to outline another way as to how one might come to doubt the notion that 'a theory's predictive power indicates representational success'---we have theories of unparalleled power, which should commit us to accepting the reality of the entities (including mathematical ones) they deem essential, or else be forced to accept it as miraculous; yet, we have good reason to expect that these theories will eventually be overthrown in favor of better ones, which might entail commitments to entities wholly unlike those present in current theories. In this way, one might doubt that predictive power entails representational success---which seems at least sympathetic to your conclusion.

      I recognize you're arguing along a very different route, however. And you're right to point out that I conflated your 'subject matter' with the notion of a model from mathematical logic. So, let me try to rephrase it in my own words: there's something out there, call it 'stuff', which we want to describe. ('Stuff' here being something like the physical world, or parts thereof, or the natural numbers, or whatever else.) To do so, we formulate a formal theory that, in some way, axiomatizes our knowledge of stuff---in whatever way we might have acquired it (by experiment, say---including the particular experiments we do when we calculate things).

      We might then hope that we eventually hit on a theory that describes stuff perfectly, and completely, and does not describe anything else (perhaps up to isomorphism). That is, everything we derive using our theory ends up being actually true about stuff, and every true fact about stuff can be derived from our theory. There are then two natural questions to ask---is this possible at all?, and if it is, can we actually tell that we've done it?

      I think that at least part of your argument is that the answer to the second question depends on the access we have to stuff. With something like the natural numbers, we have, in a way, a pre-theoretic grasp about what the stuff we intend the theory to model actually is. The Gödelian phenomenon entails that we can't just derive every true fact about the numbers from the Peano-axioms (incidentally, I'm not quite clear on the distinction you seem to draw with the Dedekind-Peano axioms; to me, those would've been the same, and thus, just as susceptible to the Gödel argument). But that doesn't actually mean that we can't know whether such undecidable statements are true about the natural numbers---the Gödel sentence is independent from the Peano axioms, but comes out true in the actual natural numbers; you need to add non-standard elements to the model, which are not part of 'the' natural numbers, to make it come out false.

      So I think this is how I would reconstruct your argument (or the part of it that had been confusing to me): while for the natural numbers, we have a good grasp of the subject matter, the intended model, that's not the case with physics. Hence, by analogy, we don't know whether the standard model of, say, the Standard Model of particle physics actually maps to the stuff we want to describe; hence, we don't know whether some sentence of the theory that is independent from its axioms, but 'true in the standard model', is actually true of physical reality. That would be something like trying to discover the natural numbers empirically, believing that they correspond to the standard model of the Peano axioms, and hence, thinking that the Gödel sentence comes out true, while the actual natural numbers correspond to some non-standard model, where the Gödel sentence in fact comes out false. Does that get the gist right?

      I'm still not sure, if it does, if I would gloss this as the theory proving something that turns out to be false regarding the subject matter; rather, the theory fails to decide the truth of some sentence, and we use misapplied reasoning based on the assumption we're dealing with the standard model to derive a falsehood about the stuff it deals with. The Peano axioms can only derive something false about the natural numbers if either they are inconsistent, or they don't, in fact, axiomatize the structure we have in mind when we talk about 'the natural numbers'. Otherwise, if the natural numbers proper are actually one of the models of PA, then every theorem of PA will be true in the natural numbers. But there will be truths that PA can't decide---such as the Gödel sentence.

      So I think one can validly think of stronger systems better characterizing a structure. All of the models of PA will also be models of Robinson, Presburger, and Skolem arithmetic, but the latter three will have models that are not models of PA. Indeed, they will have models that aren't actually numbers at all---such as the integer-coefficient polynomials with positive leading coefficient that form a model of Robinson arithmetic. There are truths about the natural numbers that can be derived using Peano arithmetic that can't be derived within the other systems, but all truths derivable within them can likewise be derived in PA.

      But anyway, I think your overall point---even if I disagree somewhat with the formulation---is a valid one; in a sense, it's perhaps related to Newman's problem, that knowledge of the structure (the axioms, in this case) of a domain radically underdetermines the domain itself. (William Seager makes this point in chapter 15 of his excellent 'Theories of Consciousness'.)

      Thanks for taking the time to further explain your views to me.

      Cheers

      Jochen

      Lovely essay! I really appreciated the main idea that models are not necessarily the same as the things they're trying to represent, and the arguments that this is true even in math (the one place you'd expect this not to be true). Hard to read in some places though.

      A bit of semantics: you take N to mean the numbers themselves, along with addition and some other things, but not multiplication. I understand what you're doing here, but I also think it's sort of cheating. Lots of actively studied claims about the natural numbers relate to multiplication and primeness and so on. If you do include multiplication, you can't exploit the completeness of Presburger arithmetic. But I don't know enough about Robinson arithmetic to have a sense of whether the argument would still work.

      There seems to be a typo on pg. 2. I think "Take G to be a Godel sentence in A_Pb" should be "...in A_Pn" if I understood correctly. Had to reread pages 2-3 a few times to really get the argument, since it's a bit dense there.

      Not sure I agree that temperature is 'adding something' in the same way that adding an axiom is 'adding something', if it can be described and reasoned about purely in terms of microscopic language.

      It seems pretty clear to me that classical mechanics offers a representation of the world (in the appropriate domain) which contains truths not about 'the real world'. My favorite example is how there are situations where five or more particles interacting gravitationally can be jettisoned infinitely far away within a finite amount of time (see "Off to Infinity in Finite Time" by Saari and Xia). But this example is kind of weak, because you can dismiss it by saying the real world is not classical, and arbitrary velocities aren't allowed.

      Interesting point about how different equivalent mathematical formulations of quantum mechanics make it hard to 'get at' what's fundamental, whatever that means. I've wondered about this too, particularly given how the path integral is popularly explained: quantum particles somehow 'sniff out' all possible paths, and take all of them at once. But there are path integral formulations of classical stochastic systems (e.g. the Onsager-Machlup path integral for stochastic dynamics), and it is clear in that case that particles don't take all possible paths...so what does the path integral formulation really tell us about what real particles do?

      On the same topic, it's fun to note that there are also many different equivalent formulations of classical mechanics. Of course, there are the usual suspects: Newton's laws, the Lagrangian formulation, the Hamiltonian formulation, and the Hamilton-Jacobi formulation. But then there's weird stuff, like Koopman-von Neumann mechanics, which describes classical mechanics in terms of Hilbert space, bras, and kets in a way deliberately intended to be as close as possible to how quantum mechanics is usually treated.

      Finally, a philosophical question I'm curious to hear your thoughts on. We model the world (in my view) to understand it. Newton's laws add understanding because they tell us that things change how they move when we push them, for example. And physical laws, possibly due to our limited cognitive abilities, are crucial for understanding: they compress existing observations into a compact and comprehensible form, and help us reason about what will happen in new situations.

      Given a well-tested theory like the standard model, which describes 'almost everything', what kind of truth should we ascribe to its predictions that aren't (currently) directly experimentally testable? Should we remain skeptical that they may just be mathematical artifacts, until it is possible to probe them? Also, do you think there exists a way to construct a mathematical model that really is 'equivalent' to the thing it's trying to represent, or will we always have to deal with this issue?

      John

      Dear Michael Dascal,

      As a young and brilliant student, you have produced an essay with wonderful content. I really enjoyed reading it. You summarized it well from the viewpoint of the Godel theorem.

      On the other hand, quantum mechanics is often used as a computational resource, and the Godel theorem is often cited in the context of computation. Therefore, I would like to know about the relationship to quantum computation as well. For reference, my essay focused on the history of computation related to random number generation.

      Best wishes,

      Yutaka

      Dear Dr Michael Dascal,

      Yours is a smooth flowing essay with a very nice discussion. Very good.

      Many people have discussed the Godel theorem and its applicability to Quantum Mechanics. Would this theorem also be applicable to theories in Cosmology?

      For example, the Dynamic Universe Model is such a theory (see my essay, "A properly deciding, Computing and Predicting new theory's Philosophy"); it gave lots of results in Cosmology and many of its predictions came true.

      Can we apply Godel's theorem there?

      Best wishes for your essay

      =snp

      Dear Stefan,

      Thank you for the comment! That's a nice way of putting it: Insofar as the structure of reality is rich enough to require a representation of arithmetic with recursive addition and multiplication, there must be a limit to how formalizable it is!

      I'll be sure to look at yours as well!

      Best,

      Mike

      "incidentally, I'm not quite clear on the distinction you seem to draw with the Dedekind-Peano axioms; to me, those would've been the same, and thus, just as susceptible to the Gödel argument"

      I realize now that you were talking about second-order arithmetic here (right?). If so, then yes, there is only one 'natural numbers' there, for which the Gödel sentence is true (but unprovable since second-order logic has no completeness theorem, that is, doesn't derive every valid sentence). In this case, we would rather have a theory that fails to tell us everything about its domain, while still fixing that domain fully. But where does that leave us?

      "If so, then yes, there is only one 'natural numbers' there..."

      Actually, that should probably be qualified: there is only one natural numbers, relative to the standard model of set theory---you need set theory to define semantics for second order arithmetic (giving the universe of discourse), and set theory is first order, so if you use a non-standard model there, you'll get a non-standard model of the natural numbers in second-order arithmetic, as well.

      Dear Michael,

      You might find the following approach interesting as regards some of the very thought-provoking questions you pose in your essay.

      So, you could start with the things that we do know about, i.e. the things we perceive, such as colours and shapes. Then think about the qualities they share. Let's say they share the qualities of amount (or quantity), position, direction (if moving) and change. If it was then possible to build a theory of physics out of just these things, then it would be a theory about things that we do know about.

      Rather surprisingly it would appear possible to do just that, build a theory out of these things alone; but not just any old theory - the theory of Quantum Mechanics. This suggests that Quantum Mechanics is really about things we know about. Of course, this is carefully disguised: most people would say Quantum Mechanics is about photons and electrons and 'physical things' and not about the quantity and change of the things we perceive.

      This theory is itself just a list of instructions (an algorithm). But the discovery of that list of instructions is the discovery of something we didn't know about the world.

      What do you think? If you're interested, there are some more details in my essay.

      All the best,

      David

      Dear Michael,

      I greatly appreciated your work and discussion. I am very glad that you are not thinking in abstract patterns.

      "It doesn't seem that this is what physics isfor- it feels like it's meant torepresent and inform usabout the structure of the physicalworld, not merely to predict our observations of it".

      While the discussion lasted, I wrote an article: "Practical guidance on calculating resonant frequencies at four levels of diagnosis and inactivation of COVID-19 coronavirus", due to the high relevance of this topic. The work is based on the practical solution of problems in quantum mechanics, presented in the essay FQXi 2019-2020 "Universal quantum laws of the universe to solve the problems of unsolvability, computability and unpredictability".

      I hope that my modest results of work will provide you with information for thought.

      Warm Regards,

      Vladimir
