Essay Abstract

We typically take a theory's predictive power to indicate representational success. In this essay I argue that this is unjustified, particularly in the case of physical theories. Looking to Gödel's theorem as a guide, I show how a theory can prove sentences that are true 'merely within' the theory - not true of the theory's subject matter. Given this, I explain that in cases where we lack clear pre-theoretic knowledge of a theory's subject matter, it can be difficult or impossible to know into which category a given sentence falls. Applying this to physical theories, I first explain how this implies certain limitations on what we can learn from classical and relativistic representations of physical systems and laws. I then look to quantum mechanics, where the complete obscurity of the theory's subject matter makes it nearly impossible to learn anything from how it represents the universe.

Author Bio

Michael Dascal is a doctoral candidate in the Department of Philosophy and the Joint Center for Quantum Information and Computer Science at the University of Maryland. His research focuses on perspectival quantum mechanics and on quantum computational efficiency and supremacy.

5 days later

Dear Michael,

what an intriguing, interesting and clearly argued essay. Of the essays I have read so far, it is the one I was most attracted to. If I criticize it now, it is not to invalidate what you wrote; rather, in my essay I take somewhat the opposite view.

Let me give an example. A square can be seen as a representation of the rotation group generated by 90° rotations, or as a representation of the symmetry group that also includes the 4 mirror symmetries. For both groups the representation is the same, but a square that can only be rotated seems to me something quite different from one that can also be turned upside down. So it seems that it is the possible operations - the physical laws, or the axioms - that convey the meaning of what a square is, and not so much the square as the object on which one operates.
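(To make the two groups concrete, here is a minimal Python sketch - my own illustration, not taken from either essay - showing that one and the same square is invariant under the four rotations alone and under rotations plus reflections, even though the two groups of possible operations differ.)

```python
import numpy as np

# Vertices of a square centred at the origin.
square = {(1, 1), (-1, 1), (-1, -1), (1, -1)}

R = np.array([[0, -1], [1, 0]])   # rotation by 90 degrees
M = np.array([[1, 0], [0, -1]])   # reflection across the x-axis

rotations = [np.linalg.matrix_power(R, k) for k in range(4)]   # cyclic group C4
dihedral = rotations + [r @ M for r in rotations]              # plus the 4 reflections

def leaves_square_invariant(g):
    moved = {tuple(int(x) for x in g @ np.array(v)) for v in square}
    return moved == square

print(all(leaves_square_invariant(g) for g in rotations))  # True
print(all(leaves_square_invariant(g) for g in dihedral))   # True
```

Both checks print True: the square itself cannot tell the two groups apart, which is exactly the point that the operations, not the object, carry the difference.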

In that sense, the natural numbers might be identified by some counting operation. Different axioms would then express different additional possible operations - and hence physical laws - that might or might not be realized in the world.

In my essay there is also an underdetermination. If we identify the objects on which one operates (i.e. the square) with the subject matter, but only the possible operations (expressed as relations between objects) are knowable, then the subject matter is underdetermined by the laws, and hence by the axioms.

I don't know if this makes sense to you, or whether I am just twisting the words a bit. I would like to know what you think of my essay, 'Semantically Closed Theories and the Unpredictable Evolution of the Laws of Physics'.

Luca

    Dear Michael Dascal,

    What a wonderful essay (which means it largely agrees with me!).

    My drumbeat in this contest has been that physicists project mathematical structure on the world and then come to believe that the physical universe actually has this structure.

    For example, qubit structure is appropriate for spins in magnetic domains and statistical analysis, but very inappropriate for Stern-Gerlach spins traversing an inhomogeneous magnetic field, as seen in the famous postcard data. Instead, a 3D spin yields exactly the correct deflection distribution. Yet Bell forces the qubit in his first equation, A, B = +1, -1, and goes on to prove 'nonlocality', whereas the 3D spin yields exactly the correlation that Bell claims is impossible.

    I love the following quotes from your essay:

    "...our theories may contain provable sentences which have nothing to do with their subject matter.

    The point is that our theories contain provable sentences which might not reflect anything about the structure of the physical universe.

    One way to think of all this is that a theory's axioms all stand on equal footing. There's no differentiation between those that capture the subject matter's structure and those that build on top of it. (Moreover, there's no guarantee that such a delineation exists in the axioms.) This means that we can't expect a theory to 'indicate' which of its theorems are about its subject matter and which aren't. This may not be worrisome in the case of arithmetic, but the consequences of this are troublesome when our knowledge of the subject matter is at all obscured, as is the case in physics.

    In other words, if we wish to accept our experience and empirical data (of single measurement outcomes) without 'breaking' quantum mechanics or adding new measurement dynamics to it, then we must give up any attempt to learn metaphysical or ontological lessons from the theory."

    I invite you to read my essay, Deciding on the nature of time and space.

    My best wishes to you in your career, and good luck in the contest.

    Edwin Eugene Klingman

      Dear Michael Dascal,

      I'm still trying to figure out whether you added anything to Tarski's theory of truth:

      The truth of sentences of an object-language (here, e.g., QM) can only be judged in a meta-language.

      However, the crucial questions you bypass are:

      - what is meta-language?

      - what is a legitimate object-language?

      - what's the relation between object- and meta-language?

      Thanks for a courageous essay,

      Heinz

        Dear Luca,

        Thank you for the comment! I'm not certain that I follow all of this, but please let me know if there's something I've misunderstood.

        First, I should say that I just read your essay and you seem to adopt a number of positions that I sympathize with, if not in my essay here then in my other work. (My dissertation defends "perspectival" theories of quantum mechanics, which fall under the "subject interpretation" umbrella - though I think this is a bit of a misnomer! Just as you hint at here, I look to the universality of unitary evolution to explain how Wigner and his friend can consistently observe their own measurement results while they disagree about whether the friend evolves unitarily. I do have some questions about what you say about this, but I'll post these in your essay's discussion thread shortly.)

        Regarding the comment you posted here though, I'd like to push back a bit. (Though again, if I've misunderstood then please let me know!) You wrote, "A square can be seen as a representation of the rotation group generated by 90° rotations, or as a representation of the symmetry group that also includes the 4 mirror symmetries. For both groups the representation is the same, but a square that can only be rotated seems to me something quite different from one that can also be turned upside down." (emphasis added). My concern here is that you seem to suggest that there are two distinct objects - one square that can be reflected and one that can't. However, this seems to me to imply that what's true of the square - whether it admits certain reflective symmetries, in this case - depends on whether some rational agent has a theory that performs these operations. This can't be right though - if someone were to introduce a theory that includes a completely novel kind of operation, O, and it turns out that O is a symmetry operation on all squares, we wouldn't say that we had discovered "a new type of square", nor that the squares our old theories discuss were necessarily different objects from the ones described by O-symmetry.

        This might be a tricky case because it's difficult to picture abstract mathematical objects as existing outside of our theories, but it may be clearer if we think about a physical example. When we learn that stars are governed by relativistic (not Newtonian) physics, we don't think of this as replacing Newtonian systems with relativistic systems out in the world. Rather, we think about this as learning a new feature about physical systems that we didn't know about before. That is, things that we thought were Newtonian systems turned out to be relativistic ones. (We can further contrast this with cases where we have learned that certain objects in our theories didn't exist, such as phlogiston or caloric, for example.)

        Now that said, I certainly don't think that fundamental ontology is remotely transparent to us! All I mean to push back on here is whether operational differences should be seen as representative of ontological ones. As long as we accept that multiple theories can describe different operations over a single domain of objects, we can certainly talk about the domain of objects independently of the theories. For example, I'm perfectly happy to talk about fundamental particles while acknowledging that I'm not clear on their ontological nature. Whatever this nature is, it seems clear that there must exist something in the physical world that our theories at least try to represent (to varying degrees of success!).

        Does this clarify things? Maybe I've made this more confusing than it needs to be!

        Dear Edwin,

        Thank you for the kind comment! I'll be sure to look at yours too. (If you haven't already, you should read the paper I cite by Curiel. From what you've said here it sounds like you'd like it.)

        All my best,

        Mike

        Dear Heinz,

        Aha! You've caught some of the material I had to cut out of my last draft due to length restrictions!

        In brief, part of what my essay is trying to highlight is that when a theory can represent itself, the formal distinction between object and meta-language blurs or fades away completely. As such, without a second 'source' for identifying the object language (i.e. some pre-theoretic knowledge of the theory's subject matter, or perhaps a simpler theory to refer to), we can't know how to judge a sentence in the theory.

        For instance, say a theory (capable of self-reference) proves some sentence P. If we don't have a clear grasp of the theory's subject matter then this means that we don't have a clear idea of whether P is a sentence in the object language, the meta-language, or a meta-meta-language.

        This is the problem, of course: being unable to distinguish these cases means that we can't determine which sentences are about the subject matter and which aren't.

        I hope this helps - and thank you for the opportunity to say this, albeit quickly!

        All my best,

        Mike

        Apologies - the above should read:

        "...we don't have a clear idea of whether "P is true" is a sentence in the object language, the meta-language, or a meta-meta-language."

        Hi Michael,

        I replied to your questions in my blog. There I also touched a bit on the ontological question about the 'subject matter' that underlies my view.

        In fact, I would say 'this is a different square' (or a different object) depending on whether an observable symmetry operation O is present or not. We can do completely different things with it.

        Your physical example of the Newtonian versus relativistic star reminds me of an argument by Hilary Putnam: that the reality of an electron did not change just because we today have a totally different theory than many years ago. But my point is not about the theories that describe things.

        As you nicely wrote: "whether operational differences should be seen as representative of ontological ones". I would say yes. This has nothing to do with our theories, which can be wrong or inaccurate. But what operations can be performed depends on the laws, and which properties can be assigned to objects also depends on the laws. The separation of objects and laws that is manifest in simplistic realist interpretations is very artificial; they form a unity. Besides, there are conceptual dependencies in the definability of objects and laws: they cannot be conceptualized independently.

        In my essay I make the bold claim that objects and laws are emergent and changeable, and I think it could be fruitful to take such a view. The decision whether such a view will be accepted will be a pragmatic one: can such a widening of the ontology and conceptual framework help to explain phenomena like consciousness and free will, or help to develop a theory of quantum gravity, etc.?

        I acknowledge that the fundamental ontology is not knowable, but the manifested, emergent and changeable relations between things are - provided we stop holding the things fixed, as we mostly do.

        Hope this makes my point a bit clearer?

        Luca

        Mike,

        Thanks for your response. I found Curiel very relevant (but wordy). A recent paper of mine relates, I believe, to Curiel and to my essay: A Primordial Spacetime Metric

        I look forward to any comments on my essay.

        Warmest regards,

        Edwin Eugene Klingman

        Dear Michael,

        you wrote an extraordinarily informative and deep essay. Having read and enjoyed it, and thought about it, I must say that this is a brilliant piece of work that accurately nails the current essay themes. So thanks for that!

        The only thing I would *add*, as a probable interpretive upshot of the whole issue, is that obviously - very obviously indeed, in my humble opinion - formal systems can't completely capture what you call on page 8 the 'true nature'. As such, the overall lesson of Gödel, and of your complete essay, is in my opinion that the limits expressed therein point, at least epistemologically (if not ontologically as well), to some profound *limits of formalizability*.

        The latter is the theme of my own essay. I would be happy if you would check it out and perhaps leave a comment there!

        Best wishes,

        Stefan

          Dear Michael,

          you start your essay by noting what could be reformulated as a tension between three classical arguments in the philosophy of science---Putnam's 'no miracles'-argument, Quine's 'indispensability'-argument, and Laudan's 'pessimistic meta-induction'. The 'no miracles'-argument tells us that theories must get something about the world right, as otherwise, their (predictive) success would be inexplicable (miraculous). The indispensability argument tells us that we must believe in the ontological reality of those elements of our theories which are indispensable to their success---including the mathematical ones. Pessimistic meta-induction, finally, points to the history of scientific theories, and concludes that, no matter how successful they are, our current theories are most likely wrong, and will eventually be replaced by better theories.

          Together, this forms a conundrum: we have to believe in entities postulated by theories, that nevertheless we have reason to expect will turn out to be wrong!

          You argue for a related conclusion, but by a different route. Unfortunately, I'm not entirely sure I understand your argumentation (or if I do, whether it's correct). You say that the Gödelian phenomenon points to facts true only about our theories; but the common view is rather the opposite---namely, that it tells us things about the models of theories that the theories themselves don't. Thus, the fact that Peano arithmetic proves neither the Gödel sentence nor its negation tells us that there are models of the Peano axioms---structures in which its axioms come out true---in which the Gödel sentence holds, and ones in which it doesn't. Hence, it is independent from the axioms.
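          (In symbols - my restatement, assuming PA is consistent: if $\mathrm{PA} \nvdash G$ and $\mathrm{PA} \nvdash \neg G$, then by the completeness theorem for first-order logic there are models $\mathcal{M}_1, \mathcal{M}_2 \models \mathrm{PA}$ with $\mathcal{M}_1 \models G$ and $\mathcal{M}_2 \models \neg G$; the standard model $\mathbb{N}$ is among the former.)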

          In other words, the Peano axioms don't have a single 'subject matter', but apply to different subject matters, which will not be isomorphic to one another---and the Gödelian proof tells us furthermore that we can't repair this fact: there will always be multiple subject matters for a given theory. For the Peano axioms, one such subject matter is the natural numbers N, the 'standard model' of the Peano axioms, in which the Gödel sentence comes out true. But another subject matter is the natural numbers extended by 'transfinite' elements, which can't be written in terms of a finite number of applications of the successor function, and for which the Gödel sentence may be false (while all the Peano axioms still apply perfectly well).

          This is, in principle, all perfectly ordinary. Consider the field axioms---the common properties of addition and multiplication: associativity, commutativity, existence of an identity element, existence of inverses, and distributivity of multiplication over addition. These axioms leave a number of questions one might have unanswered---such as: does there exist an element e' such that e' * e' = 1 + 1 (i.e., does 2 have a square root)? There are fields where that's the case (the real numbers), and fields where it's not (the rational numbers, where no element yields 2 when multiplied with itself).
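          (Formalized - my notation, added for concreteness: take $\sigma \equiv \exists x\,(x \cdot x = 1 + 1)$. Then $\mathbb{R} \models \sigma$ while $\mathbb{Q} \models \neg\sigma$, so the field axioms prove neither $\sigma$ nor $\neg\sigma$: the sentence is independent of them.)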

          Consequently, there are multiple, inequivalent 'subjects' of the field axioms. But that's no cause for concern. We could imagine extending the axioms such that the matter of square roots, for example, is settled, thus further 'narrowing down' the subject matter.

          But what Gödel taught us is that there's no end to this process, for sufficiently expressive theories: no matter how much we try to narrow down the subject, we always find that we can substitute alternatives to what we originally had in mind, while still fulfilling the axioms we have postulated.

          Or, as another example, take the axioms for a group, and for an Abelian group: the latter are just the former, augmented by the axiom that the group operation must be commutative. Hence, both the nonzero real numbers under multiplication and the invertible 2x2 matrices with real entries under matrix multiplication are groups (fulfill the group axioms), but only the former is an Abelian group---the subject matter has been narrowed down thanks to the addition of the commutativity axiom.
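          (A minimal Python check of the non-commutativity claim - my own illustration:)

          ```python
          import numpy as np

          # Two invertible 2x2 matrices (both have determinant 1).
          A = np.array([[1, 1], [0, 1]])
          B = np.array([[1, 0], [1, 1]])

          print(np.array_equal(A @ B, B @ A))  # False: the matrix group is non-Abelian
          print(2.0 * 3.0 == 3.0 * 2.0)        # True: nonzero reals commute
          ```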

          Robinson and Presburger arithmetic are subtheories of Peano arithmetic, with less expressive power. Consequently, everything provable in either is also provable in Peano arithmetic, and thus true in every model thereof. Additionally, Robinson arithmetic is, in fact, also subject to the incompleteness theorem, while Presburger arithmetic isn't---the latter doesn't include multiplication, and thus one can't formulate a Gödel numbering scheme within it. Presburger arithmetic simply lacks the power to express the Gödel sentence.
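          (To see why multiplication matters here, a toy Python sketch of the classic prime-power coding - my own illustration: it maps a symbol sequence to a single number, and defining such a coding inside the theory requires multiplication, which Presburger's addition-only language lacks.)

          ```python
          def godel_number(symbols, primes=(2, 3, 5, 7, 11, 13)):
              """Encode a short sequence of positive symbol codes as p1**s1 * p2**s2 * ..."""
              g = 1
              for p, s in zip(primes, symbols):
                  g *= p ** s
              return g

          print(godel_number([1, 2, 3]))  # 2**1 * 3**2 * 5**3 = 2250
          ```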

          So I'm not quite sure what to make of your argumentation that Gödelian phenomena tell us something merely about the theory, and not its subject matter. To me, to the extent that all three theories of arithmetic are about the same subject, they capture it in greater or lesser detail---just as the field axioms are as 'about' the real numbers as they are about the rational numbers, merely not capturing the detail of whether square roots exist, and just as the group axioms don't capture the detail of whether the group operation is commutative.

          In other words, while Presburger arithmetic is complete as a theory, it doesn't describe all of the properties of the natural numbers fully. Furthermore, all of the models of the Peano axioms are also models of Presburger arithmetic, including the non-standard ones. Robinson arithmetic has a model in the form of the set of integer-coefficient polynomials with positive leading coefficient. So Peano arithmetic will have fewer models, because it characterizes them more fully; but by Gödel's theorem, no complete characterization is possible.

          So I would say that, rather than the Gödel sentence relating to the theory instead of its subject matter, it effectively shows us that there is always more than one possible subject matter, and the theory doesn't suffice to adjudicate between them. But perhaps I'm misunderstanding you?

          Anyway, I wish your essay the best of luck in this contest.

          Cheers

          Jochen

            Dear Jochen,

            Thank you for the detailed comment!

            I think that there are a few points here that you've misunderstood (or that we simply disagree about!). Before I address these, though, I must make a concession: you pointed out a genuine error on my part. My original draft included Peano, Presburger, Robinson and Skolem arithmetic. When I was editing I meant to cut out Robinson arithmetic but accidentally removed Skolem arithmetic instead. As you may know, Skolem arithmetic includes multiplication but not addition (or the successor function), and to my understanding is provably complete. I shall contact the contest organizers to see if I might remedy this, but note that this correction doesn't at all influence my argument.

            Putting this aside then, allow me to respond to some of the individual points you've made:

            I'll begin by saying that I'm not sure I agree with your characterization of my introduction - I certainly don't take pessimistic meta-induction to play any role here at all. If I were asked to describe my position in terms of these arguments, I would say that my point is to bring attention to the difficulty of knowing when Quine's argument applies.

            Getting to your main point, you raise that Gödel's theorem can be understood as proving that there exist non-standard models of a theory. This is correct, of course, and indeed we know that even the Dedekind-Peano axioms admit non-standard models (even though they don't describe a logic that is vulnerable to Gödel's theorem). In these terms, the difficulty that I wish to raise in my paper might then very briefly(!) be framed as follows: We build theories to learn about subject matters. (For example, we build arithmetic to learn about the natural numbers and physics to learn about the physical world.) Gödel's theorem tells us that these theories will admit multiple (non-isomorphic) models. The difficulty, then, is that if we don't know in advance what the structure of the subject matter is, we can't be sure whether one of the admissible models reflects the actual subject matter (let alone which one does), nor whether the availability of multiple models itself reflects something about the subject matter.

            I think that perhaps part of the trouble is that you've equivocated between what I've called a theory's subject matter and its models. For example, you've said that the "Peano axioms don't have a single 'subject matter', but apply to different subject matters, which will not be isomorphic to one another." But I take the subject matter of Peano arithmetic to be the natural numbers, and the question we might then pose is whether the availability of multiple models reflects something about the natural numbers themselves. In the case of the natural numbers I believe we can answer this quite clearly. However, when it comes to physics things are quite a bit more complicated. For example, it might be a feature of spacetime itself that multiple models are admissible, as suggested by the freedoms of relativity theory. On the other hand, it may be that only one of the available models describes how spacetime actually is, and the other models are analogous to the non-standard models of arithmetic admitted by the Dedekind-Peano axioms.

            As a last point, your final few paragraphs seem to suggest the following line of thought: Moving from simpler to more complex theories of arithmetic, we're able to describe more and more properties of the natural numbers. Therefore, we should take the maximally informative theory as best representing the natural numbers. It is this last sentence that I have trouble with, however, as it seems that we have good reason to think that Peano arithmetic allows for proofs of sentences that at least might not be true of the natural numbers. Put in some of the terms that you've used in your comment, it may be that Peano arithmetic must admit non-standard models, but I don't think this means that the natural numbers don't themselves exhibit a unique model, namely the intended(!) model of the Dedekind-Peano axioms.

            I hope this clarifies a little bit. I haven't yet taken a look at your essay, but I'll do so soon.

            Mike

            Dear Michael,

            thanks for your clarifications; I think I understand the point you're making a little better now.

            As for my point regarding pessimistic meta-induction and the like, that was merely me trying to outline another way as to how one might come to doubt the notion that 'a theory's predictive power indicates representational success'---we have theories of unparalleled power, which should commit us to accepting the reality of the entities (including mathematical ones) they deem essential, or else be forced to accept it as miraculous; yet, we have good reason to expect that these theories will eventually be overthrown in favor of better ones, which might entail commitments to entities wholly unlike those present in current theories. In this way, one might doubt that predictive power entails representational success---which seems at least sympathetic to your conclusion.

            I recognize you're arguing along a very different route, however. And you're right to point out that I conflated your 'subject matter' with the notion of a model from mathematical logic. So, let me try to rephrase it in my own words: there's something out there, call it 'stuff', which we want to describe. ('Stuff' here being something like the physical world, or parts thereof, or the natural numbers, or whatever else.) To do so, we formulate a formal theory that, in some way, axiomatizes our knowledge of stuff---in whatever way we might have acquired it (by experiment, say---including the particular experiments we do when we calculate things).

            We might then hope that we eventually hit on a theory that describes stuff perfectly, and completely, and does not describe anything else (perhaps up to isomorphism). That is, everything we derive using our theory ends up being actually true about stuff, and every true fact about stuff can be derived from our theory. There are then two natural questions to ask---is this possible at all?, and if it is, can we actually tell that we've done it?

            I think that at least part of your argument is that the answer to the second question depends on the access we have to stuff. With something like the natural numbers, we have, in a way, a pre-theoretic grasp about what the stuff we intend the theory to model actually is. The Gödelian phenomenon entails that we can't just derive every true fact about the numbers from the Peano-axioms (incidentally, I'm not quite clear on the distinction you seem to draw with the Dedekind-Peano axioms; to me, those would've been the same, and thus, just as susceptible to the Gödel argument). But that doesn't actually mean that we can't know whether such undecidable statements are true about the natural numbers---the Gödel sentence is independent from the Peano axioms, but comes out true in the actual natural numbers; you need to add non-standard elements to the model, which are not part of 'the' natural numbers, to make it come out false.

            So I think this is how I would reconstruct your argument (or part of it that had been confusing to me): while for the natural numbers, we have a good grasp of the subject matter, the intended model, that's not the case with physics. Hence, by analogy, we don't know whether the standard model of the, say, standard model actually maps to the stuff we want to describe; hence, we don't know whether some sentence of the theory that is independent from its axioms, but 'true in the standard model', is actually true of physical reality. That would be something like trying to discover the natural numbers empirically, believing that they correspond to the standard model of the Peano axioms, and hence, thinking that the Gödel sentence comes out true, while the actual natural numbers correspond to some non-standard model, where the Gödel sentence in fact comes out false. Does that get the gist right?

            I'm still not sure, if it does, if I would gloss this as the theory proving something that turns out to be false regarding the subject matter; rather, the theory fails to decide the truth of some sentence, and we use misapplied reasoning based on the assumption that we're dealing with the standard model to derive a falsehood about the stuff it deals with. The Peano axioms can only derive something false about the natural numbers if either they are inconsistent, or they don't, in fact, axiomatize the structure we have in mind when we talk about 'the natural numbers'. Otherwise, if the natural numbers proper are actually one of the models of PA, then every theorem of PA will be true in the natural numbers. But there will be truths that PA can't decide---such as the Gödel sentence.

            So I think one can validly think of stronger systems as better characterizing a structure. All of the models of PA will also be models of Robinson, Presburger, and Skolem arithmetic, but the latter three will have models that are not models of PA. Indeed, they will have models that aren't actually numbers at all---such as the integer-coefficient polynomials with positive leading coefficient that form a model of Robinson arithmetic. There are truths about the natural numbers that can be derived using Peano arithmetic that can't be derived within the other systems, but all truths derivable within them can likewise be derived in PA.

            But anyway, I think your overall point---even if I disagree somewhat with the formulation---is a valid one; in a sense, it's perhaps related to Newman's problem, that knowledge of the structure (the axioms, in this case) of a domain radically underdetermines the domain itself. (William Seager makes this point in chapter 15 of his excellent 'Theories of Consciousness'.)

            Thanks for taking the time to further explain your views to me.

            Cheers

            Jochen

            Lovely essay! I really appreciated the main idea that models are not necessarily the same as the things they're trying to represent, and the arguments that this is true even in math (the one place you'd expect this not to be true). Hard to read in some places though.

            A bit of semantics: you take N to mean the numbers themselves, along with addition and some other things, but not multiplication. I understand what you're doing here, but I also think it's sort of cheating. Lots of actively studied claims about the natural numbers relate to multiplication and primality and so on. If you do include multiplication, you can't exploit the completeness of Presburger arithmetic. But I don't know enough about Robinson arithmetic to have a sense of whether the argument would still work.

            There seems to be a typo on pg. 2: I think "Take G to be a Gödel sentence in A_Pb" should be "...in A_Pn", if I understood correctly. I had to reread pages 2-3 a few times to really get the argument, since it's a bit dense there.

            Not sure I agree that temperature is 'adding something' in the same way that adding an axiom is 'adding something', if it can be described and reasoned about purely in terms of microscopic language.

            It seems pretty clear to me that classical mechanics offers a representation of the world (in the appropriate domain) which contains truths not about 'the real world'. My favorite example is how there are situations where five or more particles interacting gravitationally can be jettisoned infinitely far away within a finite amount of time (see "Off to Infinity in Finite Time" by Saari and Xia). But this example is kind of weak, because you can dismiss it by saying the real world is not classical, and arbitrary velocities aren't allowed.

            Interesting point about how different equivalent mathematical formulations of quantum mechanics make it hard to 'get at' what's fundamental, whatever that means. I've wondered about this too, particularly given how the path integral is popularly explained: quantum particles somehow 'sniff out' all possible paths, and take all of them at once. But there are path integral formulations of classical stochastic systems (e.g. the Onsager-Machlup path integral for stochastic dynamics), and it is clear in that case that particles don't take all possible paths... so what does the path integral formulation really tell us about what real particles do?

            On the same topic, it's fun to note that there are also many different equivalent formulations of classical mechanics. Of course, there are the usual suspects: Newton's laws, the Lagrangian formulation, the Hamiltonian formulation, and the Hamilton-Jacobi formulation. But then there's weird stuff, like Koopman-von Neumann mechanics, which describes classical mechanics in terms of Hilbert space, bras, and kets in a way deliberately intended to be as close as possible to how quantum mechanics is usually treated.

            Finally, a philosophical question I'm curious to hear your thoughts on. We model the world (in my view) to understand it. Newton's laws add understanding because they tell us that things change how they move when we push them, for example. And physical laws, possibly due to our limited cognitive abilities, are crucial for understanding: they compress existing observations into a compact and comprehensible form, and help us reason about what will happen in new situations.

            Given a well-tested theory like the standard model, which describes 'almost everything', what kind of truth should we ascribe to its predictions that aren't (currently) directly experimentally testable? Should we remain skeptical that they may just be mathematical artifacts, until it is possible to probe them? Also, do you think there exists a way to construct a mathematical model that really is 'equivalent' to the thing it's trying to represent, or will we always have to deal with this issue?

            John

            Dear Michael Dascal,

            As a young, brilliant student, you have filled this essay with wonderful content. I really enjoyed reading it. You framed the whole essay from the viewpoint of Gödel's theorem.

            On the other hand, quantum mechanics is often used as a computational resource, and Gödel's theorem is often cited in the context of computation. Therefore, I would like to know the relationship to quantum computation as well. For reference, my essay focused on the history of computation as it relates to random number generation.

            Best wishes,

            Yutaka

            Dear Dr Michael Dascal,

            Yours is a smooth-flowing essay with a very nice discussion. Very good.

            Many people have discussed Gödel's theorem and its applicability to quantum mechanics. Would the theorem also be applicable to theories in cosmology?

            For example, the Dynamic Universe Model (see my essay, 'A properly deciding, Computing and Predicting new theory's Philosophy') is a theory that has given lots of results in cosmology, and many of its predictions came true.

            Can we apply Gödel's theorem there?

            Best wishes to your essay

            =snp

            Dear Stefan,

            Thank you for the comment! That's a nice way of putting it: insofar as the structure of reality is rich enough to require a representation of arithmetic with recursive addition and multiplication, there must be a limit to how formalizable it is!

            I'll be sure to look at yours as well!

            Best,

            Mike

            "incidentally, I'm not quite clear on the distinction you seem to draw with the Dedekind-Peano axioms; to me, those would've been the same, and thus, just as susceptible to the Gödel argument"

            I realize now that you were talking about second-order arithmetic here (right?). If so, then yes, there is only one 'natural numbers' there, for which the Gödel sentence is true (but unprovable since second-order logic has no completeness theorem, that is, doesn't derive every valid sentence). In this case, we would rather have a theory that fails to tell us everything about its domain, while still fixing that domain fully. But where does that leave us?
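            (The reason the second-order theory pins its domain down - my gloss: the full induction axiom $\forall X\,[(X(0) \land \forall n\,(X(n) \to X(n+1))) \to \forall n\,X(n)]$ quantifies over all subsets $X$, and by Dedekind's categoricity theorem this forces any two models to be isomorphic.)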

            "If so, then yes, there is only one 'natural numbers' there..."

            Actually, that should probably be qualified: there is only one 'natural numbers' relative to the standard model of set theory---you need set theory to define the semantics for second-order arithmetic (giving the universe of discourse), and set theory is first-order, so if you use a non-standard model there, you'll get a non-standard model of the natural numbers in second-order arithmetic as well.