Dear Jochen,

finally I've found the time to read your essay. Interesting food for thought! You argue convincingly that we should reconstruct QM instead of just "guessing the ontology" (i.e. interpreting it), and you draw an interesting analogy between Goedel-type undecidability and the kind of "undecided" outcomes of quantum measurement, in the context of several quantum phenomena.

However, I do have some reservations. All that your diagonalization argument shows is: for any countably-infinite set (of "states"), there are uncountably many binary functions on it. Hence no single algorithm can compute ("predict") them all.

But this holds in any possible world -- classical, quantum, or post-quantum. In other words, that simple observation cannot by itself be enough to motivate quantumness.
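
To make the counting concrete, here is a minimal sketch in Python (the predictors list is a purely hypothetical toy stand-in for any countable enumeration of algorithms):

    # Any countable list of candidate prediction algorithms, i.e. total
    # 0/1-valued functions on the natural numbers (toy examples only):
    predictors = [
        lambda n: 0,       # "always predict 0"
        lambda n: 1,       # "always predict 1"
        lambda n: n % 2,   # "predict the parity of the state's index"
    ]

    def diagonal(n):
        """A binary assignment that differs from the n-th predictor at n."""
        return 1 - predictors[n](n)

    # diagonal disagrees with every listed predictor somewhere, so no
    # countable enumeration of algorithms covers all binary functions:
    assert all(diagonal(n) != predictors[n](n) for n in range(len(predictors)))

The same construction defeats any countable enumeration, which is all my remark amounts to.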

Or what would you say?

Also, in a continuous context (like the continuous phase space that you describe), the naive notion of "any assignment of +1 or -1" will have to be supplemented by some topological or continuity arguments to say what it even means to compute a prediction, or which types of measurements are physically meaningful (not measuring along Cantor sets, etc.). There is quite some literature in computer science and philosophy that deals with versions of this.

In particular, let me ask you about Bohmian mechanics. This is a well-defined hidden-variable model of QM, and it is computable at least in the sense that people run simulations and compare the outcomes to experiments. (For example, see Valentini's work on equilibration.) I'm not endorsing Bohmian mechanics, but I wonder whether it is a counterargument to your claim. In some sense, there we *can* have a prediction algorithm for any possible measurement setting that we may be interested in...

Finally, are you familiar with Philipp Hoehn's work?

https://arxiv.org/abs/1511.01130

He derives QM in full mathematical detail from postulates of the kind that you mention, including the two on your page 1.

Best,

Markus

    5 days later

    Dear Markus,

    thanks for reading my essay, and for commenting!

    You are quite right that my argument is, basically, just Cantor's: the powerset of a set necessarily has a greater cardinality than the set itself, and hence there can be no bijection between the two. This is a very familiar fact to us today, but depending on the context, it still has quite nontrivial implications---the existence of uncomputable functions, and indeed of undecidable statements in any sufficiently expressive theory, follows exactly the same pattern.
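
    (Spelled out, the common core is Cantor's diagonal set: for any map \( f : S \to \mathcal{P}(S) \),

        \[ D = \{\, x \in S \mid x \notin f(x) \,\} \]

    cannot lie in the image of \( f \), since \( D = f(y) \) would give \( y \in D \Leftrightarrow y \notin D \). Uncomputability, unpredictability, and undecidability all arise by instantiating \( S \) and \( f \) appropriately.)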

    It's not quite right to say that this applies equally well to a quantum world, however. The reason is that the basic underlying structure---Lawvere's fixed-point theorem---works in the setting of Cartesian closed categories; the category Hilb, which has Hilbert spaces as its objects and linear operators as its morphisms, is not Cartesian closed. Baez has provided an interesting discussion of how it's exactly this categorical difference that underlies much of the 'weirdness' of quantum theory.
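
    For reference, the statement of Lawvere's theorem is: in a Cartesian closed category, if there is a point-surjective morphism

        \[ \phi : A \longrightarrow B^A \,, \]

    then every endomorphism \( f : B \to B \) has a fixed point \( s : 1 \to B \) with \( f \circ s = s \). The diagonal arguments all arise by exhibiting an \( f \) without fixed points (such as negation on truth values), contradicting the assumed surjectivity.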

    In particular, the absence of a cloning operation means that the diagonalization doesn't go through---you can't, in a sense, feed the information about the system back to the system. So in that sense, my argument entails that sets aren't a good setting for a physical theory: you run into paradox, and have to adduce extra structure (by a deformation of the algebra of observables) to avoid it---which leads to something like phase-space quantization. Or, alternatively, you can start out with a categorical setting where you get this structure for free---leading to something like Hilb.

    Bohmian mechanics, by the way, isn't a counterexample---indeed, I think it supports my argument (this is discussed in more depth in the Foundations of Physics article). In the end, it comes down to the fact that every function---including noncomputable ones---can be represented by means of a finite algorithm, augmented with an infinite string of random digits (every set is reducible to a random set, by the Kucera-Gacs theorem). Thus, in general, every measurement outcome in Bohmian mechanics is a function of the entire random initial conditions---which must fit the equilibrium hypothesis to give rise to quantum predictions. (Indeed, if the generation of measurement outcomes in Bohmian mechanics were computable, that would lead to exploitable nonlocal signalling.)

    Indeed, that's to me at least a suggestive way of forming the connection to quantum mechanics: a noncomputable function (or sequence) can be 'computed' in different ways---one, with a finite algorithm with interspersed random events; two, with a finite algorithm that reads out a fixed algorithmically random number; three, with an interleaving process computing every possible sequence. These correspond to the major interpretations of quantum mechanics: something like a Copenhagen collapse process, with the algorithm being the Schrödinger dynamics (von Neumann's 'process II') and random events yielding the 'collapse' ('process I'); a Bohmian nonlocal hidden-variable approach; and a kind of many-worlds theory.

    That said, I view this as very much a sketch of a theory---perhaps itself a kind of toy theory. To me, it seems a promising avenue to investigate, but I have no illusions about having painted any sort of complete picture at all. I ride roughshod over many subtleties, as you note, and there are several additional open questions. Some of this is treated more carefully in the Foundations of Physics paper (which also properly cites the work by Hoehn and Wever---well, not quite properly, since I call him Höhn!), where I am also more cautious about some of my claims. There, I also include an argument, based on Chaitin's incompleteness theorem, that doesn't boil down to 'mere diagonalization'.

    Thanks, again, for taking the time to read and comment on my essay. I would very much enjoy continuing this discussion---since I work on this mostly in isolation, there's a high danger of getting lost down blind alleys, so I welcome any reality check on my ideas: any and all criticism is greatly appreciated!

    Cheers

    Jochen

    Hmm, I have problems getting my comments to post. Initially, I got a 'post is waiting for moderation' message or something like that; then I had apparently gotten logged out. I will wait a while to see whether the comment appears, and if it doesn't, type a new one sometime later.

    Dear Markus,

    I've decided to try again submitting my comment, while my reply is still fresh in my mind.

    First of all, thank you for your comments, and criticism! I work on this topic largely in isolation, so it's good to have a little reality check now and then, to be kept on track and not lose myself down blind alleys. Therefore, I hope to keep this discussion going, in some form!

    Now, to try and answer some of your concerns. You're of course perfectly right to point out that my argument really does no more than observe that the powerset of the set of states can't be put into one-to-one correspondence with the states themselves---a fact long familiar, of course, thanks to Cantor. But that doesn't mean it can't have subtle consequences---essentially, the existence of uncomputable functions, and the undecidability of certain propositions, all boil down to the same phenomenon.

    This was worked out by Lawvere, who first exhibited the fixed-point theorem that underlies the different realizations of unpredictability, undecidability, and so on. Within the preconditions of this theorem also lies an answer to your objection that the same should be possible in quantum and even post-quantum worlds: the theorem's setting is that of Cartesian closed categories (such as Set, with sets as objects and maps between them as morphisms). In particular, in these categories, there exists a natural copying operation---which is basically what makes the diagonalization argument possible, by 'feeding' the information contained in the system back to the system itself (think of the halting-checker examining its own source code).

    Of course, this isn't possible in quantum theory, due to the absence of a cloning operation---which, in category-theoretic terms, means that the category Hilb with Hilbert spaces as objects and bounded linear operators as morphisms isn't Cartesian closed. John Baez has pointed out that much of the 'weirdness' of quantum mechanics boils down to this fact.

    So in this sense, my argument can be read as saying that Set isn't a good arena for a physical theory: to keep it from lapsing into paradox, you have to adduce extra structure---corresponding to the *-deformation of the algebra of observables that essentially leads to deformation quantization (not that I'm claiming to have the complete picture there, mind). On the other hand, you can directly work in a setting---such as Hilb---where these problems don't arise.

    As to Bohmian mechanics, as I also argue in some more detail in the Foundations of Physics paper, I think it's not a counterexample to my ideas, but in fact very well in line with them---Bohmian mechanics, to reproduce the quantum predictions, essentially needs to be seeded with an initial random configuration (conforming to the 'quantum equilibrium hypothesis'). Its nonlocality means that essentially every measurement outcome is a function of this random seed (and not just of some finite portion thereof confined to the past light-cone, say). But every function (including non-computable ones) can be decomposed into a finite algorithm and an infinite, algorithmically random seed (this is just the Kucera-Gacs theorem that every set is reducible to a random one). Consequently, one could always interpret the 'computation' of a non-computable function as a finite algorithm seeded with an infinite random initial string---which is what I would say Bohmian mechanics boils down to.
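
    Schematically, the shape of that decomposition is something like the following toy Python sketch (emphatically not the theorem itself---random.Random is a mere stand-in for a genuinely algorithmically random seed):

        import random

        def seed_bits(seed):
            """Stand-in for the infinite random initial string; a true
            Kucera-Gacs seed would be algorithmically random, not
            pseudorandom."""
            rng = random.Random(seed)
            while True:
                yield rng.getrandbits(1)

        def outcome(setting, seed):
            """A *finite* procedure whose measurement outcome is a function
            of the setting and of (a prefix of) the global random seed."""
            bits = seed_bits(seed)
            # The finite algorithm decides how many seed bits to consult:
            return (setting + sum(next(bits) for _ in range(16))) % 2

    The point is only the division of labour: all the non-computability is pushed into the seed, all the dynamics into the finite algorithm.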

    Besides, one can show that every model in which non-local correlations are generated in a deterministic way must either be uncomputable or lead to exploitable signalling.

    Furthermore, there are (at least) two more ways to interpret the 'computation' of a non-computable function (or sequence). One is that every now and then, genuinely random events occur---that is, an algorithmic 'process II' is interspersed with 'process I' random occurrences. The other is simply to compute all possible sequences, in an interleaving manner---leading to a sort of many-worlds picture. Hence, the attempts to make sense of quantum mechanics at least suggestively map to the attempts to make sense of the non-computable. But this is of course merely heuristic.
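
    The third option, for concreteness, is just dovetailing---again a toy Python sketch:

        from itertools import count, product

        def all_sequences():
            """Enumerate every finite binary prefix, breadth-first: after
            stage n, every length-n 'branch' of outcomes has been produced
            ---an interleaved computation of all possible sequences."""
            for n in count(1):
                for prefix in product("01", repeat=n):
                    yield "".join(prefix)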

    However, you are right to point out that I ride roughshod over many subtleties that need to be addressed, eventually. Personally, I consider this to be more of a sketch than a full-fledged theory---a direction that I find suitably promising to explore (and hey, two other essays in this contest directly reference my work, so that's something at least!---or, of course, it could just mean that I've managed to lead others down the same blind alley I'm lost in. Hmm, that's not as cheerful a thought...). I am somewhat more careful, both in pointing out the preliminary nature of my investigations and in trying to make them more robust, in the Foundations of Physics paper; in particular, there, I also present an argument, based on Chaitin's incompleteness theorem and algorithmic complexity, that doesn't boil down to 'mere diagonalization'. (I also properly cite the work by Hoehn and Wever---or rather, almost properly, as I spelled his name 'Höhn' by mistake!)

    Anyway, I think this was most of what I originally intended to post. I would like to thank you again for engaging with my ideas---ideas that grow for too long in the shade away from the light of others' examination tend not to bear fruit; one sometimes needs the expertise of others to know where to cut, where to graft, and where to nurture a tender sapling.

    I hope this one will post!

    Cheers

    Jochen

    Dear Jochen,

    such a great essay. Fascinating to reconstruct quantum mechanics from "epistemic horizons".

    There are a few points that escaped my understanding in your essay, and I would like to use the chance this blog offers to ask a few questions and make some remarks.

    Classical physics worked pretty well for a few hundred years (and still does) for many phenomena. Measurements, too, can be described with classical physics. Quantum mechanics came in slowly, in the attempts to explain blackbody radiation, the discrete atomic spectra, and other phenomena. None of these connected directly to limits of measurement or knowability. The point I want to make is: if classical physics/science is possible in principle, where did the 'quantum' sneak in, in your argument, such that the quantum becomes necessary for epistemic reasons? I have not seen your two principles of section 1 in your proof by contradiction in section 2.

    I sympathise with the aim of using an epistemic horizon for some arguments about the structure of laws, or even of reality (whatever this means)---especially because I believe that the view that things, properties, and laws are completely independent of the relations of the things with everything else is overly one-sided. However, you certainly know the quote from Einstein, when Heisenberg went to him and told him that Einstein's theory taught them that only observable elements should enter the theory. Einstein replied that it was the other way around: it is the theory that tells us what can be observed. To me, this means that the use of an epistemic horizon of what can be known must at least be justified.

    To advertise my essay: I came to a similar conclusion as you regarding the EPR experiment. You wrote: "Only given that one has actually measured xA is reasoning about the value of xB possible." In my essay I wrote on page 6: "But the very same experimental setup (EPR), shows that the setting of the reference frame far away from the object determines the possible, defined propositions."

    Luca

      Dear Jochen,

      I finally had time to read your essay. I really appreciated the clarity of your arguments.

      Your first example, the derivation of Heisenberg's principle from Finiteness and Extensibility, is enlightening. Your introduction to superposition from diagonalisation is also very interesting. However, as you pointed out in your essay, "quantum mechanics, itself, does not fall prey to the same issues", notably because of the no-cloning theorem and the fact that the diagonal state yields a fixed point for the X gate. The fact that superposition allows one to avoid a logical contradiction reminds me of escaping the self-referential paradoxes by invoking a many-valued logic (e.g. trivalent), where an "indeterminate" value would be another kind of truth value, in addition to 0 and 1. But what about complex states? I don't know if you are familiar with it, but I heard about a book entitled 'Laws of Form' by Spencer-Brown, which presents a calculus dealing with self-reference without running into paradoxes, by introducing an imaginary Boolean algebra. Take the equation x = -1/x, which in some way entails a self-reference, a mimic of the Liar: if x = 1 then it is equal to -1, and vice versa, leading to a contradiction. The solution is to introduce an imaginary number, i, defined by i = -1/i.
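
      (In symbols, just to have both facts in one place: the re-entrant equation and its imaginary solution are

          \[ x = -\frac{1}{x} \;\Longrightarrow\; x^2 = -1 \;\Longrightarrow\; x = \pm i \,, \]

      while the fixed point of the X gate that you mention is the diagonal state:

          \[ X\,\frac{|0\rangle + |1\rangle}{\sqrt{2}} \;=\; \frac{|1\rangle + |0\rangle}{\sqrt{2}} \;=\; \frac{|0\rangle + |1\rangle}{\sqrt{2}} \,. \] )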

      Your reading of Bell's theorem as revealing a counterfactual undecidability was enjoyable to read, as it is in line with my presentation of contextuality as a similar undecidability.

      Another point: as you may have read in the epilogue of my essay, I am also interested in the Liar-like structure that can emerge from "physical (hypothetical) loops" like CTCs. I am especially interested in quantum-based simulations of such CTCs, such as Bennett and Lloyd's P-CTCs. In the literature, e.g. https://arxiv.org/abs/1511.05444, people have studied the relation between logical consistency and the existence of a unique fixed point. I was wondering if you also had some kind of epistemic reading of such loops, and whether you think this, too, is related to Lawvere's theorem.

      Cheers,

      Hippolyte

        Dear Jochen,

        thank you so much for your detailed and very illuminating answer! And thanks so much also for the links, in particular to Baez' work -- this is highly appreciated!

        I feel like I would need to dive deeper into category theory to continue the proper discussion. Nonetheless, perhaps one follow-up question. If I understand the notion of "not being Cartesian closed" well enough, then this property also applies to a structure much more mundane than Hilbert space quantum mechanics: probability distributions.

        In fact, since you point out the no-cloning theorem, there is a general no-cloning theorem for probability distributions (and more general probabilistic theories); see e.g. here:

        https://arxiv.org/abs/quant-ph/0611295v1

        In other words, you cannot copy (clone) probability distributions (trying to do so will introduce correlations, i.e. you merely broadcast). Therefore, I would be surprised if the structural property that you point out were in some specific way part of the reason "why" we should expect quantum effects.

        However, all that I'm writing here comes with a grain of salt, as I don't have much background in category theory.

        Best,

        Markus

        Dear Jochen,

        Thank you so much for writing this essay! I enjoyed it very much. I especially enjoyed the idea of an epistemic horizon, and of a potential link between mathematical undecidability and physical unknowability, as we know it in quantum physics. As a very rough summary, would it be fair to say that you are applying the limitations obtained by self-reference and negation in a clever way in order to derive some of the epistemic limitations characteristic of quantum mechanics?

        I was also wondering how your viewpoint relates to the principle of "universality everywhere"; in particular, to the quantum notions of universality in computation, spin models, etc. (as mentioned in my essay). My impression was always that these quantum notions would suffer from the same limitations as their classical counterparts. Perhaps we are just applying the paradox of self-reference and negation in different ways?

        Finally, I was wondering about your opinion of this work, in particular in relation to your own.

        Thanks again, and best regards,

        Gemma

          Dear Jochen,

          Very interesting. Great to hear from a young, talented student like you. I would like to ask you about the following point. According to the PBR theorem,

          "In conclusion, we have presented a no-go theorem, which - modulo assumptions - shows that models in which the quantum state is interpreted as mere information about an objective physical state of a system cannot reproduce the predictions of quantum theory. The result is in the same spirit as Bell's theorem, which states that no local theory can reproduce the predictions of quantum theory."

          From your viewpoint of the 'epistemic horizon', how do you deal with this theorem?

          Best wishes,

          Yutaka

            Dear Markus,

            good to hear I could add some clarification! I have to admit, I'm myself insufficiently familiar with category theory to really get into the thick of it---it's too vast a subject for me to get a general overview of without expending equally vast amounts of time.

            Therefore, I'm not sure, offhand, how to answer your question. However, it seems to me that any assignment of classical probabilities must include the case of certainty---that is, of stipulating, for each observable, whether a system possesses it or fails to. (As in the vertices of the CHSH polytope.) In my framework, this assignment isn't possible, so it seems to me that we can't be left with 'just' classical probability distributions.

            But I will have to think about this some more. I have wondered about the role of the classical no-cloning theorem, in particular in light of the Clifton-Bub-Halvorson theorem (I know this also includes a 'no bit commitment' requirement, but in the end, that just tells you that at least some of the entangled states of the resulting algebra must be physically realizable, I think).

            And in a sense, no-cloning is just the measurement problem: if you could clone, you could make a perfect measurement; and if you could make a perfect measurement of every state, you could clone. So if the no-cloning theorem in classical probability theory has the same significance, then why isn't there an equivalent measurement problem? (Or is there?)

            Anyway, this seems an interesting line of thought to pursue, thanks for suggesting it!

            Cheers

            Jochen

            Dear Luca,

            thanks for your generous comment. You ask some good questions, and I'll do my best to do them justice.

            First of all, the point where quantum physics enters into the picture isn't one of 'smallness', exactly, as it's often glossed, but it is one where the amount of information you have about a system nearly or completely exhausts the amount of information it's possible to acquire about that system. In other words: if you know the state of a system only in its gross properties, you will not notice any quantum mechanical behavior; and in everyday experience, we only ever know a tiny amount about anything we engage with.

            Think about the properties we typically know about a chair---approximate size, weight, shape---in comparison with the complete specification of each of its constituent atoms. The former will be perfectly sufficient for a classical description, yielding accurate predictions; only if we really had access to something approaching the full microstate would we come close to exhausting the information available about a chair, and thus, notice its quantum character. As this is generally only possible for systems with very few characteristics, and these tend to be submicroscopic, quantum theory tends to be glossed as a theory of 'the small'.

            Hence, that there is an approximate classical description of macroscopic systems, as long as we don't know very much about them---as long as we don't exhaust the information available due to the 'finiteness' principle---is a consequence of the approach.

            Your second point is more difficult to answer precisely, so I'll wave my hands around a little. In a sense, what I'm proposing is that the epistemic horizon is a metatheoretic principle: it's a boundary on what's possible to grasp of the world in terms of a model thereof. Hence, it's not quite the theory that tells us what we can know, but rather, the act of theorizing. This is a little Kantian in spirit: how we view the world is not just a flat consequence of the way the world is, but equally, a consequence of our viewing of it. (This is perhaps explained a bit better, from a different angle, in my contribution to last year's contest.)

            Does this make sense to you?

            The part you quote from your essay seems certainly not too far from my own views. I will have a look, and try and see whether I find something interesting to say; thanks for highlighting it.

            Cheers

            Jochen

            Dear Hippolyte,

            thanks for your comments! I think we've got a bit of a common direction in our thinking---the 'Laws of Form' has long held some intrigue for me, but I was never quite able to come to any definite conclusions about it. (Perhaps you know the work of Louis Kauffman, who, if I remember correctly, has also proposed some interesting connections between the paradoxes of 'reentrant' forms, complex numbers, and the passage of time---perhaps in the paper on 'Imaginary Values in Mathematical Logic'.)

            As for the introduction of an 'indeterminate' logical value, this alone probably won't solve Gödelian paradoxes---you can appeal to the 'strengthened liar', the sentence 'this sentence is false or meaningless', which is either true, or not; if it is true, then it must be false or meaningless, and if it is not true, then it is either false or meaningless, hence true. (That's why you also can't get out of trouble by postulating 'null' results for measurements as a way out.) Superposition then can't be thought of as another truth value to be added, but rather as the absence of any definite truth value.
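
            Schematically, with T for 'true' and M for 'meaningless', the strengthened liar is

                \[ \lambda :\quad \neg T(\lambda) \lor M(\lambda) \,, \]

            and both horns close off: \( T(\lambda) \) entails \( \neg T(\lambda) \lor M(\lambda) \), while \( \neg T(\lambda) \) entails \( \neg T(\lambda) \lor M(\lambda) \), which is just \( \lambda \) itself, hence \( T(\lambda) \). A third truth value merely relocates the paradox.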

            The connection between self-referential paradoxes and temporal paradoxes is an interesting one, but I haven't yet found much time (irony?) to spend on exploring it. In a sense, the two most discussed paradoxes---the grandfather paradox and the unproven theorem---bear a close connection to the self-negating Gödel sentence, and the self-affirming Henkin sentence: one eliminating the conditions of its own determinateness, the other creating them.

            But as I said, beyond such generalities, I don't have much to offer. I'll try and spend a little time thinking about this; if I come up with anything worthwhile, I'll make sure to let you know.

            Cheers

            Jochen

            Dear Gemma,

            I'm glad you found something worth your while in my essay! Your summary, I think, is accurate: in the general sense, there exists a bound on the information obtainable about any given system (which, in the full sense, requires the appeal to Chaitin's quantified incompleteness theorem given in the Foundations paper), and once one has reached that limit, 'old' information must become obsolete---as it does when we try to expand our horizon by walking west (on the spherical Earth, or, well, any planet will do), losing sight of what lies to the east.

            I'm not completely sure I get the gist of your question regarding quantum computation (etc.) right. Are you asking whether my proposal implies that quantum computers should be capable of beyond-Turing computation? If so, then the answer is kind of an 'it depends': any functional model of quantum computation will be able to compute only those functions that a classical Turing machine can compute. But still, if quantum mechanics is genuinely (algorithmically or Martin-Löf) random, then obviously, we can use quantum resources to do something no classical machine can do, namely, output a genuinely random number! Hence, as Feynman put it, "it is impossible to represent the results of quantum mechanics with a classical universal device."

            So in a sense, we need to be careful with our definitions here---any way of implementing a finite, fixed procedure, whether classically or with quantum resources, will yield a device with no more power than a classical Turing machine (regarding the class of functions that can be computed, if not the complexity hierarchy). The reason is that simply outputting something random does not yield any kind of 'useful' hypercomputation, because one could always eliminate the randomness by taking a majority vote (provided the probability of being correct is greater than 1/2), and hence do the same with a deterministic machine.
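
            To illustrate the majority-vote point, a toy Python sketch (the success probability and trial count are arbitrary illustrative numbers):

                import random

                def noisy_answer(p_correct=0.75, truth=1):
                    """A randomized machine that answers correctly with
                    probability greater than 1/2."""
                    return truth if random.random() < p_correct else 1 - truth

                def majority_vote(trials=1001):
                    """Repeat and take the majority: the error probability
                    decays exponentially in the number of trials, so the
                    randomness buys no new computable functions."""
                    votes = sum(noisy_answer() for _ in range(trials))
                    return 1 if votes > trials // 2 else 0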

            In a way, the story with quantum mechanics and computability is like that with non-signalling: it somehow seems like the classical constraint ought to be violated---but then, the quantum just stops short a hair's breadth of actually breaking through.

            As for the Deutsch et al. paper, I can't really offer an exhaustive analysis. I'm somewhat sympathetic to the notion of 'empiricizing' mathematics (indeed, Chaitin has made the point that the undecidability results mean that, at least to a certain degree, there are mathematical facts 'true for no reason'), but I think that notions such as those in Putnam's famous paper 'Is Logic Empirical?' go a step too far. In a way, different logics are like different Turing machines---you can do all the same things using a computer based on a three-valued logic (the Russian 'Setun' being an example) as you can using one based on the familiar binary logic, so in that sense, there can't be any empirical fact about which logic is 'better'.

            But again, I'm not familiar enough with the paper to really offer a balanced view.

            Cheers

            Jochen

            Dear Yutaka,

            thanks for your comment! And I'll take both the 'young' and the 'talented' as compliments...

            As for the PBR theorem, it essentially states that there is no more fundamental description than quantum theory that bears the same relation to it as classical mechanics does to quantum mechanics---i. e. quantum mechanics can't be the statistical version of some underlying, more definite theory.

            In that sense, it's very well in line with my result---which essentially states that there is no more fundamental description than that given by the quantum formalism; this is, so to speak, all the description that's possible. Whether that means that's all there is, or whether, as in QBism, there is an epistemic element to this description, is something that, I think, is an open question as of now.

            Does this make sense to you?

            Cheers

            Jochen

            Hi Jochen,

            Thanks for your reply. However, I still don't get where your Finiteness and Extensibility principles enter your proof of the impossibility of Assumption 1.

            Do you think your principles are connected to the impossibility of copying the whole information in quantum systems? So, in your chair example, I was thinking: the information of weight, form, size, etc. actually already exhausts all the properties that make a chair a chair. What makes it classical is that this information is available abundantly/redundantly, whereas this is not the case for quantum objects.

            The reason for the necessity of such an epistemic restriction remains unclear, and it might not be justifiable further than by the empirical evidence.

            However, the justification of why such epistemic considerations should have an effect on the ontology cannot, in my opinion, lie in the way we (only can) view the world. This makes the whole picture a bit too anthropocentric, don't you think? Objective quantum mechanical phenomena like superconductivity, the stability of atoms, etc. cannot be due to epistemic limits on the knowability of the underlying world.

            In my essay I probe the possibility that the underlying objective 'reality' is emergent from relations between emergent objects themselves. Something like this could justify an epistemic impact on the underlying structure. I'll be curious to hear your comments on my essay.

            Luca

            Dear Jochen,

            Yes, probably you are correct: Gödel's theorem may not be applicable to cosmology.

            I hope you will have a CRITICAL examination of my essay... "A properly deciding, Computing and Predicting new theory's Philosophy"..... ASAP

            Best Regards

            =snp

            Hi Jochen,

            Thank you for your response. "Your notions regarding---if I interpret you correctly---an inherent thermal 'noise' making the acquiring of perfect information about a system impossible remind me of Nelsonian stochastic mechanics. Is there a connection?"

            Your interpretation is not quite right at a subtle but fundamental level. The idea of thermal noise and stochastic mechanics implicitly assumes random fluctuations of precise coordinates, but precise coordinates are definable only with respect to an assumed ambient temperature of absolute zero. Perfect information for a system contextually defined at a positive ambient temperature is complete. There is no randomness in the actual contextual state. Randomness only comes in during the irreversible transition from a metastable state to a more stable (higher-entropy) state. As long as a state exists, there is no randomness, and the state evolves deterministically.

            Harrison

            Dear Luca,

            if I understand you correctly, I think you've gotten something a little mixed up---Finiteness and Extensibility are the starting points for the reconstruction of quantum mechanics in many recent attempts to justify the formalism from first principles (the way the invariance of the speed of light and the relativity principle are for special relativity). However, that just invites the question: why are we limited in the amount of information we can obtain about a physical system?

            That's the question I'm trying to answer---in other words, Finiteness and Extensibility are the output of my approach, they're what I'm arguing must hold, due to the application of Lawvere's theorem to the notion of measurement. That these principles hold is then equivalent to Assumption 1 being false---there isn't a function f(n,k) such that it yields a value for every state and measurement. There are some measurements on certain states such that it doesn't yield a value (Finiteness, although not quite---you need the argument from the Foundations paper based on Chaitin's theorem for that), and for these, we will learn new information upon measurement (Extensibility).

            This doesn't really impinge on the objectivity of quantum phenomena, by the way. My proposal of a relative realism---which I don't really develop in the essay, admittedly---assigns values only to those measurements where f(n,k) yields a value, but that's a perfectly objective statement: in a given state, only those properties where the measurement outcomes can be predicted with certainty actually have definite values.

            You can, of course, also interpret this as a subjectivist stance---i.e. claim that there are some real values out there, but our descriptions can't include them. But that's an additional interpretational commitment, nothing that's forced on us by my argument.

            Cheers

            Jochen

            Hi Jochen,

            Thanks for the really interesting essay! Your introduction to the effects of Finiteness and Extensibility, and to how they may be used to understand Heisenberg's uncertainty principle, was very intuitive. I think there could be a very close connection between the amount of energy required to extract perfect information about a system---I'm thinking of squeezed states---and the Finiteness principle. A perfectly localised measurement requires an infinite amount of energy over an infinitesimally short period of time.
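
            (In symbols, all I mean is the usual uncertainty relation,

                \[ \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2} \,, \]

            so a perfectly localised measurement, \( \Delta x \to 0 \), forces \( \Delta p \to \infty \), and with it the energy of the probe.)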

            I guess my question is: do you suppose that the epistemic horizon is a physical horizon? But then we might be wading into that age-old debate of ontological vs. epistemic interpretations of quantum physics!

            In any case, it was a terrific essay and I rated it highly! I hope you get a chance to take a look at mine. We certainly have overlap in our ideas, although I fall on the opposite side of your conclusions if you argue it from a thermodynamic angle.

            All the best!

            Michael

              Hi Jochen,

              Yes, I really got it mixed up. That is why I could not find the principles in the proof.

              Thanks for the clarification and also for reading and commenting on my essay.

              Good luck in the competition with your great essay---and you certainly get the prize for the best title.

              Luca