Dear Jochen,

Thank you for reading my work and taking the time to comment. You suggest some interesting directions for future work, some of which have been attempted, while others are still waiting to be tried. I do try to address such alternatives in the appendix, and will try to do more with your ideas in my future work. Thank you!

Best,

Roman Yampolskiy

You write in your nice exposé at the end: "our results are very timely and should allow for a better understanding of limits to verifiability and resources required to get desired levels of confidence in the results."

Please, allow me to comment:

The development of Being goes from total unformedness to total formedness.

Observing observes and forms "That which is" as a world. As long as an observer is not equal to the whole, his observing is limited to his personal possibilities, just as the unformedness of Being does not allow the totality of Being to be totally formed; thus there is always room for more formedness of Being. The effect is that knowing can always only be knowing up to and including that specific state of being formed.

Conversely, therefore, that which is formed is sufficient for knowing the next step, but certainly not for the step after that. In category theory this is composition: if A goes to B and B goes to C, then A can go to C, but that is only true if B indeed goes to C. Hence the composite arrow from A to C is notated as the arrow from B to C after the arrow from A to B.
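In standard notation (a minimal sketch of the composition rule meant here; f and g are my labels for the two steps):

$$f \colon A \to B, \qquad g \colon B \to C \quad\Longrightarrow\quad g \circ f \colon A \to C$$

where $g \circ f$ is read "g after f" and is defined only once $g$ itself is given.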

The conclusion is that we always know exactly enough for now, because now is as it is. That means "we" is the decisive factor and that is the scientific community as a whole as verifier (for that moment).

Bests,

Jos

    Dear Jos,

    Thanks for taking the time to read my work. We, as the scientific community, are an ultimate verifier of truth.

    Best,

    Roman Yampolskiy

    Dear Roman V. Yampolskiy, after reading your essay, I realized that I should ask you to verify the new Cartesian generalization of modern physics, which is based on the identity of physical space and Descartes's matter. According to this identity, it is common for physical space to move relative to itself, since it is matter. Arguing in this way, I showed that the probability density of states in an atom depends on the Lorentz contractions: of length, time, mass, etc. I invite you to discuss my essay, in which I show the successes of the new Cartesian generalization of modern physics: "The transformation of uncertainty into certainty. The relationship of the Lorentz factor with the probability density of states. And more from a new Cartesian generalization of modern physics", by Dizhechko Boris Semyonovich.

      Dear Boris,

      Thank you for the invitation, I will take a look.

      Best,

      Roman Yampolskiy

      Dear Roman,

      Thank you for your answer. I think it is an interesting thread to consider whether, merely to certify a proof, one always needs to be capable of checking whether it is correct. Certain proofs, if you possess them, may enable you to perform certain tasks; hence, your ability to perform these tasks will certify your having that proof, up to any given standard of certainty (strictly smaller than absolute certainty, of course). This sort of thing seems closely related, to me, to the problem of certifying whether one party has a certain capacity (say, access to a universal quantum computer) without the other party necessarily having that capacity (a quantum computer to check with).

      Therefore, it doesn't seem quite right to me that each verifier necessarily needs capacities equal to or exceeding those of the system it verifies; indeed, there may be ways for you to convince me you've proven something without me having any hope of ever checking the proof, which would indicate that a proof-checker is not the only possible kind of verifier imaginable.
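      As a minimal concrete sketch of this asymmetry (my own illustration, using the well-known Freivalds check, not anything from your essay): a randomized verifier can test a claimed matrix product C = A*B using only matrix-vector products, without ever performing the full multiplication itself.

      import random

      def freivalds_check(A, B, C, trials=20):
          # Probabilistically verify the claim A*B == C for n-by-n matrices.
          # Each trial costs three O(n^2) matrix-vector products instead of
          # an O(n^3) recomputation of A*B. A false claim survives all trials
          # with probability at most 2**(-trials); a true claim always passes.
          n = len(A)
          for _ in range(trials):
              r = [random.randint(0, 1) for _ in range(n)]  # random 0/1 vector
              Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
              ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
              Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
              if ABr != Cr:
                  return False  # refuted with certainty
          return True  # accepted with high confidence

      A = [[1, 2], [3, 4]]
      B = [[5, 6], [7, 8]]
      print(freivalds_check(A, B, [[19, 22], [43, 50]]))  # true product: passes
      print(freivalds_check(A, B, [[19, 22], [43, 51]]))  # wrong entry: almost surely fails

      The verifier here is strictly weaker than the prover, yet reaches any desired confidence short of absolute certainty, which seems to be exactly the gap described above.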

      I think that makes sense. Something definitely to consider as I continue work in this domain. Thank you!

      10 days later

      Hi Roman,

      I like your overview of the reductionist scientific paradigm in its many empirical, theoretical, and practical implications very much. But I personally can't find any deep new conclusions. Correct me if I'm wrong, but couldn't you reduce all scientific concepts to an abstract (Turing-related...) 'reduction', and that's really it?

      Greetings

      Morris

        Morris,

        Thank you for liking my work.

        If you are looking for new conclusions, I would suggest concentrating on the subsections on AI.

        Best,

        Roman

        Hi Roman,

        Sorry for this long reply, but I was really intrigued by your essay, which inspired a lot of thoughts.

        In physics I used to be a bit of a Platonist, interested only in theoretical physics, which reflects the true forms, and not much interested in experimental physics, which is concerned only with the shadows of the ideal forms. In your essay you made the theory of verification (which I compare with experimental physics) really interesting! Your essay is nice to read and interesting from beginning to end.

        A third of all mathematical publications contain errors? That is worse than medical publications!

        In physics, realism states that the truth of ontological propositions should be independent of the means of their verification (measurements). This of course is questioned by quantum mechanics, and the search for a type of realistic interpretation of physics, one which does not rely on the human observer, is still going on. In mathematics, even more than in physics, we tend to have a realistic attitude. Do you defend in your essay a non-realist position, where the truth of a mathematical proposition depends on the ability of the verifier to verify it?

        I am not up to date in AI research and I found your exposition very interesting. Were you able to give 'comprehensibility' a precise mathematical meaning?

        When I was reading your essay, my son asked me what I was reading. I said it is about whether a computer program could check whether other programs, and itself, are working correctly. He instantaneously asked: why don't you write a second program? The two programs could check each other. So if there were another barber ... In your essay you say that there are minds that can never be understood by a human mind, as they have a greater number of states. But could two minds that have the same number of states comprehend each other?

        In the context of physics I have always wondered whether a measurement instrument must have greater complexity than the object that is measured. For instance, for measuring a square one needs at least a field over 4 points in order to be able to distinguish whether the square has been rotated. This too would lead to an infinite regress.

        In your physics section you seem to imply that the probabilistic nature of mathematical verification implies the probabilistic nature of mathematics, and hence the probabilistic nature of physics (=QM) in the MU. Is that so?

        I have always wondered if the fact that a system cannot completely know itself, and that an external measurer is needed to completely specify the system, could be the cause of the probabilistic nature of QM. If an object has n degrees of freedom, its measurer must have at least n degrees of freedom, say m. So the total system must have n*m degrees of freedom, which is greater than m. Hence there are undetermined elements within the system. Hence probability.
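        Spelled out with toy numbers (my own arithmetic, following the counting above in which joint configurations multiply):

        $$n = 3, \qquad m = 3 \;\Rightarrow\; n \cdot m = 9 > m = 3,$$

        so the measurer's m degrees of freedom cannot by themselves fix the total system, and the remainder shows up as indeterminacy.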

        Well, in relativistic classical physics only relative distances and velocities are measurable within the system, while the absolute values are measurable only from outside, relative to the location of that system. There is also an infinite regress here, but I think this is completely consistent and classical, and no 'problem' arises with that kind of infinite regress.

        Last but not least, I want to advertise my essay, which I still need to write and which will have a title like: "Changing axioms - the unpredictable evolution of physical laws". There I propose that the definability of the basic concepts (quantities) that make up a physical theory (laws) depends on the possibility of separating the system from its environment. For instance, relative distances are only defined as long as the system is separable (translation invariant) from its environment. In my opinion this not only solves the objectivity problem raised by Wigner's-friend-type experiments, since it protects the system from outside interventions and preserves its symmetry and unitarity as conditions for having well-defined concepts within the system. The conditioning of the definability of concepts by the environment (which is contingent) also means that the basic concepts that make up the theory may change with a changing environment, and so does the theory itself. I think that gives an interesting view on AI as well, which is able to adapt itself and change its program according to the environment.

        Best regards

        Luca

          Hey Luca,

          Thank you for your detailed comment.

          "I am not up to date in AI research and I found your exposition very interesting. Were you able to give to 'comprehensibility' a precise mathematical meaning?"

          Please see: https://arxiv.org/abs/1907.03869

          "But could two minds that have the same amount of states comprehend each other?"

          Please see: https://arxiv.org/abs/0708.1362

          "In your physics section you seem to imply that the probabilistic nature of mathematical verification implies the probabilistic nature of mathematics and hence the probabilistic nature of physics (=QM) in the MU. Is that so?"

          Yes, that is one of my ideas.

          Best,

          Roman

          24 days later

          Dear Roman,

          Thank you for disentangling all these different limitations of human knowledge.

          Science made great strides by formulating the intuitive notion of a computation with a Turing machine. With this formulation, we were able to conquer the notion of undecidability. It would be nice to formulate the many intuitive concepts you bring to light.

          You have a line: "Essentially the intelligence-based complexity of an algorithm is related to the minimum intelligence level required to design an algorithm or to understand it." I suspect that the vast majority of programs available on the market are not intelligible to any single person. I once heard that Microsoft Windows was made of 40,000,000 lines of code. Surely that is beyond the ability of any one person. Perhaps a more general definition is needed. Do two designers have a higher IQ than one?

          Thank you for a very interesting article.

          All the best,

          Noson

            Dear Noson,

            Thank you for your comment. You are asking great questions about how the IQs of multiple agents can be combined. This week, a book chapter of mine on this topic (chapter 1: TOWARDS THE MATHEMATICS OF INTELLIGENCE) came out in the book https://vernonpress.com/book/935 and I think it answers some of your questions, but still leaves much room for future work.

            Best,

            Roman

            Thank you for this well-written and stimulating essay.

            Let me add something to your sentence: "The Born rule [76], a fundamental component of Copenhagen interpretation, provides a link between mathematics and experimental observations." I would like to point out to you that the interpretation known as QBism, whose authors explicitly consider it a refinement of Copenhagen, takes the Born rule as an element of reality; in fact, as the only "element of reality", while the rest is all subjective.
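            For reference, the standard statement of the rule (independent of interpretation): for a system in state $|\psi\rangle$, the probability of obtaining the outcome associated with eigenvector $|a\rangle$ is

            $$P(a) = |\langle a|\psi\rangle|^2.$$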

            I invite you to have a look at my essay regarding the role of "elements of reality" that we grant to mathematical entities like numbers, and what the consequences are for the natural sciences.

            A very high rating from me, and good luck with the contest!

            Flavio

              Dear Flavio,

              Thank you for your kind words and useful pointers. I look forward to reading suggested materials.

              Best,

              Dr. Yampolskiy

              a month later

              Dear Roman,

              Your extremely important essay makes it possible to conclude that the century-old problem of the "foundations of mathematics" (justification, substantiation), which Morris Kline beautifully presented in "Mathematics: The Loss of Certainty", remains philosophical and mathematical problem No. 1 for cognition as a whole. Uncertainty in the foundations of knowledge, the "language of Nature", ultimately gives rise to undecidability, uncomputability, unpredictability ... unverifiability, unexplainability ... unrepresentability.

              The problem of the "foundations of mathematics" is an ontological problem. Therefore, I call it the problem of the ontological basis of mathematics (knowledge). The unsolved problem of the essential "foundations of mathematics", in turn, gives rise to problems in Computational Mathematics, which A. S. Narin'yani described well in the article "Mathematics XXI - a radical paradigm shift. Model, not Algorithm". Narin'yani notes: "The current state of affairs in Computational Mathematics can be evaluated by contrasting two alternative points of view. One takes it, as it were, for granted: Computational Mathematics is a successful, rapidly developing field, extremely in demand in practice and basically meeting its needs. The second is far from being so optimistic: Computational Mathematics is in a deepening crisis, becoming more and more inadequate in the context of growing practical demands. At present, Computational Mathematics has no conceptual ideas for breaking this deadlock."

              Do you agree with this conclusion?

              In an interview with the mathematician and mathematical physicist Ludwig Faddeev, "The equation of the evil spirit", it is written: "Faddeev is convinced that just as physics solved all the theoretical problems of chemistry, thereby 'closing' chemistry, so mathematics will create a 'unified theory of everything' and 'close' physics."

              How can mathematics "close" physics if the problem of the "foundations of mathematics" (the ontological basification of mathematics) is not solved? ... In your opinion, why is the age-old problem of the justification (basification) of mathematics "swept under the rug", primarily by mathematicians themselves?

              With kind regards, Vladimir

                Dear Vladimir,

                Thank you for your kind words and for sharing some interesting references. I will be sure to read them. As to your last question, recent work by Wolfram may be an interesting direction to follow in that regard: https://www.wolframphysics.org/

                Best,

                Roman

                Thank you very much, Roman, for your quick reply and link. I'm starting to read with interest.

                Best,

                Vladimir

                13 days later

                Hi Roman, I appreciate your treatment of the human observer. Do human selection effects filter into the eventual outcome of an experiment, or vice versa? Please read/rate my take in my essay at https://fqxi.org/community/forum/topic/3525 and I would greatly love to hear from you on this topic. Thanks, and all the best to you.

                Dear Roman,

                Excellent essay! Please take a look at the long-form version of my essay.

                You will find the sections where I compare biological complexity to computers very interesting.

                Please take a look at my essay, "A grand Introduction to Darwinian mechanic":

                https://fqxi.org/community/forum/topic/3549
