I think that makes sense. Something definitely to consider as I continue work in this domain. Thank you!
Unverifiability, Unexplainability & Unpredictability by Roman V Yampolskiy
Hi Roman,
I very much like your overview of the reductionist scientific paradigm in its many empirical, theoretical and practical implications. But I personally can't find any deep new conclusions. Correct me if I'm wrong, but couldn't you reduce all scientific concepts to an abstract (Turing-related...) 'reduction', and that's really it?
Greetings
Morris
Morris,
Thank you for liking my work.
If you are looking for new conclusions, I would suggest concentrating on the subsections on AI.
Best,
Roman
Hi Roman,
Sorry for the long reply, but I was really intrigued by your essay, which inspired a lot of thoughts.
In physics I used to be a bit of a Platonist, interested only in theoretical physics, which reflects the true forms, and not much interested in experimental physics, which is concerned only with the shadows of the ideal forms. In your essay you made the theory of verification - which I compare with experimental physics - really interesting! Your essay is nice to read and interesting from beginning to end.
A third of all mathematical publications contain errors? That is worse than medical publications!
In physics, realism states that the truth of ontological propositions should be independent of the means of their verification (measurements). This, of course, is questioned by quantum mechanics, and the search for a type of realistic interpretation of physics which does not rely on the human observer is still going on. In mathematics, even more than in physics, we tend to have a realist attitude. Do you defend in your essay a non-realist position, where the truth of a mathematical proposition depends on the verifier's ability to verify it?
I am not up to date on AI research and I found your exposition very interesting. Were you able to give 'comprehensibility' a precise mathematical meaning?
When I was reading your essay, my son asked me what I was reading. I said it is about whether a computer program could check if other programs, and itself, are working correctly. He instantly asked: why don't you write a second program? The two programs could check each other. So if there were another barber ... In your essay you say that there are minds that can never be understood by a human mind, as they have a greater number of states. But could two minds that have the same number of states comprehend each other?
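To make the obstacle concrete, here is a minimal sketch in Python (with a hypothetical verifier is_correct and helper make_diagonal, introduced purely for illustration and not taken from your essay) of the diagonal argument that seems to block both a single self-checking program and a mutually checking pair:

    def make_diagonal(is_correct):
        """Return a program d that the given verifier must misjudge."""
        def d(prog):
            if is_correct(prog, prog):
                # The verifier claims prog(prog) behaves (terminates),
                # so d deliberately misbehaves by looping forever.
                while True:
                    pass
            # The verifier claims prog(prog) misbehaves, so d just stops.
            return "halted"
        return d

    # For d = make_diagonal(is_correct), the call is_correct(d, d) is wrong
    # either way: answering "yes" makes d(d) loop forever, answering "no"
    # makes it halt. A second program checking the first only shifts the
    # same argument up one level; it does not escape it.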
In the context of physics I always wondered whether a measurement instrument must have greater complexity than the object that is measured. For instance, to measure a square one needs at least a field over 4 points in order to be able to distinguish whether the square has been rotated. This too would lead to an infinite regress.
In your physics section you seem to imply that the probabilistic nature of mathematical verification implies the probabilistic nature of mathematics and hence the probabilistic nature of physics (=QM) in the MU. Is that so?
I always wondered if the fact that a system cannot completely know itself, and that an external measurer is needed to completely specify the system, could be the cause of the probabilistic nature of QM. If an object has n degrees of freedom, its measurer must have at least n degrees of freedom; let's say it has m. So the total system must have n*m degrees of freedom, which is greater than m. Hence there are undetermined elements within the system. Hence probability.
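As a small formal sketch of the counting above (my own notation; the strict inequality needs n > 1):

\[
\dim(\text{object}) = n, \qquad \dim(\text{measurer}) = m \ge n, \qquad \dim(\text{object} + \text{measurer}) = n \cdot m > m,
\]

so a measurer contained inside the composite system can fix at most m of its n*m degrees of freedom, leaving n*m - m of them undetermined, which is where the probability would come from; an external measurer with m' >= n*m just reproduces the same gap one level up.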
In relativistic classical physics, only relative distances and velocities are measurable within the system, while the absolute values are measurable only from outside - relative to the location of that system. There is also an infinite regress here, but I think it is completely consistent and classical, and no 'problem' arises with that kind of infinite regress.
Last but not least, I want to advertise my essay, which I still need to write and which will have a title like: "Changing axioms - the unpredictable evolution of physical laws". There I propose that the definability of the basic concepts (quantities) that make up a physical theory (laws) depends on the possibility of separating the system from its environment. For instance, relative distances are only defined as long as the system is separable (translation invariant) from its environment. In my opinion this not only solves the objectivity problem raised by Wigner's-friend-type experiments, since it protects the system from outside interventions and preserves its symmetry and unitarity as conditions for having well-defined concepts within the system. It also means that, because the definability of concepts is conditioned by the environment (which is contingent), the basic concepts that make up the theory may change with a changing environment, and so does the theory itself. I think that also gives an interesting view on AI, which is able to adapt itself and change its program according to the environment.
Best regards
Luca
Hey Luca,
Thank you for your detailed comment.
"I am not up to date in AI research and I found your exposition very interesting. Were you able to give to 'comprehensibility' a precise mathematical meaning?"
Please see: https://arxiv.org/abs/1907.03869
"But could two minds that have the same amount of states comprehend each other?"
Please see: https://arxiv.org/abs/0708.1362
"In your physics section you seem to imply that the probabilistic nature of mathematical verification implies the probabilistic nature of mathematics and hence the probabilistic nature of physics (=QM) in the MU. Is that so?"
Yes, that is one of my ideas.
Best,
Roman
Dear Roman,
Thank you for disentangling all these different limitations of human knowledge.
Science made great strides by formulating the intuitive notion of a computation with a Turing machine. With this formulation, we were able to conquer the notion of undecidability. It would be nice to formulate the many intuitive concepts you bring to light.
You have a line: "Essentially the intelligence based complexity of an algorithms related to the minimum intelligence level required to design an algorithm or to understand it." I suspect that the vast majority of programs available on the market are not intelligible to any single person. I once heard that Microsoft Windows was made of 40,000,000 lines of code. Surely that is beyond the ability of one person. Perhaps a more general definition is needed. Do two designers have a higher IQ than one?
Thank you for a very interesting article.
All the best,
Noson
Dear Noson,
Thank you for your comment. You are asking great questions about how the IQs of multiple agents can be combined. This week, a book chapter of mine on this topic (chapter 1: TOWARDS THE MATHEMATICS OF INTELLIGENCE) came out in the book: https://vernonpress.com/book/935
I think it answers some of your questions, but still leaves much room for future work.
Best,
Roman
Thank you for this well-written and stimulating essay.
Let me add something to your sentence: "The Born rule [76], a fundamental component of Copenhagen interpretation, provides a link between mathematics and experimental observations." I would like to point out that the interpretation known as QBism, whose authors explicitly consider it a refinement of Copenhagen, takes the Born rule as an element of reality - in fact, as the only "element of reality", while the rest is all subjective.
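For reference (this is just the standard textbook statement of the rule, not a quotation from your essay):

\[
P(a_i) = |\langle a_i \mid \psi \rangle|^2 ,
\]

the probability of obtaining outcome a_i when an observable with eigenstates |a_i> is measured on a system prepared in the state |psi>.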
I invite you to have a look at my essay regarding the role of "elements of reality" that we grant to mathematical entities like numbers, and what the consequences are for the natural sciences.
A very high rating from me, and good luck with the contest!
Flavio
Dear Flavio,
Thank you for your kind words and useful pointers. I look forward to reading suggested materials.
Best,
Dr. Yampolskiy
Dear Roman,
Your extremely important essay makes it possible to conclude that the century-old problem of the "foundations of mathematics" (justification, substantiation), which Morris Kline presented beautifully in "Mathematics: The Loss of Certainty", remains philosophical and mathematical problem No. 1 for cognition as a whole. Uncertainty in the foundations of knowledge, the "language of Nature", ultimately gives rise to undecidability, uncomputability, unpredictability ... unverifiability, unexplainability ... unrepresentation.
The problem of the "foundations of mathematics" is an ontological problem. Therefore, I call it the problem of the ontological basis of mathematics (knowledge). The unsolved problem of the essential "foundations of mathematics", in turn, gives rise to problems in computational mathematics, which A.S. Narin'yani described well in the article "Mathematics XXI - a radical paradigm shift. Model, not Algorithm". Narin'yani notes: "The current state of affairs in Computational Mathematics can be evaluated by contrasting two alternative points of view. One takes it, as it were, for granted: Computational Mathematics is a successful, rapidly developing field, extremely in demand by practice and basically meeting its needs. The second is far from being so optimistic: Computational Mathematics is in a deepening crisis, becoming more and more inadequate in the context of growing practical demands. At present, Computational Mathematics has no conceptual ideas for breaking this deadlock."
Do you agree with this conclusion?
In an interview with the mathematician and mathematical physicist Ludwig Faddeev, "The equation of the evil spirit", it is written: "Faddeev is convinced that just as physics solved all the theoretical problems of chemistry, thereby "closing" chemistry, so mathematics will create a "unified theory of everything" and "close" physics."
How can mathematics "close physics" if the problem of "foundations of mathematics" (ontological basification of mathematics) is not solved? ... In your opinion, why is the age-old problem of the justification (basification) of mathematics "swept under the rug" primarily by mathematicians themselves?
With kind regards, Vladimir
Dear Vladimir,
Thank you for your kind words and for sharing some interesting references. I will be sure to read them. As to your last question, recent work by Wolfram may be an interesting direction to follow in that regard: https://www.wolframphysics.org/
Best,
Roman
Thank you very much, Roman, for your quick reply and link. I'm starting to read with interest.
Best,
Vladimir
Hi Roman, I appreciate your treatment of the human observer. Do human selection effects filter into the eventual outcome of an experiment, or vice versa? I would greatly love to hear your thoughts on this topic. Please read/rate my take in my essay: https://fqxi.org/community/forum/topic/3525
Thanks, and all the best to you.
Dear Roman,
Excellent essay! Please take a look at the long-form version of my essay, A grand Introduction to Darwinian mechanic; I think you will find the sections where I compare biological complexity to computers very interesting:
https://fqxi.org/community/forum/topic/3549
We carefully read and discussed everything. There is something to think about, and the scientific perspective is visible. Your ideas are very close to ours! One of us works in the department of philosophy, the other in the department of computer science, so your essay was interesting to both of us. We really liked the use of the principle of "regress to infinity" for the observer in physics and for the checking of proofs in mathematics. This comparison is very heuristic. We liked the fact that you do not arrive at agnosticism; we believe in the possibilities of reason. But we think your approach is ahead of its time, since in science there is not yet even a recognition of the objectivity of information. Therefore, ideas of this type are perceived as metaphors.
Now we are implementing a startup project to develop a fundamental ontology for integrating various ontologies of subject areas. We are creating a digital platform for this integration. Perhaps we can even establish mutually beneficial cooperation with you.
We hope you find our essay interesting.
Truly yours,
Pavel Poluian and Dmitry Lichargin,
Siberian Federal University.