Dear Jon,
Thank you for your comments and for your questions. I will try to answer them as best I can. Keep in mind that I have not yet worked everything out completely, but I like to think that I am close.
"Do you think in your model where you consider a quantum 2-D object which has the potential to become a 3-D object (instead of a 3-D object with one dimension consisting of every possible value) you could still account for interference patterns as seen in the double slit experiment?"
The mathematical model alone cannot do this, because it doesn't explain 1) where the phase factor comes from and 2) how the double slit potential is to be modeled.
However, if you combine the mathematical model with the physical model I presented at my talk in Växjö, then I believe it is in principle possible, because the physical model supplies a basis from which one can get both: the phase factor comes out of a postulated mechanism that compares the passage of time in areatime to the passage of time in spacetime, and the potential corresponds to where you set your point of origin in the abstract plane formed out of the two proper times. I say "in principle" because I have not yet performed the calculation, as I am still focused on much more basic issues in my framework.
"Have you seen Stephen Wolfram's work where he talks about enumerating different axiom systems and showing what theorems are provable under each?"
No, I was unaware of Wolfram's work in this area. I just read the linked talk and found it very interesting. Essentially, it appears he has taken this idea of exploiting the freedom to choose one's axioms several steps further. I think this work has the potential to be very useful in mathematics, but because the space of possible axioms is infinite, you will still need something like imagination to pick out the most useful ones. Reading his talk made me wonder whether it will ever be possible to simulate imagination in machines. I think that would be both a profoundly exhilarating and a terrifying prospect. HAL had imagination.
"There's a lot of overlap in what theorems are provable in each consistent axiom system, but there are still some differences. How do you think this relates to your view that the "freedom to choose one's axioms coupled with the requirement of consistency should naturally lead us to expect mathematics to be unreasonably effective in modeling reality, but that this unreasonable effectiveness only exists, as it were, as an actualizability until human imagination transforms (parts of) mathematics into an actually effective model of reality."?"
Remember, we are talking about modeling nature; that is, we are talking about something within the realm of physics. As a physicist, I really don't care much which axiom system to use, as long as it serves as a foundation that gives me a model of the world with the qualities I desire: above all, predictions that match real-world observations, but also conceptual clarity and relative simplicity of the calculations; some elegance wouldn't hurt, either. Most physics models are several layers removed from the axiomatic foundation of mathematics. In that sense, I would say my work is atypical. But that is the direct result of trying to incorporate into mathematics a new distinction which, beyond the level of logic (and possibly some poorly explored models of set theory), simply did not exist. I must admit that until about a year and a half ago, when I first began the effort of learning about these formal systems, I was not all that interested in mathematics for its own sake.
"You talk of ZFC (and your ZFCD)... Do you think the axiom of choice is reasonable axiom in a system that contains real numbers which are uncomputable?"
Well, I can live with non-constructive proofs, and I can live with the Banach-Tarski paradox. I suppose my pragmatic physicist side is showing when I admit that I prefer a more powerful formal system over a less powerful one, even if, as a side effect, it sometimes proves to be "too powerful" in that it allows you to derive highly non-intuitive results. The cap on this is of course provided by consistency: I don't want a system so "powerful" that you can prove literally anything at all. Incidentally, I am certain that once the formal system I am working out is complete, there will be highly counterintuitive implications lurking in the background, waiting to be discovered.
"How could one actually choose one of these uncomputable reals, if they cannot be specified in a finite way."
Actually, Wolfram's work, to which you pointed me, might offer a possible way to do it. If Wolfram's system could be used to enumerate many very similar but not identical set-theoretic models based on the enumerated axiom systems, perhaps it would be possible to devise an algorithm which chooses the model in which you can approximate the number to the desired level of precision (see the toy sketch below). The analogy that pops into my mind is that of traditional musicians and filmmakers, who use notes and individual frames, respectively, as their basic building blocks, whereas "mash-up artists" use entire blocks of these as their basic building blocks.
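Here is the toy sketch I have in mind. Every name in it is my own invention, and it quietly assumes that a candidate model can report whether it reaches a given precision; for a genuinely uncomputable real, that probe is exactly the problematic step.

```python
# Toy sketch only: a candidate "model" is stood in for by a function that,
# asked for precision n, either returns a rational within 2**(-n) of the
# target number or raises NotImplementedError if n is out of its reach.
from fractions import Fraction
from typing import Callable, Iterable, Optional

Model = Callable[[int], Fraction]

def choose_model(models: Iterable[Model], n: int) -> Optional[Model]:
    """Return the first enumerated model that reaches precision n,
    or None if the enumeration is exhausted."""
    for model in models:
        try:
            model(n)                  # probe the model at precision n
            return model
        except NotImplementedError:
            continue                  # out of reach; try the next model
    return None

# Example with sqrt(2) -- computable, of course; for an uncomputable
# real the probe itself could not be guaranteed to settle.
def crude_model(n: int) -> Fraction:
    if n > 3:
        raise NotImplementedError
    return Fraction(141, 100)         # a fixed low-precision guess

def newton_model(n: int) -> Fraction:
    x = Fraction(3, 2)
    for _ in range(n):                # each step roughly doubles the precision
        x = (x + 2 / x) / 2
    return x

chosen = choose_model([crude_model, newton_model], 10)
print(chosen.__name__)                # -> newton_model
```

The serious difficulty, of course, is hidden in the probe: for an uncomputable real, deciding whether a model reaches the desired precision amounts to the very problem one set out to avoid.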
"Maybe this is where your thoughts on "imagination" comes in? "
Most definitely, I believe devising any sort of algorithm from scratch for a particular purpose requires at least a modicum of imagination.
"Also, is ZFCD trying to be a meta-mathematical theory that tries to explore the ramifications of an incomplete system being consistently extended, from a general perspective?"
I don't think so. ZFCD is a set-theoretical model, and all the relevant propositions pertaining to incomplete systems are made within it, where I assume that by "incomplete system" you mean objects such as incomplete pairs, etc. However, ZFCD may well have metamathematical (as well as metaphysical) implications, because the introduction of a formal distinction between actuality and potentiality into established mathematics opens up (at least in my totally biased opinion) a very important and largely unexplored area.
"Is what you describe as "pro-actually" a form of determinism, where what you define as "actualizably" a more probabilistic view of the future?"
Well, you have the right idea, but I wouldn't put it quite that way. I would say, as a conjecture: a world is deterministic iff in this world every actualizability at every moment is a pro-actuality. Actualizability is the more general concept, which captures the ontological distinction; pro-actuality is a more specific concept, which applies it to the context where an actualizability could most easily be confused for an actuality. For instance, the belief that measuring some property of one of an entangled pair of particles causes the other to "have" the corresponding property at that moment reflects, in my opinion, exactly such a confusion.
"How would you view the question of whether a 10,000 digit number was prime or not from this perspective?"
As you asked it, without any further qualification, I would interpret the question to be about an actuality, because numbers, without further qualification, are abstractions of objects in the inner domain. If you had asked me "suppose this number were in the outer domain", then my answer would be that you are asking about a pro-actuality, because numbers in the outer domain are abstractions of objects in the outer domain, and no elements of the outer domain are "actual".
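As a practical aside, whichever domain the number is placed in: whether a specific 10,000-digit number is prime is a determinate matter, decidable in finite time. Here is a minimal sketch of the standard Miller-Rabin test (for a real check one would use a vetted implementation; deterministic procedures such as AKS also exist):

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin test: a composite n survives one round with
    probability at most 1/4, so 40 rounds make an error astronomically
    unlikely."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                 # write n - 1 as d * 2**s, with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # witness found: n is definitely composite
    return True                       # no witness: n is almost surely prime
```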
"Is this related to the idea that something may be considered probabilistic until a proper fully-predictive theory is found?"
No, I meant simply to point out that what distinguishes probability from non-probabilistic measures is this aspect of "coming into existence" that the others lack. As you know, Kolmogorov's axioms are not sufficient to capture the concept of probability. There is no way you can conceptually frame a unit length or a unit mass as a measure of "coming into existence". But that is not the fault of probability theory, because it just works with the sets that it is given by the set-theoretical model. That is why I think that the outer domain is an important addition to set theory: it is exactly the home of the probability measure. The claim that "something may be considered probabilistic until a proper fully-predictive theory is found" strikes me as far too metaphysical for mathematics.
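To spell out what I mean about Kolmogorov's axioms: they require only that a probability measure $P$ on a $\sigma$-algebra $\mathcal{F}$ of subsets of a sample space $\Omega$ be non-negative, normalized, and countably additive,

\[
P(A) \ge 0 \ \text{ for all } A \in \mathcal{F}, \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i) \ \text{ for pairwise disjoint } A_i,
\]

and nothing in these three conditions distinguishes probability from, say, a normalized length or mass.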
"Is it related to the mathematical fact that we don't know if we are up against a true and unprovable statement or if we just haven't done enough searching to find a proof?"
This is a great question. Although "coming into existence" could refer merely to one's degree of belief, I am at this point treating probability only in an objectivist manner. And from an objectivist point of view, it seems to me that it is already a matter of fact whether a statement is true but unprovable, or whether it can be proved in finite time. And that means that if the proof exists, it is already determined to be a pro-actuality (ignoring such facts of reality as that the computers could catch fire, or that in a few billion years our planet will be destroyed, etc.), and if it doesn't exist, then it is already a matter of fact that it doesn't.
"Is something that is undecidable, probabilistic from this point of view?"
I assume you are referring to undecidability in the context of propositions. I suspect there is more than one way of relating this sort of undecidability to my framework, and for this reason I am not quite sure. However, I think that if one really set out to relate undecidability to probability within it, it would be possible to do so, but it would lead to a very unfamiliar conception of probability. On the other hand, there is much that is not well understood about probability. Who knows whether we have really exhausted all the possible meanings it could have?
"Are you familiar with Gregory Chaitin's omega constant?"
I was not familiar with it, and even after reading the Wikipedia article, I do not have a good intuition for it.
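For what it is worth, the definition itself is easy enough to state: for a prefix-free universal machine $U$,

\[
\Omega \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|},
\]

the halting probability of a randomly chosen program $p$ of length $|p|$ bits. It is a perfectly well-defined real number that is nevertheless uncomputable, which may be exactly what defeats my intuition.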
"I think you might have some interest in some of the questions I posed at the end of my essay. Somehow many of them seem very relevant to your work. Please consider taking a crack at answering one of them."
OK, you asked: "If quantum mechanics is a world where things can be both "yes" and "no" at the same time, should experimental results be analyzed with Zen Koans instead of logical inferences?"
I would say that it is not the case that "quantum mechanics is a world where things can be both yes and no at the same time" but rather that, within the domain of objects described by quantum mechanics, there are simply no things with actual properties describable in terms of "yes" and "no" until they are "measured". Zen Koans, while often bringing home illuminating insights, do not seem very efficient (or even workable) to me as deductive systems (but then, I know very little about them).
I like the question in particular because it demonstrates how the confusion between actuality and actualizability can create a much more pervasive confusion in our worldview. If I had a fair coin in my hand, not yet flipped, and you asked me "is the outcome of the flip heads?", and I suffered from the same confusion with respect to actuality and actualizability, of course I would say "yes and no". Since we know what is really going on, does that not seem silly?
Thank you again for your questions.
Best,
Armin