Dear Jerzy,
I want to give you a fuller response, but my work week starts today (I work midnights 7 nights on and 7 nights off while going to school) so a more elaborate version will have to wait until next week.
For now, let me just say that you are basically correct. The QM operator corresponds in my analogy to the operation "add an interval of length z", and the eigenvalue corresponds to the length of the side of the cube wherever it 'actualizes'. I did not mention probabilities in my paper, except very indirectly when I said that being 'actualizable' corresponds to an intermediate state of existence. Some time ago, I explained to another person what I meant, and for the sake of time I will just paste my response here, because it may help you understand better. (The person to whom I responded was an educated layperson, not someone like you who understands the implications at a very deep mathematical level, so the tone of my exposition was meant for a different audience.)
here it is:
"The more challenging concept to understand is what I have called 'actualizable'.
Before I attempt to explain it, let me acknowledge that this conceptual difficulty is not your fault. In all of my papers about my theory, and in the talk, I have so far described the concept of actualizability only within a very limited context, namely how it differs from "actual". But to get a deep understanding of what this concept really means, one needs more than an understanding in terms of what it does not mean. The fact that I have not been more specific is not entirely an accident.
You see, I have found that when, in discussing my ideas with others, I introduce too many unfamiliar ideas at once, the risk that they will be dismissed as too far "out there" goes up dramatically (you can even see that in this thread). So I have tried to be strategic about it: I introduce just enough so that it becomes evident that one can reframe quantum mechanics in a novel way that no longer seems mystical (as in my talk). The more subtle clarifications of the conceptual basis, which have truly radical implications, I leave for later, after the basic picture painted by my theory is at least somewhat understood and it becomes clear how the radical implications of the novel concepts are required in order to form a self-consistent worldview (which is different from the present one). Describing precisely what I mean by "actualizable" is one of these clarifications (but unfortunately not the only one).
I take it that you have perused the references I provided and that therefore you are ready for the more precise definition:
My concept of 'actualizability' refers to an intermediate state of existence.
I mean it in the following way: According to our current worldview, existence is a binary concept, which means you can assign one of two values to the ontological status of anything
0- it does not exist
1- it exists
end of story
The notion that something could have an ontological status somewhere in between, which is what I mean by "intermediate state of existence", at first sight seems absurd. If one is going to claim such a thing, one better have a darn good reason for doing so. Well, my reason for doing so is that this definition is required to provide a consistent conceptual basis for a framework that seems to make sense out of a lot of the seemingly mysterious parts of QM.
So, does that mean that something could have an ontological status of, say, 0.3? Yep. 0.6? Yep. And that the latter, in this sense, exists "twice as much" as the former? Yep.
I can appreciate how bizarre this must seem to you, but I would argue that a large part of this is simply because, since you were a little kid, you have been conditioned to think of existence as binary, and you are reading this for the first time. If this idea is generally accepted, future generations will find it a lot less strange. If you doubt this, just ask yourself how strange you find the idea that the earth goes around the sun. Well, today almost nobody finds this strange, even though it is exactly the opposite of what our sense experience tells us. That is why, if you had suggested it to someone in the 16th century, before Copernicus, they would have considered it an extremely bizarre idea.
We actually already have a way of quantitatively expressing actualizability, but we have not yet recognized it as such. It is called the Born rule. I am certain that you don't see the connection, so I will try to be more specific.
First, let me review how the need for "squaring the wave function" arises in my theory. As you should recall, I postulate a symmetry that serves as a mechanism by which the passage of time for an areatime object (its proper time) can be matched or "translated" into the proper time for each actualizable object that traverses an actualizable path in space. Upon a simple transformation, the symmetry can be decomposed into two complex conjugate phase factors associated with each actualizable path, which upon appropriate substitution become e^(±iS/hbar). Since the areatime object manifests itself in spacetime as a superposition of all possible actualizable paths, and each path is associated with the phase factors, the proper representation of the areatime object in spacetime is the Feynman path integral.
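For concreteness, here is the standard textbook form of the path integral I am invoking (this is ordinary Feynman notation, not something new to my theory):

```latex
% Feynman propagator: a sum over all paths x(t) from (x_a, t_a) to (x_b, t_b),
% each path weighted by the phase factor e^{iS/\hbar}.
K(x_b, t_b; x_a, t_a) = \int \mathcal{D}[x(t)]\, e^{iS[x(t)]/\hbar}
% The conjugate factor e^{-iS/\hbar} enters when this amplitude is multiplied
% by its complex conjugate to form the absolute square |K|^2.
```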
Now, in transforming from the Lagrangian to the Hamiltonian formulation, the exponent of the phase factor changes, but there is still a direct link between it and the phase factor of the wave function Psi. This implies that Psi represents only the square root of all the spacetime manifestations of the areatime object in a specified region of space (each phase factor carries 1/2 of the symmetry angle in its exponent, and an exponent of 1/2 is a square root). To represent it fully, you must multiply Psi by its complex conjugate, which is to say that you must take the absolute square.
But just as in my Euclidean analogy, where a point in 2-space manifests itself as an infinite line in 3-space, the representation of the areatime object in terms of the squared wave function extends over all of space (in the non-relativistic limit, at least; in the relativistic limit, I believe, it extends only to the boundaries of the light cone originating from where the paths started).
So if you integrate the absolute square of the wave function over all of space, you have finally obtained a complete spacetime representation of the underlying areatime object under the Hamiltonian formulation. Under the Born rule, this integral is set equal to one and interpreted as a probability.
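To make the normalization concrete, here is a minimal numerical sketch (my own toy illustration, not part of the theory itself): for a hypothetical 1-D Gaussian wave function, integrating |psi|^2 over a range wide enough to stand in for "all of space" gives 1, which the Born rule reads as certainty.

```python
import math

sigma = 1.0  # hypothetical width parameter for the toy wave packet

def psi(x):
    # Normalized 1-D Gaussian wave function (real-valued for simplicity)
    norm = (1.0 / (math.pi * sigma**2)) ** 0.25
    return norm * math.exp(-x**2 / (2 * sigma**2))

# Riemann sum of |psi|^2 over [-10, 10], wide enough that the tails vanish
dx = 0.001
total = sum(abs(psi(-10 + i * dx))**2 * dx for i in range(int(20 / dx)))
print(total)  # close to 1.0
```

The point is only that the squared (not the bare) wave function is what integrates to one; the choice of a Gaussian is incidental.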
Let us suppose that the wave function represents a particle. One often finds a statement to the effect that the above reflects the fact that the particle is certain to be somewhere in space. Under my interpretation, it means that if a "measurement" is performed everywhere in space, one is certain to detect a particle somewhere (since a "measurement" is the mechanism by which a spacetime object emerges out of areatime).
At first glance, the two statements might seem equivalent, but they are not: the first assumes that there is a particle out there, independent of whether you are trying to measure it, whereas the second does not. Prior to a measurement, you still have merely the representation of an areatime object in spacetime, not a particle in space. You can hopefully see that my interpretation comes closest to the Copenhagen interpretation, but unfortunately the Copenhagen interpretation tends to substitute mysticism for genuine gaps in understanding.
Alright, after this basic review, let me now get down to how the Born rule can be interpreted as a reflection of "partially existing objects" ('actualizable' sounds much better to me).
Suppose a quantum state in a particular basis consisted of only two eigenstates. Each of the eigenstates has a coefficient which tells you how much it contributes to the total state. In standard QM, the coefficient has a purely operational interpretation. What I mean is this: the coefficient is ideally determined by running measurements on a large number of identically prepared states and recording the frequency of the two different possible outcomes. Since the calculation of the expectation value for a measurement outcome involves both the wave function and its complex conjugate in a product, the coefficients are the square roots of the relative frequencies. For example, if both outcomes are equally likely, then the coefficients become sqrt(1/2) = 1/sqrt(2). Since, as far as I know, there is in standard QM no "deeper" interpretation of this, the coefficients must be interpreted purely operationally, as mentioned.
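Here is a small sketch of that operational reading (my own toy example; the ensemble is hypothetical): simulate a long run of measurements on identically prepared two-outcome systems where both outcomes are equally likely, then recover the coefficients as square roots of the observed relative frequencies.

```python
import math
import random

random.seed(0)
runs = 100_000
# Hypothetical ensemble: each "measurement" yields outcome 0 or 1 with equal likelihood
outcomes = [random.choice((0, 1)) for _ in range(runs)]

freq0 = outcomes.count(0) / runs
freq1 = outcomes.count(1) / runs

# The coefficients are read off as square roots of the relative frequencies
c0, c1 = math.sqrt(freq0), math.sqrt(freq1)

print(c0, c1)          # both close to 1/sqrt(2), about 0.7071
print(c0**2 + c1**2)   # exactly 1.0, since the frequencies sum to one
```

Nothing here depends on which interpretation you favor; it is just the bookkeeping by which frequencies become coefficients.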
In my framework, the coefficients have an ontological interpretation: they tell you how much each actualizable eigenstate contributes to the total representation of the areatime object in spacetime, and their contribution is a measure of the extent to which the areatime object "fractionally" exists in spacetime in that particular manifestation.
The problem is that when you perform a 'measurement', you cannot detect "fractionally existing" objects, only ones that fully exist in spacetime; hence the ontological status of the eigenstate you detect upon a measurement must change from some value less than 1 to 1. This is consistent with the fact that if you immediately repeat a measurement, you will obtain the same outcome, and it directly connects to the probability interpretation, since under the latter, being certain of obtaining a particular result is equal to a probability of 1.
So let us now examine the bizarre notion that one eigenstate could exist "twice as much" as the second. Well, it just means that the coefficient of the first is sqrt(2/3) and the coefficient of the second is sqrt(1/3). Because both states are associated with some form of existence, in a small number of runs you might measure one or the other in some different proportion, but in the limit in which the number of runs on identically prepared systems goes to infinity, you recover the fractional existence of each state. This is essentially the definition of the (frequentist interpretation of) probability.
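A quick sketch of the "twice as much" claim in frequentist terms (again my own illustration, with made-up run counts): coefficients sqrt(2/3) and sqrt(1/3) mean the squared coefficients 2/3 and 1/3 are recovered as limiting relative frequencies over many runs.

```python
import math
import random

random.seed(1)
c_first, c_second = math.sqrt(2 / 3), math.sqrt(1 / 3)
p_first = c_first**2  # squared coefficient, read as a probability via the Born rule

runs = 200_000
hits_first = sum(1 for _ in range(runs) if random.random() < p_first)

print(hits_first / runs)  # close to 2/3, i.e. twice the other outcome's frequency
```

In a short run the proportion wanders, but as the number of runs grows the 2:1 ratio of the squared coefficients emerges, which is exactly the frequentist limit described above.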
"Conservation of probability" then is really conservation of existence. Unfortunately, existence is not currently considered a physics concept but I strongly believe it needs to be. As you might imagine, this makes the idea all the more difficult to accept. I had written a paper a while back called "Ontology and the Wave Function Collapse" where I hinted at this problem.
Alright, if you have really read my papers and watched my talk, I hope that you will see how this fits in with everything and have a better understanding of what I mean by actual vs. actualizable, but if you have not done so, I doubt that the above will make much sense to you. "
I hope you found this useful,
Armin