Thank you to Dr. Gerard Rinkus for suggesting we open a discussion of their recent essay A Physical Theory Based On Sets, Not Vectors.

Abstract:

This essay questions what is perhaps the most fundamental assumption of quantum theory, which is that states should be represented as vectors in a Hilbert space. As the essay explains, an alternative formalism is possible in which states (and indirectly any higher-level particles observed to be part of the states) are represented as sets, specifically, as extremely sparse sets of fundamental units of far smaller scale than any particles of the Standard Model. This physical theory is carried over from an information-processing theory. The connections between the original information-processing theory, Sparsey, and my proposed physical theory are further elaborated in an earlier essay, The Classical Realization of Quantum Parallelism.

Hi, This work is very intriguing, innovative, and interesting because it is general and new. I am interested in this generality due to the fact that it considers codes and sets. I will ask some questions, because it can be correlated with the 3 ethers and the superfluidity in my model. Congrats,

    An important link between information processing and cognizance. I agree with the set theory foundation (see attachment). If we introduce the identity, time = information, does that not bind states to a time parameter that obviates superposition of states?

    Just thinking out loud.

      Happy to see you again on FQXi. I also agree about his set theory, which is very relevant, and about the time, the information, and the states with the numbers. I was intrigued by this work, which is innovative for me. Regards

      Trying to make the point that if sets are real, i.e., physical, superpositions are not. If superpositions are real, physical sets are not. After reading a little way into the most recent essay, I realized I have to get grounded in the 2015 essay and up. Working my way through. Great reading.

      https://www.researchgate.net/publication/326066949_Measuring_the_Complexity_of_SImplicity

        The good works in logic and interpretations must converge, like the Born rule, many-worlds, QBism, the Copenhagen interpretation, and relational QM. So all good extrapolations converge, but the real big questions are about what the fundamental mathematical and physical objects are, and what the philosophical origin of the universe is. Furthermore, do we come from fields, or from particles in a superfluidity? Must we consider only the GR and photons, and the strings oscillating and vibrating, to explain our topologies and geometries, or must we consider deeper added logic like this dark matter and dark energy, and must we consider particles coded in a superfluidity instead of fields? We don't know, in fact.

        I have remarked that there is something like a crisis inside the theoretical sciences community: many consider only this GR and photons, and then points or strings at this Planck scale, oscillating, connected with a 1D cosmic field of the GR. And with the geometrical algebras like Hopf, Clifford, and Lie, they try to explain, with extra dimensions, the topologies, geometries, and properties of matter. But all this is a philosophical assumption. That is why I like this idea with the sets instead of vectors. That changes things. I ask myself whether I can converge with my spherical geometrical topological algebras that I have invented, with the 3D spheres, 3 ethers, and the non-associativity and non-commutativity for the subgroups. I believe that yes.

        PS: the work of Dr. Rinkus can be applied with relevance to AI, and if free will is considered; I believe it was Ian Durham who made a good work about free will. That can give relevant roads about the microtubules to reach, maybe, I say maybe, consciousness, if my 3 primordial series of 3D spheres are on the right road.

        Hi Tom, I don't understand what it would mean to say time = information. I realize that you and many others here have been thinking a lot about questions like these for a long time, so I probably need to be educated as to what equating time and information would mean, formally and physically.

        That said, let me just say that, for me, time is discrete and the state of what I call a corpuscle is updated in one time step. There are a lot of fundamental operations that constitute that update (in the information-processing version of the theory, this is Sparsey's Code Selection Algorithm, which involves all units computing their input summations, and then a couple more simple operations, all of which have constant time complexity), but in the physical realm, the execution time for all of those operations (most of which occur simultaneously, in parallel) determines the length of that discrete time step. Of course, it doesn't even really make sense to talk about the "length" of that time step. It's just the update rate of the state. I haven't thought at all about how this would relate to time in relativity theory.
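
        (For concreteness, here is a rough sketch in Python, with made-up names, of what I mean by one discrete update; it is only an illustration of the structure, not Sparsey's actual code. The point is that the per-step work is fixed by the field's size, Q CMs times the units per CM, not by how many items have been stored.)

```python
import numpy as np

def one_step(input_vec, W, Q, units_per_cm):
    """One discrete update (toy sketch, assumed names; not Sparsey's actual CSA):
    every unit computes its input summation (conceptually in parallel), then a
    couple of simple per-CM operations pick one winner in each of the Q WTA CMs.
    The work per step depends only on the field size, not on how many items
    have been stored."""
    sums = (input_vec @ W).reshape(Q, units_per_cm)   # all units sum their inputs
    winners = sums.argmax(axis=1)                     # simple per-CM choice
    return {cm * units_per_cm + int(w) for cm, w in enumerate(winners)}
```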

        Tom, Steve,

        1. Tom, I don't understand when you say that if superpositions are real, sets are not, and vice versa. For example, in the info-processing version of this theory, the coding field is just a field of bits (not qubits), organized into Q WTA CMs... so just a block of computer memory. Further, every set that ever becomes active in the field will have exactly Q active bits. But since all the sets that have previously been stored in that field (e.g., during a learning period) can intersect, when any one of those sets is active, say representing state X, ALL other sets (that have been stored) will be partially active in proportion to their intersections with the single fully active set. So we have multiple sets (each one representing a particular state) all simultaneously (though to partial degrees) active in the coding field. So the superposition is real then too. In particular, consider one of those previously stored states, Z, that has an intersection of size one with the current fully active code, X. Even though Z is minimally active, it has a physical mechanism for affecting the next state of the coding field, namely its single active unit will send out signals via its outgoing weight set just as the Q-1 active units do. And when those signals (from that one unit) are included in the input summations of the coding field units on the next step, they will influence the choice of winners in the Q WTA CMs.
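
        (A tiny toy example, with numbers I made up rather than the actual Sparsey implementation, may make this concrete: partial activation read off intersections, and even a single shared unit still has influence.)

```python
Q = 8  # every stored code has exactly Q active units

# Toy stored codes (made-up indices); X is the single fully active code.
stored_codes = {
    "X": {3, 10, 17, 24, 31, 38, 45, 52},
    "Y": {3, 10, 17, 24, 60, 61, 62, 63},   # |X ∩ Y| = 4 -> half active
    "Z": {3, 70, 71, 72, 73, 74, 75, 76},   # |X ∩ Z| = 1 -> minimally active
}
active = stored_codes["X"]

for name, code in stored_codes.items():
    degree = len(code & active) / Q          # fraction of the code's units active
    print(f"{name} is active to degree {degree:.2f}")
# X: 1.00, Y: 0.50, Z: 0.12.  Z's one active unit still sends signals via its
# outgoing weights, so even Z can bias the winner choices in the Q CMs next step.
```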

        2. Steve, I'm also going to need to be brought through your statements much more slowly... you're covering a LOT of ground in them, and again, not being a physicist by training, I'll need to catch up.

        In any case, I'm very happy that you guys are interested in my work and look forward to continued discussion.

        Hi Rod,

        Physical sets. Physical superpositions. Pick one. Either you have an independent mathematical model of a physical phenomenon, or you have a physical representation of a mathematical result. The latter begs the question, for if the physics precedes the mathematics, one can only prove what one has assumed.

        On the other hand, if one has constructed a mathematical model that can be tested against the phenomenon, one can say with certainty what is real, and what is mathematical artifact.

        As it is, when modeling by machine, one runs up against the either/or dilemma: either the results are correlated by physical criteria, or mathematical criteria--not both (as your example shows; real-real). And not as an "If-then" relation; i.e., if the superposition of elements is the mathematical model, the physical result corresponds within a bound. I'm suggesting that bound is a nonlinear time parameter, identical to information availability at a decision point (Herbert Simon, bounded rationality). Lev Goldfarb is one, and perhaps there are more, who share my view that time is information, from which it follows that time is nonlinear.

        This assumption has the advantage of casting time in a causative role, and therefore physical. And it fits, I think, within the context of distributed computing and multi-scale variety (Bar-Yam). In my set-based theory, where prime-valued n-gons represent relative sizes of information nodes, I can imagine an algorithm in which nodes combine according to some rule, forming a new node with different complexity (so "length" of time would have no meaning, as you suggested in an earlier post. Nevertheless it has a field of influence).

        Sparse distributed representation has great potential, I believe, both for brain science and cosmology. Wow. I have to catch up with my reading,

        All best,

        Tom

        I should have noted Lev Goldfarb's web page: http://www.cs.unb.ca/~goldfarb/

        He is a mathematician by training, and I am sure he could contribute substance to this discussion.

        Hello, yes indeed, I don't understand either why time = information. First of all, we don't really know what an information is in its pure meaning, nor what its origin is. Of course we have invented our computers with binary systems, algorithms, and Boolean algebras, but that is our invention; it is not really how this universe acts. The informations of this universe are still beyond our understanding, and the same goes for the fundamental objects and the philosophical origin of the universe: we don't know whether the fields, the GR, and the strings are the key, or whether we must consider particles in a superfluidity, and maybe my spheres. As for time, it is a function of changes and motions, like with the quaternions, where the rotations become relevant, but there too we are limited philosophically and ontologically. Maybe the confusion about this time is due to this GR, and we complicate a simple thing. It is a parameter like the charge, the length, the mass, a scalar; we measure it simply, and evolution seems essential to encircle this time. But I don't see why we must equate it with the informations, like time = information??? If you could develop this idea generally, and with the correlated works, it could be interesting; we could see what you mean exactly, Tom. Regards

        Tom, Thanks for the pointer to your "Measuring the Complexity of Simplicity" paper. I will try to work through it... it will probably take some time, partly because I have to spend most of my time working on my main theory right now... in hopes of finding funding :)

        Anyway, I am not understanding your either/or of physical sets and physical superpositions. I'm wondering if it is because you are using a different definition of superposition than I am... i.e., one somehow more specific to QT... not sure.

        So, I'm not defining superposition just as linearity, as in linear systems, f(x+y) = f(x) + f(y), or at least not where f is scalar-valued or vector-valued (or even where the vector is a vector of functions). Perhaps my use of superposition would reduce to linearity, but where f is set-valued. Not sure.

        When I say that two codes (sparse distributed codes), X and Y, which are both sets of cardinality Q, are simultaneously physically active in a coding field, I just mean that some of X's elements are active and some of Y's elements are active (assuming they have some intersection). The total number of active elements is still Q (that's mandated by my model's rules). So for example if X and Y share Q/2 elements in common, then when X is fully active, Y is half active, and vice versa. X and Y are both simultaneously active, just to different degrees (strengths). So, if one's model has a way to interpret strength of activation (defined that way) as the probability of the represented item, then we have a physical realization of the probability distribution. One can physically make a draw from that distribution, i.e., collapse the superposition (that's what Sparsey's code selection algorithm does).
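
        (Again as a toy sketch of my own, not the actual algorithm: one can read the strengths off the intersections, normalize them into a distribution, and make a single physical draw from it, which is the "collapse.")

```python
import random

Q = 8
stored = {
    "X": {1, 2, 3, 4, 5, 6, 7, 8},
    "Y": {1, 2, 3, 4, 20, 21, 22, 23},   # shares Q/2 units with X
}
active = stored["X"]                     # X is fully active

# Activation strength of each stored code = its intersection with the active code / Q.
strengths = {k: len(v & active) / Q for k, v in stored.items()}   # X: 1.0, Y: 0.5
total = sum(strengths.values())
probs = {k: s / total for k, s in strengths.items()}              # normalize

# A single "draw" from that distribution -- collapsing the superposition.
drawn = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(probs, "-> drew", drawn)
```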

        However, a Sparsey coding field can (and generally does) have a full recurrent matrix, and when signals are sent out from the active code (of Q elements) at T, via that recurrent matrix, and arrive back at the coding field at T+1, a new code (possibly the same code, but in general a new one), also of cardinality Q, will be activated. But just as the code active at T was a distribution over multiple stored codes (here, just X and Y), the code at T+1 is also a distribution over those codes. To emphasize, every particular set of Q elements represents not just a particular item, but also the distribution over all items (specifically, in Sparsey, over all items that have been stored in the coding field during a learning phase).
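
        (A sketch of how I picture that recurrent loop, again with assumed structure and toy numbers rather than Sparsey's actual code: the code active at T drives, via the recurrent matrix, the code activated at T+1, and each new code can again be read as a distribution over the stored codes via its intersections.)

```python
import numpy as np

rng = np.random.default_rng(0)
Q, units_per_cm = 4, 5
n = Q * units_per_cm
W = rng.random((n, n))                                # toy recurrent weight matrix
stored = {"X": {0, 5, 10, 15}, "Y": {0, 5, 11, 16}}   # toy stored codes

code = stored["X"]                                    # fully active code at T
for t in range(3):
    signals = np.zeros(n)
    signals[list(code)] = 1.0                         # the Q active units send signals
    sums = (W @ signals).reshape(Q, units_per_cm)     # summations arriving at T+1
    code = {cm * units_per_cm + int(w) for cm, w in enumerate(sums.argmax(axis=1))}
    overlaps = {k: len(v & code) / Q for k, v in stored.items()}
    print(f"T+{t+1}: code={sorted(code)}, distribution over stored codes={overlaps}")
```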

        You might say, but why should we believe that each new distribution that becomes active is in any sense a good one, or a correct one, or a reasonable one given the statistics of the input space from which the items (that were stored during the learning phase) were drawn? The answer is that if, when you store each new item, you statistically preserve similarity, i.e., simply cause more similar input items to be mapped to more highly intersecting codes, then you create (embed) the appropriate intersection structure over the stored codes, so that, during a recognition test phase, or just while "free-running", after learning, the distributions that arise will respect the statistics of the input space. Moreover, because not just pairwise intersections, but intersections of intersections, etc., are also physically active whenever one code is fully active, the intersection structure reflects not just the pairwise statistics, but in principle, the statistics of all orders present in the input set.
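
        (A toy check of that property, my own construction rather than the actual learning rule: if the learning phase has mapped more similar inputs to more highly intersecting codes, then code intersection size can be read directly as a proxy for input similarity.)

```python
import numpy as np

Q = 6
inputs = {                                       # toy binary input patterns
    "A":  np.array([1, 1, 1, 1, 0, 0, 0, 0]),
    "A2": np.array([1, 1, 1, 0, 1, 0, 0, 0]),    # similar to A
    "B":  np.array([0, 0, 0, 0, 1, 1, 1, 1]),    # dissimilar to A
}
codes = {                                        # codes assumed assigned so that
    "A":  {0, 10, 20, 30, 40, 50},               # intersection size tracks similarity
    "A2": {0, 10, 20, 30, 40, 51},
    "B":  {1, 11, 21, 31, 41, 51},
}

for a, b in [("A", "A2"), ("A", "B")]:
    sim = (inputs[a] & inputs[b]).sum() / (inputs[a] | inputs[b]).sum()  # Jaccard similarity
    inter = len(codes[a] & codes[b]) / Q
    print(f"{a} vs {b}: input similarity={sim:.2f}, code intersection={inter:.2f}")
```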

        Also, I want to emphasize that all the learning algorithm does is statistically ensure that the *size* of intersections correlates with input similarity. It does not choose *which* elements will be in intersections. However, merely by imposing that *size* correlation, over time, subsets (of various sizes less than Q) will emerge as representations of various higher-order statistics over the inputs. More precisely, the code assigned to the *first* input item stored will be chosen completely randomly (i.e., one unit chosen uniformly at random in each WTA CM). But thereafter, the choice of winner (in each CM) will be biased by the learning that occurred for each prior item stored: these biases accumulate in the patterns of increased input weights to the coding field units.
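
        (And a minimal sketch of that winner-selection process as I've described it, with assumed form and made-up parameters rather than Sparsey's actual CSA: the first code is chosen uniformly at random, one unit per CM, and later choices are biased by the input weights increased during earlier learning.)

```python
import numpy as np

rng = np.random.default_rng(0)
Q, units_per_cm, n_inputs = 4, 5, 8
W = np.zeros((n_inputs, Q * units_per_cm))        # input weights, initially zero

def assign_code(x):
    sums = (x @ W).reshape(Q, units_per_cm)       # per-unit input summations
    winners = []
    for cm in range(Q):
        if sums[cm].sum() == 0:                   # no learned bias yet:
            w = int(rng.integers(units_per_cm))   #   choose uniformly at random
        else:                                     # otherwise bias the choice
            w = int(sums[cm].argmax())            #   by the accumulated weights
        winners.append(cm * units_per_cm + w)
    for u in winners:                             # Hebbian-style weight increase
        W[x.astype(bool), u] += 1.0
    return set(winners)

x1 = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
x2 = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=float)   # similar to x1
print(assign_code(x1), assign_code(x2))  # similar inputs get highly intersecting codes
```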

        Does that clarify my meaning at all? I'm not sure if I'm really getting your point. But I am interested that you have a set-based theory and will try to grok it. And thanks for the pointer to Lev Goldfarb. Of course, I've heard of him, but I don't know his work.

        Ah, yes, Rod, I was taking it as linear superposition. If it reduces to linear superposition, I think I would have the same problem--because working on the assumption that time and information are identical, elements are nonlinear and remain so. If the function is set-valued, sets interacting nonlinearly produce more variety than linear sets which compel collapse of the wave function. That's why I say one has to choose between physical sets and physical superpositions--between mathematics and physics.

        I don't think the added assumption of time = information harms your method; indeed, I think that it closes a judgment on the space in which you are working, i.e., makes it clear that you are talking about a physical spacetime. It would be a breakthrough result to prove that spacetime is real, physical, as Einstein claimed.

        You can't do that with a collapsing model. Yes, I know that Everett's many-worlds is also non-collapsing, but you don't have to subscribe to that theory. Maybe it applies to many self-similar worlds, too. Many differentiable scales.

        I like to think that my research has modified Einstein's belief that "time is an illusion" to "linear time is an illusion." This is in sync with your complex-systems model of many interacting and intersecting sets. Why would you want to normalize it? You're actually killing your result, because you don't have confidence that spacetime is a real thing. You fall back on old, comfortable methods, while you're pushing a radical new computing framework.

        Just my opinion, of course.

        And I'm not criticizing Sparsey. I just want to suggest another, er, dimension. It is my opinion that if you are going to have a set-based theory, you'll have to give up probabilistic measure criteria. It will actually simplify the model.

        I could change my mind as I learn more. I do think you're on to something.

        All best,

        Tom

        Are you conscious, Tom, that spacetime is probably not just a photonic one? Are you conscious that if we cannot solve the cosmological constant problem, or quantum gravitation with this general relativity, there are reasons? Why the photons as the primordial essence of this universe? The confusions and crisis inside the theoretical sciences community come from this: the fact of considering only this GR and the photons and the correlated spacetime. Einstein just improved our knowledge about the photons and the observations with his EFE, but why is it taken as the only truth? Why photons as the primordial essence of this physicality?

        And since when is the GR the cause of our standard model too? It is not because we have the EFE and the GR, and this standard model with bosonic fields, that we must conclude that the standard model comes from this GR. The space vacuum of this DE seems to be the answer for the main codes, and if this vacuum encodes the cold dark matter and the photons to create the ordinary matter, it seems more logical.

        We have, in logic, 3 spacetimes: the one that we observe, the photonic one with the GR, and the two others also, the DE and the DM. They interact, but we still don't know how, cosmologically speaking; and in the standard model also they are important, in logic.

        The big problem is the unification of the standard model and the GR; we see quickly that we have problems. The macro and micro scales cannot converge correctly, and the real cause, for me, is taking the fields as the main cause instead of particles. The fields are emergent, due to a kind of merging between particles. The GR, the EFE, and the fact that we have the bosonic fields have created a confusion about the philosophical origin. See that if you take the Dirac sea and consider my idea of the space vacuum possessing the main codes, with series of spheres for the DE and a negative system balancing the whole, that solves it by encoding the photons and the cold dark matter.