Austin,
My apologies! Because I had a couple of earlier threads from you, I kept losing this post even after reading it briefly back on Monday. I kept thinking I was missing something, then failing to find it in the earlier threads. You raise lots of interesting points, so I'll try to go through a few:
----------
>> ... If a standard model elementary particle has to be brought to a [point-approximating state in xyz space] at a measurement, then if there are preons within the particle, wouldn't all the individual preons need also to be brought to points at the same time? ...
Good question!
Interestingly, the answer is a definite and well-defined no.
The reason is the natural hierarchy of size and energy scales in matter. Think for example of both atoms (bundles of nuclei and electrons) and nucleons (bundles of quarks). You can very narrowly localize an atom using nothing more than phonons, the quasiparticles of sound and heat, since at that scale these carry pretty impressive momentum kicks, comparable to X-rays but with far less kinetic energy (and thus far less destructive). But to see inside the atom, to force its electron clouds into more point-like wave functions, requires dramatically more energy and momentum. The same is true for nucleons such as neutrons. Even at nuclear power levels, the energies needed to resolve (collapse) the quarks, even (and only!) very briefly, into more point-like entities that show classical motion are enormously higher than anything the neutron typically encounters.
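Just to put rough numbers on that phonon-versus-photon point, here's a back-of-envelope sketch (my own numbers, nothing rigorous; it assumes a linear acoustic dispersion E = v_s * p and a typical solid's sound speed of about 5 km/s, which is only a crude approximation at momenta this large):

```python
# Toy comparison: energy carried by an acoustic phonon vs. an X-ray photon
# at the SAME momentum. Assumes E = v_s * p for the phonon (linear acoustic
# dispersion, a rough approximation near the zone boundary).

H = 6.626e-34        # Planck constant, J*s
C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per electron-volt
V_SOUND = 5.0e3      # assumed typical sound speed in a solid, m/s

wavelength = 1.0e-10                 # 1 angstrom, the atomic scale
p = H / wavelength                   # momentum of either quantum, kg*m/s

photon_energy = p * C                # E = p*c for a photon
phonon_energy = p * V_SOUND          # E = p*v_s for an acoustic phonon

print(f"momentum kick:   {p:.3e} kg*m/s (same for both)")
print(f"X-ray photon:    {photon_energy / EV / 1e3:.1f} keV")
print(f"acoustic phonon: {phonon_energy / EV:.2f} eV")
print(f"energy ratio:    {V_SOUND / C:.1e} (v_sound / c)")
```

Same momentum kick, roughly five orders of magnitude less energy, which is why the phonon can pin down the atom's location without wrecking its insides.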
Overall, this available energy resolution barrier (I'm inventing a term, I don't know if one exists) is what keeps the entire universe persistently quantum (for the most part) at its lowest levels of detail. And that's a good thing, too, since otherwise both volume and chemistry would disappear and the universe would be nothing but multi-scale black holes!
This is also the point at which my perspective on how the universe works at its deepest levels has flipped literally 180 degrees over the past couple of years. For most of my life I believed as devoutly as most folks in the concept of a positive-image quantum universe, the idea that every quantum wave function is an infinitely detailed superposition of every possible configuration that could exist. How could I not? Feynman's QED in particular takes exactly this approach, and is one of the most precisely predictive algorithmic approaches ever devised in physics, nailing all sorts of experimental results spot on! So obviously the universe must be positive image, with quantum wave functions being incandescently hot, broad-spectrum collages of every possible history available to them.
But I am at heart an algorithmist, and from very early on I've known that the most obvious representation of a problem is almost never the best representation of it, either logically or computationally. And that left me with a nagging hole (heh!) regarding exactly the kind of question you just asked: If the universe never shows more detail than exactly the level of resolution you put into it -- if atoms never become electrons and protons until you rip them apart with information-toting momentum packages (X-rays) whose wavelengths always define the new level of resolution they make available -- then why do we persist in saying that those details even exist before the act of adding sufficient energy and momentum makes them real?
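To put hypothetical numbers on that wavelength-equals-resolution point (again my own back-of-envelope, using E ~ hc/lambda for a photon probe; the 10^-18 m "substructure" scale is purely an assumed placeholder, not anything from preon theory):

```python
# Back-of-envelope: probe energy needed to resolve structure at scale
# lambda goes as E ~ h*c / lambda. Each factor of 1e5 in resolution costs
# a factor of 1e5 in energy -- the "resolution barrier" in action.

H_C = 1.23984e-6  # h*c in eV*m

scales = {
    "atom (1 angstrom)":       1.0e-10,
    "nucleon (1 fm)":          1.0e-15,
    "substructure? (assumed)": 1.0e-18,  # purely illustrative scale
}

for name, lam in scales.items():
    print(f"{name:24s} -> probe energy ~ {H_C / lam:.3g} eV")
```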
The result, of course, is the negative-image universe: A universe full of dark holes, spots that we call quantum, that in effect say nothing more than "land available, build to suit!" to any phonon or photon or W particle or whatever that comes by and offers enough energy-momentum cash to make the new construction happen.
It is so much simpler! And, from an algorithmic perspective, almost unimaginably more efficient, at least conceptually. Instead of a quantum function being an incandescent infinity of infinitesimal pure states -- multiplying infinities is never a good thing algorithmically -- you just get an empty spot with a number of unforgiving constraints (its selection and superselection rules), on top of which the added energy becomes responsible for adding all of the needed details. And if you pay very close attention to pairwise entanglement, even that mystery of "where does all of that new wave collapse detail come from, then?" has an unexpected resolution: It comes from the other end of the energy particle, e.g. the almost infinitely complex thermal-matter shattering of the still-entangled momentum of a photon from when it was launched by, say, a hot tungsten wire. It's not the quantum world that provides the almost perfect randomness of the wave function collapse, it's the thermally hot classical world with which it is entangled, the web of selection rules that must all be satisfied, and that must all have entangled roots in some classical environment. Nothing else is possible, since without the classical context, the quantum wave function has no history from which it can be created.
Algorithmically, a negative-image universe with dark wave functions is not only hugely more efficient, it literally just makes more sense: the entities that seem uncertain, the quantum bits, are that way because they do not exist yet, not because they have uncounted infinities of things jammed into them. Once you start thinking this way, trust me, it very quickly gets addictive, because it simply models better what we actually see: Lack of definition at the bottom, due to nothing more complicated than energy starvation. It's not much different from looking at your smartphone screen with a microscope and realizing that eventually, every image has to run out of detail. Our universe just does it in a much craftier and always oh-so-deceptively-smooth multi-scale fashion.
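If it helps, here is a deliberately cartoonish sketch in code (purely my own toy analogy, not a physics model) of the "build to suit" idea: the region stores only its constraints, detail appears only when a probe arrives with enough energy, and the apparent randomness is seeded entirely by the probe's classical history:

```python
# Toy analogy only: a "dark" region that stores no detail until probed
# with enough energy; the outcome's randomness comes from the probe's
# classical source, not from the region itself.

import random

class DarkRegion:
    """Stores only constraints (the 'selection rules'), never detail."""
    def __init__(self, allowed_outcomes):
        self.allowed = allowed_outcomes   # the unforgiving constraints
        self.detail = None                # nothing here yet

    def probe(self, energy, threshold, source_entropy):
        if energy < threshold:
            return None                   # energy starvation: stays "quantum"
        # Detail appears only now, drawn from the probe's classical history.
        rng = random.Random(source_entropy)
        self.detail = rng.choice(self.allowed)
        return self.detail

region = DarkRegion(allowed_outcomes=["spin up", "spin down"])
print(region.probe(energy=0.1, threshold=1.0, source_entropy=42))  # None
print(region.probe(energy=5.0, threshold=1.0, source_entropy=42))  # resolved
```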
----------
>> ... I then noted that Bose Einstein Condensates can exist as multiple structures (collections of bosons) in a single state. This is just Penrose's Cyclic Conformal Cosmology where a single state/point for the universe can be allowed if all content is in bosonic form. This alleviates any need for all hypothetical preons to likewise be brought simultaneously to point(s?) themselves ...
Heh! You can tell I did not read this paragraph before replying to the previous one, since I spent all of my time there talking about how internal particles of any type do not also collapse! (And BTW, since T and V are fermions, I always assumed most preon theories used fermions, not bosons. But I've not looked carefully at the area, since the orthogonal Glashow cube vectors explain preon-like behavior (TV-like behavior, at least) without invoking actual particles. It just works out a lot more cleanly.)
----------
>> ... On the other hand in CCC the universe is not brought to nullity but merely to a single state at a node where it then recycles back to enormity ...
The cyclic universe model is delightful and one of Penrose's (many!) deeply intriguing speculations, and I read this as heading towards the idea of everything turning into a Bose condensate at the preon level to move between cycles. But the astrophysics data just is not heading in that direction... more the other way around, with accelerating expansion. More than the theory itself, you might want to consider that observational aspect of CCC in the current context.
And just to make sure I understand: Are you sure about using bosonic preons? Two fermions make a nice boson in e.g. rishons, but if you don't start with fermions, there is just no obvious way to get them back: combining integer-spin bosons only ever yields integer total spin, never the half-integer spin a fermion requires.
----------
>> ... I have struggled over your dark voids .... but I do think that almost anything is better than many worlds. ...
Well, I've struggled over the idea too. It certainly was not where I started. I agree about MWI; see my ramblings above for why.
I have an event coming up, so I'll leave it at that. I can see you have put a lot of thought into the CCC boson transition idea, even including how it might relate to dark wave functions. Interesting.
Good luck with your essay!
Cheers,
Terry