Austin,

Thanks, you have raised lots of interesting points. I've been short on time, but before midnight (in 5 minutes? oops) let me mention a couple of items quickly:

-- Alas, I cannot provide any specific references for the dark function (or negative image) interpretation of quantum mechanics because... well... this essay is where I invented it.

I'm not familiar with any papers or prior work that proposes quite this idea... or even remotely this idea? It's, um, a bit radical, I guess, though it surely does not feel that way to me, since it fits a lot of stuff together nicely once you go the dark route. No more infinite sums of infinitesimals, just a large but always finite set of superselection rules.

The closest thing to an available resource is decoherence theory, especially (and oddly) some of the earlier work there, when folks seemed to be a bit more open to saying radical things about the wave function being "defined" by its environment. But taking it to the extreme I did -- the literal inverse of MWI, total darkness where once lived an infinity of other states and universes -- well, it at least seems to be a new interpretation. That said, there's always something related buried in the deep literature. I've just not found anything yet.

I have brought up essentially the same idea over the past year on Backreaction, using different terminology such as a bit-first view. However, I've quickly become fond of "dark functions" because this phrase vividly describes exactly what the intent is: The absence of state within a region of space, constrained not by anything within it (there is nothing!), but by the full set of superselection rules in both spacetime and the immediate environment.

So, bottom line: For good or bad, the first and so far only reference on dark functions is in this essay.

-- For the dual universe part, it's interesting that you mention yin-yang, because I think of it so differently that I didn't recognize what you meant the first time I read your sentence. The reason is, ironically, exactly the issue you mentioned: the absolutely critical importance of observer location.

No matter which universe you are in, the other universe will be the negative one to you, and exactly so, not "sort of" as with antimatter. So it's not black vs white, but purely where you are located.

This is something I didn't get into (um, I don't think? :) in the essay, but if both space and time are emergent forms of entanglement -- by which I mean a simpler, more direct, and coarser version than the complicated and insanely over-detailed Planck-scale holographic version of space as entanglement -- then this entanglement quite literally acts as a net that drags you with it as you move along with the time of your universe. It drags you because you are a classical, information-defined entity -- that's what information is at a deeper level, part of this network. That's also why dark functions work so well for issues of context: With dark functions, all of physics is context, in particular all of classical physics.

This spacetime dragging effect also provides another delightfully simple way to interpret quantum physics: It's physics for which time has not yet been defined, for which, due to constraints or to designed safeguards against it, the network of classical physics has not yet "snared" the event and forced it to become classical, informational, a part of history. Wave collapse becomes snagging those who've been dodging, so to speak. It's not easy to hide, either.

Another bit that is fun with this view is that the original process of pair generation never ends. That is, other universe pairs are in effect trying to start all the time within ours, but cannot get very far before they too are captured by the entropic net of entangled information (the Boltzmann fabric as I like to call it) that is the evolving spacetime of our universe.

Ongoing pair production even of space and time also provides another way of explaining why things get ratty and weird at the quantum level. Since even (actually, especially) the direction of time is determined by the entangled, number-conserving consensus of the Boltzmann fabric, at small enough scales even that direction starts getting ratty around the edges, at least until time is forced back into order by an encounter with causal history in the form of the Boltzmann fabric, the information that is classicality.

(Such virtual pair breakout attempts are not necessarily completely random, since they too are constrained. For example, want to know why protons and neutrons are small, and the color force is spatially constrained? Psst, a little secret: It involves multiple axes of time competing with each other to try (and fail) to get to be "real" times. One result is the 3-space of strong and electric charge displacements that we call the color force. But only one causal time can emerge into the broader universe from any such competition, just as only the electric force can emerge from baryons into unlimited xyz space. The txyz fabric is a very unforgiving taskmaster, forcing the little temporal rebellions constantly taking place within nucleons to be both crystallized into a precise 3-space of color charge displacements, and severely limited in xyz size so that quantum uncertainty can wash out any attempts to create truly alternative causal timelines.)

Argh, did I say quick?? It's almost 1am. Later!! Sometime this week...

Cheers,

Terry

Yutaka,

Thank you, I will take a look at your essay.

While I didn't get into it in this essay, I have a very specific interpretation of black holes that is based on recent papers by 't Hooft. His view is that particles become momentum waves that travel to the antipodal side of the black hole, rather than into a singularity at the center or to some other universe.

I just rephrase that idea a bit by saying that the particles enter momentum space. This is equivalent to saying that a black hole behaves like a spherical mirror, only one in which it's not just the electrons that are delocalized over the entire surface, but all particles. No information is lost. There is just a switch from mostly-in-xyz to mostly-in-momentum-space, which scrambles everything but does not destroy anything.

Ironically, given my independent use of "dark functions" in this essay, I've more than once called this my "dark mirrors" interpretation of black holes. I need to work on finding some cheerier adjectives!

Cheers,

Terry

4 days later

Dear Prof Terry Bollinger

Your starting words are very good. You are proposing another model of the Universe. For a change, see some other model, the Dynamic Universe Model, as described in my essay "A properly deciding, Computing and Predicting new theory's Philosophy"

It is an N-body problem solution which solves many present-day unsolved problems and gives many predictions that came true. I developed it with the help of Maa Vak about 40 years back. You can see my blog

" https://vaksdynamicuniversemodel.blogspot.com/ "

Best wishes to your essay

=snp

Dear snp,

Thank you for your interest. To be honest, I keep hoping no one notices my very last-minute and not properly edited essay, though I'm very fine with the ideas in it. They just need to go into a real paper somewhere.

I will take a look at your essay today.

Cheers,

Terry

    Dear Prof Terry Bollinger ,

    Why do you think like that? All good work will be noticed; some just takes more time.................

    Best wishes to your essay

    I am waiting to see your learned comments on my essay...

    Best

    =snp

    Hi Terry. Nice explanation there on decoherence, I love your expansion into quantum computing. Rated you well. I have something on cognitive bias here: https://fqxi.org/community/forum/topic/3525. Hope you see my take on how QM arises in the simple diagrams. Thanks to you, all the best in the essay contest.

      Dear Terry,

      Thank you so much for answering my comment.

      I am not familiar with 't Hooft's recent papers. On the new definition of the black hole, what do you think about the Hawking and Ellis textbook? Should we rewrite this book on the conceptual issue of the black hole as the "dark mirror"?

      Best wishes,

      Yutaka

      Terry

      I have been idly thinking about some of your ideas over the last week.

      Multi-level structures interfering with a return to nullity ...

      I did have this thought years ago about my preon model. If a standard model elementary particle has to be brought to a point (or a single state) at a measurement, then if there are preons within the particle, wouldn't all the individual preons also need to be brought to points at the same time?

      I then noted that Bose-Einstein Condensates can exist as multiple structures (collections of bosons) in a single state. This is just Penrose's Conformal Cyclic Cosmology, where a single state/point for the universe can be allowed if all content is in bosonic form. This alleviates any need for all hypothetical preons to likewise be brought simultaneously to point(s?) themselves.

      On the other hand in CCC the universe is not brought to nullity but merely to a single state at a node where it then recycles back to enormity.

      I have struggled over your dark voids .... but I do think that almost anything is better than many worlds. (Although to me every elementary particle is like a universe full of content, and that is enough worlds for me.) In my model, the dimensions of the universe are built into the elementary particles (and even more fundamentally into the preons). So a boson has dimensions inside it even when it has not been brought to a point. So for example the Higgs field contains dimensions, and not just the Higgs boson. This means that for me, every time a new universe is made or recycled, the dimensions are pre-existing, so they are the same dimensions in all structures. And like you, as you know from our discussions elsewhere, I think that there are colour dimensions over and above the normal four.

      If one thinks of a dimension without matter as being a void, then I can give an opinion about that. In a previous, pre-physics existence I often used dimensions; for instance, see my paper at https://vixra.org/pdf/1609.0329v1.pdf . The final line of Table 3 in that paper shows a metric being unable to be formed in certain circumstances: when the data points are too far apart for communication to hold. This is IMO very like the metric of the universe falling apart in the final stage of a cycle in Penrose's CCC model, when fermions are very few and far spaced compared to speed-c communication. I think that for most people this is the least convincing part of the CCC model, but for me it is very convincing. But does the dimension cease to exist when the metric fails? Is a metric more fundamental than the dimension (void), or vice versa?

      So at a node in the CCC, everything in the universe is bosons spread out to almost infinite range with no metric, and suddenly there is a fermion and a new metric of a very small-in-spatial-range universe. But did the old far-extended dimension really vanish? Can that dimension/void continue without its metric? Did all the bosons re-locate (in field format) to the vicinity of the new metric? Or did bosons stay put in a metric-less extended dimension/void? Is the new metric expanding (due to the exclusion principle) within itself somehow, in the usual picture of the expanding universe? Or are any bosons outside the expansion, or providing stuff to feed the expansion within it?

      There is no energy in the extended void at the end of a CCC cycle. But in my preon model the entire contents of the universe are there, in an aggregation of a finite number of preons. So in that preon model that void is not truly empty. Here is where I differ significantly from the standard idea. I think that energy is necessary but not sufficient for particle pair creation. It is of primary importance to have the preons available and secondary to have the energy necessary to force the interaction/measurement/event. So some of what I think you called failed (?) creations might have been formed on an unequal playing field.

      Also, in my preon model, the colour dimensions are part of the structure of the preons. So at the start of the universe/CCC node the dimensions/voids pre-exist within the preons contained within the BEC of a big universe-worth of bosons in a single state. So I do not think new dimensions/voids can be created which are not pre-existing within particles. Likewise any new dimensions within a big bang must be the same as those outside the BB. Although there is the idea that spatial dimensions can become temporal and vice versa wrt a BB, which therefore IMO means that spatial and temporal are qualities associated with the metric and not of the void-like metric-less properties of a dimension/void.

      No time left. I never wrote about whether a field is in the void but a particle is in the metric. Where does a particle go when it is in the form of a field ....?

      Just ignore (or void it!) this if it is too much on a Monday morning.

        Dear snp,

        I enjoyed reading your essay, but I also have a self-imposed rule that when an essay makes non-trivial mention of a specific deity of any religious tradition, I thereafter avoid making specific comments on its contents. I believe we have an important duty as fellow humans to respect each other's religious traditions, so I do not feel comfortable critiquing an essay that includes such beliefs. I will note, however, that I found looking up वाच् fascinating. I gather she plays a somewhat similar role in Vedic traditions to the creative aspects of the Spirit of God in Judaic traditions. There are some fascinating and ancient histories there! I wish you well on your essay.

        Cheers,

        Terry

        Dear Yutaka,

        I found both your 2013 essay and this 2020 essay interesting, and also your remarks above about black holes. 't Hooft's papers can be found pretty easily by looking up his name on either Google Scholar or directly at arXiv. Your perspective had for me some unanticipated common themes with my essay, so I ended up leaving longer (maybe too long!) comments on your essay threads.

        Good luck on your essay!

        Cheers,

        Terry

        Hi Michael,

        Thank you for your kind comments. To be honest, I submitted it so late and with so little review (one of my old hats was senior technical magazine editor) that I've been afraid to read it myself!

        I had fun reading your essay, which was certainly broad ranging! In cases like this where I can tell my poor brain is just not following the thread, even after a couple of reads of some sections, I just enjoy the read and avoid making critical assessments or valuations. So thanks again for letting me know about your essay, and I wish you the best on the FQXi contest this year.

        Cheers,

        Terry

        Dear Prof Terry Bollinger,

        Thank you for reading my essay. The name of the Goddess is Vak; pronounce the "k" as in coffee or cow, not "ch" as in chess. Maa Vak is the Hindu Goddess of knowledge and education.....

        People asked me, "Why are you entering the Physics field? What degrees have you got, what knowledge have you got??? We are sufficient to study physics."

        I told them about Maa Vak; I had to. That is true for many people: for them accuracy and truthfulness are not important, and the results you show are not important. For 40 years I faced lots of humiliation. No recognition, no back-patting, nothing. No help was available except Maa Vak, so what shall I do? They don't even give me a seat in a PhD program. For them, power and position are important. I know many people in our community are believers or followers. One example is Prof. Stephen Hawking; when he met the Pope, the Pope appreciated him.

        It's OK. Maa gave me mental support, reduced my pain, answered my questions through my consciousness.......

        I am just nobody. This difference with you can't be solved by me..........

        Hope you will rate my essay ignoring the difference. I will reciprocate ....

        Best Regards

        =snp

        Dear snp,

        Since you are OK with me assessing your essay purely in terms of its scientific and theoretical content, I'll go ahead and make some comments on that part of it. Because I have been an editor for a technical magazine, I have ethical considerations about how folks should do FQXi mutual reviews; here are some guidelines I posted three years ago for FQXi reviews.

        (1) Overall, I liked the various assertions you made about the scientific method in the first and larger part of your essay, though I was a bit baffled about why you do not like imaginary and complex numbers. Complex numbers are both very self-consistent and extraordinarily useful for applications such as expressing 2-dimensional angles and vectors.
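        To give one standard illustration of that usefulness (just the textbook identity, not something from either essay): multiplying a complex number by $e^{i\theta}$ rotates the corresponding 2-dimensional vector by the angle $\theta$:

        $$e^{i\theta}\,(x + iy) = (x\cos\theta - y\sin\theta) + i\,(x\sin\theta + y\cos\theta),$$

        which is exactly the usual 2-D rotation, captured in a single multiplication.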

        (2) Your second shorter section was on your Dynamic Universe Model that uses "21000 linear [tensor] equations ... in an Excel sheet". Computer modeling is of course a great way to explore phenomena that change too slowly for direct observation, and spreadsheets provide a more powerful programming language than I think a lot of folks realize. So there's nothing wrong with using such a model per se.

        However, you also mention features of your model such as "Independent x,y,z coordinate axes and Time axis [exhibit] no interdependencies between axes". That is a problem, because it contradicts the extraordinary amount of not just evidence for but application of special relativity, including for example in GPS systems. A computer model can only be predictive of the real universe if the initial assumptions built into it have been verified experimentally. Otherwise, you just have a model that may give interesting results, but those results will have no correlation to or predictive power about the real universe. Not having special relativity, for example, immediately isolates the model from making predictions that have much to do with the real universe.
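        As a rough back-of-the-envelope illustration of why that interdependence matters (approximate numbers, purely for scale): a GPS satellite moves at roughly $v \approx 3.9$ km/s, so special relativity alone predicts its clock rate differs from a ground clock by about

        $$\frac{\Delta t}{t} \approx \frac{v^2}{2c^2} \approx \frac{(3.9\times10^{3}\ \mathrm{m/s})^2}{2\,(3\times10^{8}\ \mathrm{m/s})^2} \approx 8\times10^{-11},$$

        or roughly 7 microseconds per day, which left uncorrected translates into kilometers of position error per day. A model with fully independent x, y, z, and t axes cannot reproduce even that much.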

        So, if I rated your essay, following my own ethical guidelines of not caring one whit whether or how you might rate mine -- the incentive to care is a very unfortunate feature of the FQXi community review model -- I would give you a 3. The credits would be for the good assertions about science, the debits for giving a model that I'm sure has lots of good work in it, but which does not adequately attempt to model actual, well-validated outcomes of real experiments. Making strong assertions about the real universe based on the computational results of such a model is a big debit.

        At the same time I would rate your efforts much higher than almost half a century of extremely costly work on superstring theory, which was quite recently (March 2020) experimentally shown to be flatly incorrect by a HAWC Consortium paper on high energy gamma implications. You, at least, have a working model of the universe! They have nothing executable after that half century and likely hundreds of millions of dollars total of direct and indirect costs, not to mention innumerable research careers wasted on papers that discuss experimentally disproven formalisms that cannot be run on a computer and cannot predict anything about the actual universe.

        I will not actually enter the 3, in part because I don't think it's fair to downgrade your significant efforts at creating a very real, predictive computer model, even if flawed, when so much money and time has been wasted for decades on the supposedly more "mainline physics" discipline of superstrings. At least you took the time and effort to create a real model capable of making real predictions! That never happened with superstrings, which from the start chose to explore only topics they (incorrectly, as it turns out) assumed would be safe because they could never be disproven.

        -----

        You are free to grade me as you see fit, although I would again encourage you first to read my guidelines on FQXi review ethics. Don't hesitate to give a low grade if you truly feel that is what I deserve! I would much, much prefer to get an honest low grade than any kind of grade that felt like a "favor".

        The other factor you might want to consider regarding FQXi mutual ratings is that, at least three years ago, they seemed to matter very little in terms of actual selection of winners.

        I recall that I was quite disappointed when the essays that I and many others thought were the most innovative, insightful, well-written, and science-focused -- essays that scored well in reviews like this (I was not in this group) -- nonetheless ended up getting at best a few lower-level awards.

        Meanwhile, authors who other essayists had not noticed much during the internal reviews somehow ended up not just winning the big prizes, but getting heaps of praise for their dedicated repetition of themes that were far more traditional and predictable, and who in at least some cases had been previously supported by the same groups that fund FQXi. An unfortunate appearance of conflict, that, although it was surely unintentional.

        Cheers,

        Terry

        Dear Terry,

        I never expected that you would reply because of our differences. My work is a pure scientific work, not a devotional work, as you correctly stated. I read the wonderful guidelines you posted on FQXi three years back; I am just following them.

        Thank you very much for your very long observation and careful study of my essay. Since it relates to my essay, I will post it there, and I will reply to the many technical points there, so that others will also read them.

        That study is enough for me; I will give you 10, the best. After all, yours is a wonderful essay from your experience!!!

        After 40 years of long work without ANY recognition, I have now lost interest in whether someone gives a 3 or a 1. For me, no problems........

        I hope someone will recognize my good work after my death. If nobody does recognize it, also no problem; I will never be knowing it, is it not??? :)

        Best Regards

        =snp

        Dear Terry

        I made 4 posts to reply to your post; I made them smaller to improve readability. This is the last post there.

        ...............................

        4.

        ................. Your words................

        The other factor you might want to consider regarding FQXi mutual ratings is that, at least three years ago, they seemed to matter very little in terms of actual selection of winners....................

        Let FQXi take their own decisions, I am not worried

        ................. Your words................

        I recall that I was quite disappointed when the essays that I and many others thought were the most innovative, insightful, well-written, and science-focused -- essays that scored well in reviews like this (I was not in this group) -- nonetheless ended up getting at best a few lower-level awards..........

        Yes correctly said.........

        ................. Your words................

        Meanwhile, authors who other essayists had not noticed much during the internal reviews somehow ended up not just winning the big prizes, but getting heaps of praise for their dedicated repetition of themes that were far more traditional and predictable, and whom in at least some cases had been previously supported by the same groups that fund FQXi. An unfortunate appearance of conflict, that, although it was surely unintentional.

        I don't worry; I could meet some people like you through this contest.

        I hope you will publish some papers of mine, such as on GRBs using the Dynamic Universe Model....

        Best

        =snp

        Dear snp,

        Thank you for such kind words after my providing a fairly tough review! You are a good person, and I too am delighted to be meeting folks like you and others here.

        I like your point that the greatest value of FQXi is the interaction, not the prizes. If someone gets an FQXi prize... well celebrate! Great gravy train in the morning, why should you not? But if you don't get an FQXi prize after putting in so much work... well, meh, is it really that big of a deal?

        While FQXi admirably attempts to probe a bit deeper than many groups, it is by its very nature also very deeply intertwined with the "standard" perspectives of physics, which as I noted shows up in some of its prize decisions. And that affects how seriously individuals should take its assessments.

        We are speaking here of a broader research community that for the past half century has been betting the majority of its theoretical money and researcher careers (whether the researchers wanted it or not) on the idea of Planck-scale superstrings. All of that work has now been soundly shown to be irrelevant by the superb experimental data from the HAWC Consortium, which showed that tiny Planck-scale superstrings -- which from the very first paper were an enormous and very weakly justified leap of faith from quite real hadronic Regge strings -- are far too huge and gloppy to meet the experimentally verified constraints of poor old special relativity no less... the delightfully simple Poincare/Lorentz/Einstein/Minkowski symmetries that were first postulated over a century ago, back when even calculators were mechanical only.

        From that costly little half-century faux pas alone (there have been other analytical strategy missteps), I think it's safe to say that the track record in modern physics for assessing and predicting which ideas will truly become the future of physics has been... well, somewhat less than stellar? Instead, it was those amazing mathematicians (Poincare especially) and physicists from over a century ago, the ones who had minimal tools, simple ideas, and an absolute acceptance of the need for experimental validation of everything they did, who even now, over a century later, are proving to have been the true prophets for predicting where physics must go in the future.

        Cheers,

        Terry

        ----------

        And an addendum: I was serious in my comment above about century-old simplicity still being predictive of where physics must head in the future. For example, the recent HAWC Collaboration data seems to imply that special relativity is never violated. So why not make this into a fundamental hypothesis? That is, what would be the full implications to physics and mathematics of hypothesizing that at their deepest roots both are based not on Euclidean spaces, but on Minkowski spaces?

        That may sound easy, or even "of course" like what we are already doing, but I assure you it is not. For survival purposes our brains are hardwired to think in terms of Euclidean spaces, and those are at best narrow, local-only, and unavoidably approximate subsets of the Minkowski spaces of special relativity.
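        To make the distinction concrete (standard textbook form, nothing new here): the two geometries differ by a single sign,

        $$ds^2_{\mathrm{Euclid}} = dx^2 + dy^2 + dz^2, \qquad ds^2_{\mathrm{Minkowski}} = -c^2\,dt^2 + dx^2 + dy^2 + dz^2,$$

        and the familiar Euclidean form is recovered only on a fixed-time slice ($dt = 0$) of the full Minkowski space. That one sign is exactly what our Euclidean-wired intuitions keep quietly discarding.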

        Taking Poincare symmetries as a starting point would require us to abandon the primacy of Euclidean spaces. But to take the idea to its logical limit, this would need to apply not just to physics, but to the mathematics we use to describe physics. That is because the Euclidean concepts that we toss about so freely and without thought in mathematical fields and such are necessary approximations created by the hard-wiring of our brains to take advantage of the narrow, low-energy environment in which we must think quickly to survive. So just how radical might such a transformation be?

        One impact is that much of the mathematics of physics would suddenly and necessarily become part of a much broader context, since any Euclidean space -- even those implied by simple arrays of numbers in a computer -- would be newly understood as a local-only approximation of some much larger Minkowski space, one that only looks Euclidean to our small-scale, limited-range, biologically enforced perceptions. If you play with such ideas seriously for a while, you will discover they are a bit mind-bending. Minkowski himself glimpsed this a century ago in his famous talk on the merger of space and time into a single entity. Yet even Minkowski struggled with the idea a bit, as seen in the infinities that creep into his discussion of how to define Euclidean space as a limit of Minkowski space.

          Dear Terry

          I replied to your post on my essay; please check there....

          Best regards

          =snp

          Dear Terry,

          Glad to read your work again.

          I greatly appreciated your work and discussion. I am very glad that you are not thinking in abstract patterns.

          While the discussion lasted, I wrote an article: "Practical guidance on calculating resonant frequencies at four levels of diagnosis and inactivation of COVID-19 coronavirus", due to the high relevance of this topic. The work is based on the practical solution of problems in quantum mechanics, presented in the essay FQXi 2019-2020 "Universal quantum laws of the universe to solve the problems of unsolvability, computability and unpredictability".

          I hope that my modest results of work will provide you with information for thought.

          Warm Regards,

          Vladimir

          Dear Vladimir,

          Thank you for your interest in and kind words about my poor scribblings!

          You've got a lot of work there, but I have a rule: I always like talking to folks about their ideas, but if I find the ideas irreconcilable with my best (but poor, always) understanding of physics, I must also say that.

          So, here's the big problem: I just don't see a link between any of the formulations of gravity and the de Broglie / Bohm pilot wave model. Now admittedly, that's not to say that such a connection could not be made and mathematically quantified. In fact, if you consider pilot waves as a sort of "essence of entanglement" factoring of the wave function, and then look at recent work by some pretty famous physicists on space-as-entanglement and gravity-as-entanglement (Verlinde, especially)... well shoot, you kind of end up with the very real possibility that a pilot wave model at astronomical distances might very well be connected to gravity, if the space-as-entanglement folks are right. (I am one of them, though my model is non-holographic and particle-based rather than Planck-foam-based... simpler, and much lower total energy is involved.)

          So, hmm, I guess I just convinced myself that you may have a point when it comes to the general assertion (the devil is always in the details) that, at astronomical distances at least, the fabric of all pilot waves for all particles could be (maybe even should be) mathematically equatable to, e.g., Verlinde's holographic gravity model... hmm, what an intriguing thought, there could be some papers in that avenue of thought... and of course a connection to MOND, since space-as-entanglement unavoidably will (eventually) have to say something about what happens when the entanglements start to become relatively sparser in, e.g., the intergalactic gaps... hmm. In my first personal version of space-as-entanglement I postulated just that idea for exploration, that the large-scale topology of direct entanglement could perhaps also impact gravity drop-off. Holographic entanglement makes that possibility less apparent... anyway, interesting idea areas in any case, but horribly in need of quantification and specific predictions, both for the holographic case (though I've not checked lately) and absolutely for my more direct version, on which I never bothered to follow up (still have not).

          But with that said, a huge caution: Even if there is a quantifiable, experimentally meaningful way to cross-link pilot-wave entanglement with gravity-as-entanglement (Verlinde et al), I have to be blunt: I know of nothing in physics that could be compatible with invoking gravity, and especially gravity waves, in ordinary, local-scale quantum mechanics. Nothing in quantum mechanics needs gravity at that scale, and any attempt to redefine, say, pilot waves or electromagnetic waves as "gravity" becomes just that: redefinition, and a very confusing form of it at that.

          And again, since my personal pledge is to talk constructively but not just let issues slip: There is nothing in the protein signature of COVID-19 that is sufficiently unique to enable it to be distinguished from living tissue or other viruses by any phenomena except those with the short wavelengths of atomic-scale masses... that is, by other molecules.

          This is a deep and extremely intransigent issue, because it deals with absolutes of resolution and thus of identification. I can tell from your writing that you are sincere in your assessment that such wave-based (whatever kinds of waves) identifications of COVID-19 are possible, and I am asking you to at least consider the other possibility: That in the absolute best case, what you are really reaching for is spatially isolated sterilization, the aggressive broad-spectrum destruction of COVID-19, which will also inevitably destroy any living human tissue in that same region of space.

          How do you get around that?

          Persistent templates with high enough mass-frequencies (and low enough free energies -- this must be mostly rest mass, not electromagnetic energy) in their largely classical details to identify and match unique features of the COVID-19 strains. The high-rest-mass, low-free-energy qualifier is extremely important because, for example, even though it is possible in principle that one could create a pure electromagnetic resonance capable of demonstrating the structural signature of key COVID-19 proteins, the need for atomic-diameter detail, and thus wavelengths, would place the entire holographic template smack in the middle of the X-ray band. Besides being, er, difficult to create, that level of free energy would of course vaporize everything it touches, leaving no opportunity for any differentiation between viruses and living flesh.
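          To put a rough number on that (a back-of-the-envelope estimate only): resolving protein structure at atomic scales means wavelengths on the order of 0.1 nm, and a photon at that wavelength carries

          $$E = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV\cdot nm}}{0.1\ \mathrm{nm}} \approx 12\ \mathrm{keV},$$

          thousands of times the few-eV energies of the chemical bonds it lands on, which is exactly why the pure-electromagnetic route vaporizes rather than discriminates.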

          The only remaining option is to bind the resolution energy into rest mass, and that option means atoms, which can form templates at energies low enough to identify viruses without directly annihilating them or any surrounding living or non-COVID-19 organic molecules and systems.

          In other words... antibodies, exactly the solution that biology has developed.

          So, my bottom line on what physics says about identifying and differentially destroying any type of virus is this:

          There is a deep and very pointy argument for why we will never, even in the very distant future, achieve an energy-based approach to differentially identifying and destroying viruses without instead simply fatally annihilating all organic tissue in that (possibly quite small) region of space. Also, if that latter case of simply sterilizing a surface is the goal, say for both human skin and interior surfaces, then anyone interested should look at far-UV spectrum (excimer) devices. This band is hugely more effective at destroying viral RNA and DNA than is the much more widely used (and dangerously ozone-producing) UVC band, and more importantly, it has extremely low penetration depth for skin and cannot pass through even the outermost layer of the cornea.

          Austin,

          My apologies! Because I had a couple of earlier threads from you, I kept losing this post even after having read it briefly back on that Monday. I kept thinking I was missing something, but then kept not finding it in the earlier threads. You have lots of interesting points, so I'll try to go through a few:

          ----------

          >> ... If a standard model elementary particle has to be brought to a [point-approximating state in xyz space] at a measurement, then if there are preons within the particle, wouldn't all the individual preons need also to be brought to points at the same time? ...

          Good question!

          Interestingly, the answer is a definite and well-defined no.

          The reason is the natural hierarchy of size and energy scales in matter. Think for example of both atoms (bundles of nuclei and electrons) and nucleons (bundles of quarks). You can very narrowly localize an atom by using nothing more than phonons, quasiparticles of sound and heat, since at that scale these carry pretty impressive momentum kicks, comparable to X-rays but with far less kinetic energy (and thus far less destructive). But to see inside the atom, to force its electron clouds into more point-like wave functions, requires dramatically more energy and momentum. The same is true for nucleons like neutrons. Even at nuclear power levels, the energies needed to resolve (collapse) the quarks, even (if only!) very briefly, into more point-like entities that show classical motion are enormously higher than what the neutron typically encounters.
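          A quick way to see that momentum-versus-energy split (rough numbers, assuming a simple linear acoustic dispersion): at the same wavevector $k$ the momentum $p = \hbar k$ is identical, but the energy scales with the propagation speed,

          $$\frac{E_{\mathrm{phonon}}}{E_{\mathrm{photon}}} \approx \frac{\hbar c_s k}{\hbar c k} = \frac{c_s}{c} \sim \frac{5\ \mathrm{km/s}}{3\times10^{5}\ \mathrm{km/s}} \sim 10^{-5},$$

          so a short-wavelength phonon can deliver an X-ray-sized momentum kick while carrying only tens of meV rather than keV.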

          Overall, this available energy resolution barrier (I'm inventing a term, I don't know if one exists) is what keeps the entire universe persistently quantum (for the most part) at its lowest levels of detail. And that's a good thing, too, since otherwise both volume and chemistry would disappear and the universe would be nothing but multi-scale black holes!

          This is also the point at which my perspective on how the universe works at its deepest levels has flipped literally 180 degrees over the past couple of years. For most of my life I believed as devoutly as most folks in the concept of a positive-image quantum universe, the idea that every quantum wave function was an infinitely detailed superposition of every possible configuration that could exist. How could I not? Feynman's QED in particular takes exactly this approach, and is one of the most precisely predictive algorithmic approaches ever devised in physics, nailing all sorts of experimental results spot on! So obviously the universe must be positive image, with quantum wave functions being incandescently hot, broad-spectrum collages of every possible history available to them.

          But I am at heart an algorithmist, and from very early on I've known that the most obvious representation of a problem is almost never the best representation of the problem, either logically or computationally. And that left me with a nagging hole (heh!) regarding exactly the kind of question you just asked: If the universe never shows more detail than exactly the level of resolution you put into it -- if atoms never become electrons and protons until you rip them apart with information-toting momentum packages (X-rays) whose wavelengths always define the new level of resolution they make available -- then why do we persist in saying that those details even exist before that act of adding sufficient energy and momentum to make them real?

          The result, of course, is the negative-image universe: A universe full of dark holes, spots that we call quantum, that in effect say nothing more than "land available, build to suit!" to any phonon or photon or W particle or whatever that comes by and offers enough energy-momentum cash to make the new construction happen.

          It is so much simpler! And, from an algorithmic perspective, almost unimaginably more efficient, at least conceptually. Instead of a quantum function being an incandescent infinity of infinitesimal pure states -- multiplying infinities is never a good thing algorithmically -- you just get an empty spot with a number of unforgiving constraints (its selection and superselection rules), on top of which the added energy becomes responsible for adding all of the needed details. And if you pay very close attention to pairwise entanglement, even that mystery of "where does all of that new wave collapse detail come from, then?" has an unexpected resolution: It comes from the other end of the energy particle, e.g. the almost infinitely complex thermal-matter momentum shattering of the still-entangled momentum of the photon from when it was launched, for example, by a hot tungsten wire. It's not the quantum world that provides the almost perfect randomness of the wave function collapse, it's the thermally hot classical world with which it is entangled, the web of selection rules that must all be satisfied, and that must all have entangled roots in some classical environment. Nothing else is possible, since without the classical context, the quantum wave function has no history from which it can be created.

          Algorithmically, a negative-image universe with dark wave functions not only is hugely more efficient, it literally just makes more sense: the entities that seem uncertain, the quantum bits, are that way because they do not exist yet, not because they have uncounted infinities of things jammed into them. Once you start thinking this way, trust me, it very quickly gets addictive because it simply models better what we actually see: Lack of definition at the bottom, due simply and without complexity to energy starvation. It's not much different from looking at your smartphone screen with a microscope and realizing that eventually, every image has to run out of details. Our universe just does it in a much craftier and always oh-so-deceptively-smooth multi-scale fashion.

          ----------

          >> ... I then noted that Bose-Einstein Condensates can exist as multiple structures (collections of bosons) in a single state. This is just Penrose's Conformal Cyclic Cosmology where a single state/point for the universe can be allowed if all content is in bosonic form. This alleviates any need for all hypothetical preons to likewise be brought simultaneously to point(s?) themselves ...

          Heh! You can tell I did not read this paragraph before replying to the previous one, since I spent all of my time there talking about how internal particles of any type do not also collapse! (And BTW, since T and V are fermions, I always assumed most preon theories used fermions, not bosons. But I've not looked carefully at the area, since the orthogonal Glashow cube vectors explain preon-like behavior (TV-like behavior at least) without invoking actual particles. It just works out a lot more cleanly.)

          ---------

          >> ... On the other hand in CCC the universe is not brought to nullity but merely to a single state at a node where it then recycles back to enormity ...

          The cyclic universe model is delightful and one of Penrose's (many!) deeply intriguing speculations, and I read this as heading towards the idea of everything turning into a Bose condensate at the preon level to move between cycles. But the astrophysics data just is not heading in that direction... more the other way around, with accelerating expansion. More than the theory itself, you might want to consider that aspect of CCC in the current context.

          And just to make sure I understand: Are you sure about using bosonic preons? Two fermions make a nice boson in e.g. rishons, but if you don't start with fermions, there is just no obvious way to construct them from bosons.

          ----------

          >> ... I have struggled over your dark voids .... but I do think that almost anything is better than many worlds. ...

          Well, I've struggled over the idea too. It certainly was not where I started. I agree about MWI; see my ramblings above for why.

          I have an event coming up, so I'll leave it at that. I can see you have put a lot of thought into the CCC boson transition idea, including even how it might relate to dark functions. Interesting.

          Good luck with your essay!

          Cheers,

          Terry