Dear Terry

I made four posts to reply to your post; I made them smaller to improve readability. This is the last post in the series.

...............................

4.

................. Your words................

The other factor you might want to consider regarding FQXi mutual ratings is that, at least three years ago, they seemed to matter very little in terms of actual selection of winners....................

Let FQXi make their own decisions; I am not worried.

................. Your words................

I recall that I was quite disappointed when the essays that I and many others thought were the most innovative, insightful, well-written, and science-focused -- essays that scored well in reviews like this (I was not in this group) -- nonetheless ended up getting at best a few lower-level awards..........

Yes correctly said.........

................. Your words................

Meanwhile, authors whom other essayists had not noticed much during the internal reviews somehow ended up not just winning the big prizes, but getting heaps of praise for their dedicated repetition of themes that were far more traditional and predictable, and who in at least some cases had been previously supported by the same groups that fund FQXi. An unfortunate appearance of conflict, that, although it was surely unintentional.

I don't worry; I could meet some people like you through this contest.

I hope you will publish some of my papers, like the one on GRBs using the Dynamic Universe model....

Best

=snp

Dear snp,

Thank you for such kind words after my providing a fairly tough review! You are a good person, and I too am delighted to be meeting folks like you and others here.

I like your point that the greatest value of FQXi is the interaction, not the prizes. If someone gets an FQXi prize... well, celebrate! Great gravy train in the morning, why shouldn't you? But if you don't get an FQXi prize after putting in so much work... well, meh, is it really that big of a deal?

While FQXi admirably attempts to probe a bit deeper than many groups, it is by its very nature also very deeply intertwined with the "standard" perspectives of physics, which as I noted shows up in some of its prize assessments. And that affects how seriously individuals should take its assessments.

We are speaking here of a broader research community that for the past half century has been betting the majority of its theoretical money and researcher careers (whether the researchers wanted it or not) on the idea of Planck-scale superstrings. All of that work has now been soundly shown to be irrelevant by the superb experimental data from the HAWC Collaboration, which showed that tiny Planck-scale superstrings -- which from the very first paper were an enormous and very weakly justified leap of faith from quite real hadronic Regge strings -- are far too huge and gloppy to meet the experimentally verified constraints of poor old special relativity, no less... the delightfully simple Poincare/Lorentz/Einstein/Minkowski symmetries that were first postulated over a century ago, back when even calculators were still mechanical.

From that costly little half-century faux pas alone (there have been other analytical strategy missteps), I think it's safe to say that the track record in modern physics for assessing and predicting which ideas will truly become the future of physics has been... well, somewhat less than stellar? Instead, it was those amazing mathematicians (Poincare especially) and physicists from over a century ago, the ones who had minimal tools, simple ideas, and an absolute acceptance of the need for experimental validation of everything they did, who even now, over a century later, are proving to have been the true prophets for predicting where physics must go in the future.

Cheers,

Terry

----------

And an addendum: I was serious in my comment above about century-old simplicity still being predictive of where physics must head in the future. For example, the recent HAWC Collaboration data seems to imply that special relativity is never violated. So why not make this into a fundamental hypothesis? That is, what would be the full implications to physics and mathematics of hypothesizing that at their deepest roots both are based not on Euclidean spaces, but on Minkowski spaces?

That may sound easy, or even "of course" like what we are already doing, but I assure you it is not. For survival purposes our brains are hardwired to think in terms of Euclidean spaces, and those are at best narrow, local-only, and unavoidably approximate subsets of the Minkowski spaces of special relativity.

Taking Poincare symmetries as a starting point would require us to abandon the primacy of Euclidean spaces. But to take the idea to its logical limit, this would need to apply not just to physics, but to the mathematics we use to describe physics. That is because the Euclidean concepts that we toss about so freely and without thought in mathematical fields and such are necessary approximations created by the hard-wiring of our brains to take advantage of the narrow, low-energy environment in which we must think quickly to survive. So just how radical might such a transformation be?

One impact is that much of the mathematics of physics would suddenly and necessarily become part of a much broader context, since any Euclidean space -- even those implied by simple arrays of numbers in a computer -- would be newly understood as a local-only approximation of some much larger Minkowski space, one that only looks Euclidean to our small-scale, limited-range, biologically enforced perceptions. If you play with such ideas seriously for a while, you will discover they are a bit mind-bending. Minkowski himself glimpsed this a century ago in his famous talk on the merger of space and time into a single entity. Yet even Minkowski struggled with the idea a bit, as seen in the infinities that creep into his discussion of how to define Euclidean space as a limit of Minkowski space.
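Since I keep leaning on this distinction, here is the one-line version of it for anyone who wants it explicitly (a sketch only, in ordinary flat-space notation):

```latex
% Euclidean "distance": what our brains are hardwired to assume
\[
d^2 = \Delta x^2 + \Delta y^2 + \Delta z^2
\]
% Minkowski interval: the quantity all inertial observers actually agree on
\[
s^2 = c^2\,\Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2
\]
```

The sign flip on the spatial terms is the entire point: no choice of units or axes can turn the second form into the first, which is why taking Minkowski space as fundamental, with Euclidean space only as a local low-velocity approximation, is a genuine postulate rather than a mere change of convention.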

    Dear Terry

    I replied to your post on my essay page; please check there....

    Best regards

    =snp

    Dear Terry,

    Glad to read your work again.

    I greatly appreciated your work and discussion. I am very glad that you are not thinking in abstract patterns.

    While the discussion lasted, I wrote an article: "Practical guidance on calculating resonant frequencies at four levels of diagnosis and inactivation of COVID-19 coronavirus", due to the high relevance of this topic. The work is based on the practical solution of problems in quantum mechanics, presented in the essay FQXi 2019-2020 "Universal quantum laws of the universe to solve the problems of unsolvability, computability and unpredictability".

    I hope that my modest results of work will provide you with information for thought.

    Warm Regards,

    Vladimir

    Dear Vladimir,

    Thank you for your interest in and kind words about my poor scribblings!

    You've got a lot of work there, but I have a rule: I always like talking to folks about their ideas, but if I find the ideas irreconcilable with my best (but poor, always) understanding of physics, I must also say that.

    So, here's the big problem: I just don't see a link between any of the formulations of gravity and the de Broglie / Bohm pilot wave model. Now admittedly, that's not to say that such a connection could not be made and mathematically quantified. In fact, if you consider pilot waves as a sort of "essence of entanglement" factoring of the wave function, and then look at recent work by some pretty famous physicists on space-as-entanglement and gravity-as-entanglement (Verlinde, especially)... well shoot, you kind of end up with the very real possibility that a pilot wave model at astronomical distances might very well be connected to gravity, if the space-as-entanglement folks are right. (I am one of them, though my model is non-holographic and particle-based rather than Planck-foam-based... simpler, and much lower total energy is involved.)

    So, hmm, I guess I just convinced myself that you may have a point when it comes to the general assertion (the devil is always in the details) that at astronomical distances, at least, the fabric of all pilot waves for all particles could be (maybe even should be) mathematically equatable to e.g. Verlinde's holographic gravity model... hmm, what an intriguing thought; there could be some papers in that avenue of thought... and of course a connection to MOND, since space-as-entanglement unavoidably will (eventually) have to say something about what happens when the entanglements start to become relatively sparser in e.g. the intergalactic gaps... hmm. In my first personal version of space-as-entanglement I postulated just that idea for exploration: that the large-scale topology of direct entanglement could perhaps also affect gravity drop-off. Holographic entanglement makes that possibility less apparent... anyway, interesting idea areas in any case, but horribly in need of quantification and specific predictions, both for the holographic case (though I've not checked lately) and absolutely for my more direct version, on which I never bothered to follow up (still have not).

    But with that said, a huge caution: even if there is a quantifiable, experimentally meaningful way to cross-link pilot-wave entanglement with gravity-as-entanglement (Verlinde et al), I have to be blunt: I know of nothing in physics that could be compatible with invoking gravity, and especially gravity waves, in ordinary, local-scale quantum mechanics. Nothing in quantum mechanics needs gravity at that scale, and any attempt to redefine, say, pilot waves or electromagnetic waves as "gravity" becomes just that: redefinition, and a very confusing form of it at that.

    And again, since my personal pledge is to talk constructively but not just let issues slip: there is nothing in the protein signature of COVID-19 that is sufficiently unique to enable it to be distinguished from living tissue or other viruses by anything except phenomena with wavelengths as short as those of atomic-scale masses... that is, by other molecules.

    This is a deep and extremely intransigent issue, because it deals with absolutes of resolution and thus of identification. I can tell from your writing that you are sincere in your assessment that such wave-based identifications of COVID-19 (whatever kinds of waves) are possible, and I am asking you to at least consider the other possibility: that in the absolute best case, what you are really reaching for is spatially isolated sterilization -- aggressive, broad-spectrum destruction of COVID-19 -- which will also inevitably destroy any living human tissue in that same region of space.

    How do you get around that?

    Persistent templates with high enough mass-frequencies (and low enough free energies -- this must be mostly rest mass, not electromagnetic energy) in their largely classical details to identify and match unique features of the COVID-19 strains. The high-rest-mass, low-free-energy qualifier is extremely important because, for example, even though it is possible in principle to create a pure electromagnetic resonance capable of resolving the structural signature of key COVID-19 proteins, the need for atomic-diameter detail, and thus atomic-diameter wavelengths, would place the entire holographic template smack in the middle of the X-ray band. Besides being, er, difficult to create, that level of free energy would of course vaporize everything it touches, leaving no opportunity for any differentiation between viruses and living flesh.
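    The X-ray-band claim above is easy to check with one line of arithmetic: a photon's energy is E = hc/wavelength, so demanding atomic-diameter detail fixes the energy scale. A quick sketch (the 0.1 nm atomic diameter and the 222 nm far-UV comparison wavelength are my illustrative choices, not values from the discussion above):

```python
# Photon energy for a given resolving wavelength, via E = h*c / wavelength.
# Illustrates why atomic-diameter (~0.1 nm) detail lands in the X-ray band.

H_PLANCK = 6.62607015e-34   # Planck constant, J*s (exact, SI 2019)
C_LIGHT  = 2.99792458e8     # speed of light, m/s (exact)
EV       = 1.602176634e-19  # joules per electron-volt (exact)

def photon_energy_ev(wavelength_m: float) -> float:
    """Energy in eV of a photon with the given wavelength in meters."""
    return H_PLANCK * C_LIGHT / wavelength_m / EV

# Atomic-diameter detail (~0.1 nm) requires ~12.4 keV photons: X-rays.
print(f"0.1 nm -> {photon_energy_ev(0.1e-9)/1e3:.1f} keV")

# Compare: far-UV excimer light at 222 nm carries only ~5.6 eV per photon.
print(f"222 nm -> {photon_energy_ev(222e-9):.1f} eV")
```

    At roughly 12 keV per photon, a template with atomic-scale electromagnetic detail really does sit in the middle of the X-ray band, over two thousand times the photon energy of far-UV excimer light.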

    The only remaining option is to bind the resolution energy into rest mass, and that option means atoms, which can form templates at energies low enough to identify viruses without directly annihilating them or any surrounding living or non-COVID-19 organic molecules and systems.

    In other words... antibodies, exactly the solution that biology has developed.

    So, my bottom line on what physics says about identifying and differentially destroying any type of virus is this:

    There is a deep and very pointy argument for why we will never, even in the very distant future, achieve an energy-based approach to differentially identifying and destroying viruses without instead simply and fatally annihilating all organic tissue in that (possibly quite small) region of space. Also, if that latter case of simply sterilizing a surface is the goal, say for both human skin and interior surfaces, then anyone interested should look at far-UV (excimer) devices. This band is hugely more effective at destroying viral RNA and DNA than the much more widely used (and dangerously ozone-producing) UVC band, and more importantly, it has extremely low penetration depth for skin and cannot pass through even the outermost layer of the cornea.

    Austin,

    My apologies! Because I had a couple of earlier threads from you, I kept losing this post even after having read it briefly back on that Monday. I kept thinking I was missing something, but then kept not finding it in the earlier threads. You have lots of interesting points, so I'll try to go through a few:

    ----------

    >> ... If a standard model elementary particle has to be brought to a [point-approximating state in xyz space] at a measurement, then if there are preons within the particle, wouldn't all the individual preons need also to be brought to points at the same time? ...

    Good question!

    Interestingly, the answer is a definite and well-defined no.

    The reason is the natural hierarchy of size and energy scales in matter. Think for example of both atoms (bundles of nuclei and electrons) and nucleons (bundles of quarks). You can very narrowly localize an atom by using nothing more than phonons, the quasiparticles of sound and heat, since at that scale these carry pretty impressive momentum kicks, comparable to X-rays but with far less kinetic energy (and thus less destructive). But to see inside the atom, to force its electron clouds into more point-like wave functions, requires dramatically more energy and momentum. The same is true for nucleons such as neutrons: even at nuclear power levels, the energies needed to resolve (collapse) the quarks, even (only!) very briefly, into more point-like entities that show classical motion are enormously higher than what the neutron typically encounters.
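    To put rough numbers on that hierarchy (my own illustrative figures, not from the discussion above): the probe energy needed to resolve a given length scale is E = hc/wavelength, and for equal momentum a phonon carries less energy than a photon by roughly the ratio of the sound speed to the speed of light:

```python
# Two quick calculations behind the "resolution hierarchy" argument:
# (1) probe energy E = h*c/scale rises steeply as the target scale shrinks;
# (2) at equal momentum, a phonon (energy ~ v_sound * p) is far gentler than
#     a photon (energy = c * p). All scale choices here are illustrative.

H  = 6.62607015e-34     # Planck constant, J*s
C  = 2.99792458e8       # speed of light, m/s
EV = 1.602176634e-19    # joules per electron-volt

def probe_energy_ev(scale_m: float) -> float:
    """Photon energy (eV) whose wavelength equals the target length scale."""
    return H * C / scale_m / EV

print(f"atom    (0.1 nm): {probe_energy_ev(1e-10)/1e3:8.1f} keV")  # ~12.4 keV
print(f"nucleon (1 fm)  : {probe_energy_ev(1e-15)/1e9:8.2f} GeV")  # ~1.24 GeV

# Phonon vs photon at equal momentum p = h / (0.3 nm lattice spacing):
V_SOUND = 5000.0          # typical sound speed in a solid, m/s (illustrative)
p = H / 0.3e-9            # momentum, kg*m/s
print(f"photon energy at that p: {C * p / EV:,.0f} eV")
print(f"phonon energy at that p: {V_SOUND * p / EV * 1e3:.0f} meV")
```

    Five orders of magnitude separate atomic from nucleon resolution, and the same momentum kick costs a phonon only tens of meV versus keV for a photon, which is the sense in which phonons can pin down an atom without ripping it open.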

    Overall, this available-energy resolution barrier (I'm inventing a term; I don't know if one already exists) is what keeps the entire universe persistently quantum (for the most part) at its lowest levels of detail. And that's a good thing, too, since otherwise both volume and chemistry would disappear and the universe would be nothing but multi-scale black holes!

    This is also the point at which my perspective on how the universe works at its deepest levels has flipped literally 180 degrees over the past couple of years. For most of my life I believed as devoutly as most folks in the concept of a positive-image quantum universe, the idea that every quantum wave function was an infinitely detailed superposition of every possible configuration that could exist. How could I not? Feynman's QED in particular takes exactly this approach, and is one of the most precisely predictive algorithmic approaches ever devised in physics, nailing all sorts of experimental results spot on! So obviously the universe must be positive-image, with quantum wave functions being incandescently hot, broad-spectrum collages of every possible history available to them.

    But I am at heart an algorithmist, and from very early on I've known that the most obvious representation of a problem is almost never the best representation of the problem, either logically or computationally. And that left me with a nagging hole (heh!) regarding exactly the kind of question you just asked: If the universe never shows more detail than exactly the level of resolution you put into it -- if atoms never become electrons and protons until you rip them apart with information-toting momentum packages (X-rays) whose wavelengths always define the new level of resolution they make available -- then why do we persist in saying that those details even exist until the act of adding sufficient energy and momentum makes them real?

    The result, of course, is the negative-image universe: A universe full of dark holes, spots that we call quantum, that in effect say nothing more than "land available, build to suit!" to any phonon or photon or W particle or whatever that comes by and offers enough energy-momentum cash to make the new construction happen.

    It is so much simpler! And, from an algorithmic perspective, almost unimaginably more efficient, at least conceptually. Instead of a quantum wave function being an incandescent infinity of infinitesimal pure states -- multiplying infinities is never a good thing algorithmically -- you just get an empty spot with a number of unforgiving constraints (its selection and superselection rules), on top of which the added energy becomes responsible for adding all of the needed details. And if you pay very close attention to pairwise entanglement, even that mystery of "where does all of that new wave collapse detail come from, then?" has an unexpected resolution: it comes from the other end of the energy particle, e.g. the almost infinitely complex thermal-matter momentum shattering of the still-entangled momentum of the photon from when it was launched, for example, by a hot tungsten wire. It's not the quantum world that provides the almost perfect randomness of the wave function collapse; it's the thermally hot classical world with which it is entangled, the web of selection rules that must all be satisfied and that must all have entangled roots in some classical environment. Nothing else is possible, since without the classical context, the quantum wave function has no history from which it can be created.

    Algorithmically, a negative-image universe with dark wave functions not only is hugely more efficient, it literally just makes more sense: the entities that seem uncertain, the quantum bits, are that way because they do not exist yet, not because they have uncounted infinities of things jammed into them. Once you start thinking this way, trust me, it very quickly gets addictive, because it simply models better what we actually see: lack of definition at the bottom, due simply and without complexity to plain energy starvation. It's not much different from looking at your smartphone screen with a microscope and realizing that eventually, every image has to run out of details. Our universe just does it in a much smoother, craftier, and always oh-so-deceptive multi-scale fashion.

    ----------

    >> ... I then noted that Bose Einstein Condensates can exist as multiple structures (collections of bosons) in a single state. This is just Penrose's Cyclic Conformal Cosmology where a single state/point for the universe can be allowed if all content is in bosonic form. This alleviates any need for all hypothetical preons to likewise be brought simultaneously to point(s?) themselves ...

    Heh! You can tell I did not read this paragraph before replying to the previous one, since I spent all of my time there talking about how internal particles of any type do not also collapse! (And BTW, since T and V are fermions, I always assumed most preon theories used fermions, not bosons. But I've not looked carefully at the area, since the orthogonal Glashow cube vectors explain preon-like behavior (TV-like behavior at least) without invoking actual particles. It just works out a lot more cleanly.)

    ----------

    >> ... On the other hand in CCC the universe is not brought to nullity but merely to a single state at a node where it then recycles back to enormity ...

    The cyclic universe model is delightful and one of Penrose's (many!) deeply intriguing speculations, and I read this as heading towards the idea of everything turning into a Bose condensate at the preon level to move between cycles. But the astrophysics data just are not heading in that direction... more the other way around, with accelerating expansion. Theory aside, you might want to consider that aspect of CCC in the current context.

    And just to make sure I understand: Are you sure about using bosonic preons? Two fermions make a nice boson in e.g. rishons, but if you don't start with fermions, there is just no obvious way to construct them from bosons.

    ----------

    >> ... I have struggled over your dark voids .... but I do think that almost anything is better than many worlds. ...

    Well, I've struggled over the idea too. It certainly was not where I started. I agree about MWI; see my ramblings above for why.

    I have an event coming up, so I'll leave it at that. I can see you have put a lot of thought into the CCC boson transition idea, including even how it might relate to dark wave functions. Interesting.

    Good luck with your essay!

    Cheers,

    Terry

    Terry

    "Heh! You can tell I did not read this paragraph before replying to the previous one, since I spent all of my time there talking about how internal particles of any type do not also collapse! (And BTW, since T and V are fermions, I always assumed most preon theories used fermions, not bosons. But I've not looked carefully at the area, since the orthogonal Glashow cube vectors explain preon-like behavior (TV-like behavior at least) without invoking actual particles. It just works out a lot more cleanly.)"

    Glad you agree independently that collapse is not required at all levels.

    In my preon model I have three fermionic preons and one bosonic one. I looked at rishons first but did not like them. Needless to say, I became more tolerant of them after struggling with the creation of a model of my own, but by then I had finished developing my own model, which I preferred. My bosonic preon, C, is my favourite one, as it splits into three separate-colour parts and I am not sure how I could manage to construct all colour quarks and gluons without it.

    "The cyclic universe model is delightful and one of Penrose's (many!) deeply intriguing speculations, and I read this as heading towards the idea of everything turning into a Bose condensate at the preon level to move between cycles. But astrophysics data just is not heading in that direction... more the other way around, with accelerating expansion. More than theory, you might want to consider that aspect of CCC in the current context."

    An accelerating expansion of the universe is pro-CCC. A far-flung collection of less and less dense fermions is exactly what CCC requires. I have always thought that Dark Energy is good for the idea of CCC. It ensures the eventual breakdown of the space metric by isolating individual fermions so that they cannot communicate with one another to maintain the metric.

    Another idea ... combining DE (as maybe antiparticles) and my suggestion in my essay that antiparticles are travelling backwards in time ... then from the point of view of the DE antiparticles they 'think' of themselves as normal particles travelling forwards in time, but decelerating, from their own BB (which is actually 'our' Big Crunch). So maybe DE is merely our future rushing towards us head on. A nice crazy idea to maybe just leave on its own!

    "And just to make sure I understand: Are you sure about using bosonic preons? Two fermions make a nice boson in e.g. rishons, but if you don't start with fermions, there is just no obvious way to construct them from bosons."

    Well, I have a mix of fermionic and bosonic preons, and so I can easily construct any fermionic or bosonic Standard Model particle from my preons. Your issue would only be a problem if there were only bosonic preons in existence. And I do not need to worry (within my preon model) about where the preons come from. They are assembled from Hexarks, which are at the next layer of fundamentality (or of turtles!).

    I can see what you mean though and SUSY has IMO a similar issue. SUSY has a (super)field which can turn a fermion into a boson and a boson into a fermion. And, like you have said, you cannot turn a fermion into a boson or a boson into a fermion using a bosonic operator field. But this is not a problem for my preon model. Nor for SUSY either.

    My bosonic preon, C, is unlike any Standard Model fundamental particle [except W of course] as it is a boson with electric charge. But I have a paper at https://vixra.org/pdf/1907.0038v2.pdf

    which shows the preonic structure of hypothetical leptoquarks and charged Higgs bosons, which are also bosons with electric charge. And also see:

    CERN Courier (2 May 2019) "The flavour of new physics". https://cerncourier.com/the-flavour-of-new-physics/.

    So my preon C may one day be recognised to have similar entities/big brothers. W is not much like a big brother.

    Best wishes

    Austin