Hi Terry Bollinger

Your very nice idea that "binary conciseness lies behind the most powerful and insightful equations in physics and other sciences. The principle is that if two or more precise descriptive models (theories) address the same experimental data..." is very progressive for the understanding of consciousness. Very good. By the way...

Here in my essay, energy-to-mass conversion is proposed. Yours is a very nice essay; best wishes. I highly appreciate your essay and hope for reciprocity. Please spend some of your valuable time on the Dynamic Universe Model as well, and give some of your valuable and esteemed guidance.

Some of the main foundational points of the Dynamic Universe Model:

- No isotropy

- No homogeneity

- No space-time continuum

- Non-uniform density of matter; the universe is lumpy

- No singularities

- No collisions between bodies

- No black holes

- No wormholes

- No Big Bang

- No repulsion between distant galaxies

- Non-empty universe

- No imaginary or negative time axis

- No imaginary X, Y, Z axes

- No differential or integral equations in the mathematics

- No General Relativity, and the model does not reduce to GR under any condition

- No creation of matter, as in the Big Bang or steady-state models

- No many mini Big Bangs

- No missing mass / dark matter

- No dark energy

- No Big-Bang-generated CMB detected

- No multiverses

Here:

- An accelerating, expanding universe with 33% blueshifted galaxies

- Newton's law of gravitation works everywhere in the same way

- All bodies are dynamically moving

- All bodies move in dynamic equilibrium

- A closed-universe model; no light or bodies will leave the universe

- A single universe; no baby universes

- Time is linear as observed on Earth, moving forward only

- Independent x, y, z coordinate axes and time axis; no interdependencies between axes

- The UGF (Universal Gravitational Force) is calculated on every point mass

- Linear tensors are used to give UNIQUE solutions for each time step

- Uses everyday physics, achievable by engineering

- 21,000 linear equations are used in an Excel sheet

- Computerized calculations use 16-decimal-digit accuracy

- Data mining and data warehousing techniques are used to extract results from large amounts of data.

- Many predictions of the Dynamic Universe Model came true. Have a look at:

http://vaksdynamicuniversemodel.blogspot.in/p/blog-page_15.html

I request you to please have a look at my essay also, and offer some of your esteemed criticism.

The Dynamic Universe Model says that energy in the form of electromagnetic radiation passing grazingly near any gravitating mass changes its frequency and finally converts into neutrinos (mass). We all know that there is no experiment or quest in this direction. Conversion from mass to energy happens via the famous E=mc^2; the other side of this conversion has not been thought of. This is a new fundamental prediction by the Dynamic Universe Model, a foundational quest in the area of astrophysics and cosmology.

According to the Dynamic Universe Model, a frequency shift happens on both sides of the spectrum when any electromagnetic radiation passes grazingly near a gravitating mass. With this new verification, we will open a new frontier that forms the basis for continual nucleosynthesis (the continuous formation of elements) in our Universe. The amount of frequency shift will depend on the relative velocity difference. All the author's papers can be downloaded from "http://vaksdynamicuniversemodel.blogspot.in/"

I request you to please post your reply in my essay thread also, so that I get a notification that you replied.

Best

=snp

Hi Satyavarapu Naga Parameswara Gupta,

First I must make you aware that because I was a magazine editor for many years, I took the following pledge just to allow myself to participate in this phase of the FQXi evaluation:

goo.gl/KCCujt

This pledge explicitly requires that I not engage in any form of reciprocity when evaluating essays, since reciprocity unavoidably would mean I am not giving my honest opinion based only on what I see in the essay. I simply cannot work any other way. For that reason, I will not yet promise to point-score your essay even if I comment on it, because I cannot keep the promise you just requested.

That is a lot of material you cover! I do promise to take a look. My warning in advance is that I look for deep continuity in every essay, and I have yet to see an essay that introduces that many concepts while maintaining that kind of continuity. But I will try hard to make any positive comments that I think might help.

Alas, I already see indications that my honest, no-inflation-allowed point score could come out low... which is why I might choose not to assign a point score at all. You are being very up front, and I appreciate that very much. I would regret it if you gave me a strong rating that you do not truly feel is justified.

Cheers,

Terry

It is an interesting essay.

I am thinking that there is an infinite set of transcendental numbers that admit a convergent series as an approximation, so that the optimal approximation using the number pi is a compact way of saying that a generic sequence is an extraction of a string from a convergent series whose terms have a simple description. If the terms have a simple, compact description, then the Kolmogorov complexity is low; if the convergent series has a complex description, then the Kolmogorov complexity must include the complexity of the description of the terms.

The number pi could contain each sequence, of each length, so a single transcendental number could contain the minimum description, provided the starting point is not too high (for example, not larger than the description of the string itself).

I am thinking that the minimum Kolmogorov message for a trajectory and the principle of least action could have a connection, if the Kolmogorov complexity of the trajectory and the measure of the Lagrangian were proportional.

A good essay makes one think.

Regards

Domenico

    Domenico,

    Thank you for your kind words, and I am glad my essay gave you some interesting ideas to pursue.

    I just downloaded your essay, which is almost surely the shortest essay submitted! I did not realize that a one-page essay would be allowed, but in retrospect the FQXi rules only prescribe maximum size limits, not minimums.

    Please be aware that I have taken the following pledge:

    goo.gl/KCCujt

    If you do not wish me to review your essay, please let me know quickly and I will gladly just skip over it. If I do comment on any essay, I always try to add some positive or constructive strategy remarks, even if I do not see the essay as strong. For point scoring, alas, I do not do inflation, so I can be pretty tough.

    Again, thanks for your comment.

    Cheers,

    Terry

    I read your essay out of interest in new ideas: everyone here in the contest is curious; my interest in my score is close to zero.

    Every opinion, critical or benevolent, on my essay is welcome.

    Regards

    Domenico

    Mr. Bollinger

    In your essay you cherish simplicity (and for good reasons) and I admire the elegance of your style. Let me ask something though: how do you recognize that simplicity in this contest? Should that simplicity be interpreted in just one way?

    I am asking this because I noticed a confusing comment of yours regarding an essay that, in terms of the simplicity you invoke, is more or less pointing in the same direction.

    Thank you again for the elegant and personal approach to simplicity emergent from your essay (please don't tell me it is not emergent!)

    Joyfully and respectfully Silviu

    Dear Terry Bollinger,

    Thanks for a most enjoyable essay. I first fell in love with information theory in 1967 when I encountered Amnon Katz's "Statistical Mechanics: an Information Theory Approach". Later I realized that ET Jaynes had done this circa 1951, and had noted that equating thermodynamic entropy to information entropy led to a lot of nonsense theorems being proved.

    Your Kolmogorov approach reminds me somewhat of 'Software Physics', circa 1977, relating the complexity of software to the number of operations and data types. I discovered the literal truth of this when I designed and built a prototype for Intel and Olivetti while they kept adding functionality (more operations) to the original spec.

    Anyway, I found your essay easy to read and understand and significant for the question "What is fundamental?" Congratulations.

    You say "the defining feature of the message is that it changes the state of the recipient." I have in a number of comments remarked that energy flows through space. If it crosses a threshold and changes the structure (or state) of a physical system, it 'in'-forms that system and the information, or record, or number of bits, comes into being. It has meaning only if a codebook or context is present: "One if by land, two if by sea." The proof of information is in the pudding, in my opinion it is energy that flows across space, there is nothing "on top of" that energy that is 'physical information'. If the frequency or modulation or what have you causes the change of state, that change is information (if it can be interpreted). While messages carrying "information" provide a very useful way to formulate the problem, I believe many physicists project this formulation on to reality and then come to believe in some physical instantiation of information aside and apart from the energy constituting the message. In that sense I enjoyed your section "Physics As Information Theory" in terms of "foundation messages".

    I believe your second and third challenges have essentially the same answer, but current theories based on false assumptions get in the way - one reason I am currently working to uncover the false assumptions.

    I like that you mention Pauli's treatment of spin. His projection of the 'qubit' formulation onto Stern-Gerlach data was one of the first (after Einstein's projection of a new time dimension onto every moving object) instances of physicists accepting the utility of projecting mathematical structure onto physics, and then coming to believe in the corresponding physical structure. My belief is that until the current belief in the structure of reality imposed by mathematical projections is overcome, your second and third challenges will not be satisfied. For brief comments on related topics, review three or four comments prior to yours on my essay page.

    You conclude by suggesting new insights are most likely to come from data. My belief is that the only solution to our current problems (your three challenges for instance) is to unearth the false assumptions that are built into our current theories and have been for generations. That is not a popular proposition. Almost all working physicists would prefer new ideas to add to their toolbox, not ideas that contradict things they have taught and published and that got them where they are -- another reason to value FQXi. I hope your experience is such that you will come back year after year. And I hope you enjoy your newly gained retirement. It's a good time in life.

    Best regards,

    Edwin Eugene Klingman

      Dear Terry Bollinger,

      Thank you for an interesting essay.

      I was wondering about the relationship between Kolmogorov Complexity and Occam's razor? Do simpler things really have lower KC? Also what about Bennett's logical depth? Why is KC better than logical depth?

      Please take a look at my essay.

      Thank you again for great read.

      All the best,

      Noson

        Dear Noson,

        Thank you for your excellent and insightful question! The answer is yes. If you translate a solution that has survived Occam's Razor into binary form (that is, into software), then the binary form of that solution will exhibit both the brevity and the high information density of a (near) Kolmogorov minimum.

        One can think of the "side trips" of a non-compact message as the information equivalents of the various components of a Rube Goldberg contraption. Simplifying the message thus becomes the equivalent of redesigning an information-domain Rube Goldberg contraption to get rid of unnecessary steps. The phrase "Occam's Razor" even suggests this kind of redesign, since for both messages and physical Rube Goldberg contraptions the goal is to cut away that which is not really necessary.

        One point that I think can be a bit non-intuitive is that solutions near their Kolmogorov minima are information dense -- that is, they look like long strings of completely random information. The intuitive glitch comes in here: If the goal of Occam's Razor is to find the simplest possible solution, how can a Kolmogorov minimum that is packed to the gills with information be called "simple"?

        The explanation is that to be effective, messages -- strings of bits that change the state of the recipient -- must be at least as complex as the tasks they perform. That means that even an Occam's Razor solution must still encode non-trivial information, and depending on the situation, that in turn can translate into long messages (or lengthy software, or large apps).
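        To make that point concrete, here is a small Python sketch (illustrative only; zlib is just a crude, computable stand-in for Kolmogorov compression, which is uncomputable in general). A highly patterned message squeezes down to almost nothing, while an already-dense message of the same length barely compresses at all -- which is exactly why near-minimal messages "look random":

        import os
        import zlib

        patterned = b"0101010101" * 100   # 1000 bytes of obvious structure
        dense = os.urandom(1000)          # 1000 bytes of (pseudo)random data

        # The patterned message compresses to a few dozen bytes; the dense one
        # actually grows slightly, because it has no structure left to remove.
        print(len(zlib.compress(patterned)))   # tiny, e.g. ~30 bytes
        print(len(zlib.compress(dense)))       # ~1000+ bytes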

        If the desired state change in the recipient is simple in terms of how the recipient has been "pre-programmed" to respond (which is a very interesting issue in its own right), then the Kolmogorov minimum message will also be very short, perhaps as short as just one bit. But even though a single bit "looks" simple, it still qualifies as having maximum information density if the two options (0 or 1) have equal probability.

        The other extreme for Occam's Razor occurs when the state of the recipient requires a major restructuring or conversion, one that is completely novel to the recipient. That can take a lot of bits, so in that case Occam's Razor will result in a rather lengthy "simplest possible" solution. Notice however that once this new information has been sent, the message recipient becomes smarter and will in the future no longer need the full message to be sent. A new protocol has been created, and a new Kolmogorov minimum established. It's worth pointing out that downloading a new app for your smart phone is very much an example of this scenario!

        We see this effect all the time in our modern web-linked world. As globally linked machines individually become more "aware" of the transformations they are likely to need in the future -- as they receive updates that provide new, more powerful software capabilities -- then the complexity of the messages one needs to send after that first large update also shrinks dramatically.
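        Here is an equally small Python sketch of that protocol effect (the codebook entries are invented for illustration). The first transfer is expensive because it must carry the whole codebook; every message after that shrinks to a short index:

        # One-time "app download": the recipient stores the codebook.
        codebook = {
            0: "run the full diagnostic suite and report anomalies",
            1: "recalibrate all sensors against the reference source",
            2: "download and install the latest capability update",
        }

        def receive(stored_codebook, code):
            # After the codebook transfer, each message is just a tiny index.
            return stored_codebook[code]

        print(receive(codebook, 2))   # two bits now trigger a complex action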

        This idea that Kolmogorov messaging builds on itself in a way that over time increases the "meaning" or semantic content of each bit sent is a fascinating and surprisingly deep concept. It is also deep in a specific physics sense, which is this: The sharing-based emergence of increasingly higher levels of "meaning" in messages began with the emergence of our specific spacetime and particle physics, and then progressed upwards over time across a spectrum of inorganic, living, sentient, and (particularly in the last century) information-machine based message protocols. After all, how could we know some of the elements in a distant quasar if the very electrons of that quasar did not share the design and signatures of the electrons within our detection devices? We assume that to be so, but there is no rule that says it must be so. It is, for example, certainly conceivable that some distant quasar might be made of a completely different particle set from matter in our part of the universe. But if the universe did not provide these literally universally shared examples of "previously distributed" (by the big bang, e.g.) information baselines, then such transfers of information would not even be possible.

        So here's an important insight into the future of at least our little part of the universe: Meaning, as measured quantitatively in terms of observable impacts on physical reality per bit of Kolmogorov minimum messages sent, increases over time.

        This idea of constantly expanding meaning is, as best I can tell, the core message of this year's deeply fascinating essay (topic 3088) by Nobel Laureate Brian Josephson, of Josephson junction fame. His essay is written in a very different language, one that is neither physics nor computer science, so it is taking me some time to learn and interpret it properly. But reading his essay has already prompted me to reexamine my thoughts from a year or so ago (on David Brin's blog, I think?) regarding the emergence over the history of the universe of information persistence and combinatorics. Specifically, I think focusing on "meaning," which I would define roughly as impact on the physical world per bit of message sent, may provide a better, cleaner way to interpret such expanding combinatoric impacts. When I reach the point where I think I understand Professor Josephson's novel language adequately, I will post comments on it. (I should note that I am already deeply troubled by one of his major reference sources, though Professor Josephson does a good job of filtering and interpreting that extremely unusual source.)

        Please pardon my overly long answer! You brought up a very interesting topic. I'll download your essay shortly and take a look. Thanks again for your comments and question!

        Cheers,

        Terry

        Dear Edwin Eugene Klingman,

        Thank you also for your thoughtful comments and generous spirit. I am pleased to see that I was reasonably on target in understanding several of your key points, since we seem to share a number of views that are definitely not "standard" according to prevalent physics perspectives.

        I'm going to cut to the chase on one point:

        May I suggest that what you seem to be proposing is that our universe is a state-machine computer simulation?

        The "now" (my term) of the post-GR Einstein ether would be the current state of that simulation. But more critically, your essay concept of a single universal time would no longer be time as we measure it within our universe. Instead, it would be the external time driving this universe simulator. That is why it is perfect time, time that is never affected by the local clock variations seen within our universe. It would be a form of time whose source is not even accessible from within our universe! Within our universe, you are instead forced (as Einstein was) to use the physical oscillatory behaviors of matter and energy -- of clocks -- as your only available definition of time. And as physical objects, they are of course fully subject to the rules of special relativity.

        Given your impressive computer background, I suspect that you may already be thinking along these lines, and are just being cautious in how you present such an idea to a physics audience. But even if that is true, there is a very lively subset of physics that like this idea already, so you would not be alone.

        Also, if you define your universal time as external to our universe, folks who like the beauty and incredibly good experimental validation of every aspect of SR would breathe a lot easier when they read what you are saying. The SR concept of time remains just as Einstein defined it, using only physical clocks, so none of that is impacted. Instead, you would be introducing a new concept, an external, perfect, and truly universal time -- the clock of the simulator in which all these other clocks are running as simulations.

        So, just a thought and an idea for presenting your ideas in a way that might (?) help you get a bit more traction.

        I'm getting to your comments on my own essay thread, BTW, though likely not this evening. You do bring up lots of interesting points!

        Cheers,

        Terry

        Dear Terry,

        Thanks for your response. I think you're beginning to see how valuable FQXi comments are. For example, I learned from your response to Noson Yanofsky, in the comment that follows mine. The essays have a nine page limit, but there is no limit to how much information we can exchange in the comments!

        My dissertation, "The Automatic Theory of Physics" was based on the fact that any axiomatic formulation of physics (including special relativity) can be reformulated as an automaton. So I appreciate your suggestion that "universal time" is the 'external' trigger to the state sequencer that yields the (simulated) universe. However the theme of "Universe as a simulation" shows up every so often in these contests, and I always argue against it. My point, repeated in my comment above, is that "physics", our models of reality, are based on projection of mathematical structure onto physical reality - the more economical the better, whether Kolmogorov or Occam's razor. But I do not believe these structures are actually replicated in reality. Rather, I believe that the root of our current problems is based on structures imposed early, and now accepted as gospel. Clearly GR and QM are correct, so far as they go, but I believe they can be physically reinterpreted (retaining almost all mathematical structure, since it works) and a better theory would result.

        Your major point, if I'm reading you correctly, is that we don't measure time per se. We measure duration, based on imperfect clocks. Einstein imagined perfect clocks, and distributed them profusely, but they don't exist, and his space-time symmetry leads to nonsense that is not supported by reality. You mention experimental validation of "every aspect of SR", but reference 10 in my essay argues that length contraction has never been measured. And the space-time symmetry of SR is asymmetric in the Global Positioning System. So I question this "every aspect".

        Nevertheless, your informative comment gives me more to think about and can only improve my approach.

        Best regards,

        Edwin Eugene Klingman

        Dear Terry

        Thank you for your feedback on my essay.

        I will use a similar marking system to that used by you.

        What I liked:

        Easy to read. Well set out. Your core idea came through well.

        What I thought about as I read it:

        The initial idea seemed quite a lot like Occam's razor, shifted from a philosophical stance to a mathematical stance.

        Einstein's derivation of E=mc^2 was moderately complex at certain points, and each step re-expressed the previous one using more letters, yet each says the same thing, so each must express the same level of fundamentality. The difference lies only in he or she who reads it. Fundamentality would then be in the eye of the beholder?

        There is an implicit assumption in your example of replacing a gigabyte file. Say I write this program, but before it ends, I pull the plug! The program is time dependent while the gigabyte file is more space dependent. This was a cool idea, but I don't think the file and the system are equivalent, only potentially equivalent. Also, reproducing the original file by a computer, even a perfect computer, would require an expenditure of energy under Landauer's 'information is physical' correlation, making the file more costly to regenerate than simply keeping the original. I mention this because it hints at hidden aspects that make some things seem more fundamental than they really are.

        I thought the reference to pi was quite fun. It occurred to me that, given that pi is irrational (and conjectured to be normal), if one had some way of referencing a point in its decimal expansion, then every finite number would be expressed within it, and the larger the number, the more efficient it would be to specify the start and end point? I see this would likely have some limit due to a trade-off, but it is fun to think about. I suppose the same would apply to any similarly rich irrational number.

        You say, 'the role of science is first to identify such pre-existing structures and behaviors, and then to document them in sufficient detail to understand and predict how they work.' For philosophical reasons, there is a well-known (Hume, Kant, Popper) strong epistemic schism between science and reality. We can never identify, from an empirical standpoint, pre-existing structures. We can only guess at them from hints provided by experiment. But you probably know this already. My essays were written exactly to span the gap from a rationalist perspective.

        On the Spekkens Principle. I haven't read that essay, but the terminology suggests his work echoes that of Rafael Sorkin and his Causal Set Theory, only interpreted in an informational-universe context (I think). It is exactly the dynamics, the ultimate cause, that my work addresses.

        On your 'Challenge 3'. I am a bit surprised that you didn't mention, in your comments on my essay, my argument for an increasing baryon mass (whether due to intrinsic change of the baryon or extrinsic change due to the 'shape of space', for which my model is not yet sufficiently advanced), which would answer at least one part of this challenge, namely why the ratio of electron to proton mass is as it is.

        'The Standard Model qualifies overall as a remarkably compact...framework.' Oi! Are we reading the same model? That said, I see your point that compared to SUSY and string theory and so forth it is relatively compact, but with all those parameters, might they be hiding a very large set of other theories?

        I hope you take the time to read my previous essay in 'It from Bit'. In it there may be a gemstone of simplicity. My whole model comes from a single principle, and that principle is no more than an expression of our idea of equivalence.

        I am providing a short response to your comments on my paper.

        Best wishes,

        Stephen

          Stephen,

          Thank you for your thoughtful and intriguing comments! I will look at your comments on your essay thread. A few quick responses:

          -- Ironically, I'm not a big fan of E=mc^2, since I immensely prefer the energy-complete, Pythagorean-triangle-compatible form:

          [math]E^2=(\vec{p}c)^2+(mc^2)^2[/math]

          E=mc^2 addresses only the very special case of mass at rest, and so is not very useful in any actual physics problem. I also find it deeply amusing that Einstein's original paper in which he announced his mass-equals-energy insight, "Does the Inertia of a Body Depend on Its Energy Content?", uses L in place of E, and for some odd reason never actually gives the equation! Instead, Einstein uses only a sentence to describe an equivalent equation with c on the other side:

          "If a body gives of the energy L in the form of radiation, its mass decreases by L/c2."

          -- Your points about the subtleties of the gigabyte example are correct, very apt, and interesting! In some of my unpublished work I take care to distinguish very carefully between "potentiality" (your word) and "actuality", and not just in physics, but especially in mathematics. The very ease with which our brains take potentials to their limits and then treat them as existing realities is both an interesting efficiency aspect of how capacity-limited biological cognition works, and a warning of the dangers of being sloppy about the difference. Computer science flatly forces one to face some of these realities in ways that bio-brain mathematical abstractions do not. This is why I think it is very healthy for mathematicians to contemplate what happens if they convert their abstract equations into software.

          In any case, issues such as the ones you mention are where the real fun begins! While a short essay is great for introducing a new landscape to a broader audience, there is a lot more going on underneath the hood, as you have just pointed out with your comments on both Einstein's mass equation and my gigabyte file example. An essay is at its best more like a billboard enticing viewers to visit new land, including at most a broad, glossy, overly simplified view of what that new land looks like. The real fun does not begin until you start chugging your vehicle of choice over all the &%$^# potholes and crevices that didn't show up in that enticing broad overview!

          -- I would note that while in principle it may be possible to find any random sequence of numbers somewhere in pi (wow, that would be an interesting proof or disproof...), there is a representation cost for the indices themselves that must also be taken into account. A full analysis of the potential of pi or other irrational numbers as data compression mechanisms would be mathematically fun, and who knows, might even end up pointing to some subset of methods with actual compression value. The latter seems unlikely to me, though, since the computation cost is likely to get huge pretty quickly.
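          As a rough illustration of that representation cost (a sketch in Python using the mpmath library; the target digit strings are arbitrary), note that for a random n-digit target the first match in pi typically appears at an offset of order 10^n, so writing down the offset costs about as many digits as just writing the target:

          from mpmath import mp

          mp.dps = 100_000                          # work with 100,000 digits of pi
          pi_digits = str(mp.pi).replace(".", "")   # "31415926..." as one digit string

          for target in ["14159", "99999", "271828"]:
              idx = pi_digits.find(target)
              if idx < 0:
                  print(target, "not found in the first", mp.dps, "digits")
              else:
                  print(target, "found at offset", idx, "- the offset takes",
                        len(str(idx)), "digits vs", len(target), "for the target")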

          -- As an everyday passionate Hume in being, I Kant speak all that knowledgeably about the biological structures behind how our minds perceive reality. But I try hard to avoid the subconscious assumptions of truth that Popper up so often in physics and even math, when all one can really do is prove that at least some of these assumptions are false. Thus when writing for a broad audience, I try hard to make it easier for the reader to follow an argument and stay focused on it by simply explaining any necessary philosophical point "in line" and as succinctly as possible. The less satisfying alternative would be to give them a hyperlink to some huge remote body of literature that would then require a lot of reading on their part before they could even get back to the original argument... which by that point they have likely forgotten... :)

          -- I am also surprised that I did not mention your baryon mass idea in my comment, since I certainly was thinking about it when I wrote the comment! I guess I was just more focused on the experimental constraints on the idea?

          -- On the Standard Model being "relatively compact", that is, say, in comparison to string theory. But oi indeed, my emphasis was very definitely on the word "relatively"! The Standard Model as it stands is a huge, juicy, glaringly obvious target for some serious Occam outtakes and Kolmogorov compressions.

          By the way, as someone who is deeply convinced that space, a very complex entity if you think about it, is emergent from simpler principles, I found your constructive approach to it interesting.

          I will try to read It from Bit soon.

          Cheers,

          Terry

          Edwin Eugene Klingman,

          I am delighted and more than a little amused at how badly I misunderstood your intent! I would have bet that your answer was going to be "yes, I was just being subtle about simulation"... and I was so wrong!

          I'll look more closely to figure out why I got that so wrong. I may even look up your thesis, but no guarantee on that -- theses tend to be long in most cases!

          I downloaded your ref [10] and definitely look forward to looking at that one! I would say immediately that a lot of particle and especially atomic nuclei folks would vehemently disagree, since e.g. things like flattening have to be taken into account when trying to merge nuclei to create new elements. But that's not the same as me having a specific reference at hand, as you do here.

          So: More later on that point. Thanks for an intriguing reference in any case!

          Cheers,

          Terry

          Dear Terry,

          With great interest I read your essay, which of course is worthy of the highest praise.

          I found your questions in Kadin's forum thread; they are much more interesting and relevant than the FQXi questions.

          My opinion on these issues:

          (1) Entanglement is the only remote mechanism in the Universe for forming the force of interaction between elements of matter; it is realized as a result of the interaction of de Broglie toroidal gravitational waves at common frequencies of parametric resonance.

          This quantum mechanism of gravity is shown in photos of phenomena observed in outer space in my 2017 essay, "The reason of self-organization systems of matter is quantum parametric resonance and the formation of solitons" (https://fqxi.org/community/forum/topic/2806).

          For example, a molecule is a state of entanglement (interaction) of atoms at common resonant frequencies of the de Broglie toroidal gravitational wave complex (including tachyon waves) belonging to different levels of matter.

          (2) The full fundamental fermion zoo is described by simple similarity relations of the fractal structure of matter, on the basis of the parameters of the electron and the laws of conservation of angular momentum and energy.

          Fermions at different levels of matter are neutrinos for each other. All this is given in my 2018 essay, ""Fundamental" means the underlying principles, laws, essence, structure, constants and properties of matter" (https://fqxi.org/community/forum/topic/3080).

          Also given are the ratios for the deterministic grids of all the main resonance frequencies of the zoo of toroidal gravitational waves (fundamental fermions), and comparisons are made with known observed resonant frequencies.

          (3) Recreating GR's predictive power is possible only after understanding the existence of potential stability pits in all fundamental interactions, including the strong interaction.

          After such an understanding, the paradox of electrodynamics (why the orbital electron does not radiate) is easily and logically resolved.

          Potential stability pits (de Broglie toroidal gravitational waves, orbital solitons) are formed due to quantum parametric resonance in the medium of a physical vacuum.

          With an understanding of these potential pits comes an understanding of inertia and mass.

          (4) Clarifying waves vs. superposed states: this is the result of the interaction of de Broglie toroidal gravitational waves (fundamental fermions); it can be determined by solving classical quantum parametric resonance problems, for example using the Mathieu equations (as in radio engineering).

          The solutions of these equations can be represented as a Fourier series, which is actually a set of real toroidal gravitational waves interacting (entangled) in a system on deterministic grids of a set of resonance frequencies.

          I'm sorry to say that, in my view, the quoted idea «how very much like space curvature could create such observed effects» is wrong.

          Instead of the curvature of space-time, there is a derivative of the spatial coordinates with respect to time. The equivalent of the "curvature of space" is the speed of propagation of the gravitational interaction.

          I hope that my modest achievements can give you food for thought.

          Vladimir Fedorov

          https://fqxi.org/community/forum/topic/3080

            Vladimir,

            Thank you for your kind remarks and comments. I will take a look at your essay sometime today, Friday Feb 16.

            Cheers,

            Terry

            Hi Edwin Eugene Klingman,

            Let's get to the main point: You surely realize that the Curt Renshaw paper (your ref 10) contains no data whatsoever disproving special relativity? I assume you do, since you worded your description of the paper as "arguing" that SR length contraction does not exist, versus saying that the paper actually provides data contradicting SR.

            The Renshaw paper instead only asserts that when the NASA Space Interferometry Mission (SIM) satellite is launched in 2005 (it is an old paper), it will disprove SR, because the author says it will:

            "The author has demonstrated in several previous papers that the Lorentz length contraction likely does not exist, and, therefore, will not be found by SIM."

            SIM was supposed to be launched in 2005 but was cancelled. Its nominal successor, SIM Lite, was also cancelled. Thus no such data exists, either for or against SR. The title of the paper, "A Direct Test of the Lorentz Length Contraction", is at best misleading, although it could perhaps be generously interpreted as a paper about a proposed test that never happened.

            In sharp contrast to this absence of data, all of the effects of SR, including time dilation, relativistic mass increases, and the squashing of nuclei, are unbelievably well documented by hundreds or thousands of people who use particle accelerators around the world. Particle accelerators easily explore velocities very close to the speed of light, and so can produce extremely strong signals regarding the effects of SR. Shoot, even ordinary vacuum tubes prove SR if you crunch the numbers, since the electrons must accumulate energy under the rules of SR. Denying the existence of this gigantic body of work, engineering, and detailed SR-dependent published papers is possible only by saying "I don't like that enormous body of data, so I will just ignore it."
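            To see how directly even modest voltages test SR, here is a small Python sketch (constants rounded; the voltages are just representative examples). Classically an electron's speed grows without bound as the accelerating voltage increases; under SR it saturates below c:

            from math import sqrt

            ME_C2_EV = 510_999.0   # electron rest energy in eV (approximate)
            C = 299_792_458.0      # speed of light, m/s

            def v_classical(volts):
                # Classical prediction: eV = (1/2) m v^2
                return C * sqrt(2 * volts / ME_C2_EV)

            def v_relativistic(volts):
                # SR prediction: eV = (gamma - 1) m c^2
                gamma = 1 + volts / ME_C2_EV
                return C * sqrt(1 - 1 / gamma**2)

            for volts in (25e3, 100e3, 1e6):   # CRT, X-ray tube, small accelerator
                print(f"{volts:>9.0f} V: classical {v_classical(volts)/C:.3f} c,"
                      f" relativistic {v_relativistic(volts)/C:.3f} c")

            At one million volts the classical formula predicts nearly 2c, which no experiment has ever observed; the relativistic value of roughly 0.94c is what accelerators actually measure.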

            Bottom line: You gave me a data-free reference with a title that fooled me into thinking it had actual data in it. I like you and your willingness to explore alternatives, but I sincerely wish you had not done that. My advice: Go look at the thousands of papers from the particle accelerator community, and stop focusing on a single deceptively titled non-paper (or author, since Renshaw has other papers).

            Sincerely,

            Terry Bollinger

            Terry,

            There is a lot to like in your essay. It gives good guidance for discovery through simplicity, including your 3 challenges, which add a relatively out-of-the-box perspective on simple processes of investigation. We all marvel over Einstein's equation, its simple epiphany of the duality of energy and mass. Euler's identity is intriguing to all, and fermion-boson spin baffling. And if we programmed in our careers, we staggered over the mind-numbing immensity of the mishmash of recursive equations that years of coding piled on. I speak of new approaches and discovery as well in my essay. Fundamental does involve fewer bits, but also new discovery in following a more simplistic thread, as you mention. I rate your essay high on several points. Hope you get a chance to look at mine.

            Jim Hoover

              Thanks Terry,

              There are questions that arise concerning our interests in simplification that are not commonly admitted. For example:

              1. Is simplification 'simply' a means of reducing complexity to a level of understanding that is acceptable (i.e. comfortable) and thereby communicable to others?

              2. Is the search for simplification an acknowledgement that the subject under consideration is beyond the capacity of a person to comprehend in its totality?

              3. Is simplification a means by which one can get connected to people operating at a higher (or lower) level of consciousness?

              4. If simplification is assumed to promote a common cause, the purpose of which is to unite one's interests with those of others, at what point does the process of simplification become too simple and thereby confuse rather than clarify issues?

              5. Is the FQXi question so simple that it stimulates multiple lines of enquiry rather than serving to unite people in a common understanding?

              At issue is how many people are reasonably expected to benefit from any process of simplification. If that family is limited to professional physicists, mathematicians, or people that happen to speak a particular 'foreign' language, then is the quest for simplification really justified?

              Does being 'more fundamental in the sense of having the deepest insights' really contribute to understanding, or was Einstein the only person that truly understood what he was saying at the time?

              Thank you Terry for inviting us along your chosen path. You carry my best wishes.

              Gary.

                Jim,

                Thank you for your positive and thoughtful remarks! I look forward to seeing your essay, and will download a copy of it shortly.

                Cheers,

                Terry