Essay Abstract

Relativity was a huge blow to predictability. Time became dependent on velocity, and an absolute zero for velocity is hard to find. Everything is moving relative to something else, and our energy based universe lost its foundation. Energy is inverse time (E=hv), but there are many conflicting theories regarding time. Confidence in science is also being shaken by difficulties in cosmology regarding dark matter and dark energy. Relativity, of course, is correct, but the author's mass model of the proton provides new insight into how nature creates and accounts for energy. In the model, energy is created through an information based separation process. Mass plus kinetic energy is equal and opposite to field energy (E-E=0). No one can doubt the famous Einstein relativity equation E^2=(MC^2)^2+(PC)^2. But the model adds another equality: the right hand side equals the total field energy squared, E^2. A component of the constant right hand side field energy allows the gravitational constant to be calculated and appears to define space and time. This suggests that time throughout the universe is everywhere the same. But relativity is proven and accepted universally. This conflict allows us to learn something new about how nature accounts for energy. There is no doubt that space is curved and orbiting bodies follow the curvature, but mass alone is the primary effect. The famous light and train thought experiment, gravitational time shift, flat galaxy rotation curves, black holes, gravitational lensing and Hubble's constant are discussed.

Author Bio

Independent Researcher; retired from Eastman Kodak (Research Council); work experience in the aerospace and nuclear industries.


14 days later

Hi Gene Barbee,

I don't really understand how you extract the Info from your model that you've been developing for a decade, but you say several things that we agree on. And we've picked the same topic this year.

For example, you say: "this suggests that time throughout the universe is everywhere the same."

That is the conclusion my essay reached. It is the (3+1)-ontology: 3 space dimensions and one universal time dimension. The relativity model is 4D-ontology where space and time are 'mixed' and time is not the same throughout the universe.

Also, in discussing the 'escape velocity of light', I believe that your explanation that "the light follows the curvature of the event horizon" is essentially correct.

I also like one measure of energy that you presented: "enough energy to get out of my chair and go for a run." That's an energy scale everyone can relate to, or could, before we got arrested for going for a run!

I invite you to read my essay and comment.

Best regards,

Edwin Eugene Klingman

8 days later

Hi Gene,

Well focussed essay. Well done. Like Edwin, I commend you particularly for seeing through the variable time 'space/time' description. I recently did some trips with an atomic oscillator, East-West and round the world, confirming Hafele & Keating's findings, originally announced as NOT supporting relativity, though the faithful soon swooped in to get the paper in line!

Of course, changing the rate at which the metronomic machines we call 'clocks' tick, and an atom oscillates, which is easy with ALL acceleration, does NOT change time itself, only the apparent periods of signals emitted from same.

Well done.

I think and hope you'll like mine again this year, on even more foundational issues.

Very best

Peter

Requirements for a Unified Theory (including a proposal)

Gene H. Barbee, April 11, 2020

Abstract

It seems reasonable to propose "strawman" requirements for a Unified Theory to determine if researchers are working toward a common vision.

It seems that many physicists are wondering if a unified theory will converge using ideas of the 20th century. Some are working toward a neo-classical reformulation of physics. Others are questioning QM and some aspects of special relativity. Cosmologists use general relativity but are frustrated with attempts to reconcile large scale gravity with small scale physics. There are unresolved problems related to dark matter and dark energy.

Proposed set of requirements for a unified theory

Data: The Particle Data Group (maintained at Lawrence Berkeley National Laboratory) and NIST (National Institute of Standards and Technology).

Discussion

It is suggested that items near the bottom of the above list are important but may not represent mainstream physics concerns. For example, why should a unified theory support evolution of life? Consider this: We exist and if a physics theory is inconsistent or even agnostic regarding life, it doesn't unite us with nature. We need a unified theory that provides a plausible beginning consistent with our existence.

Another perhaps controversial requirement has to do with "a plausible cause and purpose of nature". Science is the source of answers regarding difficult questions and society needs a firm philosophical foundation. Everyone seeks meaning and if science is not going to provide it, society will seek meaning elsewhere.

The hard yards (agreement with measurements)

The "softer" requirements above are the low hurdles. Once a theory clears these hurdles, there are "hard yards" ahead. Even one disagreement between prediction and measurement can discredit a theory. Features of nature have been measured for centuries. Instruments have become more sophisticated and more accurate. The Particle Data Group and NIST listings are impressive and voluminous. The mass of the neutron, proton and electron have been measured to within 1e-6 MeV. When the neutron decays, its half-life is 881 seconds and results in the proton, electron and anti-electron neutrino. Nature consists of the neutron, proton and electron. If a theory does not help understand what we are made of it will ultimately be rejected.

Beyond that, high energy experiments have been carried out at labs throughout the world characterizing mesons and baryons. They produce data consistent with particles/energies known as bosons, like the Higgs, W+ and W-, etc. Theories exist regarding why the particles appear and how they decay. A unified theory must account for them.

The particles that appear at high energy are known as the "particle zoo". A standard model has been assembled over many years that categorizes the particles into families according to their properties, energies and how they decay. Dr. Richard Feynman and more recently Dr. Frank Wilczek published extensively about these "entities", including their high energy families. An understandable unified theory needs to explain them.

Some of the entities in the standard model are quarks and gluons. Gluons are force carrying entities and part of the physics of the four interactions. There are three quarks inside each neutron and proton, but they are not individually observed. The four interactions have been characterized by the Dirac equations. Very basic stuff, perhaps a bit complicated. Why are there four? Are your "hard yard" numbers the same as the Particle Data Group/NIST data?

Time and Space

We assume that there is time and space containing light, protons and neutrons. A unified theory must explain exactly what time and space are. Beware, some will say that time is block time. In this view, we move around in four dimensions that pre-exist. Furthermore, some want to banish time. We need someone we believe, like Albert Einstein. His E=mC^2 reputation is solid, right? Respectfully, his special relativity theories work, but according to some they don't make sense, and I believe that time needs to be uniform. Why can't we add velocities near the speed of light? How can anything like a proton have the same energy as another proton without a fundamental time standard? But it is all relative in AE's view, and every moving particle has its own time frame. If the unified theory doesn't understand time, space falls in the same category. Everyone agrees that time and space are related by a constant.

Large scale gravity--small scale QM

Planck found a relationship between h, C and G at a scale of about 1e-35 meters. That is a surprisingly low scale. Anything to do with gravity becomes very uncertain if you scale it up. Do we have to deal with quantum foam? Some physicists say, "we won't try to unite physics, we will just deal with physics that has different characteristics at different scales". Others will question whether that is a unified theory.

Binding Energy

The "water drop" model, is a theory of the binding energy curve. It is not accurate and physicists have searched for a quantum mechanical model. Why is the limit release 10.15 MeV?

Quantum Mechanical Confusion

Physicists have been learning, teaching and discussing the items above for a century. Even the founders of the theories were confused. Einstein never accepted some of it even though Bohr, Heisenberg and others convinced us it was right. Many said it would never make sense but it didn't deter us. We discussed it ad infinitum. There are now very smart people with very different thoughts about it. This creates chaos for new theories. A unified theory will need to resolve the confusion.

Astronomical observations

Sophisticated equipment supported by international budgets and impressive consortiums is producing amazing observations. But interpretation of the observations is a problem. Expensive satellites called WMAP and PLANCK measured the cosmic microwave background (CMB). The data analyses agreed and supported earlier work by Peebles and others. But they also agreed that about 95% of the mass-energy in the universe is missing. If you throw rocks into the air (a model of the big bang), the rocks will slow down... oops, they might be speeding up. Of course we all know how to measure the velocity of mass moving around a central body: we just use redshift. Oops again, stars don't appear to obey Newtonian gravity. A unified theory must stand up against those that back-calculate dark matter to explain the discrepancy. There are devotees of Einstein's cosmological constant who will want to keep looking for dark energy. Their research budgets depend on no one agreeing on the real cause.

Nevertheless

I know all of you love a challenge. I am no different, but somewhat disadvantaged: I am an intrepid Colorado State University Mechanical Engineer. (I didn't know this was nearly impossible when I became interested 30 years ago, but I want to help.)

The hard yards

Using data from the Particle Data Group (PDG) and other sources, reverse engineering and information theory, I found a pattern in the left hand column below when I defined N=ln(E/e0), where e0 is 2.02e-5 MeV (equivalently, E=e0*exp(N)). The quarks form a series with the repeating suffix 0.4319 = 1/3+0.0986. I found that 0.0986 equals ln(3)-1 and later understood why. The electron N was 10+1/3-2*0.0986 = 10.136, giving mass E=e0*exp(10.136) = 0.511 MeV. Dr. Klingman pointed out several years ago that E=e0*exp(N) was unknown to physicists. I suspected that nature might be information based but knew that E=e0*exp(N) works; I had used it for 30 years and it never failed to correlate data. I now justify this by defining e0/E as a probability p, where p=1/exp(N). This makes N information by Shannon's definition, N=-ln(p).
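A minimal Python sketch of this relation, using only the constants quoted above (the electron N and the 0.622 MeV quark value are taken from the text):

```python
import math

e0 = 2.02e-5  # MeV, the model's base energy quantum

def energy(N):
    """E = e0*exp(N); probability p = e0/E = 1/exp(N), so N = -ln(p)."""
    return e0 * math.exp(N)

def info(E):
    """N = ln(E/e0), Shannon information for probability p = e0/E."""
    return math.log(E / e0)

unit = math.log(3) - 1                # 0.0986, the basic information value
N_electron = 10 + 1/3 - 2 * unit      # 10.136
print(energy(N_electron))             # ~0.511 MeV, the electron mass
print(energy(10 + 1/3))               # ~0.622 MeV, the quark series value
print(info(0.511))                    # recovers ~10.136
```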

I looked for a relationship between high energy bosons and lower energy quarks. I found that N consisted of base 10 numbers, the value 1/3 and a basic value ln(3)-1 = 0.0986. To date these are the only information quanta. Information components used to construct a mass model of the neutron add to N=90.0986. This means that its components are very improbable: p=1/exp(90). The neutron mass in the model decays to the proton mass by ejecting an electron and anti-electron neutrino. The same e0 value allows both nucleons and the electron mass to be calculated. The table below is a comparison with NIST and PDG published values. The models are in the appendix, and one shows the quarks that have decayed to their PDG measured values (Down 4.36 MeV and two Ups at 2.49 MeV). The models are within experimental error.

I started to think about nature as two levels:

Level 1---information level (N= - ln p).

Level 2---reality level computed from Schrodinger's equation. P=1=exp(iEt/H)*exp(-iEt/H).

The Atomic binding energy curve

I found a very important value in the proton model: the strong residual kinetic energy, 10.15 MeV, which changes as atoms fuse. This led to work on the binding energy curve.

Barbee, Gene H., A Simple Model of Atomic Binding Energy, vixra:1307.0102, revised Feb 2014. Reference spreadsheet atom.xls.

Binding energy release per proton & neutron = 10.15*exp(-2/number of protons) + 10.15*exp(-2/number of neutrons). Electrostatic repulsion causes the atom to retain some energy, and there were two smaller effects. NIST binding energy data was matched within an average of 0.0012 MeV.
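A minimal sketch of the two-term release formula as quoted; the electrostatic retention and the two smaller effects from the reference are omitted, so this shows only the leading behavior:

```python
import math

def release(Z, N):
    """Binding energy release per proton & neutron (MeV), as quoted:
    10.15*exp(-2/Z) + 10.15*exp(-2/N)."""
    return 10.15 * math.exp(-2.0 / Z) + 10.15 * math.exp(-2.0 / N)

# Both terms saturate toward 10.15 MeV (20.3 MeV total) for heavy nuclei.
for Z, N in [(2, 2), (8, 8), (26, 30)]:  # He-4, O-16, Fe-56
    print(Z, N, round(release(Z, N), 2))
```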

The above work strongly supports the value 10.15 MeV as the residual strong energy. The binding energy model is a simple probability model following the same form as the proton model (see Appendix).

I extended this theory in: Barbee, Gene H., Semi-Fundamental Abundance of the Elements, vixra:1308.0009, revised June 2014. The binding energy model provided constants for a probability based fusion model (I haven't looked at it for years).

Number of neutrons in nature

Based on the neutron model, the components of mass plus kinetic energy add to N=90.0986. I used N=90 in early work and haven't resolved the 0.0986 difference. With P=1/exp(90) and equally improbable field energy components, the probability of the neutron is 1/exp(180), since probabilities multiply. If P=1, there are exp(180) neutrons in nature. These are apparently placed outside of each other to prevent nature from occurring as one large superposition. Is this the origin of the Pauli exclusion principle? The value exp(180) agrees with estimates of critical density, but P=1 is difficult to accept. Does this mean there is one neutron expressed as exp(180) low probability duplicates throughout nature? If so, this pushes us further toward an information based universe. I consider it a system but know this is difficult to accept.
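For scale, a quick check of the implied count (an illustration, not part of the derivation):

```python
import math

# P = 1 with per-neutron probability 1/exp(180) implies exp(180) neutrons.
print(math.exp(180))  # ~1.5e78, the order of magnitude of baryon counts
                      # estimated from critical density
```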

What is energy?

If we prefer a unified theory that starts from zero, we avoid the question "where did that come from?" The neutron model is based on separating zero into two equal and opposite energies, one for mass plus kinetic energy and the other for field energy (0=E-E), suggesting that the beginning was an energy separation process based on "unknown" information processes. The other starting point I used was P=1. I discovered that information for energy (N) is hidden in a series of P=1's. The Schrodinger equation is a linear equation. It allows superposition, and the neutron is represented by P=1*1*1*1. The four 1's are a set of probabilities. Furthermore, each 1 is a sub-set of probabilities I call a quad. The sub-probabilities multiply to 1 for each quad (example: P=1=exp(-15.43)*exp(-12.43)/(exp(-17.43)*exp(-10.43)) and energy zero: (101.95+5.08+646.96)-(753.23+0.69)=0). This is the top box in the neutron model in the appendix. I believe it represents a quark. Each box in the quad diagram below has a specific meaning. Below the meaning is the operation that yields E-E=0 for that quad and P=1 for that quad. These are uniform for P=1*1*1*1, but quad 4 has two forms, one for the neutron and one for the proton.

Overall E1+E2+(E3+E4-E1-E2)-E3-E4=0.

I call energy zero and probability 1 constraints. With these constraints the Schrodinger equation is relativistic.
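A quick check of the quad bookkeeping with the numbers quoted above; the small energy remainder reflects rounding in the quoted values:

```python
import math

N_mass_ke = [15.43, 12.43]   # mass + kinetic energy components
N_fields  = [17.43, 10.43]   # field components

P = math.exp(-sum(N_mass_ke)) / math.exp(-sum(N_fields))
print(P)  # exactly 1: the N values in numerator and denominator sum equally

E_mass_ke = 101.95 + 5.08 + 646.96   # MeV
E_fields  = 753.23 + 0.69            # MeV
print(E_mass_ke - E_fields)          # ~0 (about 0.07 MeV of rounding)
```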

This model has been documented numerous times, starting in 1998 with a "worst seller" book entitled Pattern Physics. More recently: Barbee, Gene H., How Nature Computes, Academia.edu, March 2019; also Prespacetime Journal, Vol 10, No 3 (2019). The model has been revised as I learned more.

Quarks

The table below is from: Barbee, Gene H., Schrodinger Fundamentals for Mesons and Baryons, October 2017, https://vixra.org/abs/1710.0306 (I think I updated it later).

This is an excerpt from the document above:

The diagram below shows the relationships between quark mass, kinetic energy and their field energy. The sum is zero for each line considering mass + kinetic energy as positive and the fields as negative. The mass column is from my work but within Particle Data Group data error. The Strange mass (101.95 MeV) agrees exactly with the PDG value.

The Up quark is 4*0.622=2.49 MeV (the value 1.87 = 3*0.622 MeV and 1.87=e0*exp(11.432)). The 1.87 MeV quark is not inside the nucleons but is important for the decay of mesons and baryons. Increasing the accelerator power activates additional quanta. Each quark consists of mass and kinetic energy "quanta" that originate in the proton model. The quanta are differences between quark energies but can be combined. The quanta are 651.3, 88.15, 11.9 and 0.622 MeV. The Down quark (4.36 MeV) found in the proton and neutron models decays from the Strange quark at 101.95 MeV while conserving mass+kinetic energy. The Up quark (2.49 MeV) decays in a similar manner. Each quark has an N value for its mass and a different N for its field energy.

Meson and baryon masses and decay times

This falls in the category of very hard yards.

The above document showed that all mesons and baryons can be modeled in a similar manner. They use N values from the neutron model plus the N series for higher energy quarks. The document correlates all of their masses, properties (spin, parity, etc.) and decay times. I used the concept of "tunnelling" to explain the vast array of energies measured for mesons and baryons that contain the same quarks. The blue line below is the quark mass + its ke and the red line below is the total measured mass.

The difference can be modeled by the quanta mentioned above, but I couldn't find a pattern. Masses could be matched but the matches are admittedly empirical.

When all the properties were correlated, the result indicated that all properties are separations from zero, consistent with the possibility that zero was a starting point. An example for the proton and electron is included in the appendix. The topic entitled "Time, space and gravity" below identifies fundamental time as 1.47e-21 seconds. The half time for decay = 1.47e-21 seconds*exp(Nsum). For example, mesons have two quarks. The total N for the mass of the two quarks might be 13.43+15.43 while their fields have total N=15.43+17.43, so Nmass-Nfields=-4. The decay time for this quark combination is 1.47e-21*exp(-4) = 2.69e-23 seconds. Decays are very simple but only accurate to about +/-20%. About half of the meson and baryon decays have not been measured within +/-20%.
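A minimal sketch of the decay rule with the meson example above:

```python
import math

T0 = 1.47e-21  # s, fundamental time from the model

def half_time(N_mass, N_fields):
    """Half time for decay = T0*exp(Nsum), Nsum = sum(Nmass) - sum(Nfields)."""
    return T0 * math.exp(sum(N_mass) - sum(N_fields))

# Two-quark meson example: mass N's 13.43 + 15.43, field N's 15.43 + 17.43,
# so Nsum = -4.
print(half_time([13.43, 15.43], [15.43, 17.43]))  # ~2.69e-23 s
```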

The quark mass plus kinetic energy values in the neutron model transition to their lowest mass state while conserving mass plus kinetic energy. They do so based on quanta from the neutron model. I found it significant that the Particle Data Group values for the Up and Down quarks are simple multiples of the energy value 0.622 MeV (N=10+1/3): Up=4*0.622=2.49 MeV, Down=7*0.622=4.36 MeV. Mesons and baryons (except protons) often decay from quarks along a path that goes through 1.87 MeV. This is the quark associated with N=11.432. From there they become values associated with 1.87 -> 3*0.622 MeV. Final decays involve electron pairs, electrons, neutrinos, neutrino pairs, gamma rays and combinations.

Consistency with Standard Model

Barbee, Gene H., How Nature Computes, Academia.edu, March 2019. Also, Prespacetime Journal, Vol 10, No 3 (2019).

Standard model [Appendix 1 of Reference]: The above table is from https://www.penguinrandomhouse.com/books/312435/a-beautiful-question-by-frank-wilczek/9780143109365; entities 1, 2 and 3 are Up quarks and 4, 5 and 6 are Down quarks.

The corresponding proton model entities with the same numbers are rearranged for comparison with the Standard Model. Each entity is an energy 0, probability 1 construct (the mass + kinetic energy minus the two fields = zero). Each energy has an N value and p=1/exp(N). Overall P = p*p/(p*p) = 1. This means they are entities independent of the proton as a whole. This becomes important when analyzing mesons and baryons, since they are combinations of entities from the models.

I did the hard yards based on Dr. Wilczek's characterization of the Standard Model. The proton model, including my study of quarks discussed above, is consistent with the Standard Model.

The four interactions of nature

Values from the proton model after the quarks transition to 4.36, 2.49 and 2.49 MeV are shown below.

Physicists publish values for force carrying bosons, coupling constants and many other constants such as the electromagnetic field energy, the electric constant, etc. Some data are shown below compared with values from the proton model.

I was able to compare data for the weak and electromagnetic interactions with the proton model, but I found no data that I could understand in the PDG articles for the strong force. Gravity is discussed below.

Time, space and gravity

The proton model provides a field energy value of 2.801 MeV. Et/H=1 determines the radius associated with this field energy: R=hC/E (h is hbar), and the value is 7.045e-14 meters. I also noted that the kinetic energy is related to E=2.02e-5*exp(12+1/3+(ln(3)-1))=5.08 MeV. If it was somehow related to gravity, I needed to determine the velocity around a small circle of radius 7.045e-14 meters to find the inertial force. It turns out that ke=2*5.08=10.15 MeV (V=4.3e7 meters/sec) defines the inertial force. This led to:

This result is very close to the measured gravitational constant. But it depends on the scaling factor 1/exp(90). I suspected that the radius 7.045e-14 meters associated with the energy 2.801 MeV was related to gravity. The proton model is based on one proton, and measurements of G can only be performed for large masses. I needed to scale G back to one proton because I knew the field energy values for one proton. The reasoning led to what I call the cellular model of cosmology.
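A minimal sketch of the estimate, assuming the inertial-force balance reads G = V^2*r/(m_p*exp(90)); this reading is an assumption (the original equation appears only as a figure), but it reproduces the stated result:

```python
import math

V = 4.3e7         # m/s, from ke = 2*5.08 = 10.15 MeV
r = 7.045e-14     # m, radius of the field energy circle
m_p = 1.6726e-27  # kg, proton mass

# Inertial force m*V^2/r scaled by 1/exp(90), solved for G.
G = V**2 * r / (m_p * math.exp(90))
print(G)  # ~6.4e-11, close to the measured 6.674e-11 m^3 kg^-1 s^-2
```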

A model with no preferred position places the mass on the surface of a sphere. But it doesn't have to be a large sphere. It can be many small identical spheres that together have the same surface area. The author developed a concept called cellular cosmology that defines space as exp(180) spherical "cells", each with a proton. Over time these cells merge and represent large scale orbits.

Gravitational relationships define geodesics that are surfaces where particles orbit. Equating a large surface area with many small surface areas yields the following relationships:

Area=4*pi*R^2

Area=4*pi*r^2*exp(180)

A/A=1=R^2/(r^2*exp(180))

R^2=r^2*exp(180)

r=R/exp(90) surface area substitution

M=m*exp(180) mass substitution

For gravitation and large space, we consider velocity V, radius R and mass M as the variables (capital letters for large space and lower case r, v and m for cellular space) that determine the geodesic (the curved surface where an orbiting body feels no force). G large space= G cellular space with mass substitution M=m*exp(180) and surface area substitution R=r*exp(90).

This was promising because it supported the scaling factor. I wrote: Barbee, Gene H., On the Source of the Gravitational Constant at the Low Energy Scale, http://www.viXra.org/pdf/1307.0085, revised Sept 2019; Prespacetime Journal Vol. 5 No. 3, March 2014.

Conventional Planck gravitational relationships use a "coupling constant", but it is 1/exp(88) and I thought it was poorly justified, perhaps an empirical constant. I doubted the Planck scale was real and preferred to consider the coupling constant a scaling factor equal to 1/exp(90).

There is another way of calculating G but it involves cosmology.

As the universe expands kinetic energy is converted to potential energy because expansion is resisted by gravity. Most of the original kinetic energy from the proton model has been converted to potential energy. The model's potential energy change (10.15 MeV) gives G but it depends on 1/exp(90) scaling.

If one accepts this origination calculation for G, one can consider that the field energy 2.801 MeV is fundamentally related to space and time. Is r=hC/2.801=7.045e-14 meters a unit radius? Is the time around the circle a unit time? Time=2*pi*r/C=1.47e-21 seconds. This produces the ratio 2*pi*r/time=C. I believe time "ticks" (1.47e-21 seconds) with repeated revolutions at velocity C around the circle r=7.045e-14 meters.
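A minimal sketch of the unit radius and unit time, using hbar*C = 1.973e-13 MeV-m:

```python
import math

hbar_C = 1.97327e-13  # MeV*m
C = 2.998e8           # m/s
E_field = 2.801       # MeV

r = hbar_C / E_field        # ~7.045e-14 m, the unit radius
t = 2 * math.pi * r / C     # ~1.47e-21 s, one revolution at velocity C
print(r, t)
print(2 * math.pi * r / t)  # recovers C
```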

Wave particle duality

Schrodinger equation derivation [MIT unitary solution]:

1=exp(i*1)*exp(-i*1). The imaginary number i is the separation of -1 into two parts (i=(-1)^0.5).

1=exp(i*Et/H)*exp(-i*Et/H), where Et/H=1 means Energy*time/Planck's constant.

The left hand side (LHS) component exp(i*Et/H) and the right hand side (RHS) component exp(-i*Et/H) are complex conjugates, or wave functions. One energy in this equation is in Et/H, where E = mass+kinetic energy (KE). The other energy is in Et/H, where E is the total field energy. This can be represented on a circle where E is the field energy. The y axis is imaginary and the x axis is the radius. With equal and opposite energy, light speed waves travel around this quantum circle in opposite directions and meet at Et/H=1.

The above circle is "non-physical" because it is an Argand diagram. But it has radius R=hC/E. If the reduced Planck constant h (hbar) were zero, it would have no radius. It is well known that h causes us to account for energy in quantum increments. There are two related views of nature: one view is the circle (a wave); the other is the value Et/H=1 (a particle). But if the Schrodinger equation represents P=1 every 1.47e-21 seconds, there is only uncertainty about a particle while its complex conjugate components are in transit around the circle.

Apparently energy has to define itself out of time? Give it a little slack and everything is fine.

Astronomical measurements

Barbee, Gene H., Zero dark matter and zero dark energy, https://www.academia.edu, http://www.viXra.org/pdf/1805.0449 , March 2020.

The proton model becomes especially productive for cosmology. It provides the following:

1. The kinetic energy/proton associated with the big bang (10.15 MeV).

2. The original radius of the universe (exp(60)*7.045e-14 meters).

3. The way cell kinetic energy (KE) changes with time: KE'=10.15*(t/t')^(2/3).

4. The way cell radius changes: r=7.045e-14*(10.15/KE') meters (items 3 and 4 are sketched after this list).

5. The temperature at which primordial nucleosynthesis occurs (0.11 MeV), the kinetic energy difference between 0.622 and 0.511 MeV in the model.

6. The fusion release and the spike temperature that gives the required baryon/photon ratio (consistent with isotopes that are uniformly measured throughout nature and associated with the H -> He4 transition).

7. The mass density at equality and decoupling, leading to consistency with WMAP analysis of CMB.

8. It allows us to calculate the forces involved in late stage expansion. Energy release by stars seems to account for recent flattening of the expansion curve.

Note: I find the WMAP analysis very obscure. I believe they assume priors in their Monte Carlo model that are unnecessary.
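A minimal sketch of items 3 and 4 from the list above, assuming KE' is the current cell kinetic energy and t the reference time in item 3:

```python
def cell_kinetic_energy(t_prime, t_ref, ke0=10.15):
    """Item 3: KE' = 10.15*(t/t')^(2/3) MeV; kinetic energy falls as the
    universe expands."""
    return ke0 * (t_ref / t_prime) ** (2.0 / 3.0)

def cell_radius(ke_prime, r0=7.045e-14, ke0=10.15):
    """Item 4: r = r0*(10.15/KE') meters; the cell radius grows as the
    cell kinetic energy falls."""
    return r0 * ke0 / ke_prime

# When kinetic energy has fallen 1000-fold, the cell radius has grown 1000-fold.
print(cell_radius(10.15e-3))  # ~7.045e-11 m
```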

Observed velocity profiles of galaxies

Reference and excerpts: 2020 FQXI essay entitled "Fundamental time and relativity".

The analysis below is for a galaxy similar to our Milky Way. It has 2e41 Kg mass and a flat rotation curve. The radius 2.58e20 meters is where a proton with velocity 2.27e5 m/sec orbits according to V=(GM/R)^0.5. But luminosity measurements indicate that there is mass in "improper" orbits if redshift (gamma measurement) is interpreted as Newtonian velocity. Many have said that the velocity measurement is correct and that, to justify the orbits observed, "missing mass" must exist in the galaxy.

The analysis below is for a proton falling due to gravitation toward a galaxy of 2e41 Kg. The proton will gain kinetic energy by falling from its expansion determined position toward the galaxy.

The fall starts well above the eventual orbit. The proton has been dominated by expansion and is losing expansion kinetic energy and gaining potential energy. This will be reversed by the 2e41 Kg mass. If the proton falls to the radius where the velocity is V=(GM/R)^0.5, it will orbit there. This was reviewed above when the following equation was introduced:

R = r0*(10.15/KE)*(Mgalaxy/1.67e-27)*(1/exp(90)), where r0=7.045e-14 meters

Again, this is another way of writing the Newtonian relationship R=GM/V^2, with cell radius r=7.045e-14*10.15/KE. The table above contains the values V, KE and cell radius r for this orbit.
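A minimal sketch showing the two forms agree for the quoted galaxy (Newtonian orbit and the cellular form):

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
M = 2e41          # kg, galaxy mass
m_p = 1.67e-27    # kg, proton mass
c = 2.998e8       # m/s
r0 = 7.045e-14    # m, unit cell radius

V = math.sqrt(G * M / 2.58e20)      # ~2.27e5 m/s, Newtonian orbital velocity
KE = 0.5 * 938.272 * (V / c) ** 2   # proton kinetic energy in MeV

# The cellular form reproduces the same radius:
R = r0 * (10.15 / KE) * (M / m_p) / math.exp(90)
print(V, KE, R)  # ~2.27e5 m/s, ~2.7e-4 MeV, ~2.6e20 m
```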

Total energy is diagrammed below: kinetic energy increases as the proton falls along the radial line, giving up potential energy. Kinetic energy + potential energy = 10.15 MeV, but the kinetic energy is around the circle.

The red orbit is in the table above: V=2.27e5 m/sec. With this velocity a proton will follow curvature R=2.6e20 meters. It is a proper (Newtonian) orbit and the curvature is due to the central mass. There will also be mass in the outer circles, and we would expect the Newtonian velocity to decrease with distance from the center (V=(GM/R)^0.5). But measurements indicate that velocity curves around galaxies are almost constant. Our goal is to understand these measurements.

The analysis below does not assume dark matter, nor does it violate Newtonian gravitation. In the table below, the distance from the center of the galaxy to the edge is shown vertically. For the analysis below, we consider potential energy to be zero at the red (inner) radius. The bottom line is the orbit at 2.6e20 meters that everyone agrees should have V=227000 m/sec. The kinetic energy column is for Newtonian orbits. If the potential energy is zero for that orbit, increasing radius increases potential energy. In fact, since there is no friction, potential energy plus kinetic energy is constant. We can assign gamma to energy with the equations gamma1=M/(M+KE) and gamma2=M/(M+PE).

The potential energy is proportional to the radius of the circles, represented by the line to B. Less curvature (toward the outside) means more potential energy and less kinetic energy. We measure Doppler velocity (kinetic energy) along a vector representing velocity around the galaxy. But gamma must be measured from zero.

Gamma1 = t0/t1 and gamma2 = t1/t2. Overall gamma = gamma1*gamma2 = t0/t2.

The potential energy component must be considered. This means that the kinetic energy gamma is a time ratio of a time ratio already modified by potential energy. If we are trying to measure the height of a mountain, we start from sea level. The velocity vector for the other side of the galaxy becomes a blue shift but is measured from the same potential energy time base.

In the graph below, mass has fallen into orbits outside the 2.6e20 meter position. But there are many possible Newtonian orbits. When the proton (star) is in orbit in a distant galaxy, we measure gamma without realizing that there are two components, represented by the top line. The velocity associated with the total gamma is interpreted as a flat velocity curve.
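A minimal sketch of this accounting, assuming Newtonian orbital kinetic energy, potential energy zero at the inner radius with KE+PE constant, and the gamma definitions above; the apparent velocity comes from reading the whole gamma as kinetic:

```python
import math

M = 938.272            # MeV, proton rest mass
c = 2.998e8            # m/s
GM = 6.674e-11 * 2e41  # gravitational parameter of the galaxy
R_in = 2.58e20         # m, inner reference orbit

def ke(R):
    """Newtonian orbital kinetic energy of a proton, in MeV."""
    v = math.sqrt(GM / R)
    return 0.5 * M * (v / c) ** 2

KE_in = ke(R_in)
for R in [R_in, 2 * R_in, 4 * R_in, 8 * R_in]:
    KE = ke(R)
    PE = KE_in - KE                          # PE zero at the inner orbit
    gamma = (M / (M + KE)) * (M / (M + PE))  # gamma1 * gamma2
    KE_app = M / gamma - M                   # read the total gamma as kinetic
    v_app = c * math.sqrt(2 * KE_app / M)
    # Newtonian velocity falls with R; the apparent velocity stays flat.
    print(round(math.sqrt(GM / R)), round(v_app))
```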

Barbee, Gene H., Analysis of Five Galaxies with Flat Rotation Curves, Academia.edu, March 2020.

The observer "problem"

I was pushed further toward an information based universe by work on color vision.

Barbee, Gene H., Information Storage for Life Processes, DNA Decipher Journal, Vol 9, Issue 3, December 2019.

The above article contains my latest thoughts, but I also wrote an FQXI essay on this. It is quite apparent that the brain computes what we perceive as color vision.

The Feynman equation of interest is:

Width of the color peak is associated with differential energies.

For example, width in wavelength = 0.00124 MeV-nanometers/2.02e-5 MeV = 61.23 nm.

Where WL is the input wavelength in nanometers.

Each line of the probability series p*656 multiplies D*M. If the input wavelength listed on the right side of the diagram below matches WL Color, Pf/PF=sin(2DM)^2/(2DM)^2=1. If it does not match, Pf/PF will be a lower value. Just like the proton calculations, the components are added together to produce a system result. In the chart below, the meaningful result is the perception of white light. If the individual spectra are incomplete, the result can be other hues.

This allows us to mathematically simulate color vision with a computer. The peak and off-peak responses are shown below, compared with color vision data for the human eye.

Probabilities neatly represent white light when three colors are added and normalized to unity. The fourth color is scotopic (black and white vision). This value is shown in the calculation table as Sum=peak/4.

Your mental experience of color vision is proof of a link between the Feynman probability ratio Pf/PF and perception. This is important because it shows that quantum mechanical computations are associated with specific meaningful experiences inside the brain. We found probabilities that define energy at the proton level, but we did not know that color vision is a similar system, using the same probability code (N) to modify and store energy (wavelength). The value N=0.0986 is used extensively in our vision system. Probability P=0.906=1/exp(0.0986), and 0.906^n are the modifiers applied to the wavelength 655.9 nm. Our vision system also uses the width 61.2 nm associated with e0=2.02e-5 MeV in the Feynman equation. This is a huge clue regarding nature.
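A minimal sketch of the wavelength series and width, with the numbers quoted above:

```python
import math

N = math.log(3) - 1          # 0.0986, the information unit
p = 1 / math.exp(N)          # 0.906
base = 655.9                 # nm

# Peak wavelengths of the color responses: 655.9*0.906^n, n = 0..4
peaks = [base * p ** n for n in range(5)]
print([round(w, 1) for w in peaks])  # ~656, 594, 539, 488, 442 nm

# Width from the quoted Feynman relation: ~61 nm
print(0.00124 / 2.02e-5)
```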

The diagram below indicates that the brain stores a deficit I call dPE. It responds only to a matching kinetic energy input (light at a specific color). The match fires the Feynman Pf/PF above. The brain automatically adds these responses and we see white light.

This is important because it underlies brain function, learning and DNA evolution. The reference explains this further but I believe that proton-electrons that we adopt allow us to become observer/participants in nature. The expectation is so strong that once the probability fires, it turns off other alternatives (perhaps the reason that measurement is crucial in the double slit experiment).

Consistent with evolution

Another excerpt from "Information for life processes":

The body develops from a single cell (a combination of two cells). The concept of enfoldment indicates that the whole body is represented in the DNA. As cells divide, the replicated DNA "unfolds" to represent the next cell, etc. for the whole body. It unfolds from "Wholeness and Implicate Order" in Bohm's words. DNA is in every cell but only very specific segments are expressed. The process of unfolding depends on "stored" instructions that specify when, which and where to express the associated proteins.

What does "stored" mean? DNA is passed down from generation to generation. Previous copies of it have produced billions of bodies in the past. This is an evolutionary process. "Expressed" for the unfolded body and "stored" for enfolded DNA are different representations of the same thing (level 1 and level 2).

If each cell division contains a new instruction for the next cell, correlated spin coordinate systems (based on the Standard Model) could build small and enlarge via cell division. There do not have to be explicit long range instructions for a large body. Since the body has been built billions of times in the past with relatively minor variations, there is no doubt that a specific set of instructions exists. When cell division occurs, a new instruction is read (from the Hox gene bonds inside DNA). The instruction includes what to express and the orientation based on proton spins. The cell is then precisely placed by the instruction. Each location within the body is manufacturing and folding proteins according to specific DNA instructions. On timed signals, stem cells are manufactured and then differentiated into muscle cells, etc. At first a baby image appears, followed by childhood, etc., as adjacent cell division enlarges the image proportionally.

Exactly how is the information stored? An information core will be described that, when added to a Schrodinger computation, represents the body in time and space. The concept below is that a 4 letter codon represents a specific set of probabilities, each of which is stored by proton-electron bonds. Probability defines the energy in Et/H for the wave-function components of a body system. The probabilities for the wave-function energies are stored in DNA as a level 1 structure (not space and time, as Bohm says). The "explicate or unfolded order" is the level 2 "body in space and time".

Each molecule (A, G, C or T) consists of many proton-electron bonds. The bonds store probability (information) that defines the energy wave-function components of the molecules. They are represented below as a learning system. Combinations of bonds in the molecules that work (match) have been selected evolutionarily. Evolution is the "learning" mode... kind of cruel, but probabilities that work are passed on.

Provides a plausible purpose for nature

The abstract of "Information for Life Processes" contains the following diagram:

Information -> Schrodinger based neutrons that define the laws of nature

Information -> Schrodinger/Feynman based evolution of body/vision/brain

The repetitive use of the information unit N=0.0986 is evidence that the information level probabilities are stored with the same code. This value is part of the information that existed in the beginning. Information for our vision system is stored by probabilities based on p=0.906=1/exp(N)=1/exp(0.0986). It is not a coincidence that our vision system uses exactly N=0.0986 in the energy series E=1.89e-6 MeV/0.906^n, where n=0, 1, 2, 3, or 4. (The wavelength 656 nm is multiplied by 0.906^n.) This means our mind uses N=0.0986, one of the fundamental N values in the neutron computation. The brain must learn and store probabilities that modify energy components in the Feynman absorption equation to create memory and conscious thought. If nature can recreate a body from information, there is no reason that it cannot recreate your brain with high level thought. In fact, many animals have behavioral instincts built in from birth. This is proof that brain function can be recreated from information.

Ongoing creation

Energy was apparently created through separation with overall energy zero and P=1. This appears to have been an information process. Some may prefer to use the respectful generic phrase "Mind of creation". Sub-components of p=1 that I call Fundamental N values (N=- ln p) represent nature through the Schrodinger equation. The neutron and proton are manifestations of the computation. Life processes use information but appear to compute probabilities with the Feynman probability ratio. Memory is stored information associated with electromagnetic shifts modified by probabilities. This allows feedback that increases information in a network. Life processes result in development of the brain and body. It appears that an information level operates behind the reality we observe. Physical nature is represented by equations (computed) and we should prefer not to become too enamored with things. Instead we should be very fond of information because it can re-create living beings that perceive physical nature. Those that are so inclined may believe that our mind operates with Mind, a hopeful but unprovable conjecture. It also appears to me that we are participating in ongoing creation. I believe this provides a plausible purpose.

A question for you:

Ladies and gentlemen. Where do we go from here? I believe this work is a place to start but I respect your opinions. It is especially important that we agree on requirements and goals.

Appendix

Note: N for component 0.671 is 11-6*(ln(3)-1). 90+(ln(3)-1)=90.0986.

N for component 0.740 is 11-5*(ln(3)-1).

The decay of the neutron is shown above. It results in the proton below. Decay is initiated by the mu neutrino 0.671 MeV being ejected from the neutron. Another split occurs: 0=0.622-0.622 MeV (associated with the N split 0=-10.33+10.33). The ke value 0.622 MeV leaves the neutron and becomes the electron 0.511 MeV (N=10.136). The difference 0.622-0.511=0.11 MeV is kinetic energy; it is ejected. Overall: N = P + e + anti-electron neutrino at energy E=e0*exp(0)=2.02e-5 MeV. This leaves the proton in the state below. Note that electromagnetic energy is another split. This positions the proton for fusion re-initiation when energy 0.11 MeV becomes available.

Fundamentals for the binding energy curve:

Properties for the proton and neutron

    6 days later

    I just read your comment to my essay and your comment here.

    I started the publication part of what became the STOE by formulating the general form - Sources (spiral galaxies) following the QSSC model and Sinks (elliptical galaxies) of the stuff of our universe. The "stuff" is hods (smallest matter particles) and plenum (aether, spacetime, vacuum energy, etc.). The stuff emerges to form all other things and observations in the universe. First, I considered the astronomical observations - particularly those the accepted models had difficulty with (redshift and periodic redshift, galaxy rotation curves and asymmetric rotation curves, the Pioneer Anomaly, etc.). Then the diffraction and interference of light following Newton. A "Universal equation" was developed to apply to both realms of observation. Then (the last few years) I considered that there is only one force in the universe - magnetism, which is a divergence of the plenum. Thus, photons form electrons, and atoms are held together by a magnetic force. I think atomic structure is that photons magnetically hold electrons in position. Protons are held together by the electrons' magnetic character. The electric characteristic is a result of magnetic particles moving (a plenum effect, not part of a particle). But the magnetism holds photons in structures of all other particles. Here I'm still thinking. How to have structures of STOE photons yield structures that form the 8-fold way and the group structure? Further, the nuclear forces are considered to be particle structures, as was suggested for atomic structure (no additional forces). This is where you may help. I note many of your papers are about this realm.

    a month later

    Dear Gene Barbee,

    Glad to read your work again.

    I greatly appreciated your work and discussion. I am very glad that you are not thinking in abstract patterns.

    "that time throughout the universe is everywhere the same. But relativity is proven and accepted universally".

    While the discussion lasted, I wrote an article: "Practical guidance on calculating resonant frequencies at four levels of diagnosis and inactivation of COVID-19 coronavirus", due to the high relevance of this topic. The work is based on the practical solution of problems in quantum mechanics, presented in the essay FQXi 2019-2020 "Universal quantum laws of the universe to solve the problems of unsolvability, computability and unpredictability".

    I hope that my modest results of work will provide you with information for thought.

    Warm Regards,

    Vladimir

    Dear Gene,

    I found this paper you wrote most interesting. My research should be of great interest to you, as 18 years ago I developed a working preon model I called gimli theory, which greatly simplifies the understanding of the Standard Model. I can easily calculate the energies of all baryon particles with some simple arithmetic based upon a structural model that tells an important story. I have been able to argue for the removal of the strong and weak nuclear forces, as everything can be explained by magnetic and electric forces only. I can cover all particle interactions with only three quantum numbers, and thus I suggest that the particle database needs revision in many areas (not experimentally).

    However after nearly two decades of research I now firmly believe we cannot divorce particles from their fields; the two need to be taken together.

    Check out my essay; although it doesn't cover this topic, it touches on issues that are pertinent.

    Regards

    Lockie Cresswell

    I wish I could be more generous, Gene...

    This essay contains some good ideas and a little wisdom. You are correct that re-working the description of the proton can have wide benefits in both particle physics and cosmology. And you have some sound reasoning. I think there is good sense to the idea that a neutron might actually be a proton with an electron inside, and the anti-neutrino representing a flip or twist - to get it to fit.

    So much of your good work is obscured by your lack of dexterity with the equation editor, though, or some other means of creating a consistent appearance for all of your Maths in standard notation. Having to parse the abbreviated notation, and to iron out the ambiguities therefrom, makes it harder to read or follow. Admittedly, there are some like Lawrence Crowell who do use standard notation but still force people to take time out to translate or interpret the Math.

    Nice job on reconciling the bubbles of space with Relativity via an 'accounting trick', but to some it will come across as 'epicycles' rather than a legit explanation. So this has to be a mixed review. I'd love to see a reprint of this paper with everything typeset properly. You didn't need to attribute the one equation (from Schwarzschild) to Wiki because it is common to a broader base. But if all of your equations used similar formatting, it would improve things a lot.

    All the Best,

    Jonathan
