"Why does MOND get any predictions right? It has had many a priori predictions come true. Why does this happen?" -- Stacy McGaugh

I say that Milgrom is the Kepler of contemporary cosmology -- on the basis of empirical evidence which now exists. Did the Gravity Probe B science team seriously consider the possible implications of the many empirical successes of MOND?

According to Wikipedia, "Gravity Probe B (GP-B) was a satellite-based mission which launched on 20 April 2004 on a Delta II rocket; its aim was to measure spacetime curvature near Earth, and thereby the stress-energy tensor (which is related to the distribution and the motion of matter in space) in and near Earth. This provided a test of general relativity, gravitomagnetism and related models. The principal investigator was Francis Everitt."

Gravity Probe B, Wikipedia

"... Finally, during a planned 40-day, end-of-mission calibration phase, the team discovered that when the spacecraft was deliberately pointed away from the guide star by a large angle, the misalignment induced much larger torques on the rotors than expected. From this, they inferred that even the very small misalignments that occurred during the science phase of the mission induced torques that were probably several hundred times larger than the designers had estimated.

What ensued during the data analysis phase was worthy of a detective novel. The critical clue came from the calibration tests. Here, they took advantage of residual trapped magnetic flux on the gyroscope. (The designers used superconducting lead shielding to suppress stray fields before they cooled the niobium coated gyroscopes, but no shielding is ever perfect.) This flux adds a periodic modulation to the SQUID output, which the team used to figure out the phase and polhode angle of each rotor throughout the mission. This helped them to figure out that interactions between random patches of electrostatic potential fixed to the surface of each rotor, and similar patches on the inner surface of its spherical housing, were causing the extraneous torques. In principle, the rolling spacecraft should have suppressed these effects, but they were larger than expected. The patch interactions also accounted for the "jumps": they occurred whenever a gyro's slowly decreasing polhode period crossed an integer multiple of the spacecraft roll period. What looked like a jump of the spin direction was actually a spiraling path--known to navigators as a loxodrome. The team was able to account for all these effects in a parameterized model.

The original goal of GP-B was to measure the frame-dragging precession with an accuracy of 1%, but the problems discovered over the course of the mission dashed the initial optimism that this was possible. Although Everitt and his team were able to model the effects of the patches, they had to pay the price of the increase in error that comes from using a model with so many parameters. The experiment uncertainty quoted in the final result--roughly 20% for frame dragging--is almost totally dominated by those errors." -- Clifford M. Will

Viewpoint: Finally, results from Gravity Probe B, 31 May 2011, physics.aps.org

Are the unfortunate "interactions between random patches of electrostatic potential fixed to the surface of each rotor" merely a post-hoc explanation (which has never been confirmed by laboratory experiments on gyroscopes similar to those used by Gravity Probe B)?

I suggest that the 4 ultra-precise gyroscopes functioned correctly -- if dark-matter-compensation-constant really does equal zero then my guess is that the gyroscopes found evidence in favor of MOND-compatible-supersymmetry in the form of MOND-chameleon particles.

"One is left with the uneasy feeling that even if supersymmetry is actually false, as a feature of nature, and that accordingly no supersymmetry partners are ever found by the LHC or by any later more powerful accelerator, then the conclusion that some supersymmetry proponents might come to would not be that supersymmetry is false for the actual particles of nature, but merely that the level of supersymmetry breaking must be greater even that the level reached at that moment, and that a new even more powerful machine would be required to observe it!" -- Roger Penrose

Chapter 1, Mathematical Elegance as a Driving Force, pp. 102-103, "Fashion, Faith, and Fantasy in the New Physics of the Universe", 2016

Fashion, Faith, and Fantasy in the New Physics of the Universe, Wikipedia

Does supersymmetry need to be replaced by MOND-compatible-supersymmetry?

"MOND makes unique predictions about the relation between the mass distribution and kinematics. These predictions are confirmed with surprising accuracy in the observed rotation curves of observed galaxies, a phenomenon which is exceptionally well established observationally (Sanders & McGaugh 2002). I fail to see how this can be any more of an accident than the success of purely Newtonian gravity within the solar system. If dark matter is correct, this should not happen." -- Stacy McGaugh

Why Consider MOND? -- The MOND Pages

I say that my 3 most important ideas are: (1) Milgrom is the Kepler of contemporary cosmology. (2) The Koide formula is essential for understanding the foundations of physics. (3) Lestone's theory of virtual cross sections is essential for understanding the foundations of physics. Are the preceding 3 ideas correct? My guess is that the 3 ideas can be explained both in terms of string theory with the infinite nature hypothesis and in terms of string theory with the finite nature hypothesis. If Wolfram's cosmological automaton really explains the foundations of physics, then it should be possible to embed string theory with the finite nature hypothesis into string theory with the infinite nature hypothesis in various ways.
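For reference, here is a minimal Python check of the original Koide relation mentioned in (2); the approximate charged-lepton masses (in MeV/c^2) are my assumed inputs, not part of the formula itself:

```python
# Minimal check of the standard Koide relation
# Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2,
# which is empirically very close to 2/3.
# Charged-lepton masses in MeV/c^2 (approximate PDG values; assumed inputs).
m_e, m_mu, m_tau = 0.5109989, 105.6583745, 1776.86

Q = (m_e + m_mu + m_tau) / (m_e**0.5 + m_mu**0.5 + m_tau**0.5)**2
print(Q)              # approximately 0.666661
print(abs(Q - 2/3))   # roughly 6e-6
```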

What are the 3 most likely ways that the "64 Particles Hypothesis" might fail? My guess would be that the answer to preceding question is: (1) the existence of magnetic monopoles in free space, (2) the existence of supersymmetry or MOND-compatible-supersymmetry in the form of MOND-chameleon particles, or (3) the existence of Majorana particles in the form of Majorana neutrinos.

Majorana fermion, Wikipedia

How does Wolfram's cosmological automaton make sense in terms of physics? How can Wolfram's cosmological automaton make empirical predictions? One answer might be MOND-compatible-supersymmetry, which can then yield Wolframian pseudo-supersymmetry by means of a limiting process. My guess is that Fredkin-Wolfram information explains the foundations of physics, but the explanation will always be somewhat dubious because such Fredkin-Wolfram information cannot, in principle, be directly measured.

Many of my posts on this thread might have drifted away from the question "How can mindless laws give rise to aims and intentions?" It seems to me that the question involves issues of string theory, the foundations of physics, and artificial intelligence. For information on artificial intelligence, google "bill gates kurzweil". My guess is that string theory crucially depends on the issue of the infinite nature hypothesis versus the finite nature hypothesis. I conjecture that MOND, the Koide formula, and Lestone's heuristic string theory can be fully justified in terms of string theory with the infinite nature hypothesis -- although I still favor the finite nature hypothesis. At vixra.org, I posted the following:

"In his 2007 publication "Physics based calculation of the fine structure constant " J. P. Lestone suggested that "the photon emission and absorption area A of an electron is controlled by a length scale" where the length scale is near the Planck length. What might be some of the implications of Lestone's hypothesis? Renormalization in quantum electrodynamics deals with infinite integrals that arise in perturbation theory. Does Lestone's hypothesis have important implications for renormalization? I conjecture that, EVEN AFTER QUANTUM AVERAGING, Maxwell's equations might be false at the Planck scale, because Lestone's heuristic string theory might be empirically valid. Let ρ represent the electric charge density (charge per unit volume). I conjecture that, in equation (19b) on page 23 of Einstein's "The Meaning of Relativity" (5th edition), ρ should be replaced by the expression ρ/ (1 - (ρ^2 / (ρ(max))^2))^(1/2), where ρ(max) is the maximum of the absolute value of the electric charge density in the physical universe. Polchinski (2003) offered "two general principles of completeness: (1) In any theoretical framework that requires charge to be quantized, there will exist magnetic monopoles. (2) In any fully unified theory, for every gauge field there will exist electric and magnetic sources with the minimum relative Dirac quantum n = 1 (more precisely, the lattice of electric and magnetic charges is maximal)." It seems to me that Polchinski's two general principles are likely to be correct if and only if nature is infinite."

Joseph Polchinski, Wikipedia

Am I correct in my estimation of Milgrom's MOND? Please see:

Triton Station, A Blog About the Science and Sociology of Cosmology and Dark Matter -- Stacy McGaugh

According to a brief history of renormalization by Kerson Huang, "Particle theorists have a peculiar sensitivity to the cutoff, because they regard it as a stigma that expresses an imperfect theory. In the early days of renormalization, when the cutoff was put out of sight by renormalization, some leaped to declare that the cutoff had been "sent to infinity." That, of course, cannot be done by fiat. Only in QCD can one achieve that, owing to asymptotic freedom." (page 13 of the following arXiv.org article)

"A Critical History of Renormalization", 2013, arXiv.org

Kerson Huang, Wikipedia

According to an introduction to renormalization by Ling-Fong Li, "Roughly speaking, the program of removing the infinities from physically measured quantities in relativistic field theory, the renormalization program, involves shuffling all divergences into bare quantities. In other words, we can redefine the unmeasurable quantities to absorb the divergences so that the physically measurable quantities are finite." (Page 2 of the following arXiv.org article)

"Introduction to Renormalization in Field Theory", 2012, arXiv.org

According to a review of supersymmetry by Stephen P. Martin, "... when any realistic supersymmetric theory is extended to include gravity, the resulting supergravity theory is non-renormalizable as a quantum field theory." (page 49 of the following arXiv.org article)

"A Supersymmetry Primer" (1997 version 1; 2016 revised version 7), arXiv.org

Despite the fact that the preceding article lists 330 references, the name "Milgrom" seems to be missing.

According to Moore and Satheeshkumar, "... The role of spacetime in string theory is totally different from that of any other theory. String theory has symmetries which equate symmetries of different dimension, geometry and topology. The number of dimensions is fixed by mathematical consistency and there is a provision for reducing the number of dimensions too. Bosonic and fermionic modes "see" different number of spacetime dimensions."

arXiv.org preprint -- Moore, D. G., and V. H. Satheeshkumar. "Spectral dimension of bosonic string theory." Physical Review D 90, no. 2 (2014): 024075.

At vixra.org I posted the following, "...Is spacetime 26-dimensional? Measurements of spacetime using clocks and surveying instruments demonstrate that spacetime is 4-dimensional. I say that, from one point of view, spacetime is 26-dimensional. 26 dimensions = 1 dimension of matter time + 1 dimension of antimatter time + 24 dimensions of (±, ±, ± )-space. What is (±, ±, ±)-space? For the measurement of space, employ 6 particle beams consisting of 3 electron beams and 3 positron beams. For each dimension of space, employ all 3-tuples of beams selected from the 6 beams. By definition, (±, ±, ±)-space consists of 3 dimensions of ordinary space, each of which is measured in 8 different ways by using all of the possible 3-tuples of the 6 beams. The 24 dimensions of (±, ±, ±)-space reduce to the 3 dimensions of ordinary space because quantum field theory is empirically valid -- however, (±, ±, ±)-space might be useful for representational redundancy (because of the role that the Leech lattice plays in the foundations of physics.)"

Is representational redundancy of 26-dimensional bosonic string theory profoundly related to the problem of generalizing renormalization to effective calculations based upon the string landscape? Have the vast majority of physicists underestimated MOND? Is the replacement of supersymmetry by MOND-compatible-supersymmetry the key to understanding the fundamental theory that explains the empirical successes of MOND? Is there a form of MOND-compatible non-renormalizability that underlies the foundations of physics? In string theory with the infinite nature hypothesis, is the main challenge to figure out how renormalization in quantum field theory gets replaced by computational non-renormalization in the string landscape?

5 days later

QUESTION 1. What is the structure of reality?

"In ordinary quantum theory, an elementary particle such as an electron is a point particle, albeit one that obeys rather subtle laws of quantum mechanics and relativity. In string theory, the starting point is to reinterpret an electron as a little vibrating loop of string, again subject to quantum mechanics and relativity. This little change turns out to have amazingly far-reaching consequences. The most significant is that although gravity - understood in modern times in terms of Einstein's theory of General Relativity - is inconsistent with standard quantum field theory, it is forced upon us in string theory. String theory automatically generates quantum gravity while pre-string physics makes it, as far as we understand, impossible." -- Edward Witten

"ed witten's take on string theory" by Luca Mazzucato, Simons Center for Geometry and Physics, 8 November 2010

Is the following true? Fredkin-Wolfram information encodes quantum information, which in turn encodes classical information -- strings are smooth approximations to accumulations of Fredkin-Wolfram information. The string theorists discovered a mathematical geometrization of quantum probability amplitudes encompassing quantum field theory and general relativity theory. However, the string theorists assumed that gravitational energy is conserved -- i.e. that dark-matter-compensation-constant = zero. This is likely to be wrong because Wolfram's cosmological automaton needs a timing mechanism that operates by moving gravitational energy from the boundary of the multiverse into the (physically non-measurable) interior of the multiverse. The Koide cutoff prevents time from being infinite. The Lestone cutoff prevents gravitational singularities from forming.

QUESTION 2. What is the main value of string theory?

"... A problem with the stability of the compact spaces emerges and is fixed using D-brane ideas once again. However, the solution means that string theory proliferates into a vast landscape of vacua. The anthropic principle is introduced as a way of recovering the vacuum state corresponding to our world. This introduces acrimonious debates concerning the scientific status of string theory, which are still ongoing." -- Dean Rickles

page 7 of "A Brief History of String Theory: From Dual Models to M-theory", 2014

According to the World Bank, the gross world product (i.e. economic value of all goods and services produced) was about 76 trillion U.S. dollars in the year 2013. In 2017, let us assume that science, engineering, and technology are very roughly worth $10 trillion per year. My guess is that string theory with the finite nature hypothesis (based upon the monster group, the 6 pariah groups and 3 copies of the Leech lattice) will make quantum theory slightly easier to understand. Let us suppose that string theory with the finite nature hypothesis makes quantum theory slightly easier to understand and this improvement in understanding leads to an increase of 1/10 of 1 percent in the net productivity of scientists, engineers, and technical specialists. Under the preceding supposition, string theory with the finite nature hypothesis might have an ongoing economic value of roughly $10 billion per year. However, string theory with the infinite nature hypothesis might be of far less economic value.

QUESTION 3. Is Edward Fredkin's digital philosophy correct?

"There is nothing as 'concrete' in the world as a (computer) bit--it's more concrete than a photon or electron. It's not a 'simulation' of reality; it's not something that 'pretends' to be reality. It is reality." -- Edward Fredkin

"Cosmic Computer--New Philosophy to Explain the Universe" by Keay Davidson, San Francisco Chronicle, 1 July 2002

5 days later

Is there a unified theory of mathematics, theoretical physics, and theoretical computer science?

According to Steven Weinberg, "Everyone knows that electronic computers have enormously helped the work of science. Some scientists have had a grander vision of the importance of the computer. They expect that it will change our view of science itself, of what it is that scientific theories are supposed to accomplish, and of the kinds of theories that might achieve these goals. ... Wolfram goes on to make a far-reaching conjecture, that almost all automata of any sort that produce complex structures can be emulated by any one of them, so they are all equivalent in Wolfram's sense, and they are all universal. This doesn't mean that these automata are computationally equivalent (even in Wolfram's sense) to systems involving quantities that vary continuously. Only if Wolfram were right that neither space nor time nor anything else is truly continuous (which is a separate issue) would the Turing machine or the rule 110 cellular automaton be computationally equivalent to an analog computer or a quantum computer or a brain or the universe. But even without this far-reaching (and far-out) assumption, Wolfram's conjecture about the computational equivalence of automata would at least provide a starting point for a theory of any sort of complexity that can be produced by any kind of automaton. The trouble with Wolfram's conjecture is not only that it has not been proved--a deeper trouble is that it has not even been stated in a form that could be proved. What does Wolfram mean by complex? ..."

"Is the Universe a Computer?" (review of "A New Kind of Science"), The New York Review of Books, 24 October 2002

What is a precise mathematical formulation of Wolfram's concept of complexity? What might be a decisive empirical test of Wolfram's cosmological automaton? My guess is that at the Planck scale the concepts of energy and spacetime somehow fail -- the failure is either in terms of higher mathematics (i.e. mathematical symmetries of the string landscape) or in terms of lower mathematics (i.e. Wolfram's cosmological automaton). I further speculate that the string landscape depends upon generalizing Heisenberg's uncertainty principle to include alpha-prime, while Wolfram's cosmological automaton depends upon modifying Einstein's field equations to include nonzero dark-matter-compensation-constant, Koide cutoff, and Lestone cutoff. Who might be the world's greatest living theoretical physicist? The answer might be Witten. Who might be the world's greatest living mathematician? The answer might be Mochizuki.

According to David Castelvecchi, "The overarching theme of inter-universal geometry, as Fesenko describes it, is that one must look at whole numbers in a different light -- leaving addition aside and seeing the multiplication structure as something malleable and deformable. Standard multiplication would then be just one particular case of a family of structures, just as a circle is a special case of an ellipse. Fesenko says that Mochizuki compares himself to the mathematical giant Grothendieck -- and it is no immodest claim. "We had mathematics before Mochizuki's work -- and now we have mathematics after Mochizuki's work," Fesenko says."

"The biggest mystery in mathematics: Shinchi Mochizuki and the impenetrable proof", Nature, 7 October 2015

How might the concepts of energy and spacetime be introduced into algebraic/arithmetic geometry? The answer to the preceding question might involve a unified theory of string theory and Mochizuki's IUT -- deformations of string vacua might have analogues within IUT. I have speculated that if my idea of dark-matter-compensation-constant = square-root((60±10)/4) * 10^-5 is empirically invalid, then MOND-chameleon particles might be the most likely alternative. Because individual gravitons cannot be detected, any suggested modification of Einstein's field equations might be interpreted in terms of weird particles combined with 100% truth of Einstein's field equations. I conjecture that there might be a theorem that, under plausible hypotheses, the following three assumptions imply that MOND-chameleon particles exist.

ASSUMPTION 1. Milgrom's MOND is approximately valid for a wide range of gravitational accelerations.

ASSUMPTION 2. Newton's 3 laws of motion are (non-relativistically) correct, and supersymmetry needs to be replaced by MOND-compatible-supersymmetry.

ASSUMPTION 3. String theory with MOND-compatible-supersymmetry explains the empirical successes of MOND.

Am I wrong about Fredkin-Wolfram information?

"So: Is Wolfram, as he plainly believes, the new Copernicus? Or is he merely a new Darwin or Einstein? Well, if it's comparisons you are seeking, the one that occurred to me was the astronomer in Dr. Johnson's Rasselas, who, after years of intense, solitary intellection, went quietly nuts." -- John Derbyshire

"Not Quite Copernicus", National Review, 16 September 2002

Is it true that Fredkin-Wolfram information somehow encodes quantum information?

13 days later

... Fredkin is one of those people who arouse either affection, admiration, and respect, or dislike and suspicion. The latter reaction has come from a number of professors at MIT, particularly those who put a premium on formal credentials, proper academic conduct, and not sounding like a crackpot. ... Fredkin doubts that his ideas will achieve widespread acceptance anytime soon. He believes that most physicists are so deeply immersed in their kind of mathematics, and so uncomprehending of computation, as to be incapable of grasping the truth. Imagine, he says, that a twentieth-century time traveler visited Italy in the early seventeenth century and tried to reformulate Galileo's ideas in terms of calculus. Although it would be a vastly more powerful language of description than the old one, conveying its importance to the average scientist would be nearly impossible. There are times when Fredkin breaks through the language barrier, but they are few and far between. He can sell one person on one idea, another on another, but nobody seems to get the big picture. It's like a painting of a horse in a meadow, he says. "Everyone else only looks at it with a microscope, and they say, 'Aha, over here I see a little brown pigment. And over here I see a little green pigment.' Okay. Well, I see a horse." -- Robert Wright

"Did the Universe Just Happen", Atlantic Monthly, April 1988, pp. 29-44

Digital Philosophy (DP) is a new way of thinking about how things work. ... DP is based on two concepts: bits, like the binary digits in a computer, correspond to the most microscopic representation of state information; and the temporal evolution of state is a digital information process similar to what goes on in the circuitry of a computer processor. -- Edward Fredkin

"An Introduction to Digital Philosophy", International Journal of Theoretical Physics, February 2003

How might Fredkin's ideas on the foundation of physics be tested? I say that my 3 most important ideas are: (1) Milgrom is the Kepler of contemporary cosmology and the main problem with string theory is that string theorists fail to realize that Milgrom is the Kepler of contemporary cosmology. (2) The Koide formula is essential for understanding the foundations of physics. (3) Lestone's theory of virtual cross sections is essential for understanding the foundations of physics. How might the 3 preceding ideas be tested? The answer might be: (1) the Fernández-Rañada-Milgrom effect, (2) the Space Roar Profile Prediction, and (3) the "64 Particles Hypothesis". What is the explanation for dark matter? My guess is as follows: (A) String theory with the finite nature hypothesis implies that dark-matter-compensation-constant = sqrt((60±10)/4) * 10^-5 and that dark matter indicates that Wolfram's updating parameter transfers information within the Fredkin-Wolfram network of information. (B) String theory with the infinite nature hypothesis implies that MOND-chameleon particles have variable effective mass depending upon nearby gravitational acceleration and that dark matter is explained entirely by the existence of MOND-chameleon particles.

For the foundations of physics, should the concepts of spacetime and energy be replaced by the concepts of Fredkin distance, Fredkin time, and Fredkin digit transition?

In my previous posting at this website, the first phrase in the quotation from Fredkin should be "Digital Philosophy (DP) is a new way of thinking about how things work ... " Note that my idea of Fredkin-Wolfram information is that such information cannot, in principle, be directly measured and that string theory with the infinite nature hypothesis can never be refuted (although such theory might be mathematically awkward).

On page 5 of the following article, when Fredkin refers to a "unit of Time", I suggest that "Time" should be understood as non-measurable "Fredkin time" in some as yet undescribed Fredkin-Wolfram network of information.

"Discrete Theoretical Processes" by Edward Fredkin

It seems to me that Fredkin's terminology should be based upon Fredkin time, Fredkin distance, and Fredkin digit transition with the true meanings of such concepts depending upon a hypothetical affirmative solution of what I call "Wolfram's Simple Rules Hypothesis".

2 months later

"The existence of exotic dark matter particles outside the standard model of particle physics constitutes a central hypothesis of the correct standard model of cosmology (SmoC). Using a wide range of observational data I outline why this hypothesis cannot be correct for the real Universe." -- Pavel Kroupa

"Lessons from the Local Group (and beyond) on dark matter", 2014

My guess is that Kroupa's analysis of the dark matter problem is correct. I say that my 3 most important ideas are: (1) Milgrom is the Kepler of contemporary cosmology. (2) The Koide formula is essential for understanding the foundations of physics. (3) Lestone's theory of virtual cross sections is essential for understanding the foundations of physics. Are my speculations consistent with empirical reality? I suggest that there is a unified theory of mathematics and theoretical physics which consists more-or-less of a unification of Mochizuki's IUT with the string landscape. Consider 10 basic concepts relevant to theoretical physics: time, distance, space, curvature, torsion, energy, causality, randomness, logic, and information. Fredkin and Wolfram have suggested that quantum field theory and general relativity theory can be explained in terms of the network logic of Fredkin-Wolfram information. My guess is that string theory is essential for carrying out the Fredkin-Wolfram program. Is the introduction of time and energy into algebraic geometry the key to a unified theory of theoretical physics and pure mathematics? I conjecture that string theory with the finite nature hypothesis is empirically valid if and only if Wolfram's "A New Kind of Science" is one of the greatest books ever written. Google "kroupa milgrom", "mcgaugh milgrom", "witten milgrom", and "einstein 3 criticisms".

"So, back to the telescope." -- Stacy McGaugh, 7 July 2017

Triton Station: A Blog About the Science and Sociology of Cosmology and Dark Matter

I say that Milgrom is the Kepler of contemporary cosmology.

Wikiquote for Pavel Kroupa

Wikiquote for Stacy McGaugh

Google "kroupa milgrom", "mcgaugh milgrom", "witten milgrom", and "einstein 3 criticisms".

My guess is that the Copenhagen interpretation is philosophically wrong but empirically irrefutable.

From Wolfram Alpha:

((reduced Planck mass) * (electron mass)^3)^(1/4) = 1.1 * (Higgs boson mass), where (Higgs boson mass) is approximated by 125 GeV/c^2

Is the preceding approximation a meaningless coincidence?
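A quick Python check of the preceding coincidence; the approximate values of the reduced Planck mass, the electron mass, and the Higgs boson mass below are my assumed inputs rather than the original Wolfram Alpha query:

```python
# Numerical check of the quoted coincidence, using approximate values (assumed inputs):
# reduced Planck mass ~2.435e18 GeV/c^2, electron mass ~0.5110e-3 GeV/c^2,
# Higgs boson mass ~125.1 GeV/c^2.
m_planck_reduced = 2.435e18   # GeV/c^2
m_e = 0.5109989e-3            # GeV/c^2
m_higgs = 125.1               # GeV/c^2

x = (m_planck_reduced * m_e**3) ** 0.25
print(x)              # roughly 134 GeV/c^2
print(x / m_higgs)    # roughly 1.07, i.e. close to the quoted factor of 1.1
```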

Consider a just-so story. A virtual electron-positron pair acquired a Big Bang mass. The anomalous acquisition created a 26-dimensional bosonic black hole that consisted entirely of virtual Higgs bosons and virtual electron-positron pairs. Within the 26-dimensional bosonic black hole, nonmeasurable Fredkin time equalled the reduced Planck mass, and nonmeasurable Fredkin spatial distance equalled the electron mass. The 26-dimensional bosonic black hole collapsed into a 10-dimensional GR model with measurable particles. Nonmeasurable Fredkin time and nonmeasurable Fredkin spatial distance somehow transitioned into measurable time and measurable distance with measurable particles. Shortly after the transition, each Higgs boson occupied approximately the same 4-volume as each electron, thus suggesting an approximation for the mass of a Higgs boson.

Is the preceding scenario complete nonsense?

"The failures of the standard model of cosmology require a new paradigm" by Kroupa, Pawlowski & Milgrom, 2013

Consider 7 conjectures:

(1) String theory with the finite nature hypothesis implies MOND and no SUSY.

(2) The Koide formula is essential for understanding string theory with the finite nature hypothesis.

(3) String vibrations are confined to 3 copies of the Leech lattice.

(4) There are 6 basic quarks because there are 6 pariah groups.

(5) The monster group and the 6 pariah groups allow energy to exist.

(6) There exists a (2/3)-Koide formula that allows some quarks to have charge ± 2/3.

(7) There exists a (1/3)-Koide formula that allows some quarks to have charge ± 1/3.

From Wolfram Alpha:

(muon mass)/(electron mass) = 206.7683

(tauon mass)/(electron mass) = 3477.48

(59^3 + 33 * 59^2 + 57 * 59 + 9)^(1/27) - 1.59983643131952544 = 0 approx.

For a = 1.5998364, x = 206.7683, y = 3477.48,

calculate (a^3 + (a^2) * x + a * y)/(a^3 + (a^2) * x^.5 + a * y^.5)^2. Answer: .333333

For the polynomial x --> x^3 + 33 * x^2 + 57 * x + 9, my guess is that 33 + 26 = 59 is meaningful because of 26-dimensional bosonic string theory and the fact that the three primes 59 and 59 ± 12 (i.e. 47, 59, and 71) divide the order of the monster group. My guess is that the constant term 9 is meaningful because of Lestone's heuristic string theory.
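Here is a short Python check of the two preceding Wolfram Alpha evaluations; the mass ratios are simply copied from above:

```python
# Check of the two Wolfram Alpha evaluations quoted above.
a = (59**3 + 33 * 59**2 + 57 * 59 + 9) ** (1 / 27)
print(a)   # approximately 1.5998364313...

x, y = 206.7683, 3477.48   # quoted muon/electron and tauon/electron mass ratios
ratio = (a**3 + a**2 * x + a * y) / (a**3 + a**2 * x**0.5 + a * y**0.5)**2
print(ratio)               # approximately 0.33333
```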

10 days later

Monstrous moonshine, Wikipedia

My guess is that there might be two infinite series, both involving the (1/3)-Koide polynomial, yielding a numerical expression for Wolframian pseudo-supersymmetry.

Note that:

(59^3 + 33 * 59^2 + 57 * 59 + 9)/78^3 - .6819568772231494 = 0 approx.

72^3 / (72^3 + 33 * 72^2 + 57 * 72 + 9) = .680571738... approx.

Here the number 78 = 3 * 26 represents 3 copies of 26-dimensional bosonic string theory, and the number 72 = 3 * 24 represents 3 copies of the 24-dimensional Leech lattice.

The percentage of dark energy in the universe is about 68.3%.

Planck 2013 results
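A short Python check of the two preceding ratios against the Planck 2013 dark-energy fraction of roughly 0.683; the helper function is just the cubic from the previous post:

```python
# Check of the two ratios above against the Planck 2013 dark-energy fraction (~0.683).
def p(n):
    # the cubic used in the earlier post: n^3 + 33*n^2 + 57*n + 9
    return n**3 + 33 * n**2 + 57 * n + 9

print(p(59) / 78**3)   # approximately 0.681957
print(72**3 / p(72))   # approximately 0.680572
```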

11 days later

From Wolfram Alpha:

(1 + 1/2.5 + 1/2.5^2 + 1/2.5^3 + 1/2.5^4)^(1/24) / (1 + 1/47) = .99980215...

(1 + 1/2.5 + 1/2.5^2 + 1/2.5^3 + 1/2.5^4 + 1/2.5^5)^(1/24) / (1 + 1/47) = 1.0000600...

(1 + 1/3 + 1/3^2 + 1/3^3 + 1/3^4)^(1/24) / (1 + 1/59) = .9999154432...

(1 + 1/3 + 1/3^2 + 1/3^3 + 1/3^4 + 1/3^5)^(1/24) / (1 + 1/59) = 1.00003006648...

(1 + 1/3.5 + 1/3.5^2 + 1/3.5^3 + 1/3.5^4)^(1/24) / (1 + 1/71) = .99995403...

(1 + 1/3.5 + 1/3.5^2 + 1/3.5^3 + 1/3.5^4 + 1/3.5^5)^(1/24) / (1 + 1/71) = 1.0000108...
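The 6 preceding estimates can be reproduced with a few lines of Python; the pattern, as I read it, is a truncated geometric series in 1/r raised to the power 1/24 and divided by (1 + 1/p) for p = 47, 59, 71:

```python
# Reproduction of the six estimates above: a truncated geometric series in 1/r,
# raised to the power 1/24, divided by (1 + 1/p) for p = 47, 59, 71.
def estimate(r, terms, p):
    s = sum(1.0 / r**k for k in range(terms))
    return s ** (1 / 24) / (1 + 1 / p)

for r, p in ((2.5, 47), (3.0, 59), (3.5, 71)):
    print(r, p, estimate(r, 5, p), estimate(r, 6, p))
# Output is close to the six quoted values, e.g. 0.99980... and 1.00006... for r = 2.5.
```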

Do the 6 preceding estimates have something to do with the Leech lattice, the monster group, and the foundations of physics?

Leech lattice, Wikipedia

Monster group, Wikipedia

8 days later

I say that my 3 most important ideas are: (1) Milgrom is the Kepler of contemporary cosmology. (2) The Koide formula is essential for understanding the foundations of physics. (3) Lestone's theory of virtual cross sections is essential for understanding the foundations of physics.

The axion is coupled to the electromagnetic field in ways that should be testable.

Sikivie, P., 1983. Experimental tests of the "invisible" axion. Physical Review Letters, 51(16), p.1415.

Could most of the proton charge radius puzzle be explained by unexpected electromagnetic interactions caused by axions?

John P. Lestone has suggested that leptons might be quantum micro black holes according to his theory of virtual cross sections.

Possible Mechanism for the Generation of a Fundamental Unit of Charge (long version), Los Alamos Report LA-UR-17-24901, 16 June 2017

If Lestone is correct, then massive bosons might be (approximately) quantum micro black hole 1-spheres, leptons might be quantum micro black hole 2-spheres, and quarks might be quantum micro black hole 3-spheres.

The electromagnetic effects of axions (1-spheres with virtual cross sections) and leptons (2-spheres with virtual cross sections) might suggest that some form of electromagnetic uncertainty could be approximated by (axion mass)/(electron mass).

Can the proton charge radius puzzle be resolved in terms of electromagnetic effects from axions?

Proton radius puzzle, Wikipedia

Axion, Wikipedia

According to Graf and Steffen, "If the Peccei-Quinn (PQ) mechanism is the explanation of the strong CP problem, axions will pervade the Universe as an extremely weakly interacting light particle species."

"Thermal axion production in the primordial quark-gluon plasma" by Peter Graf and Frank Daniel Steffen, 2011

According to Derbin et al., "If the axions or other axion-like pseudoscalar particles couple with electrons then they are emitted from Sun by the Compton process and by bremsstrahlung ..."

"Search for solar axions produced by Compton process and bremsstrahlung using the resonant absorption and axioelectric effect" by A. V. Derbin, et al., 2013

What is the magnitude of the uncertainty in determining the fine structure constant?

Fine structure constant, Wikipedia

Hanneke, Fogwell, and Gabrielse (2008) estimated 1/(fine structure constant) as: 137.035999084(51)

Hanneke, D., Fogwell, S., & Gabrielse, G. (2008). New measurement of the electron magnetic moment and the fine structure constant. Physical Review Letters, 100(12), 120801.

.000000051/137.035999084 = 3.72... * 10^-10

(electron mass) * 3.72 * 10^-10 = 1.901 * 10^-4 eV/c^2

P. J. Mohr, B. N. Taylor, and D. B. Newell (2015) estimated 1/(fine structure constant) as: 137.035999139(31)

"Fine structure constant" in CODATA Internationally recommended 2014 values of the fundamental physical constants, National Institute of Standards and Technology

.000000031/137.035999139 = 2.262... * 10^-10

(electron mass) * 2.262 * 10^-10 = 1.156 * 10^-4 eV/c^2
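The two preceding arithmetic steps can be reproduced as follows; the electron rest energy of roughly 510998.95 eV/c^2 is my assumed input:

```python
# The arithmetic above: relative uncertainty in 1/alpha multiplied by the electron
# rest energy. The electron rest energy (~510998.95 eV/c^2) is an assumed input.
m_e_eV = 510998.95

for label, sigma, inv_alpha in (
    ("Hanneke, Fogwell, Gabrielse 2008", 0.000000051, 137.035999084),
    ("CODATA 2014 (Mohr, Taylor, Newell)", 0.000000031, 137.035999139),
):
    rel = sigma / inv_alpha
    print(label, rel, rel * m_e_eV)
# Prints roughly 3.7e-10 -> 1.9e-4 eV/c^2 and 2.3e-10 -> 1.16e-4 eV/c^2.
```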

How might the proton radius puzzle be related to uncertainties in determining the fine structure constant?

In the article "The Rydberg constant and proton size from atomic hydrogen" Beyer et al. (2017) stated (p. 80), "Line shape distortions caused by quantum interference from distant neighboring atomic resonances have recently come into the focus of the precision spectroscopy community ... To the best of our knowledge, this effect has been considered in the analysis of only one of the previous H experiments and was found to be unimportant for that particular experimental scheme ... The effect was found to be responsible for discrepancies in the value of the fine structure constant α extracted from various precision spectroscopy experiments in helium ... The root of the matter is that natural line shapes of atomic resonances may experience deviations from a perfect Lorentzian when off-resonant transitions are present."

Beyer, Axel, Lothar Maisenbacher, Arthur Matveev, Randolf Pohl, Ksenia Khabarova, Alexey Grinin, Tobias Lamour et al. "The Rydberg constant and proton size from atomic hydrogen." Science 358, no. 6359 (2017): 79-85.

Are axions a likely source of off-resonant transitions? Could the older experiments for determining the proton charge radius have failed to eliminate axion interactions as a source of inflating the value of the proton radius? Could the 2017 experiment by Beyer et al. be failing to take into account axion interactions? What might happen if experiments for measuring the proton charge radius were performed at 4 different levels of shielding -- on the surface, in a slightly deep mine, in a moderately deep mine, and in a very deep mine -- to check for possible axion production of off-resonant transitions? By assuming a precise value for the axion mass could the 2017 experiment by Beyer et al. be modified for axion detection?

If most of the proton charge radius puzzle can be explained by uncertainty in estimating the fine structure constant due to unexpected axion detection, then (1.2 ± 1.0) * 10^-4 eV/c^2 might be a plausible estimate for the axion mass.

    According to John P. Lestone, "If black holes (once thought to be point objects) are amenable to statistical mechanics, then why not fundamental particles like leptons? (1988)"

    "Possible path for the calculation of the fine structure constant", Los Alamos Report LA-UR-16-22121, 4 April 2016

    Is my basic theory (i.e. string theory with the finite nature hypothesis) wrong? (Actually, my basic theory is string theory with the finite nature hypothesis and various simplifying assumptions.) Let us assume that string theory with the infinite nature hypothesis is correct. In that case, I would bet in favor of MOND-chameleon particles and a quantum foam theory based on Lestone's theory of virtual cross sections. Consider some hypotheses which might be a plausible alternative to my basic theory:

    (1) The flat spacetime of quantum field theory and the curved spacetime of general relativity theory emerge from a virtual quantum foam spacetime in which massless virtual particles travel at a virtual speed C which is vastly greater than c. The virtual speed C is so large that it appears to be infinite. Virtual quantum foam spacetime is amazingly hot -- so hot that ordinary spacetime decomposes into quantum foam. The non-virtual particles are the vastly cooler particles that are confined by gravitational geodesics.

    (2) The virtual quantum foam spacetime is 26-dimensional. It has 3 dimensions of ordinary space, 1 dimension of time, 3 dimensions of linear momentum, 3 dimensions of angular momentum, and 16 dimensions of uncertainty. The dimensions of uncertainty arise because 4-dimensional spacetime has 4 dimensions of hbar uncertainty multiplied by 4 dimensions of alpha-prime uncertainty. The quantum foam spacetime is the physical interpretation of 26-dimensional bosonic string theory, which has 25 dimensions of higher-dimensional space and 1 dimension of time.

    (3) Micro black holes with masses less than the Planck mass and with charges roughly approximated by the electron charge existed during the very early stages of the Big Bang but then rapidly evaporated down to the fundamental particles of the Standard Model and whatever massive particles might need to be added to the Standard Model.

    (4) All of the fundamental particles with mass are quantum micro black holes. Massive bosons are roughly like virtual 1-spheres in quantum foam spacetime. Leptons are roughly like virtual 2-spheres. Quarks are roughly like virtual 3-spheres.

    (5) Virtual photons and virtual gravitons can be exchanged between two micro black holes and there is a relevant transmission coefficient for widely separated micro black holes. Exchanges of virtual photons and virtual gravitons in quantum foam spacetime determine the strengths of the electromagnetic and gravitational fields.

    (6) The conventional, widely accepted theory of quantum evaporation of micro black holes is wrong because the theory ignores the virtual cross sections of massive virtual particles. If the virtual cross sections went to zero then the conventional theory would be correct.

    (7) There is widespread transient violation of conservation of electromagnetic energy but the violation averages out to zero on time scales larger than the Planck time.

    (8) The Heisenberg uncertainty principle is 100% correct for directly measured particles but for virtual particles the Heisenberg uncertainty principle needs to be replaced by an (hbar, alpha-prime) uncertainty principle which takes into account Lestone's theory of virtual cross sections.

    (9) After quantum averaging, Einstein's field equations are 100% correct but the -1/2 in the standard form of Einstein's field equations is apparently replaced by -1/2 + MOND-chameleon-fake-function, where this function is caused by the presence of MOND-chameleon particles. These hypothetical MOND-chameleon particles have variable effective mass depending on nearby gravitational acceleration. The MOND-chameleon particles are bosons which form a Bose-Einstein condensate. This Bose-Einstein condensate forms an insulating barrier between the cooler non-virtual particles and the vastly hotter virtual particles.

    embarrassing typo ... The last sentence should be:

    If most of the proton charge radius puzzle can be explained by uncertainty in estimating the fine structure constant due to unexpected axion detection, then (1.2 ± 1.0) * 10^-4 eV/c^2 might be a plausible estimate for the axion mass.

    5 days later

    I say that my three most important ideas are: (Idea 1) Milgrom is the Kepler of contemporary cosmology. (Idea 2) The Koide formula is essential for understanding the foundations of physics. (Idea 3) Lestone's theory of virtual cross sections is essential for understanding the foundations of physics. How might (Idea 3) be tested?

    John P. Lestone of Los Alamos National Laboratory has suggested a theory of quantum micro black holes that might explain the value of the fine structure constant. He suggested 7 principles, the first 4 of which are as follows: "Properties used to calculate the fine structure constant for my imaginary particles (1) My particles have a very high temperature. (2) Despite having a high temperature my imaginary particles can not change their rest mass upon the emission of electromagnetic energy. Using known physics my imaginary particles (if isolated) can not emit any "real" photons. (3) However, I consider the possibility that my imaginary particles can emit and absorb unphysical L=0 "virtual" photons via the time-energy uncertainty principle. (4) The emission and absorption is controlled by statistical arguments involving their assumed "classical" temperature and possibly other effective temperatures. ..."

    "Possible path for the calculation of the fine structure constant", Los Alamos Report LA-UR-16-22121, 4 April 2016, page 10

    How might Lestone's theory be related to the idea that quantum information can be explained in terms of Fredkin-Wolfram information? My guess is that if string theory with the finite nature hypothesis is empirically valid, then it is necessary that string theory with the infinite nature hypothesis should "almost" work.

    Dreiner, Grab, and Stefaniak considered the possibility of a supersymmetric extension of the standard model with a baryon-triality symmetry with a right-handed scalar electron (selectron) or scalar muon (smuon) as the lightest supersymmetric particle.

    "Discovery Potential of Selectron or Smuon as the Lightest Supersymmetric Particle at the LHC" by H. K. Dreiner, S. Grab, and T. Stefaniak, 2011

    Consider the following 6 hypotheses:

    (1) After quantum averaging, Einstein's field equations are 100% correct. The empirical successes of Milgrom's MOND are explained by MOND-chameleon particles which have variable effective mass depending upon nearby gravitational acceleration. (2) For every particle in the Standard Model, its superpartner is a MOND-chameleon particle. (3) The right-handed selectron is the lightest supersymmetric particle and explains most of the mass-energy of dark matter particles. (4) The bosons that are not MOND-chameleon particles are the photon, the graviton, the 8 distinct gluons, the Z-boson, the W+ boson, the W- boson, the Higgs boson, the axion, and the inflaton. The string vibrations of these 16 distinct types of bosons determine a 25-dimensional space with 1 dimension of time. 26-dimensional bosonic string theory allows for a 10-dimensional general relativity model with 16 dimensions of virtual bosons. (5) The universe undergoes an amazingly fast cycle of heating and cooling. A cool phase of the universe lasts for one Planck time interval at the end of which the universe becomes amazingly hot and instantaneously undergoes a hot phase. The temporal duration of the hot phase appears to be instantaneous because its time duration is (Planck time interval) multiplied by ((little c)/(big C)). Big C is the speed of light in 26-dimensional bosonic spacetime. The hot phase is instantaneous de-compactification and re-compactification of 4-dimensional spacetime with respect to the 26-dimensional spacetime of bosonic string theory. At the end of a hot phase, the universe undergoes another cool phase. (6) The existence of the cycle of cool phases and hot phases might provide a basis for Lestone's principles (1) through (4) in a bosonic string theoretical 26-dimensional spacetime, which consists of amazingly hot virtual particles. Thus, Lestone's principles might be explained in a higher-dimensional spacetime of virtual particles -- the de-compactified spacetime might be thought of as a quantum foam that is so hot that ordinary spacetime breaks down.
