Essay Abstract

A hint of where to peek for what promises to be the ultimate source of the physics rules of the Universe, and of which present-day physics speculations can be eliminated. If a 4th quark family appears at the LHC, then this ultimate source has been revealed!

Author Bio

Frank Potter is a Research Physicist at Sciencegems.com who was formerly at the University of California, Irvine for 25 years. His latest popular science books are Mad About Physics and Mad About Modern Physics with co-author Christopher Jargodzky, in which one can find traditional and new challenges requiring at least one logical step beyond the physics textbook exercise. His research includes extensions of the Standard Model of particle physics to discrete spacetime and modifications of the general theory of relativity.


  • [deleted]

Fundamental symmetries elaborated by pure mathematics into physical theory are a disaster given empirical observation (e.g., Yang and Lee, 1957). The SI standard of mass is a physical artifact; Newton's G cannot be calculated. The Standard Model arrives massless, the Higgs boson is faery dust. Supersymmetry's partners refuse to appear (solar axion telescope), protons do not decay (Super-Kamiokande). Supergravity, lattice and loop quantum gravity, and above all string and M-theory predict nothing.

Physics is drunk with symmetry - Noether's theorems, Newton's and Green's functions. Covariance with respect to reflection in space and time is not required by the Poincaré group of Special Relativity or the Einstein group of General Relativity. Quantum field theories (QFTs) with Hermitian Hamiltonians are assumed invariant under the Poincaré group containing spatial reflections. Parity is a spatial reflection, and parity is not a QFT symmetry! QFTs are invariant under the identity component of the Poincaré group - the subgroup of elements that can be joined by a continuous path to the identity; only an orthochronous Poincaré group representation. This subgroup excludes parity and time reversal. Supersymmetric (SUSY, gauge symmetry plus spacetime symmetry) grand unified theories relating fermions and bosons to each other contain allowances for symmetry breaking (soft breaking terms inserted into the Lagrangian where they maintain the cancellation of quadratic divergences).

Noether's theorems demand continuous symmetries or at least approximation by a Taylor series. Noether fails for parity. Quantum gravitation theories supplement Einstein-Hilbert action with an odd-parity Chern-Simons term. Physics cannot abide parity, adding symmetry breakings to make theory consistent with observation. An axiomatic system is no stronger than its weakest axiom. Empirical reality is parity divergent for all but the strongest interactions. CERN will be a massive disappointment. Physical theory is fundamentally wrong for postulating intrinsic parity symmetry. That is physics' self-imposed limit, that is why it fails.

Dear Uncle Al,

Thank you for your comments, most of which I understand. Mixed in with some of your obviously true statements are many that cannot be challenged without further developments. However, I would like to point out a few related options.

You might be willing to consider a different approach, i.e., using discrete symmetries and their finite groups instead of continuous symmetries, as well as a discrete spacetime. Some consequences already worked out are

(1) there would be no need for the Higgs boson to provide masses, because the mass ratios of the leptons and of the quarks in the Standard Model are determined by their relationship to the j-invariant of elliptic modular functions, i.e., the masses are invariants under the linear fractional transformations of the finite binary rotation groups in 3-D for the leptons and in 4-D for the quarks;

(2) the Standard Model gauge group is simply a very good continuous approximation to the finite gauge group at the Planck scale, thereby explaining electroweak symmetry breaking, etc., in the simplest possible manner;

(3) by using finite rotational groups for the lepton families and the quark families - and I point out that they are subgroups of the Standard Model gauge group - one can make the unique mathematical connection from our 4-D spacetime via special quaternions called icosians to 8-D space and 10-D spacetime, thereby connecting to a discretized version of M-theory (see the small numerical sketch at the end of this post). This mathematical process ensures that the 4-D physical world is all we need, but it has connections to the mathematical world of the Monster group and all its richness.

(4) If the b' quark appears at the LHC, then CERN will be a huge success! And Nature has discrete symmetries everywhere, even at the Planck scale.

I could fill in many more details of the advantages of the finite groups for the basis of the physics rules in the Universe, but I direct you to the references.
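For concreteness, here is a minimal numerical sketch of the icosian side of point (3). It uses only standard quaternion arithmetic and the commonly quoted generating pair for the unit icosians; it illustrates the finite group itself, not the physics derivation in the references:

```python
PHI = (1 + 5 ** 0.5) / 2  # golden ratio

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def key(q, nd=9):
    """Round components so floating-point copies of an element collide."""
    return tuple(round(c, nd) for c in q)

# A standard generating pair of unit icosians:
#   (1 + i + j + k)/2  and  (phi + i/phi + j)/2
gens = [(0.5, 0.5, 0.5, 0.5), (PHI / 2, 1 / (2 * PHI), 0.5, 0.0)]

# Close the set under quaternion multiplication.
group = {key(g): g for g in gens}
grew = True
while grew:
    grew = False
    for a in list(group.values()):
        for b in list(group.values()):
            p = qmul(a, b)
            if key(p) not in group:
                group[key(p)] = p
                grew = True

print(len(group))  # 120: the binary icosahedral group 2I, the unit icosians
```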

Your essay at the end touches somewhat on what I have been working on. An aspect of this, with respect to a quantum phase transition and the cosmological constant, I wrote up at:

http://www.fqxi.org/community/forum/topic/494

This discusses the E_8 or the exceptional Jordan matrix group. I don't discuss this much here in this essay, but the triality of octonions in the 3x3 J^3(O) Jordan matrix points to the Mathieu group M_{24} or the Leech lattice as potentially more fundamental. A breakdown of this is the (H_4 x E_8)^2, where the two H_4 are the 120-cell or 600-cell (dual to each other) with the icosian group of quaternions. The 120-cell also tessellates hyperbolic manifolds such as the AdS spacetime, and this has relationships to AdS/CFT correspondence issues.

I don't discuss these issues much in my essay here, but they are, I think, related in a certain way.

I will say I have some doubts about the existence of 4th quark and lepton doublets. The problem is that this would have increased the number of species in the post-inflationary quark-gluon plasma of the universe. This would have lowered temperatures in the early universe and changed the deuteron abundance we observe.

Cheers LC

Dear Lawrence,

Thank you for your comments and insight. I can make the following responses:

(1) The existence of the 4th family of quarks, representing 4-D binary rotation symmetries, is the key to understanding the mathematical connections, solves several outstanding problems, AND allows the YES/NO prediction of something to be found at the LHC: the b' quark at about 80 GeV decaying to a b quark plus a photon, and the t' quark. There is a slight hint of this b' quark in the Fermilab data at the acceptable energy range but nothing conclusive, so the enormous current at the LHC and judicious selection procedures should resolve the issue. There may be some initial confusion with people claiming a possible Higgs decay, but deciphering the spin of the decaying particle will clear this up. By the way, I expect no Higgs because it is not needed.

(2) If my approach is correct, there will be no 4th lepton family for two reasons: (i) no more 3-D binary rotation groups exist for a 4th lepton family to exist, and (ii) astrophysics limits the number of fundamental particles to 15, so a 4th lepton family would exceed this limit by one.

(3) Yes, the Leech lattice and triality of the octonions (here the icosians) play a very important role, especially in discrete spacetimes, and I have been considering them for quite some time. The beauty here is how all of this mathematics is interconnected.

(4) One more very important point. If one sticks to continuous symmetries in 10-D spacetime and tries to 'divide down' to 4-D spacetime and a 6-D internal symmetry space, one gets the 10^500 or so possibilities string theorists are struggling with. However, if one uses DISCRETE 10-D spacetime, the division is easy and unique to a 4-D discrete spacetime and a 4-D discrete internal symmetry space - just what is needed!

(5) Both the spacetime and the internal symmetry space will turn out to be discrete at the Planck scale, and we are observing them in their continuum limits. The successes of the Standard Model come about because it is such a good approximation to the real symmetries, those represented by finite subgroups of the Standard Model gauge group. That is, the actual lepton and quark family groups are finite subgroups of SU(2)L x U(1)Y = SU(2) x I = SU'(2), where I is the 2-element inversion group; there are 3 such subgroups in 3-D and 4 in 4-D, so 3 lepton families and 4 quark families.

The LHC will let the truth be told!

Cheers FP

I submit two additions to my two previous response posts:

(1) The Frank Potter posts above with the black background are mine, the author. I simply forgot to log in first.

(2) I read that many other submissions talk about unification of the fundamental interactions with great complexities of mathematics. With my approach the game is so much simpler mathematically. If the b' quark appears at the LHC at around 80 GeV, then my 1st and 2nd references in my submission show how a UNIQUE UNIFICATION is accomplished in discrete 10-D spacetime dictated by Weyl E8 x Weyl E8 = 'discrete' SO(9,1). The beauty here is that the physical spacetime is 4-D and so is the internal symmetry space for the Standard Model, all as expected. One simply needs to consider discreteness as dictated by finite symmetry groups instead of Lie groups. One doesn't need to complicate matters further.
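As a small side calculation (standard mathematics, independent of the physics claim), the order of the Weyl group of E8 follows from the degrees of its fundamental invariants, which gives a sense of the size of the discrete symmetry being invoked:

```python
from math import prod

# Degrees of the fundamental invariants of E8; for a finite reflection
# group, the group order is the product of these degrees.
E8_DEGREES = [2, 8, 12, 14, 18, 20, 24, 30]

order_w_e8 = prod(E8_DEGREES)
print(order_w_e8)       # 696729600
print(order_w_e8 ** 2)  # order of Weyl E8 x Weyl E8
```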

  • [deleted]

Dear Franklin,

I am having trouble understanding how a discrete symmetry can become a continuous one. I have read your references, but it is still unclear. I am picturing a perfect crystal. At macroscopic scales it still displays the same discrete symmetries. Does this mean that spacetime at larger scales has discrete symmetry defects? If yes, is this related to mass? (I am thinking here about the mechanism for electric resistance in a metal due to crystalline defects.) Why are the masses different for different generations then? Also if yes, why is the dimensionality still 4? If there are "crystalline defects", would the long-range dimensionality not be able to change? Can you please clarify?

Thank you.

Dear Florin,

Thank you for your questions. Perhaps my comments below will help somewhat. As you can see, one must accept discrete symmetry at all scales, although we may have difficulty resolving the discreteness above some scale.

(1) We would probably agree that if the b' quark at about 80 GeV shows up at the LHC then there is an excellent possibility that spacetime is discrete and the internal symmetry space of the Standard Model is discrete, because a geometrical model predicts these results from the discreteness itself and the corresponding symmetries. If the t' quark then appears at about 2600 GeV, the discreteness is cinched.

(2) You are asking about "how a discrete symmetry can become a continuous one"? It doesn't. What may appear to be a continuous symmetry at some scale of resolution is still a discrete one underneath. Read on to see my argument.

(3) I ask: why do you want a continuous symmetry? I.e., what is your evidence for a continuous space/spacetime? Let's accept that your best evidence is the Standard Model gauge group SU(2)L x U(1)Y x SU(3)C, because its description of the fundamental interactions agrees extremely well with experiment (except for gravitation).

(4) Now I see that you are having difficulty predicting lepton and quark masses, that the number of lepton families with light neutrinos is capped at 3 while no such limit exists for quark families, etc.

(5) I say to you that the SM gauge group based upon those Lie groups is a great synthesis, but obviously something is lacking. One possibility is that the continuous gauge group is actually an excellent approximation, sort of "covering up" the real fundamental symmetries of the interactions, the discrete ones represented by certain finite subgroups of your gauge group.

(6) In fact, using this discreteness and the 3-D and 4-D finite binary rotational subgroups, I can predict the mass ratios, the numbers of lepton and quark families (the hierarchy, etc.), the family relations, and much more, in agreement with the Standard Model. The mass ratios for the different families of leptons, 1:108:1728, and the ones for the quarks arise from the subgroup invariants for each family (each family having a different binary rotational subgroup), and these are directly related to the j-invariant of elliptic modular functions; a small numerical sketch of the j-invariant follows at the end of this post. Therefore, the masses are invariant under the linear fractional transformations, etc. (These mathematical ratios will need corrections because they are the ones for an isolated particle, not one in the environment of other particles.) All this happens in 4-D internal symmetry space and its 3-D subspace. So why go to a larger space?

(7) These results arise from the discrete symmetries that lie "hidden" beneath the normal continuous symmetries of the Standard Model. So we probably need to recognize the Standard Model as a marvelous approximation and accept the new evidence for fundamental discreteness - if the b' quark appears.
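For the curious, here is a minimal numerical sketch of the j-invariant mentioned in point (6), using only its standard q-expansion (the connection to the lepton ratios is the claim of my papers; this snippet just recovers the famous value j(i) = 1728 = 12^3):

```python
import cmath
import math

# Truncated q-expansion of Klein's j-invariant:
#   j(tau) = 1/q + 744 + 196884 q + 21493760 q^2 + ...,  q = exp(2*pi*i*tau)
COEFFS = [196884, 21493760, 864299970, 20245856256]

def j_invariant(tau):
    q = cmath.exp(2j * math.pi * tau)
    value = 1 / q + 744
    for n, c in enumerate(COEFFS, start=1):
        value += c * q ** n
    return value

# At the square lattice tau = i, the exact value is 1728 = 12**3;
# the four-term truncation lands within about 0.02 of it.
print(j_invariant(1j).real)  # ~1727.98
```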

  • [deleted]

Dear Franklin,

Thank you for the quick and detailed answer. I understand your model has good predictive power, but I still do not understand the physical mechanism of the continuous approximation. I am able to rotate any object in a continuous fashion, not in discrete angles. Also, the continuous Lorentz symmetry seems to hold in all experiments ever performed. How is this possible when the underlying symmetry is discrete? You mentioned in one of your papers that the continuous symmetry acts like a cover for the discrete ones. Is this term used in the same sense as, let's say, SU(2) acting as a double cover of SO(3)?

My thinking is that if the underlying space-time is discrete, then you either can have an exact tessellation of the continuous space by the discrete one (but this may be at odds with general relativity because the curvature can destroy the tessellation), or you have local space-time defects. In the second case, would you not expect light diffusion in vacuum via Rayleigh scattering, for example?

Dear Florin,

Thank you again for your comments and questions.

(1) At what size scale are you rotating "any object in a continuous fashion"? It's a long way from the Planck scale of 10^-35 meters to even the minuscule scale of the LHC probing at about 10^-23 meters. And it's further still to objects of centimeter size.

(2) The "cover" I mention has nothing to do with the double cover of SO(3) by SU(2). I'm simply saying that at our macro-scale wrt the Planck scale we cannot resolve the discreteness easily - like looking at my sheet of paper with 1 cm rulings from 10^12 meters away, i.e., from Jupiter, say. And furthermore, if I have a regular octahedron rotating in my hand and you are looking with your naked eye, at some distance you will not be able to distinguish its shape, and probably assume that it is spherical for simplicity, leading to the use of a continuous group for the spherical symmetry.

(3) If the underlying spacetime is discrete, then Penrose's heavenly sphere is tessellated by the finite subgroups of SO(3,1), for example. If you look carefully, these are the same ones as the subgroups of SU(2), the ones tessellating the Riemann sphere, and therein begin the connections via icosians to Weyl E8 x Weyl E8 = 'discrete' SO(9,1), etc.

(4) I haven't worried yet about curvature destroying these tessellations, because the fundamental particles (leptons, quarks, etc.) themselves must draw together 'nodes' of some type (mathematical?) to form the subgroup-symmetric entities, i.e., curve the space. Otherwise one is left with a space without particles.

  • [deleted]

Dear Franklin,

Thank you again for the clarifications and for the quick answer. Also thank you for your patience in answering my questions, because I still do not understand.

Suppose for the sake of argument that we are dealing with a discrete Newtonian space-time and that the smallest space cell is a cube with side equal to the Planck length. From Jupiter, a small cube on Earth will look like a sphere. I agree with this. But if the space tessellation is exact, space will have a very strong anisotropy, because the cube angles are 90 degrees no matter what size the overall cube has. In general, any tessellation will still have relatively large angles involved regardless of the size of the basic unit. Because of this I would expect any exact discrete space-time model to show big anisotropies (which are not observed). One (maybe the only) way out is to have a disordered tessellation (partial, like a pentagonal one, or total, like let's say a spin glass). Then the local disorder will be imperceptible from Jupiter, and you still get your theoretical predictions. However, the point you have to address in this scenario is why light does not scatter around like it does in the air.

We are able to observe distant stars and we also do not experience any space anisotropies. How do you reconcile your proposal with those 2 experimental facts?

Dear Florin,

Thank you for more questions and comments. I'm not sure where your space anisotropies would arise as measurable quantities, and I point out that they must be measurable. The underlying space/spacetime is discrete, with anisotropies, but these are not yet measurable anisotropies.

(1) Regarding a photon traveling a straight path from a distant star: why no scatter and why no deviation from a straight line? If spacetime is discrete at the Planck scale of distance, say, and its path is determined by linear fractional transformations (i.e., Mobius transformations), every transformation of the form tau -> tau + 1 (where tau is the ratio of lattice sides) moves the photon through the lattice undeviated from a straight-line path. No measurable deviations here.

(2) Regarding rotations, since they are linear fractional transformations also, they would occur as expected, progressing around the axis. If the scale is very small - such as the Planck length scale - I doubt whether any measurement with present-day apparatus will be able to reveal the discreteness.

(3) Consider your cube example. You state that "because the cube angles are 90 degrees no matter what size the overall cube has", a statement I do not understand. If the space is a cubic lattice with cube side lengths of about 10^-35 meters and the physical cube to be rotated has side lengths of 10^-2 meters, one would need to be able to resolve angle changes of about 10^-33 radians to observe the effects of any discreteness. The fundamental particles simply move to the next node points via tau -> tau + 1. I know of no measurement apparatus capable of doing this!
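A back-of-envelope version of the estimate in point (3), using the round numbers above:

```python
# Angle subtended by one Planck-scale lattice step at the edge of a
# centimeter-scale cube: the angular resolution needed to notice the
# discreteness while the cube rotates.
planck_length = 1e-35  # meters (round number)
cube_side = 1e-2       # meters

delta_theta = planck_length / cube_side
print(f"{delta_theta:.0e} rad")  # 1e-33 rad, far beyond any apparatus
```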

  • [deleted]

Dear Franklin,

You are right, the anisotropies do happen, but only at energies high enough to resolve the small distances (which we cannot access today). This raises the question then: would this discreteness leave any imprint on the cosmic background radiation? Probably not, because by the time the universe became transparent to radiation it was already too cold. Then the only hurdle remaining is the GR curvature, and I am thinking out loud: what would be the relationship with Unruh radiation?

Well, thank you again for your answers and good luck in this contest. My questions only reflected my strong interest in your paper and I think that your essay is one of the best.

  • [deleted]

I agree there can't be more than 15 fundamental particles. I suppose we could consider a larger number of such particles as being of such high mass that they don't contribute to a partition function in the state of the universe at the lower energies we can reasonably model today.

If you read my essay, I am trying to address whether the issue of black hole complementarity can tell us something about the small value of the cosmological constant. One upshot of this is that the value is set by the existence of a quantum critical point. This occurs when the value of quantum fluctuations is very large compared to the scale of a system. The magnitude of a fluctuation with energy ΔE = ħ/Δt sets the effective temperature for a system, T = ΔE/k, and there exists a Euclideanized time which corresponds to this temperature. The critical point itself is one where a quasi-particle mass diverges (becomes nearly infinite), and this drives the system off that point to a renormalized value. The reciprocal of this mass is, within a certain p-brane analysis, the value of the cosmological constant adjusted or renormalized from Λ_{bare} = M_p^4 to a smaller value Λ according to a renormalization of fields around that point.
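A quick numerical illustration of the two relations quoted above, ΔE = ħ/Δt and T = ΔE/k (standard constants; the choice of Δt here is mine, for illustration only):

```python
# Effective temperature of a quantum fluctuation of duration dt.
HBAR = 1.054571817e-34  # J s
K_B = 1.380649e-23      # J / K

def effective_temperature(dt):
    dE = HBAR / dt   # energy of the fluctuation
    return dE / K_B  # its effective temperature

# A Planck-time fluctuation recovers roughly the Planck temperature:
print(f"{effective_temperature(5.39e-44):.2e} K")  # ~1.42e+32 K
```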

This should then address issues of the landscape, where there are these 10^{500} possible configurations. There is then a statistical distribution which favors a cosmology on the landscape with respect to this critical point and with an RG group.

This is meant to point to a quantum error correction code, which at the highest energy is the Mathieu sporadic group, or the Leech lattice. A final word below on this, for I think this is somehow penultimate. The sphere packing of Planck volumes describes an algebraic system whereby each sphere is a fundamental root which can hold a single "letter," or a quantum bit. Now of course for a field configuration in this lattice there might, in an elementary sense, be a wave which orbits two vertices connected by an edge link. The field or wave will then have some deficit angle due to the quantum information on the spheres. This will result in a gauge-like shift in a wave function. So for two quantum bits there is

|ψ> = |0> + e^{iθ}|1>

where θ = θ_0 ∮ A·dx (a loop integral) and where the gauge connection is determined by the algebra of the quantum error correction code.
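As a minimal sketch of how such a relative phase shows up in a two-state system (ordinary two-level quantum mechanics with an illustrative value of θ, nothing specific to the error-correction code; normalization added):

```python
import numpy as np

theta = np.pi / 3  # illustrative stand-in for theta_0 times the loop integral of A
psi = np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)  # |0> + e^{i theta}|1>

# Probabilities in the computational basis are blind to the phase...
print(np.round(np.abs(psi) ** 2, 3))      # [0.5 0.5] for any theta

# ...but interference (here a Hadamard rotation) exposes it.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(np.round(np.abs(H @ psi) ** 2, 3))  # [0.75 0.25] for theta = pi/3
```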

The discreteness here is subtle, for the crystalline lattice (analogous to solid state physics) is coordinate dependent, or dependent upon a frame bundle condition or gauge. The discrete nature is further difficult, for it determines the number of flux lines on p-branes, and for dimensions greater than 5 this is an NP-complete problem. This is an interesting problem and is spelled out by Abhijnan Rej at http://www.fqxi.org/community/forum/topic/505 .

Cheers LC

  • [deleted]

PS: I forgot to conclude with the matter of this as some penultimate theory. The Mathieu group is a subset of the automorphism over the Fischer-Griess "monster" group. This might be the mathematics which codifies the ultimate theory of physics we can know. There are some interesting suggestions for this. The full automorphism appears to be a 26-dimensional Lorentzian system, which is remarkably similar to the 26-dimensional bosonic string. This is stuff that Borcherds, Conway and Sloane have worked out in the last few decades.

Cheers LC

  • [deleted]

Dear Potter,

I am a bit puzzled about my physics update. What I learnt some years back is that the first family of quarks is constituted by u, d, s quarks with masses in the range of 3-5 GeV. The second family contains the c, b, t quarks with masses of 1.5, 4.5 and 175 GeV (the latter is an estimated value). You claim that the b' quark mass will be around 80 GeV and the t' quark is estimated at 2600 GeV. The first family constitutes the matter that forms the visible matter of baryons, while the second family remains elusive in this respect. Does it constitute dark matter and remain in its free state there? Or is there a probability of the existence of another family of very, very heavy quarks that we still need to postulate?

If one looks at the emergence of the force fields, these came in the sequence of gravity, nuclear strong, electromagnetic and nuclear weak, as demanded by the universe's evolution. Quarks/gluons started forming nuclei and finally atoms/molecules before the first star came into being.

What was the nature of the primordial matter that got created at the Big Bang? It may well be something that no accelerator we may ever build on earth can regenerate for us. Thus, it is hard to believe that fundamental Maths can help solve all the mysteries of Physics. Physics deals with physical reality, and to me mathematics is a mere tool, like the experimental systems, and there is no way that it can govern the explanation of what happens/happened/will happen in Physics.

Dear Lawrence,

I agree with your thinking about the mathematical connections, especially those to the Leech lattice, triality, and the Monster. The Golay-24 code and the Mathieu-24 group may also be important. Since the discrete 4-D physical spacetime telescopes up mathematically to discrete 8-D space and discrete 10-D spacetime as proposed, the triality connection to Feynman diagrams, etc., brings up mathematically the discrete 24-D space and discrete 26-D spacetime. But first we need some evidence for the discreteness - i.e., the b' quark (and the t' quark) that represents the [3,3,3] discrete symmetry group.

Dear Narendra,

Thank you for your questions. Here's a brief review:

(1) We now have 3 families of leptons [e, e neutrino]; [muon, muon neutrino]; [tau, tau neutrino] and 3 families of quarks [up, down]; [charm, strange]; [top, bottom].

(2) The Standard Model makes no prediction for any families beyond the first lepton one and the first quark one. AND, knowing that 3 families of each exist, the Standard Model does not make any clear connections between the families, though it is obviously extremely close to the ultimate description of Nature. My geometrical approach makes very clear connections among the families, fits mathematically within the aegis of the Standard Model, and predicts 4 quark families.

(3) I need the 4th quark family to appear at the LHC as the b' quark (read as b-prime quark). A hint may have shown up at Fermilab, but it is not good enough for any discovery claim - too much background.

(4) The Big Bang with inflation plus other parts is the present standard model of the evolution of the universe, but as more and more data are accumulated, some fuzziness has been developing which may allow some alternative approaches to be considered. Nucleosynthesis is a large part of the game that must be better understood by any alternative to the standard model.

  • [deleted]

As I understand, you gave the masses of the 4th quark family, b prime and t prime, which are still to be seen experimentally. You claim to predict these as per your approach. May I have the salient features of the geometrical approach that predicts the fourth quark family? Besides the masses, what are the charges associated with the fourth family, in contrast with the third quark family? Can you visualise the time period during which such quarks may have existed following the birth of the universe in the Big Bang? That may provide you a cosmological way to look for them, instead of the LHC. The latter remains a doubtful starter.

'... However, Gödel's incompleteness theorem may be an impediment. It proves that there is no such thing as a complete logic system, that every logic system contains TRUE statements which cannot be proven true...'

'Why is the last line of a proof surprising, if its truth is already hiding tautologically in the lines above?' Richard Powers, The Gold Bug Variations.

If to explain some phenomenon or prove some theorem we start our reasoning from assumptions and axioms which contain preconceptions, if the truth of our allegations depends on the truth of unprovable assumptions and axioms, then we can never prove them in an absolute sense, however valid they may be within the set of axioms and rules of reasoning in which they are formulated. The problem is that though our assumptions and axioms may seem self-evident, they aren't necessarily true, as they only reflect our view of our world and express a logic which may differ from nature's logic.

Richard Powers suggests as much: that we put as much information in our choice and formulation of axioms and rules of reasoning as we can get out of them. If the proof of a theorem to some extent also involves the proof of the implicit assumptions which are built into our axioms and rules of reasoning, then the formulation of a theorem can be thought of as an effort to formulate this implicit information explicitly, its proof being incorporated in the theorem as it is formulated. If in that case we don't so much prove something but rather adapt our thinking to the way our observation evolves, then the impossibility of (dis)proving statements which can be made within a consistent set of rules and axioms (Gödel) might originate in the incompleteness or indefiniteness of our definitions and axioms, in the lack of information or restrictions we've put into our rules, axioms and assumptions, so statements can inherently be too ambiguous to prove or disprove. The problem is that much of the information we put in them appears too obvious for us to consider as being information, as if it reflects a truth that needs no inspection: as it is almost impossible to be aware of this implicit information, we indeed are surprised at the last line of the proof, as if we got some information for free that we didn't put in ourselves in the first place.

As our reasoning and the tools we think with are rather the product, the expression of our relation to our world than something which is open to inspection (by itself), it is difficult to detect the implicit information present in our assumptions, in the preconceptions they may contain. This might mean that if we could explicitly formulate all implicit information in a set of axioms and rules so there would be no ambiguity, neither in them nor in the theorems we can formulate within that set, Gödel's theorem would no longer apply, any statement or theorem being a tautology. If we have more confidence in a theory as it is more consistent, and it is more consistent as it relates more phenomena, makes more facts explain each other and needs fewer additional axioms, fewer more-or-less arbitrary assumptions, then any good theory has a tautological character, though a tautological theory of course isn't necessarily true or useful.

In an uncaused, causeless universe which creates itself (see Mechanics of a Self-Creating Universe), where things and events create each other, they explain each other in a circular way and are each other's 'cause'. Though circular reasoning at first sight may seem ridiculous, here we can take any statement, any link of the chain of reasoning, without proof, use it to explain the next link and so on, following the circle back to the statement we started with, which this time is explained, proved by the foregoing reasoning. Though in a self-creating, noncausal universe a proof seems less convincing than one which follows a causal reasoning, a causal assertion or explanation is ultimately invalidated, as the primordial cause it is built upon by definition cannot be understood or proved. The point is that if our logic originates in nature's logic and not the other way around, if our logic is but a reflection of our relation to our world and not a reflection of some absolute, platonic kind of truth which precedes and exists outside that world (an objective reality, as there's no such thing), then mathematics and its development follow physics, and not the other way around, so we cannot blindly rely on its conclusions to explain the why and how of our universe and its laws.

'... If I assume that Nature is an expert mathematician, I can ask: "Where would Nature begin?"...'

Though I doubt that nature can even have a beginning (a question I'm still working on), if the universe has to create itself out of nothing, without any outside intervention, then its primordial law of physics is that the grand total of everything inside of it, including spacetime, somehow has to remain zero, the 'somehow' being the prime subject of physics. Though dreaming up mathematics without bothering too much about the nature of the quantities its equations refer to can sometimes help decide whether ideas in physics make sense, mathematics itself cannot dream up really new physical approaches or ideas. An excessive emphasis on mathematics tends to create its own reality and confuse our view on physical issues. Though many models in physics may be mathematically consistent, I'm still waiting for the one model which obviously, compellingly and necessarily excludes any other model and explains why the universe needs the particular particles we find, why the ratio between their masses is as it is, etcetera.