
Hi Steven,

Thanks for some challenging questions. I'll do my best to answer them, but some go beyond the level of the current development.

First, I don't think it's fair to say that Larson's mathematics is incorrect. Actually, he loved mathematics and was much better at it than I am. However, he insisted that the new system required no new mathematical formalism; its strength, its major contribution, was its clarification of physical concepts that, while mathematically valid, were not correctly understood.

Of course, the most fundamental example of this is the mathematical formulation of motion itself. Modern physics admits only the 1D motion of objects, or vectorial motion, while many natural phenomena can only be correctly understood in terms of a scalar motion concept, even though the underlying mathematics of the two is, in many cases, the same.

The conceptual difference between scalar motion and vectorial motion is huge, while the mathematical difference between the two is hardly discernible in Larson's development of the consequences of the RST. In the new RST-based development, which we are pursuing at the LRC, this is not the case: both the mathematical and the conceptual differences between scalar and vectorial motion are recognized as quite significant in our work.

Yet a better characterization still would be that the conceptual and mathematical nature of the RST itself is unified intuitively, in our new development, rather than formulated. Larson repeatedly pointed out that the conceptual datum of the new system is unity, not zero, but it actually turns out to be both, since the space/time ratio s/t = 1/1 plays the role of both 1 and 0, depending on the physical situation. The biggest mathematical problem in legacy physics is that fundamentally incorrect physical concepts lead nowhere mathematically, as in the case of singularities, where the physical ratio 1/0 cannot be defined mathematically.

This problem is ultimately overcome through the use of the physical concept of rotation, since there is no fixed order in the unit circle, and we can formulate an infinite set of size-one rotations through the ad hoc invention of the imaginary number. The trouble is that the physical concept of rotation depends upon a fixed background for its existence, so there is no way to incorporate it into a background-free theory like GR.

We believe that the answer to this ancient enigma is to abandon the idea of rotation altogether, as a fundamental starting point. What this means, in a scalar context, is that, instead of building up from 0 to 3 dimensions, we start with a combination of n^3 and n^0, in the form of a pseudoscalar/scalar ratio, n^3/n^0, which contains the n^2/n^0 and the n^1/n^0 pseudoscalar/scalar ratios, as subsets.

Of course, when the space/time physical dimensions of these mathematical ratios are inverted from s/t to t/s, the magic of Lie groups and Lie algebras comes into play, which is an advantage Larson couldn't even have dreamed of. Since this enables us to leapfrog from simple equations of motion to the entities of matter and radiation in the standard model, in a mathematically consistent manner, with no singularities to plague us, and with no background required, it appears to be the best of both worlds.

One of the most impressive accomplishments of Larson's scalar rotation development, after the derivation of the periodic table of elements and the space/time dimensions of physical constants, is the identification of 1D, 2D and 3D scalar motions with their associated electrical, magnetic and gravitational phenomena, respectively. The explanation of how the relation of 1D electrical motion to 3D matter motion produces the 2D magnetic phenomenon, and vice versa, is, I think, the crowning achievement of his development. Yet, while this tapestry of physical phenomena is amazingly woven together, like the physical theories of legacy physics it comes up short in providing us with the perfection of the finished product that we seek.

There are still some tattered edges that remain, like the explanation of the gravitational constant, the inability to explain the energy levels of the entire atomic spectra, etc., and it's my conviction that this is due to the incorrect concept of scalar rotation, but our work is cut out for us to show that this is indeed the case.

One of the challenges we face is the explanation of the inter-regional ratio. Recent developments seem to indicate that it emerges from the geometry of the tetraktys, which is very encouraging. However, it would help if we knew how Larson measured it. As far as I can determine, no one knows this. It may still be in the ISUS archives somewhere, but if it is, Bruce Peret was unable to find it when he went through them last year.

As far as the identity of the concept of electrical charge with Larson's concept of rotational vibration goes, it's a matter of the degrees of freedom one is able to find in the two concepts of scalar motion. In Larson's development, the linear vibration rotates two-dimensionally, then a so-called "reverse" rotation can be optionally added to this, and, finally, the reverse rotation can oscillate in its "direction" of rotation, providing for positive and negative charges in the ionization process and so on.

This works very well, if we ignore the fundamental problem that rotation cannot be scalar. By the same token, legacy physics' electrical theory works very well, if we ignore the fundamental problems that a point charge cannot exist and that the same electrical concept required for ionization is also used to explain electrical current, even though the theoretical requirements in each case are contradictory.

What we need is a consistent theory, one that can explain the electrical charge phenomena in the context of the structure of matter, as well as in the electromagnetic context, without introducing conflicting theoretical requirements. While Larson's development is very appealing in this respect, if the same type of compromise with fundamental concepts has to be accepted, as that found in traditional theory, we don't gain all that much ground.

In the RST-based theory being developed at the LRC, we have found the degrees of freedom necessary to explain the ionization phenomenon, as can be seen from figures 1 and 2 of my essay. However, unlike in the electrical theory of legacy physics, the electron | hole concept of the new RST-based theory does not include the idea of an electron cloud. Instead, like in Larson's concept, the electrons are part of the atomic combination of scalar motions, and they don't really maintain a separate identity within the atomic structure.

Nevertheless, the concept of an uncharged electron, which has never been observed, is missing from the new development. In Larson's work, the uncharged electron explains electrical current in terms of scalar motion and has many compelling features, as it is a unit of "rotating space," moving in relation to the net time-displacement of the atoms, and is easily coaxed out of the material by acquiring a rotational vibration, the theory's definition of electrical charge.

Since the energy to drive the electrons through the conductor of an electrical current is much less than that required to ionize an atom of the conductor, legacy theory makes use of the valence concept and the electron cloud, while Larson's theory explains it via the uncharged electron, which is not part of the atom. In the new RST-based theory, there is no uncharged electron, at least as far as we now know, so I'm not sure how this will work out, but, again, I'm just following my nose here.

Regards,

Doug


Hi Doug,

As we say here in Holland: 'a fool can ask more questions than ten wise men can answer'. Your answers inspire many more questions, like:

1) Why is it so important to hold on to the standard model, Lie algebras and so on, while Larson explains that most of these short-lived particles are 'cosmic particle/atom' debris?

2) About the inter-regional ratio: I was referring to Dewey's explanation of this number as the number of possible rotation combinations in a space unit, combined with the fact that rotational motion is the rotation of a vibration, which adds an additional 2/9 of rotational motion, putting this number at (1+2/9) * 128 to 1 (http://www.reciprocalsystem.com/ce/iratio.htm). Is this analysis something that can follow from the tetraktys?

3) About the gravitational constant: Xavier Borg puts its dimensions at s^6/t^5, but according to Larson that is invalid, since the product of gravitational and inertial mass is a dimensionless number. That is as far as I can follow his explanation of all the masses that are involved (Nothing but Motion, Ch. 13). How does mass appear in your 3D vibrations?

Best regards,

Steven


Hi Steven,

Your questions are welcome. I'll do my best to answer them to the extent within my power.

It's not that it's "so important to hold on to the standard model, Lie algebras and so on"; rather, it's important to explain observations. The standard model is a phenomenologically based model of what we observe. The names used to classify the particles, and even the theory of how to relate them to one another, are not so important, but the fact that it works, once the twenty-something free parameters it needs are plugged in, is significant.

The standard model, as the ultimate development of Newton's program of research, is based on the idea of autonomous forces, which we must reject, recognizing that force cannot be autonomous, as it is merely a quantity of acceleration, a property of motion, we might say.

Nevertheless, the standard model's Lie algebras have to do with the generators of the n-dimensional Lie groups, and, while they are rotational generators, they point to important geometric relationships that have to correspond to the mathematical language we need to describe a proposed reality, based on motions, not forces. So, they become a very useful guide in our study of the universe of motion.

As for the inter-regional ratio's relation to the tetraktys, the answer is yes. Larson's discussion of it begins with what we now call Larson's cube, which is a geometrical expression of the scalar motion of the tetraktys, as shown in figures 3 and 4 of my essay. The key factor is that there are eight 3D units possible, algebraically speaking, comprising the spatial pseudoscalar, and another eight, inverse, 3D units comprising the temporal pseudoscalar, although this is not shown explicitly in the figures of my essay, nor was it noticed by Larson.

The temporal pseudoscalar looks just like the spatial pseudoscalar when viewed in Larson's spacetime region, but it is inverted (i.e. the timespace region's space and time magnitudes are interchanged), when viewed from his timespace region. This means, as he pointed out, that there can only be a "point contact" (i.e. scalar contact) between the magnitudes of the two regions, and therefore the effect of the temporal pseudoscalar magnitudes, in combination with the spatial pseudoscalar magnitudes, is reduced accordingly.

Larson calculated the numerical value of the ratio of fully effective units to effectively reduced units empirically; later he derived the numbers needed to do this via his model of rotating vibrations, interpreting the meaning of the ratio as its name implies. Interestingly enough, however, the discovery of the 1/9 (2/9) factor in the lepton mass relationships has shown a geometric link to these magnitudes, through the possible orientations of 2D rotation in Larson's cube, and I'm convinced that the same can eventually be shown for the oscillating pseudoscalars.
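
For anyone who wants to check the arithmetic, the figure quoted earlier in the thread works out as follows. This is a trivial Python sketch; the 128 and the 2/9 are taken from the discussion above, not derived here:

```python
# Inter-regional ratio as quoted in the thread: 128 possible rotation
# combinations per unit, increased by the additional 2/9 of rotational
# motion contributed by the rotating vibration.
base_combinations = 128
vibration_factor = 1 + 2 / 9

ratio = vibration_factor * base_combinations
print(round(ratio, 3))  # 156.444
```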

Finally, Borg's dimensions for G are just the result of Newton's equation for gravity, F = G(m x m')/d^2. G is a dimensionless number, but given the space/time dimensions of mass and distance, the equation

F = G((t^3/s^3)(t^3/s^3))/s^2 = (s^6/t^5)(t^6/s^6)(1/s^2) = t/s^2,

shows that inertial mass is the measure of the intrinsic inward motion comprising matter, through its resistance to applied outward motion (s/t). Therefore, the dimensions of the motion that comprises matter, s^3/t^3, must be the inverse of the dimensions of mass, t^3/s^3, and for two masses the combined motion is the product, s^6/t^6, for each unit of time that the force is measured. The 3D equation of motion to understand is analogous to the 1D equation of motion for velocity, where the distance traveled is the 1D motion, s/t, multiplied by the time traveled, or s = (s/t) * t.

In the case of the 3D motion of the masses, however, the dimensions are different: s^6/t^6 * t = s^6/t^5, giving the constant G the appearance of having space/time dimensions, when in reality it is dimensionless. Borg has no theory of motion to explain his space/time dimensions; he only derives them from his astute observations of the SI system of units.
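
The dimensional bookkeeping above can be checked mechanically. Here is a minimal Python sketch; the exponent-pair encoding (a, b) for s^a * t^b is my own convenience, not anything from Larson or Borg:

```python
# Each space/time quantity s^a * t^b is stored as the exponent pair (a, b);
# multiplying quantities simply adds the exponents.
def mul(*dims):
    return tuple(sum(axis) for axis in zip(*dims))

mass = (-3, 3)        # t^3/s^3
G_apparent = (6, -5)  # Borg's s^6/t^5, read off the SI units
inv_d2 = (-2, 0)      # 1/s^2

force = mul(G_apparent, mass, mass, inv_d2)
print(force)  # (-2, 1), i.e. t/s^2, the space/time dimensions of force
```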

Regards,

Doug


Steven,

In my last post above, I forgot to address your last question, "How does mass appear in your 3D vibrations?" This is really the most important question of all.

Legacy physics makes mass and energy equivalent, and in this way "accounts" for mass. On the other hand, Larson points out that just because they can be converted into one another does not mean that they are equivalent. He writes,

"Mass is equivalent to energy only when and if it is transformed from the one condition to the other, and the mass-energy equation merely gives the mathematical equivalents in the event of such a conversion. In other words, an existing quantity of energy does not correspond to any existing mass but to the mass that would exist if the energy were actually converted into mass."

This is because of the dimensional differences between the two. Mass has three space/time dimensions, while energy only has one time/space dimension. Thus, the second power of one-dimensional speed is required to convert from one to the other.
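
The dimensional claim can be verified with a line of arithmetic. Again, the exponent-pair convention (a, b) for s^a * t^b below is purely my own notation:

```python
# E = mc^2, dimensionally: mass (t^3/s^3) times the square of a
# one-dimensional speed (s/t) should give energy (t/s).
mass = (-3, 3)   # t^3/s^3
speed = (1, -1)  # s/t
energy = (mass[0] + 2 * speed[0], mass[1] + 2 * speed[1])
print(energy)  # (-1, 1), i.e. t/s
```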

In the LRC's new RST-based theory, the pseudoscalar/scalar vibrations are three-dimensional motions with one- and two-dimensional subsets, which makes for an interesting combination of interactions, not all of which have been explored at this point. But it's interesting to note that, since the pseudoscalars are spherical, if their locations are not perfectly coincident, the S|T combination of the two is constrained geometrically: the S|T combo has to form a line. Likewise, the combination of three S|T combos is constrained geometrically: if the combo is to be anything other than a line, it first has to be a plane, the plane of the triangle.

Consequently, as shown in figure 1 of my essay, we start with the points of the spatial and temporal pseudoscalar oscillations, and these take two geometric forms in combination, the line of the bosons and the plane of the fermions.

Given that observation indicates that the direction of propagation of a photon (boson) is always orthogonal to the axis of oscillation (the theory behind this is way beyond the scope of this comment), we can easily see that, unlike the linear array of S|T units, the bosons, the planar array of S|T units, the fermions, would not be able to propagate at all, since the outward directions, relative to one another, are collectively opposed.

This "killing" of the 1D boson propagation, by the formation of non-propagating 2D fermions, leads us to notions of supersymmetry, where bosons are transformed into fermions and vice-versa, via symmetry transformations. In the toy model of figure 1, the implication that this must be the case is startling.

Thus, we can easily see how mass arises from these pseudoscalar vibrations, and we can see the relation between mass and electromagnetic energy in the S|T combos, but then why, or how, mass constitutes inward scalar motion is another story.

Regards,

Doug


Hi Doug,

Thanks for your clarifications. Since I have very little knowledge of the SM and the mathematics pertaining to it, I can offer no comments other than my gut feeling, which is that even though the SM is extremely complicated and needs at least 20 parameters to get certain numbers right in the first place, it still cannot be used to explain basic physical phenomena like gravity or the conversion between matter and radiation.

It therefore seems very hard to me to use the SM as guidance for developing an RST-based system. Maybe I just need some more time to let your interpretation of the RST sink in.

I had one last question, which popped into my mind when looking at your 3D Space/0D Time cube: would the interaction between the 0D time point, the 3D space and its reciprocal be described by a convolution process?

Regards,

Steven


Hi Steven,

Primarily, the standard model is phenomenological. The three families of quarks and leptons and the bosons are observed phenomena, although quarks are not directly observable. The attempt to explain the phenomena in terms of four fundamental forces has been very successful up to a point, and the calculations based on quantum field theory are extremely accurate.

In a universe of motion, on the other hand, force cannot be fundamental, so naturally we cannot be satisfied with the SM, but the fact that physicists are able to use a force model to explain as much of the structure of the physical universe as can be explained with this approach has to hold some valuable clues for a new, motion-based model, in my opinion.

The clues I find most interesting are those given by the symmetry of the standard model. The U(1) symmetry is a symmetry of 1D rotation. The SU(2) symmetry is a symmetry of 2D rotation, while SU(3) symmetry is a (sort of) symmetry of 3D rotation. The trouble is, these rotations are not in geometric dimensions of real numbers, but in the dimensions of complex numbers.

Still, what physicists have done is observe the daunting array of particles coming out of accelerators and tease the patterns out of it. They have brilliantly perceived the groupings of these particles and sought to understand the properties of these groupings through the principles of symmetry, and they have been astoundingly successful at it, even correctly predicting the existence of new particles, such as the omega minus particle. So why wouldn't their work be a good guide to anyone seeking to understand the structure of the physical universe from the perspective of fundamental space/time ratios?

This said, however, the problem is that the use of complex dimensions has led to the acceptance of unphysical notions such as "internal" symmetry spaces that have no physical presence in ordinary three-dimensional space. Bruce Peret is convinced that these internal spaces are motions in the time region, best described with principles of projective geometry, but I believe they are artifacts of the mathematics, mathematical edifices with no more physical meaning than other, no less complicated, abstract spaces that can be imagined, such as the derivatives in the financial world that led to credit default swaps and caused so much misery and woe.

If we allow ourselves to get carried away with abstractions, then we are on dangerous epistemological ground, as Einstein warned. We need to stay grounded by observation and temper our abstract extrapolations. This applies to mathematics as well as physics, especially since it's only through mathematics that we can really speculate about the unseen micro world.

Ok, but we know that spin space is real, so doesn't this justify the notion of internal symmetry spaces? Not if we can find another way to explain spin space without compromising our notion of three-dimensional space, which is what I believe we can do, given the mathematics of operationally interpreted ratios, where a negative number is simply the inverse of a positive one, and given a rigorously defined notion of motion, as the reciprocal relation of changing space and time.

As far as utilizing the mathematics of convolution, to describe the interactions of these entities, goes, I would imagine it would be helpful, but someone else will have to determine that, I think. My main goal is to get to the atomic spectra, and I don't see any use for these functions in that context, at least for the moment.

Regards,

Doug


Hi Doug,

Doesn't Bruce Peret already give some explanation of atomic spectra when calculating photon frequencies in his RS2 theory?

Regards,

Steven

7 days later

Hi Steven,

Sorry it's taken me so long to respond. I've been tied up doing other things.

Bruce, like Larson and Bohr before him, is able to calculate the atomic spectrum of hydrogen, because it's based on a simple integral relation, but the only physicists capable of calculating the spectra of elements beyond hydrogen, at least in principle, are quantum physicists.

In his book "The Story of Spin," Tomonaga explains why. The spectra of the heavier elements break up into different energy levels, a phenomenon referred to as spectral multiplicity. The origin of multiplicity and the Zeeman effect stumped the most brilliant quantum physicists at first, until the idea of electron spin and the quantum numbers of QM evolved sufficiently to explain the selection rules for state transitions.

However, in QM, the origin of multiplicity is in the orbiting electron itself, so in an RST-based theory, another explanation must be found, since, as you know, there are no orbiting electrons and there is no electron cloud of moving electrons, surrounding a nucleus, in an RST-based theory of atomic structure, such as Larson's, or Peret's or the LRC's.

If you look at the toy model in my essay, you will see that the scalar motion of the electron's three S|T units neutralizes the net scalar motion of the proton's nine S|T units. The different quark configuration of the neutron makes it neutral without the electron, and combining the two yields the deuterium atom, which would have a net scalar motion equal to, but opposite in sign to the electron's net scalar motion, if it weren't for the electron's presence in the combo.

From there, the pattern is repeated in the higher combos of heavier elements, showing how the "embedded electron" of each proton accounts for the number of electrons in the atom of each element and their isotopes.

Now, the question of how the structure absorbs electromagnetic energy in a multiplicity of discrete energy units, and emits them according to some probability of transition, some set of selection rules, if you will, is what we have to answer next.

Since the S|T units of bosons and fermions in the RST-based model of figure 1 are identical, except for their geometric configuration and net scalar motion, the implication is that the different energy states can be explained in simple chemical-like manner: It appears to be just a matter of balancing the scalar motion equations.

Although there have been some interesting developments along this line, there's no breakthrough to report as yet.

Regards,

Doug


Hi Doug,

I think Bruce's theory has already moved beyond the point of only explaining the hydrogen spectrum. According to him, the splitting of the spectral lines is caused by the magnetic and electric rotations of the atom, as he describes in his theory of quantum numbers.

Also, I think his model of the atom now includes the possibility of atoms capturing electrons.

So, why did the research in ISUS split up into at least three different directions (yours, Bruce's and Ronald's)?

Is there a difference in insights on the basics?

Thanks so much for your time,

Steven


Hi Steven,

Just as Newton established a program of research into the structure of the physical universe that ushered in a new age and that has continued for centuries, Larson has established a new program of research that promises to usher in a new age that will continue for a long, long, time.

At the LRC, we distinguish between the two systems by referring to the Larsonian program as the Reciprocal System of Theory (RST) and the Newtonian program as the Legacy System of Theory (LST), but it's important to understand that the new system subsumes the old, it doesn't replace it.

The LST is based on Newton's laws of motion, mainly F = ma, and its success is due to the fact that this definition of force can be completely defined mathematically as a function of time. Indeed, as Stephen Wolfram points out in his tome, "A New Kind of Science," the whole LST-based program comes down to about four partial differential equations, but the trouble with this program is that it is limited by the definition it is based upon.

In the words of David Hestenes, "The central hypothesis of Newtonian mechanics is that variations in the motion of a particle are completely determined by its interactions with other particles," and the whole goal of the program is to classify the kinds of forces that exist and by so doing "develop a classification of particles according to the kinds of interactions in which they participate."

Quantum mechanics necessarily introduces some modification to this program due to the inability to define the trajectory and the momentum of a quantum particle simultaneously, but it doesn't introduce a new program of research. The LST still depends on the continuous functions of time. Yet, as we all know, the nature of the challenge facing physics from day one, and the whole basis for the trouble that the LST program finds itself in today, has to do with the mystery of how nature consistently incorporates both the discrete and the continuous concepts of magnitude.

In Wolfram's new research, he doesn't pretend to be able to reconcile these two in any fundamental, earth shaking, way, but simply finds it more practical and productive to investigate the structure of the physical universe from the discrete perspective. Thus, his new kind of science is based on discrete systems, or algorithms, instead of the continuous functions of the LST, but he doesn't really take issue with the latter approach, except in a utilitarian manner.

What does all this have to do with your question regarding the different approaches being taken to develop an RST-based theory? The answer is that it has everything to do with it, because in both cases the central challenge is still how to define continuous and discrete magnitudes consistently. Larson's approach, continued by Satz, is based on Newton's third law of motion, that for every action there is an equal and opposite reaction, but the theoretical role of this fundamental law is found in the more abstract sense that for every dimension of motion there exist two "directions."

Thus, Larson assumes that a constant reversal in the "direction" of motion is as natural and as permanent as unidirectional motion, just as LST physicists must recognize the central role of simple harmonic motion in their program. Everything depends upon it.

However, in Larson's development, it is assumed that the constant reversals are linear, occurring in only one dimension. In Peret's development, the constant reversals are reversals of rotation, which are two-dimensional. In the LRC's development, the reversals are the three-dimensional oscillations of the pseudoscalars. In all three instances, however, the critical difference between the RST-based program and the LST-based program is that the reversals are not reversals in the direction of the LST's vectorial motion, as defined by the changing position of an object; they are reversals in the "direction" of scalar motion, as defined by the changing quantity of space that is the reciprocal of the changing quantity of time, which is assumed in the fundamental postulates of the new system.

That is to say, it is assumed in the RST-based program that the observed march of time is simply one aspect of the underlying reality of a universal march of both space and time, or a universal motion. The "space clock" is not normally perceived due to the continuous reversals in the space aspect of the universal motion that constitutes matter, except at large distances, where the galactic aggregates of matter are beyond the gravitational limit.

Nevertheless, the fact that the space aspect of the motion can be in a state of continuous reversals also means that the time aspect can be too, and this opens up the illuminating concept of reciprocity on a new level, where the roles of space and time are inverted, introducing anti-matter.

What was so surprising was the discovery, with the advent of Wolfram's work, that his cellular automaton rule 254, the most uninteresting rule of all, perfectly captures the fundamental assumptions of the RST: the continuous increase of space and time expands from a point to infinity, from any point of space/time, or time/space, one cares to select, but once selected, the point is no longer definable in terms of position in space or time, leading to the foundations of quantum mechanics.
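
For readers unfamiliar with Wolfram's numbering, rule 254 is the elementary cellular automaton in which a cell becomes 1 whenever any cell in its three-cell neighborhood is 1 (254 = 0b11111110). A short, self-contained Python sketch shows the behavior described above, a single seed expanding uniformly outward:

```python
# Elementary cellular automaton rule 254: the new cell is 1 unless the
# whole neighborhood (left, self, right) is 0, so a single seed grows
# by one cell per side per step.
def step(cells):
    n = len(cells)
    return [1 if (cells[i - 1] or cells[i] or cells[(i + 1) % n]) else 0
            for i in range(n)]

row = [0] * 9
row[4] = 1  # a single "point" seed
for _ in range(4):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```

Each printed row shows the black region widening by one cell on each side, the uniform expansion that the passage above takes as the analogue of the universal space/time progression.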

With this much understood, it's just a matter of understanding how the four dimensions of the tetraktys lead to the laws of physics that we can define, such as F = ma. Hence, in working with the RST-based theory, we don't have to assume that particles and positions are fundamental, nor deal with the conundrum that forces us to pretend that a point can somehow have extent in a way that cannot be explained rationally. Now, a point is a point with no extent, period.

Finally, let me dare to add that this leads us to the understanding that, since F = ma, in some real sense m = F/a, and we can now comprehend what that means. We no longer need to hide from it, because the "direction" reversals allow us to reconcile the discrete with the continuous and quantify something like t^3/s^3 in a sensible way. Ultimately, this is why the RST constitutes a new program of research, but the critical role of n dimensions is still being clarified, just as it is in the LST program.

I hope this helps.

Regards,

Doug

6 years later

Douglas, to most readers this might seem like a trivial point, but to me, as someone who knew Larson and worked with him, it is far from trivial. Can you explain why you consistently spell Larson's last name with a small "l"? Breaking the rules of grammar so egregiously must have some overriding purpose. Can you tell us what it is?
