Edwin,

I don't presume that you agree with the following statements. They represent my opinion. My work has a unique approach that I have found works very well up to the point of my educational limitation. That limitation leaves my work at an introductory level of presentation. One of its shortcomings compared to your work is that mine is just another mechanical interpretation. However, that shortcoming is shared by theoretical physics. The mechanical ideology has kept physics from being the foundational science that it is purported to be. It is the lowest level of competent interpretation of the nature of the universe. It is useful only for solving mechanical problems.

The theoretical approach to interpreting meaning from empirical evidence has, in my opinion, caused the equations to become subservient. We are learning back from the equations that which the theorists have forced onto them. And, that which the theorists have forced onto the equations has severely reduced their usefulness even from the mechanical perspective. I work to demonstrate that this is the case and that progress in scientific learning depends upon returning the equations of physics back to their empirical roots. That is why the arbitrary decision to make mass an indefinable property, rather than striving harder to establish its direct empirical meaning, gets tossed, by me, so deliberately.

Your expertise is far greater than mine. Your work involves establishing fundamental unity right from the start. I feel certain that that is the first requirement to be met by the correct description of the nature of the universe. As I stated quite some time ago, it suits me fine to think that your work could be correct. I am certain in my own mind that current theoretical physics is not correct.

Regarding my own work, there is a conservation principle involved, and it results from mass being an inverse acceleration. The principle is one of conservation of acceleration: that which light gives up, matter gains, and vice versa. One further point of clarification with regard to my treatment of electric charge: polarity is a property of mass; again, this results from mass being an inverse acceleration.

I often state my opinion in a matter-of-fact manner, as is the case now. I don't think that that is because I believe that I have ultimate answers. No mechanical theory can contain the ultimate answers. I think it results from my impatience with theorists pushing their inventions of the mind in their own matter-of-fact statements.

James Putnam

James,

Sometimes it is good to pick a particular problem, home in on it, and work it to death. Not all of our ideas are good, but a few good ideas are valuable. I've read your essays and your comments for years and am familiar with your thinking. My remarks about Smolin were to encourage you that some very decent physicists share, to some extent, your unhappiness with mass as it is often defined. It's also worth quoting Yau, of Calabi-Yau fame:

"In general relativity mass can only be defined globally. ...as measured from far, far away (from infinity actually). In the case of local mass [...] there is no clear definition yet. [And] mass density is a similarly ill-defined concept in general relativity."

This is a rather remarkable statement!

Also, the idea that the Higgs 'gives' mass to particles I find absurd. It is an artifact of a theory based on charge. And even then the Higgs only accounts for certain masses.

So you have chosen a very good bone to gnaw on. And I find your intuitive redefinition of mass as inverse acceleration to be extremely interesting. I played with it some more since my earlier comments -- still interesting.

Takes my old brain a while to absorb new ideas, but I'm not in a hurry. I plan to keep thinking about your idea. I probably won't spend too much time on entropy or charge, but I hope to give your idea about mass a fair shake.

Best,

Edwin Eugene Klingman

Edwin,

I earnestly awaited your opinion:

"...the idea that the Higgs 'gives' mass to particles I find absurd. It is an artifact of a theory based on charge. And even then the Higgs only accounts for certain masses."

I didn't post my own because I don't know enough about the experiment to be confident. That remains the case. However, I will risk giving my opinion: If the experiment conducted confirmed that total energy before equalled total energy afterwards, then fine. If the experiment gave empirical evidence that the products observed could have been decay products from theoretically predicted particles, then that is also fine. I say this with full respect for physicists' knowledge of patterns in empirical evidence. Those are the patterns upon which theoretical physics has built its structure. I also say it with less appreciation for what appears to be the looseness that theoretical physicists have adopted for their work when offering interpretations of the meanings of those patterns.

What has been on my mind, and what I cannot address myself, is this: Have the experiments shown that a particle, which might be the Higgs particle, not only might have been caused to appear, but that there is evidence that a massless particle was made massive due to the action of a 'third' particle that might have been the Higgs particle? I will admit a prejudice on my part. My own work began with explaining mass. I don't have a need for a Higgs particle. I find it difficult not to be suspicious that theorists are again making use of looseness to unwittingly force an interpretation upon us that holds the standard model together for a while longer.

With regard to my own treatment of mass, it results in opposition to the decision by theorists to make mass an indefinable property. I find that decision unsupportable by the empirical evidence from which the existence of mass is inferred. The evidence shows that mass exists; therefore, the roots of mass must be linked directly to that empirical evidence. The intrusion of theory, at that early point of deciding how to interpret mass in Newton's equation f=ma, was, in my opinion, the first error of theoretical physics. I believe it to be an error of monumental importance to the rest of theoretical physics that followed.

Changing mass to a defined property changes almost everything that follows. While I chose to venture into that new fertile territory, I understand that others would not see it my way. Since those 'others' include theoretical physicists, readers should give that fact full weight. In the meantime, I will push on. Those are my thoughts. Thank you for sharing your thoughts.

James Putnam

Concerning changing the units of physics into the forms used in my essay:

The change made is not one of introducing new types of units. For example, the units used include meters and seconds. They remain as before. The actual change made is far more radical. It involves returning the equations of physics back to their empirical roots. In other words, removing all units that are not necessary for expressing empirical evidence. Empirical evidence consists of patterns in changes of velocities. The units used to express changes of velocity include just the two units of meters and seconds. Those are the only two units that should be necessary to express the equations of physics in their empirical forms.

An example of an empirical form for an equation of physics is f=ma before deciding about how to interpret mass and force. The empirical evidence for f=ma makes clear that there are four properties involved. They are: distance, time, force and resistance to force. Both distance and time are naturally indefinable properties and their units of meters and seconds are indefinable units.

Being definable means that a property can be expressed solely in terms of pre-existing properties. There are no properties pre-existing distance and time. There are no units pre-existing meters and seconds. However, both force and mass have been learned of and inferred from the patterns of empirical evidence. Everything we are to know about them is rooted in that empirical evidence.

Both force and resistance to force should be fully expressible in terms of the pre-existing properties of distance and time. The units of force and resistance to force should be fully defined by the pre-existing units of meters and seconds. This is the practice followed in my use of units.

The units are the two old units of meters and seconds. Both force and mass must have units that are formed from combinations of meters and seconds. There are several possible combinations. The choice for mass that is both logical and proves successful is for mass to have units of inverse acceleration. Those units are seconds^2/meters. Force, as a ratio of two accelerations, is unit-free.

From this point on, all other units of higher-level properties are definable in terms of those of mass, distance and time. All of them consist of combinations of meters and seconds. For example, energy, as the product of force and distance, has units of meters. In long form it has units of [(units of acceleration)/(units of acceleration)][units of meters].
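
As a minimal sketch of that bookkeeping, assuming only the conventions stated above (mass carrying the units of inverse acceleration, force being a ratio of accelerations), the dimensional check reads:

\[
[\text{mass}]=\frac{\mathrm{s}^{2}}{\mathrm{m}},\qquad
[\text{acceleration}]=\frac{\mathrm{m}}{\mathrm{s}^{2}},\qquad
[\text{force}]=[\text{mass}]\,[\text{acceleration}]=1,\qquad
[\text{energy}]=[\text{force}]\,[\text{distance}]=\mathrm{m}.
\]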

James Putnam

    The reason for using meters and seconds only, the units of empirical evidence:

    Properties are defined by their empirical evidence. That empirical evidence tells us all that we will ever know about each property. Theory is the intrusion of guesses onto physics equations. The guesses cannot be more informative than is the empirical evidence. The guesses can be less informative. They are more often than not restrictive to the extent that important empirical meaning is lost. It is artificially hidden from our view by the added-on theory. Those theoretical guesses become permanent restrictions on the usefulness of the equations.

    In the case of mass, getting the units right based upon the natural guidance of the empirical evidence means knowing what mass is right from the start. It means knowing that theory that is in contradiction to the meaning set by the empirical evidence cannot be correct. Mass is the most important property to get right. The degree of usefulness of the rest of theory depends upon getting mass right.

    There is another very critical property that must be gotten right or unity will be lost. That property is 'electric charge'. There were two unknown terms in Coulomb's equation, often represented by the letter 'q'. It was guessed that those two terms represented a new property that was the cause of electro-magnetic effects. That guess became the theory of electric charge.

    The best-suited empirical units of mass are those of inverse acceleration. The use of those units leads to the recognition that the units of those two unknowns in Coulomb's equation have to be seconds. Putting this claim up against the long-adopted theory of electric charge is difficult to make convincing. The chance to be convincing comes from putting forth the results that occur due to accepting seconds as the units for electric charge.
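
    As a rough dimensional sketch only (how the Coulomb constant itself should be treated is an assumption here, not a result taken from the essay): writing Coulomb's equation as f = kq1q2/r^2, with force unit-free and charge in seconds, the remaining units must be carried by the constant:

    \[
    [f]=1,\quad [q]=\mathrm{s},\quad [r^{2}]=\mathrm{m}^{2}
    \;\Longrightarrow\;
    [k]=\frac{\mathrm{m}^{2}}{\mathrm{s}^{2}}.
    \]

    That is, under this bookkeeping the constant would carry the units of a squared velocity.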

    It is the case that the magnitude of fundamental electric charge, with the units of seconds, becomes key to achieving unity. That magnitude represents a universally constant increment of time. That increment of time, placed consistently throughout the equations of physics, brings those equations into compliance with always-present fundamental unity.

    In my essay, I included appendix A for the purpose of emphasizing that that is the case. The two expressions for the fine structure constant are shown to be derivable one from the other. That is just one of numerous examples that could have been chosen and that exist ready for presentation. Several have been included in my previous essay contest entries. This present essay uses that immensely useful fundamental increment of time to explain what thermodynamic entropy is.

    James Putnam

    James,

    My question seemingly does not refer to your essay. Because you dealt with entropy, I would like to ask you for help. I refer to Phys. Z. 10, 323 (1909), where Ritz agreed to disagree with Einstein.

    Ritz argued that electrodynamic irreversibility was one of the roots of the second law of thermodynamics, while Einstein defended Maxwell-Lorentz electromagnetic time symmetry using both retarded and advanced potentials on equal footing. Ritz considered the restriction to the form of the retarded potential as one of the roots of the second law while Einstein believed that irreversibility depends exclusively upon reasons of probability.

    Can you please point me to the relevant work, either by Einstein or by someone he refers to, maybe Boltzmann?

    Secondly, I would appreciate an explanation, understandable to me, of how "entropy is a theoretical pathway for moving from It to Bit".

    Eckard

      Hi James,

      Could you please give me links describing experiments in which there is no detectable change in temperature due to the mixing of the gases? That is interesting, and I could not find anything in the references. Thank you.

      By the way I think that Gibbs simply had misused the entropy equation. In your essay you have mentioned the issue of indistinguishability of the particles in the volume. It is quite enough to know that there is no paradox at all (at least in Boltzmann statistics).

      Your Proposed Changes to Physics Theory are very revolutionary. However, item (2) would be interesting for me, as in my essay I have tried to apply Einstein's equivalence between gravitation and spacetime geometry to the rest of the known "force fields", and, with a little more courage, to apply the same concept to particles.

      Good luck!

        Eckard,

        Thermodynamics pertains to macroscopic systems. The properties of temperature, pressure and volume are macroscopic quantities that pertain to the internal energy of a system. The system could be the universe. The development of entropy began with Clausius discovering a new property of the system, the system being the 'It'.

        Boltzmann's definition of entropy broke away from the macroscopic perspective and introduced a new kind of entropy that described a condition of microstates. His definition introduces the 'Bit' into a definition of entropy. One of the points of my essay was to show that Boltzmann's entropy is not a thermodynamic entropy. It is an introductory form of statistical entropy. It did not include distinguishability.

        Gibbs' mixing entropy is also an analysis of microstates, but it introduced distinguishability. The example given in my essay involves two gases that are distinguishable. The mathematical solution for his mixing entropy has a form that is seen repeated in the rest of the entropy definitions, up to and including Shannon's information entropy. I consider each of the microstate-based entropies to represent the 'Bit'.
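
        For reference, the repetition I mean can be seen in the standard textbook forms (these are the conventional expressions, not my own derivations):

        \[
        \Delta S_{\text{mix}}=-nR\sum_i x_i\ln x_i,\qquad
        S_{\text{Gibbs}}=-k_B\sum_i p_i\ln p_i,\qquad
        H_{\text{Shannon}}=-\sum_i p_i\log_2 p_i .
        \]

        The same "minus a sum of p log p" structure appears in each of them.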

        So, my essay begins by recognizing that the development of entropy is usually presented as if it followed logically connected steps. Yet the aim of my essay was to demonstrate that the perceived connection between Clausius' entropy and Boltzmann's entropy does not logically follow. The meaning of Clausius' entropy was not explained nor understood by physicists. It was skipped past without adequate explanation in favor of jumping directly to statistical entropy. Statistical entropy was understandable.

        With regard to Ritz and Einstein, my only resource for that is the Internet. I do have a paper I printed off some time ago. It is titled: Ritz, Einstein, and the Emission Hypothesis, by Alberto A. Martinez.

        James Putnam

        Hi Jacek,

        I don't have any quick links to offer. It is something that is usually referred to simply as being 'well known'. That which I have said about it involves only ideal gases. I just gave some of my opinion about Boltzmann's entropy and Gibbs' mixing entropy in the message above addressed to Eckard.

        I do say that force is unit-free. A consequence of taking that position is that one force can be the product of two other forces. If you look back at my essay titled 'The Variability of the Speed of Light' in the last essay contest, you will find that I used this new view of force to calculate the universal gravitational constant. It involved using one force as the product of two forces.
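
        The dimensional point is simple to state (a sketch of the bookkeeping only; the particular combination of forces used to obtain the gravitational constant is in that earlier essay and is not reproduced here). With force unit-free,

        \[
        [f]=1 \;\Longrightarrow\; [f_1 f_2]=1=[f_3],
        \]

        so an equation of the form f3 = f1 f2 is dimensionally permissible, whereas in conventional units [f1 f2] = N^2, which is not the unit of a force.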

        This essay contest is my fifth one. Each of my essays involved deriving results using my new approach to the units of physics. I hope you find something helpful to you. I haven't yet read the other essays. I look forward to reading yours. Good luck with it.

        James Putnam

        Here we go again.

        Yet another essay about how the supposedly measurable differences between perfect abstract entropy and perfect abstract thermodynamic entropy and perfect abstract Boltzmann's Entropy can be perfectly abstractly explained by the same old familiar seemingly identical perfectly depicted reality deprived squiggles.

        As I went to great pains to explain in my essay, BITTERS, the absolute of real time is once. According to James, absolute abstract time requires abstract "fundamental increments and a fine (abstract) structure constant." Planck's (abstract) constant is somehow involved, although the (abstract) proportionality constant 'k' from Coulomb's Law seems to have actually been utilized. An (abstract) replaceable electric charge is also needed if you hope to be able to calculate the abstract extent of perfect absolute abstract time.

          So that readers are not misled by Joe Fisher: my essay is not about abstractions. Even the use of ideal gases refers to a very close approximation of what actually happens to a gas as its pressure approaches zero. The properties can be and have been observed experimentally, as anyone with at least an introductory level of physics would know. The constants used are not abstractions. They each result from experimental results, as anyone with at least an introductory level of understanding of physics would know. Even though Joe is able to copy words back at the author while spouting his egotistical disdain, it is clear from his response that he did not understand the steps in the essay. For example, Planck's constant was not 'somehow' involved; it was clearly put to use for a real physical reason, as anyone with even an introductory level of physics would have easily recognized.

          James Putnam

          Well instead of using approximations of what actually happens to unknown quantities of arbitrarily humanly contrived mixtures of "ideal gases" besides unification as it now becomes a singular gas as its pressure approaches abstract zero, I suggest you get real.

          No experiments are required for the full understanding of reality.

          Joe Fisher,

          I suggest that you return to your numbers and snowflakes.

          James Putnam

          James,

          Thank you for the link. It seems to confirm that Einstein did not delve deeply into the matter.

          Are you aware of http://www.mdpi.org/entropy/papers/e9030132.pdf ?

          Eckard

          Eckard,

          "Are you aware of On the So-Called Gibbs Paradox, and on the Real Paradox?"

          The author, Arieh Ben-Naim, briefly mentions, as a side issue, the real paradox, but does not address it except to say that examples of real-world discontinuity are observed.

          The 'real' paradox has to do with the mathematical derivations of entropy and getting them right. It is isolated in the ideal gas example that I gave. The author moves from one type of circumstance to another without connecting them. One example is when he states that "However, for ideal gases, the mixing, in itself, does not play any role in determining the value of the so-called entropy of mixing. Once we recognize that it is the expansion, not the mixing, which causes a change in the entropy, then the puzzling fact that the change in entropy is independent of the kind of molecules evaporates." He should state clearly that all of his examples are of the adiabatic type.

          The example of two distinguishable ideal gases, as I gave in my essay, does not cause a change in entropy, whether one refers to mixing or expansion. I fail to see how he sees a difference between expansion and mixing in this same type of example. Mixing entropy cannot be dismissed as lightly as he thinks. It is an important theoretical foundation of statistical mechanics and quantum theory. Its mathematical form repeats itself in derivations of entropy up to and including Shannon's information entropy.
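
          For comparison, the conventional textbook result for that example, which my essay argues against, is this: for n moles each of two different ideal gases, initially in separate volumes V at the same temperature and pressure, removing the partition gives

          \[
          \Delta S_{\text{conv}} = 2nR\ln 2,
          \]

          while for two samples of the same gas the conventional answer is zero. That discontinuity is the usual statement of the paradox, and it is the point at issue here.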

          I wondered how to respond with details of my own but couldn't see how to address all of the shortcomings that I feel exist in that article. I am thinking that I could write a few messages in my forum here that address some of the issues without direct reference to his article. There are many articles written to address the Gibbs paradox. I could combine the major issues raised in these articles and address them myself. I think that in my essay I chose the best example for representing the paradox. It was not one chosen by the author to address in detail.

          James Putnam

          Eckard,

          I see that a few words didn't make it into the final message. If you need clarification please let me know and I will rewrite those few parts. Thank you.

          James Putnam

          Thermodynamic entropy was defined by Clausius. It is the beginning of the concept of entropy. He discovered a thermodynamic property that joined with temperature, pressure and volume. His definition is the only thermodynamic definition. Even Boltzmann's definition is not thermodynamic.

          Boltzmann's entropy was the first of statistical entropies. His definition carried the name entropy and by this act introduced confusion about how to connect non-thermodynamic entropies to thermodynamic properties. Boltzmann's entropy, along with the others that followed his lead, involved counting cells or microstates. Thermodynamic properties do not include counting molecules or microstates.

          Clausius discovered a new property using only macroscopic thermodynamic properties. He arrived at a definition of entropy that was directly proportional to the ratio of heat to temperature, both thermodynamic properties. The difficulty faced by the non-thermodynamic entropies is that their non-thermodynamic definitions must give correct answers for actual thermodynamic entropies.
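
          For reference, the two standard definitions being contrasted here are

          \[
          \Delta S_{\text{Clausius}}=\int\frac{\delta Q_{\text{rev}}}{T},
          \qquad
          S_{\text{Boltzmann}}=k_B\ln W,
          \]

          where W is a count of microstates. Only the first is stated purely in terms of macroscopic thermodynamic quantities.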

          As one might expect, their results will be haphazard. Yet those incorrect results help to expose the parts of the definitions of non-thermodynamic entropies that are themselves non-thermodynamic. In this way, those definitions can be distinguished from thermodynamic entropy and given recognition for what they actually represent. They represent statistical definitions of things such as microstates or anything else that the theorist wishes to count.

          The name entropy no longer belongs exclusively to thermodynamic properties. It hasn't done so since Boltzmann gave his definition. The confusion that has followed results in part from his retention of his constant in his definition. Other definitions have retained that constant. The constant does have a thermodynamic connection to thermodynamic entropy. So, those definitions that choose to retain Boltzmann's constant do have a loose, partial connection to thermodynamic entropy.
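
          In conventional units, that loose connection runs through the gas constant:

          \[
          k_B=\frac{R}{N_A},
          \]

          which is what allows a count of microstates, once multiplied by Boltzmann's constant, to be expressed in the same units (energy per temperature) as Clausius' entropy.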

          James Putnam

          Hi James,

          You are correct when you say that we should use only length and time, but we can go even further and use just length!

          Have a look at my theory (3D Universe Theory); I derive most empirical constants with just the Planck Length and the simple expression 8Pi-1. I know that most people will dismiss these formulae as pure numerology, but there is a chance you might have a different opinion.

          Cheers,

          Patrick

            Patrick Tonin,

            Hi. Thank you for reading my essay. That point about using length (meters) and time (seconds) only applies to the macroscopic world for convenience. Other units that are more fundamental, i.e. Planck length and Planck time, could be used instead. I don't use those because meters and seconds work conveniently enough for me down to atomic dimensions. There is another reason for not adopting Planck units yet, and that is that those units are formed from combinations of units which I do not need. I need only the units of empirical evidence, and they are length and time. This is not a strange thing to do.

            My argument is that every mechanical dimension should be rooted in empirical evidence if possible. I find it is quite possible to do, and it is the key to learning what fundamental unity is. Fundamental unity is immediate, and I feel that is convincing evidence that returning the equations of physics to their empirical roots, by ensuring that all units of properties are expressible in the same units as the empirical evidence from which their existence is inferred, yields the most accurate and most useful form of physics equations.

            It is through the units that ideas are made concrete parts of physics equations. It is the units that make equations either empirical or theoretical. Since all units are reducible to those of mass, distance and time, it is the case that mass must be made right. Mass must be expressed in the same terms as its empirical evidence. The units of that empirical evidence are only those of distance and time. All of my work involves that change. This essay is just one part of my work.

            Your essay uses Planck length and time. Your cubit uses length only. Your layers are traversed one at a time, every Planck unit of time. You do mention that your dimension in the radial direction represents space-time. Your layers consist of length only, and, one after the other, they are your present. I assume that your statement that we only need length is based upon your idea that the present only has dimensions of length. There is, though, the matter of change. All empirical evidence of physics occurs as patterns in changes of velocity. How do you account for change in equations? Calculus equations are equations of change. Physics needs extensive use of calculus in order to express its ideas. You may answer this message in your own forum. I will look for it there.

            Thanks again,

            James Putnam