• [deleted]

Dear Genevieve Mathis,

Thank you for your answer. I think you give an interesting take on the limits of mathematics. I asked my questions because conclusions are based upon beginning premises. As the reader, whether I do or do not accept the beginning premises determines my position on the conclusions. I think that I am not the best person to discuss your original post. I am somewhat surprised that it didn't receive more interest from others here. Best wishes.

James

  • [deleted]

Genevieve Mathis,

You wrote: "I have accepted that the human brain has evolutionarily developed in a macroscopic world, as opposed to other scales (microscopic/quantum/cosmologic/etc.), ..."

Do not exclude the possibility that there is only one correct scale. The peculiarities of the microscopic scale can be attributed to improperly interpreted mathematics.

Eckard

Constantinos,

You said:

"We can only know what we can 'observe' and 'measure'."

I absolutely agree. The "overabstractification" of physics drives me nuts.

    Incidentally, I have completely lost track of the discussion here. I find this software annoying and think FQXi ought to switch to the kind of software they use over at places like nForum and Meta.MathOverflow.

    • [deleted]

    Greetings Ian, thank you for your response.

    As careful as I am in crafting my ideas and arguments, there is always some significant aspect that is left out or some misreading that is left in the written words. For example, if I were to re-write the quoted statement in your post I would write

    "We can only know our 'observations' and our 'measurements' ".

    We really cannot know 'what Is' but only 'what we observe', 'what we measure' and 'what we understand'. These are the essences upon which we create our World (whether it be our physical universe or the world of our personal lives). Any attempt to model the World of 'what Is' in my view leads to Metaphysics and to Modern Physics. In my paper The Interaction of Measurement I show that we cannot know a physical quantity (thought of as a function of time) through our direct measurements of it.

    The essence of Physics in my view is 'measurement', while mathematics only provides 'logical certainties'. If we were to combine the two, it seems to me that a new Mathematical Foundation of Physics may be needed, one that establishes the Basic Laws of Physics as mathematical identities (tautologies) describing the interaction of measurement. This I show is possible in the case of energy in my short paper, Planck's Formula is an Exact Mathematical Identity.
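
    For reference, the formula I mean is the standard Planck expression for the average energy of an oscillator mode, shown here in its usual textbook form (the identity argument itself is in the paper and not reproduced here):

    \[ \bar{E} = \frac{h\nu}{e^{h\nu/kT} - 1} \]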

    Though I argue that we cannot know 'what Is', I do not go as far as saying that 'nothing Is'. I simply acknowledge that the very asking of the question (let alone answering it) is meaningless.

    Constantinos

    • [deleted]

    Ian and Constantinos,

    While I share your attitude, I am an engineer who trusts in sound prediction and benefits from complex calculus.

    I would like to convey the insight that the key question is not knowledge but a clear distinction between influences that are merely to be expected and those that have already led to something which can now, at least in principle, be observed and measured.

    In terms of physical theory, that 'now' is not necessarily the now of reality. It rather corresponds to the moment under consideration, i.e., to the latest point in time at which something can influence the process.

    Eckard

    • [deleted]

    Hi Constantinos,

    I read your explanation of the double-slit experiment and I have a question about it.

    If the emission of an electron (passing through the double slit) has to be thought of as a continuous field of energy - consisting of the energy of exactly one electron - distributed in its strength over the detection screen, according to the probability interpretation of QM, in the well-known manner of an interference pattern - then, by the laws of addition, the area exactly in the middle behind the two slits has the highest probability to show up as a light spot *first*. Therefore the first light spot, after accumulation of enough energy, must lie in exactly this area. Could this really have been the case in all the experiments? And why are there light spots in areas where there should be exactly destructive interference - that is, a probability of zero?
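
    For comparison, here is a small sketch of how the standard probability-interpretation treats the flashes (illustrative only, with made-up parameters, not a model of any particular experiment): each detection is an independent draw from the interference density, so the first flash need not land at the central maximum, and points of exactly zero intensity are never drawn at all.

        import numpy as np

        # Illustrative sketch: idealized two-slit fringe pattern in arbitrary units.
        rng = np.random.default_rng(seed=1)
        x = np.linspace(-1.0, 1.0, 2001)        # screen coordinate (arbitrary units)
        intensity = np.cos(5 * np.pi * x) ** 2  # idealized interference pattern
        p = intensity / intensity.sum()         # normalize into a probability distribution

        first_flashes = rng.choice(x, size=5, p=p)
        print(first_flashes)  # typically scattered over several bright fringes, not all at x = 0

    (In real experiments, flashes near the dark fringes presumably come from finite slit widths, detector resolution and imperfect coherence, which this sketch ignores.)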

      • [deleted]

      Hello again Stefan. I just posted a reply to your other post addressed to me. Thank you for your very good questions. Following up on my reply to your other question, let me just say that there are local conditions on the detection screen (beyond the radiated pattern from the emitted electrons) that we simply don't know, making any precise prediction impossible (even if we were able to predict the 'area' where the first flash will occur). We may agree that QM does not 'lie' here. But QM does not provide a 'physical explanation' either. What I am hoping to do with this is to help provide some 'Physical Realism' to what has now become 'quantum weirdness'.

      • [deleted]

      Dear Eckard,

      ... your call for experimental confirmation and predictability is well taken and appreciated.

      There are some new posts re: double-slit experiment explanation in the blog section of Edge of Physics. Just in case you were not aware of this ...

      Best,

      Constantinos

      • [deleted]

      I just found your site. I am in love. I just forwarded the link to all I know. Good luck on future growth. I look forward to it!!

      5 days later

      Constantinos,

      I definitely agree regarding measurement as I consider myself an empiricist. I'll have to ponder the rest.

      Ian

      Ian,

      I was involved in a discussion at another website. My involvement was not as a main participant. I have copied and edited parts of my few messages and included them below. I have removed all text except for mine. The point I was making is that thermodynamic entropy is not yet explained. Skipping past it to the expressions of statistical mechanics is not satisfying to me. I think the mathematical expression defining thermodynamic entropy is an example of the Limits of Mathematics. I began my posts with:

      "Saying that thermodynamic entropy is energy in transit divided by temperature is not, I think, an answer to: What is thermodynamic entropy? What did Clausius discover? Whatever it is, it requires the passage of time. Clausius allowed for absorbtion of energy under conditions of equilibrium. Statistical expressions do not include this dependence upon time."

      "The T in the definition of thermodynamic entropy represents thermal equilibrium conditions. There is no fluctuation in temperature included. Yet energy in transit is included in the Q. Energy is absorbed over time under conditions of thermal equilibrium. I am not arguing that this theoretical ideal condition is possible in the real world. I am saying that Clausius did discover something of fundamental importance and it is not yet explained to this day."

      "Since the derivation includes only long established macroscopic type properties, though they relate to the internal state, shouldn't it be expected that thermodynamic entropy should be explainable as a similar macroscopic property."

      I was then referred to this Link to Paper for a more logical and axiomatic explanation of thermodynamic entropy. I read the paper and responded:

      "I see the paper as being analogous to reverse engineering. Equilibrium and entropy are assumed as givens. When the authors assume conditions of equilibrium, then they have already introduced temperature into their analysis. It is there from the beginning. It is represented by the presence of equilibrium. The scale used to quantify temperature is introduced later, but the scale is not the property of temperature. Equilibrium is the condition of constant temperature. Different conditions of equilibrium represent different temperatures. They represent conditions of relative hot and cold. The practice of avoiding using these words and the mathematical symbol of T for temperature is not sufficient for saying that they are absent in the analysis until derived later. Changes in levels of equilibrium, no matter how accomplished, introduce the flow of energy as a given. The analysis is formal and axiomatic, very logical. I prefer relying upon the original measureable properties. Entropy is not explained in a mechanical sense. It is not described as a measurable property analogous to heat, temperature, pressure or volume.

      I still expect that it should be explainable as a macroscopic, classical-style property. I think that its mathematical expression does contain a contradiction. Heat is energy in transit, while equilibrium appears to exclude energy in transit. I think that this contradiction, or apparent contradiction, is what needs to be resolved, and thermodynamic entropy will then be revealed. Another way of looking at this is to say: When we finally understand what temperature is, then we will quickly understand what thermodynamic entropy is."

      I had stated earlier that: "Temperature is an indefinable property with indefinable units of measurement." My point is that temperature is not yet explained.

      There was no interest shown or discussion regarding my last message. Anyway, I post it here not because it represents anyone else's view but my own; rather, I submit it for consideration, and possible correction by you and others, as a mathematical solution without a clear physical interpretation - and a possible classic example of the limits of mathematics.

      James

        • [deleted]

        Ian, where are you, and why don't you speak to me ...

        James,

        I highly recommend reading Dan Schroeder's somewhat unusual textbook An Introduction to Thermal Physics, specifically chapter 3 (though the basic ideas presented in that chapter also appear in a paper he co-authored with Tom Moore in the American Journal of Physics in the late 1990s). Based on Dan's and Tom's work, I think thermodynamic entropy (and temperature) make sense, but the subject is not always presented in the form given by Schroeder and Moore, which makes it confusing. It's really a honing of Shannon's ideas.
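
        Roughly speaking, that approach takes multiplicity and entropy as primary and defines temperature from them - a sketch of the standard definitions, not a quotation from the book:

        \[ S = k \ln \Omega, \qquad \frac{1}{T} \equiv \left( \frac{\partial S}{\partial U} \right)_{N,V} \]

        Temperature then measures a system's tendency to give up energy, which is why, presented this way, it feels much less mysterious than the bare Clausius ratio.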

        Ian

        • [deleted]

        Ian:

        I like the idea of attempting to identify the limits of mathematics because insight into this issue would help to flag the moments when physics or other subjects, such as economics, might be going off the deep end. It is a profoundly challenging question, though, and it sits at the intersection of philosophy, mathematics and science. This is one of those situations where insight into a large question is gained by asking an even larger question.

        So let's ask instead: What is the difference between abstract mathematics - with no applications - and real mathematics? A good argument can be raised that there is no difference for one simple reason - all mathematics is abstract. There is a profound difference between useful and real. Abstractions from reality can be exceedingly useful, but they aren't real - they are idealized mental forms that are intentional simplifications of reality. The number 3 seems very real, and so does pi, but there is a reason why you can't see either number written in the sky or on the waves of the sea - these numbers exist only in your mind. They spring from the reality that defines them and makes them meaningful in more than one situation. We shouldn't be surprised that they are useful in multiple situations - we chose them as useful abstractions from reality for exactly that reason.

        Long ago, I browsed through a book on the Tao in a library and didn't find much useful, but one idea resonated strongly with me. It was stated something like this: the abstraction of the thing is not the thing itself. This is a powerful and useful observation. It turns out that the Tao has many translations, and I have never again seen this quote in exactly the same way, but the idea is eternal. The word cow is not a cow. A picture of a cow is not a cow. Your thought of a cow when you see the word cow or smell cows is not a cow. Our understanding that a cow herd contains individual cows does not create any cows. Only that individual creature on Farmer Jones' farm, named Bessy, calmly chewing grass with a bell around her neck as we watch her with our very eyes is a cow. She is unlike all other cows - a reality we forget when we abstract her.

        For every cow there are many abstractions of it. Likewise, it is possible to abstract reality into mathematics in an infinite number of ways. But if our goal is to understand reality, we should never confuse it with mathematics or with any of our abstractions. Physics and other sciences begin to diverge from reality the instant this confusion is made.

        Stan

          • [deleted]

          Dear Stan,

          Well, applied mathematics is applied abstraction. Someone said: "The map is not the territory".

          Hopefully, the community of physicists will not feel too offended if I reiterate that predominant physics is ignorant of something that is well understood by common sense: abstracted time is different from real time.

          In particular, W. Ritz was correct in 1909 when he insisted that future events cannot influence the past, while Einstein preferred to stick to the traditional belief in an a priori given time running from minus infinity to plus infinity. Ritz died soon afterwards.

          Maybe Minkowski fell ill and died because he felt he was wrong with his exciting idea of the associated complex spacetime.

          If I recall correctly, Schwarzschild died in 1916, before his complete solution for the metric around a point mass was interpreted as reality, even for the past and future singularities and beyond.

          If I compare the roughly 4 000 Mio € spent on the LHC with the 750 000 Mio € agreed upon in order to finance some almost bankrupt, generous European states, I see the LHC as worth its money, provided one draws the due consequences from the outcome.

          Regards,

          Eckard

          • [deleted]

          Eckard:

          I think Ian senses, along with some other physicists, that physics may have gotten lost somewhere along the way. He is looking for a method that will flag an abstraction as having no parallel in the real world. There are many ways to approach a problem like this, and if you have an open mind insights can be obtained that give you the ability to tell the difference even when the rule is not absolute.

          The solution begins by understanding what abstraction is in general, and then applying that understanding to math. The human brain is designed to abstract our experiences into words so we can form a mental model that we use individually and communicate to others. This ability allows us to continue the process by abstracting some words into numbers or mathematical structures. In other words, we can create abstractions of abstractions, and with math that is how the process begins.

          The process of abstraction has no natural limit, so with so-called abstract math we have created abstractions of abstractions of abstractions, etc. This gives us the first probabilistic rule of "real" abstractions. We can state it thus: the higher the level of the abstraction, the less probable it is that an attempted application of the abstraction will work in the real world. This is not the only useful probabilistic rule, but it is one of several.

          To understand this rule we have to become more concrete by working with examples - a process which is a hint of other rules. Whenever examples are elusive or hard to work with, you have probably landed on an abstraction that is useless in the vast majority of situations. As a first example, consider negative numbers: are they "real"? The answer is yes, they are, but they are more abstract than positive numbers and thus apply to fewer situations. For example, if you are happily crunching away with your equations and come up with a negative human height, you immediately know you have an error. Likewise with negative frequencies, negative wavelengths, negative volumes, etc. The artful mathematician might immediately jump in, beat his breast and insist, "I can make people negative 9 feet tall!" That isn't the point. The point is that it is necessary to strain mightily to find the required example, and this is true to the extent that when you end up with a negative human height in a real-world problem you almost always have an error in your calculation. What we want to know is the probability of error, given only the nature of the abstraction itself.

          The scarcity of negative human heights, negative frequencies, etc. in the real world should serve as a warning about the danger of over-abstraction. Negative numbers are one of the most useful 3rd-level abstractions (abstractions of abstractions of abstractions), but even they apply to a much smaller set of real-world situations than positive numbers. The higher the level of abstraction, the more useless the application, as measured by the ratio of errors to correct answers in real-world problems.

          A corollary to this rule is that when you find yourself working with 12th level abstractions, you should have the sense to realize that applications are highly improbable, even if they aren't impossible. You are in a situation where you will make zillions of errors for every useful result, and they will be so subtle you will probably never find them or even sense that they are there.

          Whenever you are in this situation, you are a fool if you don't do two things: 1) Seek lower-level reality checks, preferably in abundance. 2) Try to reduce the problem to a lower level of abstraction. If you can't do either one of these things, you have to accept the reality that your model has an unknown but extremely low probability of being correct.
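
          To make the first point concrete, a lower-level reality check can be as mundane as a range assertion on an intermediate result (a sketch only; the bounds and the function name here are illustrative, not prescriptive):

              def sanity_check_height(height_m: float) -> float:
                  """Reality check: flag implausible human heights before using them downstream."""
                  # Illustrative bounds - the aim is to catch sign and magnitude errors
                  # produced by the abstraction, not to define what a human can be.
                  if not 0.2 <= height_m <= 3.0:
                      raise ValueError(f"implausible human height: {height_m} m (likely a calculation error)")
                  return height_m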

          Stan

          • [deleted]

          Ian,

          ... and why aren't you speaking to me?

          Thanks for your reply, Stan. My apologies for not responding sooner. I have been insanely busy lately and haven't had a chance to check on this board recently.