• [deleted]

Lawrence,

You wrote: "As for integrating from -∞ to ∞, breaking this up would change the answer if there is a delta function at zero. If the integral is set up between (-∞, 0^-) and (0^+, ∞) that would ignore the delta function."

I do not cling to arbitrarily chosen definitions when I understand the essence. My honest main concern is to get rid of unjustified arbitrariness. For instance, I consider Duhamel's integral not merely older than and equivalent to the now-mandatory definition of convolution by integration from minus infinity to plus infinity, but simply the original convolution, just as the non-negative numbers are obviously more basic than the positive and negative integers.
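For reference, here are the two definitions in question side by side (a sketch in standard notation; f and g stand for the input and the impulse response):

\[
(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t-\tau)\, d\tau
\qquad \text{vs.} \qquad
y(t) = \int_{0}^{t} f(\tau)\, g(t-\tau)\, d\tau .
\]

For causal functions, i.e. functions vanishing for negative arguments, the two coincide, which is the sense in which the Duhamel form already contains everything needed.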

Because I was not sure about the ranges in tables of definite integrals, I looked into my old Bronstein and found 44 integrals taken from zero up to ∞, 1, π/2, or π/4. Merely a single one extends from minus infinity to plus infinity. Because the function sin(x^2) has even symmetry, the corresponding integral from zero to infinity equals half the tabulated value, which was perhaps actually calculated on IR+ and then doubled. I recall having found this method in a book on integral transforms by Sneddon.
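Written out, the symmetry argument is simply

\[
\int_{-\infty}^{\infty} \sin(x^2)\, dx \;=\; 2 \int_{0}^{\infty} \sin(x^2)\, dx \;=\; \sqrt{\tfrac{\pi}{2}} ,
\qquad\text{hence}\qquad
\int_{0}^{\infty} \sin(x^2)\, dx \;=\; \tfrac{1}{2}\sqrt{\tfrac{\pi}{2}} ,
\]

assuming the tabulated entry is this Fresnel-type integral.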

In what I am calling safe and secure physics, there are only positive items and only positive basic quantities. The primary role of IR+ also implies that for this pre-traditional physics, in the sense of "before Copenhagen", complex quantities are, as Pauli admitted, valuable but not essential tools.

The interval between your 0^- and 0^+ has measure zero. It is irrelevant not just in physics but, thought through consistently, in mathematics too. Buridan's ass is still a good reminder of the ignored fact that numbers are appropriately attributed not to points but to measures. The ass would likewise starve if it stood exactly at any other single point.

Omitting the neutral zero would not ignore the delta function but merely require putting it on a less naive footing.

I already mentioned that Terhardt and also Aseltine reported trouble with the unilateral Laplace transform. I have forgotten the names of the three professors from MIT who tried, not convincingly, to fix the trouble by means of distributions.

I would agree that such questions and confusions are not immediately important in physics. In order not to be considered selfish, I would even humbly play down my reasons for dealing with them. However, you gave the clues yourself:

- "Bedrock" stuff including cardinality introduced by Cantor. He was charismatic but insane and called a charlatan. His proponents were forced to call his set theory untenable and naïve.

I do not object to facts: There is no limit to the natural, integer, and rational numbers, alias measures. Each of them is countable, i.e., it can be reduced to the unity by means of the four basic operations. The measure between two rational measures can always be split arbitrarily further.

Uncountables, alias alogoi, alias incommensurables, alias irrational (including transcendental) "numbers", do not have such a relationship to the measure one via a finite number of basic steps.

The notion of cardinality would be justified only if there were more than just countable and uncountable measures.

However, aleph_2, aleph_3, etc. have not been justified by a single reasonable application in about 130 years.

Infinity has mutually exclusive meanings:

Originally it meant the property of an unlimited measure, the possibility of counting endlessly. It was in this common sense that Galileo understood that trichotomy is invalid for infinite quantities.

When Spinoza clarified that one can neither enlarge nor exhaust the infinite, he still maintained the absolute alternative distinction between infinite and finite, but he referred to something fictitious.

Leibniz used the sloppy notion of "infinite relative to something", which is still in use when we write ∞ or 0. For Leibniz the infinites and infinitesimals are fictions with a fundamentum in re, like sqrt(-1).

Cantor introduced omega as a created infinity and fabricated transfinite integers.

Engineers like me don't worry about using, for convenience, ∞ like 1/0 as if it were a quantity.

- You are prepared for making intelligent comments on ... algebra ... topology

Let me admit that such comments are perhaps at least pretty similar to what Wikipedia and the textbooks have to say. Do you expect them to solve what seems to be out of order in the very basics?

I would rather appreciate your providing reasons to refute the ideas I have tried to suggest.

One of my key claims is that the body of discrete rational numbers/measures and the continuum of all measures mutually exclude and complement each other. All discrete spectra I measured were, strictly speaking, continuous because the time of my measurement was always limited. Conversely, all my continuous measurement was based on discrete samples.

My second claim is: IR+ fits best when we causally describe the result of any process. Consequently, a lot of speculative physics might deserve skeptical scrutiny.

Please accept the challenge.

Eckard

  • [deleted]

@Eckard,

I will have to get back to this. To be honest, I suspect you are drumming up a problem or controversy where none exists, or where the answers or theorems already exist.

LC

  • [deleted]

Lawrence,

Zermelo saved Cantor's naive set theory in 1904, when Julius König objected to the well-ordering of the real numbers, by fabricating the axiom of choice. In ZFC, Z stands for Zermelo and C for choice. Lebesgue's measure does not need the axiom of choice.

Zermelo's improved 1908 proof of the well-ordering is based on exhaustion. Zermelo ignored that, in its original meaning, infinity cannot be exhausted.

By the way, Dedekind explained in his letter to Weber on Jan. 24, 1888 why the irrational numbers are not immediately identical with his cuts.

The only mystery to me is why not even Poincaré, who called Cantor someone who spoils the young generation, Borel, who rejected the axiom of choice, and Lebesgue were encouraged enough to stop considering measure as something belonging to sets and instead, the other way round, to consider measures the primary objects in mathematics and physics. Maybe the proponents of Cantor's untenable naive definition of a set were just too strong, and those like Kronecker and Brouwer too weak.

Regards,

Eckard

  • [deleted]

The axiom of choice leads to some curious results, such as the Banach-Tarski decomposition of a sphere into non-measurable pieces and their recomposition into two spheres. This theorem was suggested as a basis for particle physics and a parton model approach to QCD and energy scaling. I am not so sure about that.

Cantor's transfinite numbers remained in the hinderland of set theory for some time. Yet Bernays and Cohen found in the 1960s that the continuum system Cantor proposed was consistent in ZF, but not provable. So the Cantor transfinite set system moved into the domain of established mathematics. Set theory gets even stranger, with least inaccessible cardinal numbers by Uman which are infinitely greater than the aleph numbers, even ℵ_ω or ℵ_∞. These are in a class even further up the transfinite ladder, and there are a couple of set theoretic infinite systems far beyond that. Of course, for me, I can't entertain these ideas that much. These don't appear to be effective for mathematical physics. So as far as I see, much of this is a sort of mathematical playground that has little bearing on the natural world --- at least for now.

Cheers LC

  • [deleted]

describes reality as nested density fluctuations of an infinitely dense particle field. Does some connection to topos theory exist there? I don't see any yet.

  • [deleted]

Absolutely fascinating. Even though I'm convinced that we are decades away from having a real breakthrough in quantum mechanics, I wish I had the capability to understand a brainstorm between these two fascinating minds.

  • [deleted]

Kerr solution: J = aGM^2/c

m(n) = n^(1/2) [constant], i.e., sqrt(n) [constant],

where a = 1/n and

constant = corrected Planck mass = 674 MeV

n       n^(1/2) [constant]   Empirical mass      Agreement
1/36    112.3                muon 105.7          94.0 %
1/25    134.8                pion 134.98         99.9 %
1/2     476.6                kaon 497.7          95.8 %
3/4     583.7                eta 547.8           93.4 %
1       674                  Planck mass         ---
2       953.2                proton 938          98.3 %
2       953.2                neutron 939.2?      98.5 %
2       953.2                eta' 958            99.5 %
3       1167.4               Lambda 1115.7       95.4 %
3       1167.4               Sigma 1192          97.9 %
4       1348.0               Xi 1314.8           97.5 %
5       1507.1               N ~1450             96.1 %
6       1651                 Omega 1672.5        98.7 %
7       1783                 tau 1784.1          99.95 %
8       1906.3               D 1864              97.8 %
10      2131.4               D(s) 2112.2         99.1 %
12      2334.8               Lambda(c) 2284.9    97.8 %
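For anyone who wants to re-check the arithmetic, here is a minimal sketch in Python (it assumes the formula m(n) = sqrt(n) * 674 MeV as stated above, and reads "Agreement" as the ratio of the smaller to the larger of the predicted and empirical masses, which reproduces the quoted percentages to within a few tenths of a percent):

```python
from math import sqrt

CONSTANT_MEV = 674.0  # the "corrected Planck mass" quoted above

rows = [
    # (label for n, n, particle, empirical mass in MeV)
    ("1/36", 1 / 36, "muon",   105.7),
    ("1/25", 1 / 25, "pion",   134.98),
    ("2",    2,      "proton", 938.0),
    ("7",    7,      "tau",    1784.1),
]

for label, n, name, m_emp in rows:
    m_pred = sqrt(n) * CONSTANT_MEV  # m(n) = sqrt(n) * constant
    agreement = 100.0 * min(m_pred, m_emp) / max(m_pred, m_emp)
    print(f"n = {label:>4}: predicted {m_pred:7.1f} MeV, "
          f"{name} {m_emp:7.1f} MeV, agreement {agreement:5.1f} %")
```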

Well, that is the 16 most common and stable of the particles observed, with the exception of the electron, which has n = 1/(1319)^2, and I want to study that a bit more. Maybe only a full K-N solution will suffice here.

My argument is that this high degree of ordering demands an explanation. The fact that it was achieved with the admittedly very approximate Kerr solution makes things even more interesting. The fact that Discrete Scale Relativity is definitively required to determine the crucial value of the corrected Planck mass should be fully appreciated.

Barking dogs may now start barking.

Scientists will undoubtedly start thinking.

Happy Winter Solstice [33rd anniversary of DSR]

Robert L. Oldershaw

www.amherst.edu/~rloldershaw

  • [deleted]

This would have been more interesting if you described their ideas more and spent less time extolling the virtues of collaboration.

  • [deleted]

Lawrence,

Let's not forget that I am claiming to have unveiled a basic mistake: It is natural to count measures, not points. Several oddities of set theory and related topology can be ascribed to this mistake.

You wrote: "The axiom of choice leads to some curious results, such as the Banach-Tarski decomposition of a sphere into an unmeasurable set and its recomposition into two spheres."

-- This is well known. It is also no secret that many mathematicians distrust the AC.

You wrote: "This theorem was suggested as a basis for particle physics and a parton model approach to QCD and energy scaling. I am not so sure about that."

-- I am not interested in such suggestions as long as they seem to be unfounded. I cannot even decipher what QCD stands for; I would read QED as quantum electrodynamics.

You wrote: "Cantor's transfinite numbers remained in the hinderland of set theory for some time."

-- Can you please tell me the meaning of hinderland, a word missing from my dictionaries? Hinterland in German might have a different meaning.

You wrote: "Yet Bernays and Cohen found in the 1960s that the continuum system Cantor proposed was consistent in ZF, but not provable."

-- I was only aware of Cohen's "The Independence of the Continuum Hypothesis", PNAS 1963.

You wrote: "So the Cantor transfinite set system moved into the domain of established mathematics."

-- I prefer positive evidence as do positivists. The existence of god has been mathematically proven while the opposite seems to be impossible. ;-)

I see Cantor's evidence as just delusion.

You wrote: "Set theory gets even stranger, with least inaccessible cardinal numbers by Uman which are infinitely greater than the aleph numbers, even ℵ_ω or ℵ_∞. These are in a class even further up the transfinite ladder, and there are a couple of set theoretic infinite systems far beyond that. Of course, for me, I can't entertain these ideas that much. These don't appear to be effective for mathematical physics. So as far as I see, much of this is a sort of mathematical playground that has little bearing on the natural world --- at least for now."

-- Uman is not known to me. Replace "little" with "no", and I will agree. I appreciate having learned the apt word playground from you.

Regards,

Eckard

  • [deleted]

I think we are close to being able to work out a problem here with SU(4). I wrote on The Beautiful Truth about SU(4) and SU(3)xU(1). This seems to imply that QCD, in a parton-like theory with Bjorken scaling, is a low-energy stringy theory that is holographically dual to AdS_3 (or, at higher energy, AdS_4) spacetime physics. In this way we might work out how hadron scattering is dual to holographic fields of a supergravity multiplet.

We could start working this up after the New Year, I think. We will need to communicate off the FQXI website here.

Today is the solstice, so happy Yule and ring those solstice bells. This is the season we all light up candles and farolitos (in our modern age, electric lights), in keeping with an ancient tradition of trying to bring the light back. Hanukkah is past, so the lights of the Menorah are no more. The Romans had Sol Invictus and Saturnalia, and the Christian celebration of Jesus' birth is meant to carry the idea of light entering back into the world.

Cheers LC

  • [deleted]

I pasted in the above post into the wrong FQXI page.

I don't think that Cantor's work is a delusion, though I am not in a position to rigorously defend it. The problem is that I am not particularly a set theorist, though I have a tangential acquaintance with the subject. There are arguments over the role of the AC. Some years ago I had an interesting conversation with Chaitin over this. Transfinite mathematics does have its origin in the dichotomy between countable and uncountable infinity. There is little mathematical debate over the existence of that.

The issue with physics, where it might have some bearing, is that physical objects occur in discrete packets. We measure things in clumps, where even quantum measurements involve a spot on a photoplate or an electronic click that registers a bit of information. So we actually measure things which are not continuous in a strict mathematical sense. These are particles or dynamical quantities. Yet we are faced with some curious issues in physics that date back to F = ma. There you have a dynamical entity (force) equal to a kinematical quantity (mass) times a geometric quantity (acceleration). The geometric stuff involves relationships between things we measure. It is in this domain that we have continuous quantities. It is also here that the foundations of mathematics enter into physics. Most physicists, myself included, might only appeal to some point-set theory, such as compactness or paracompactness and so forth. If one were so game, one might delve into deeper set-theoretic issues. However, most physicists don't go there.

The loop quantum gravity crowd tends to see physical spacetime as a discrete system. However, the recent measurement by the Fermi spacecraft of the simultaneous arrival of photons of widely different wavelengths from billions of light years away shoots down a major prediction of LQG.

Cheers LC

  • [deleted]

Lawrence,

I would appreciate explanations of why, in the end, Dedekind equated numbers with points. What I myself have found out so far:

Homer used the word pempazein (to five) for counting. Obviously counting was performed at the five fingers of a hand. The Greeks used

alpha, beta, gamma, ..., eta instead of 1, 2, 3, ..., 8,

iota, kappa, lambda,... pi, instead of 10, 20, 30, ..., 80

rho, sigma, tau, ... omega, instead of 100, 200, 300, ..., 800

Vau = 6, Koppa = 90, and Sampi = 900

Aristotle quoted the Pythagorean Eurytos, who defined the unity (monas) as a point without position and accordingly a point as a unity that has a position.

Accordingly, I tend to blame Eurytos's intention to play with figurate numbers for what is perhaps the worst mistake in mathematics.

I will deal later with your defense of Cantor's naivety. Alfred Nobel perhaps knew why he decided to let there be no prize for mathematics: He did not like Mittag-Leffler, who supported Cantor.

Regards,

Eckard

  • [deleted]

The assignment of numbers to points goes back a long way. From meter-stick lengths to coordinate grids on maps, that is what we do. So the issue of assigning a real number to every point on a line, or pairs and n-tuples on planes or higher-dimensional spaces, goes back to some pretty classical ideas. Calculus is ultimately based on the idea that an infinitesimal constructed from a limit computes something, or that numbers are assigned to points.

This procedure, which has practical uses, has led to various questions about the foundations of mathematics. In the late 19th and 20th centuries this led to Cantor's pointing out how countable infinity is less than continuum infinity, Gödel's discovery that axiomatic systems are incomplete, with statements that are true and unprovable, and Cohen's fusion of these two in his demonstration that the continuum hypothesis is consistent with ZF, but not provable.

Cheers LC

  • [deleted]

Lawrence,

Let me reveal mistakes step by step. You wrote:

"The assignment of points with numbers goes back a long way. From meter stick lengths to coordinate grids on maps that is what we do. So the issue of assigning a real number of every point on a line, or pairs and n-tuples on planes or higher dimensional space goes back to some pretty classical ideas."

-- The primary meaning of numbers can still be seen, in rudimentary form, in the Roman numerals: I, II, III, etc. Numbers are based on the choice of a unity, "one". The next steps were the repeated recognition of this unity. If the unity is a length, then any such number is also a length. In other words, the primary and therefore correct meaning of a number is a measure. Points do not have a measure. A number indicates how often the unity is repeated or split. Any piece of an ideal one-dimensional meter stick is still to be thought of as one-dimensional. The measure is given by two points: at zero and at the end. For instance, the number 35 is only correct if we start counting at zero and not at 100 cm. This clarification seems to be trivial. However, it avoids a lot of wrong consequences.

Regards,

Eckard

  • [deleted]

As I see it, you are calling into question the whole development of mathematics going back to calculus. Of course I can't write up a review of classical mathematics leading up to point-set topology and later foundational aspects. I am not sure where your departure lies. I would imagine that few physicists would cry foul over a rejection of Cantor's transfinite set theory, and many mathematicians might be unperturbed. However, if your point of departure reaches further into established or classical mathematics, I suspect your objections will gather a smaller audience.

I am at best a sort of meatball mathematician, but mathematics involves relationships between structures which are consistent. That is the primary requirement. The idea of limits in calculus, with infinitesimal lengths and the like, might sound strange from one perspective. However, from a formal perspective these systems work. That is the primary requirement for any successful mathematical system.

Cheers LC

  • [deleted]

Lawrence,

Why do you think the interpretation of numbers as measures instead of points is not compatible with calculus?

According to

www.classicpersuasion.org/pw/burnet/egp.htm?chapter=2

"the Euclidean representation of numbers as lines was adopted to avoid the difficulties raised by the discovery of irrational quantities". Euclid is not only famous for qed but also for his book "Elements" which was printed in 1500 editions and there are uncounted handwritten copies. Who abandoned his notion of number and why? Was it at the time of Descartes?

Regards,

Eckard

  • [deleted]

I was not clear on your objection, I think. Calculus is based on measure theory; at least integration is largely based on it. To be honest, I am a bit unclear on what you see as so troubling. The real line is the set of all real numbers, which exist at all points on the line.

Cheers LC

  • [deleted]

Lawrence,

I found four conditions for a non-principal ultrafilter in non-standard mathematics:

1 modus ponens, 2 closed under intersection, and 3 dichotomy. While 1+2 define a filter and 1+2+3 define an ultrafilter, I am mainly interested in 4, non-principality, i.e., adding or deleting a finite number of elements to/from A does not affect whether A is in p.
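For comparison, here is a sketch of how these four conditions are usually written for an ultrafilter p on the set of natural numbers (the wording is mine, not taken from any particular source):

\[
\begin{aligned}
&(1)\ A \in p \ \text{and}\ A \subseteq B \ \Longrightarrow\ B \in p
&&\text{(the "modus ponens" or superset condition)}\\
&(2)\ A \in p \ \text{and}\ B \in p \ \Longrightarrow\ A \cap B \in p
&&\text{(closure under intersection)}\\
&(3)\ \text{for every } A \subseteq \mathbb{N}: \text{ exactly one of } A \text{ and } \mathbb{N}\setminus A \text{ belongs to } p
&&\text{(dichotomy)}\\
&(4)\ \text{no finite subset of } \mathbb{N} \text{ belongs to } p
&&\text{(non-principality)}
\end{aligned}
\]

For an ultrafilter, condition (4) is equivalent to the formulation above: adding or deleting finitely many elements of A does not affect whether A is in p.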

The latter agrees well with my understanding of really real numbers in contrast to rational ones. In other words, a single real number, or even a finite number of real numbers, can be neglected when considered submerged into the genuine continuum of really real numbers. Accordingly, I prefer |sign(-->0)| = 1, not = 0.
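Spelled out, my preference is simply for the limit value over the conventional pointwise value:

\[
\lim_{x \to 0^{+}} \operatorname{sign}(x) = 1, \qquad
\lim_{x \to 0^{-}} \operatorname{sign}(x) = -1, \qquad
\text{so}\quad \bigl|\operatorname{sign}(x \to 0)\bigr| = 1,
\]

whereas the usual pointwise convention sets sign(0) = 0.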

I know: It is absolutely uncommon to write -->0, or in the general case -->x, as the argument of a function. When we write f(x) we assume x to be subject to trichotomy: having neighbors that are either smaller, equal, or larger.

Now I will tell you what makes the difference when we consider not points but measures: In any case, including integration and its inverse, the arrow of measure always has the same direction with respect to zero. Then there is no f(x=0) but always only an f(x-->0), with a direction according to |x|>0. The point x=0 has no measure and no bearing. Absolute exactness is a fiction that contradicts the fiction of absolute continuity.

While the replacement of points by positive or negative distances from zero will have diverse implications for the issue of continuum vs. discrete approximation, I felt particularly urged to remove some illogicalities that hindered me from performing a restriction to IR^+ both "correctly" and reasonably, and that gave rise to trouble with delta(0) and the lower limit of integration in the case of the unilateral Laplace transform.

In contrast to points, measures are not zero-dimensional.

Cantor's point sets do, strictly speaking, NOT constitute a genuine (Peirce) and physically relevant continuum, because he excluded what he called the infinitum absolutum. He claimed that there are cases of an infinitum creatum sive transfinitum in nature. There are none.

After Abel called mathematics a mess, an increasing crowd of mathematicians strove for rigorous formalisms. Many were even ready to accept Cantor's seemingly correct proof of a more than countably infinite number of real numbers. Cantor enchanted the experts by holding fixed all of the infinitely many numbers at once!

Do not get me wrong. I do not deny that the irrational numbers are uncountable, i.e., outside the rationals, and we may consider them to constitute the Peirce continuum. However, they must not be quantitatively compared with the rationals. Infinity is, strictly speaking, an absolute property that cannot be increased. Infinitesimals are elements for linear approximation that are well described by means of epsilontics and notions like "small of higher order", or "as if they were infinite".

Regards,

Eckard

  • [deleted]

Analysis uses measures because there is no "machine" which can compute or enumerate all the elements of the reals. Yet measure theory is still based on point set topology.

It has been a long time since I concerned myself with these issues. Going back to the late 19th and early 20th century history of mathematics, it is clear there were considerable debates over this. I would tend to say that, unless somebody comes up with a new system of mathematics, the current system will remain. I think it would require a huge abstract idea to reframe these foundations of mathematics.

Cheers LC

  • [deleted]

That type of relationship (between Döring and Isham) is what I've been looking for my entire career. Unfortunately, I'm already a tenured department chair, which severely limits my freedoms. Congrats to Döring on finding such a thing when he did.