Thank you LC for a nice explanation...

By the way

I want you to explore one more model of the Universe, one in which the reproduction of Galaxies is described. The Dynamic Universe Model is another mathematical model of the Universe. Its mathematics shows that the movement of masses has a purpose or goal, and that different Galaxies will be born and die (quench). Just have a look at my essay, "Distances, Locations, Ages and Reproduction of Galaxies in our Dynamic Universe", where the UGF (Universal Gravitational Force), acting on each and every mass, creates a direction and purpose of movement.

I think this is INTUITION, and it is inherited by all biological systems from the Universe itself.

For your information, the Dynamic Universe Model is based entirely on experimental results. In the Dynamic Universe Model, space is space and time is time, at the cosmological level or at any other level, whereas in classical general relativity space and time are convertible into each other.

Many papers and books on the Dynamic Universe Model have been published by the author on unsolved problems of present-day physics, for example 'Absolute Rest frame of reference is not necessary' (1994), 'Multiple bending of light ray can create many images for one Galaxy: in our dynamic universe', about the "SITA" simulations, 'Missing mass in Galaxy is NOT required', "New mathematics tensors without Differential and Integral equations", "Information, Reality and Relics of Cosmic Microwave Background", "Dynamic Universe Model explains the Discrepancies of Very-Long-Baseline Interferometry Observations", and, in 2015, 'Explaining Formation of Astronomical Jets Using Dynamic Universe Model', 'Explaining Pioneer anomaly', 'Explaining Near luminal velocities in Astronomical jets', 'Observation of super luminal neutrinos', 'Process of quenching in Galaxies due to formation of hole at the center of Galaxy, as its central densemass dries up', and "Dynamic Universe Model Predicts the Trajectory of New Horizons Satellite Going to Pluto". Four books have also been published. Book 1 shows that the Dynamic Universe Model is singularity-free and free of body-to-body collisions; Books 2 and 3 explain the equations of the Dynamic Universe Model; Book 4 deals with the prediction and discovery of blueshifted Galaxies in the universe.

With axioms like: No Isotropy; No Homogeneity; No Space-time continuum; Non-uniform density of matter (the Universe is lumpy); No singularities; No collisions between bodies; No Black holes; No wormholes; No Bigbang; No repulsion between distant Galaxies; Non-empty Universe; No imaginary or negative time axis; No imaginary X, Y, Z axes; No differential and integral equations in the mathematics; No General Relativity, and the Model does not reduce to General Relativity under any condition; No creation of matter as in the Bigbang or steady-state models; No many mini Bigbangs; No Missing Mass; No Dark matter; No Dark energy; No Bigbang-generated CMB detected; No Multi-verses; etc.

Many predictions of the Dynamic Universe Model came true, like blueshifted Galaxies and no dark matter. The Dynamic Universe Model gave many results that are otherwise difficult to explain.

Have a look at my essay on the Dynamic Universe Model, and also at its blog, where all my books and papers are available for free download:

http://vaksdynamicuniversemodel.blogspot.in/

Best wishes to your essay.

For your blessings please................

=snp. gupta

It is evident that our ideas are fairly different. As I see it, singularities are quantum mechanical, being in effect topological quantum numbers. Classically they make little sense, but quantum mechanically they may hold deep information. Also, instead of galaxies being generated continuously, whole cosmologies are generated.

Anyway, one can't disprove a theory with a theory. I can't comment much on the astrophysics of galaxies, for that is not a specialty of mine. I would have a hard time benchmarking your hypotheses against what is standard in astrophysics.

Best luck on your essay,

LC

Hi Lawrence,

That is funny, I was thinking the same; I lost all hope. In that case I won't ask you to evaluate my revolutionary theory :)

last year's essay

your essay this year

Thanks.

P.S. I hope you recover from the bronchitis. I had a severe reaction (I thought I was going to die!) to an antibiotic that was given to me for the same thing. That is why my essay was quick and to the point.

Thanks for your comment on my page. I am very aware of what the Yukawa potential is. It is just that, combined with the Coulomb potential, the system seems to predict the electron and the proton naturally, and I already get this combination from the simulation of my system.
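To illustrate the kind of combination I mean, here is a minimal sketch in Python (purely illustrative: the couplings alpha and g and the range parameter mu are placeholder values in arbitrary units, not the numbers from my simulation):

```python
import numpy as np

def combined_potential(r, alpha=1.0, g=1.0, mu=1.0):
    """Coulomb term -alpha/r plus an attractive Yukawa term -g*exp(-mu*r)/r.

    alpha, g, and mu are placeholder couplings in arbitrary units; the sign
    conventions are chosen only for illustration.
    """
    return -alpha / r - g * np.exp(-mu * r) / r

# Evaluate on a small radial grid (avoiding r = 0, where both terms diverge).
r = np.linspace(0.1, 10.0, 100)
V = combined_potential(r)
print(f"V(0.1) = {V[0]:.3f},  V(10) = {V[-1]:.3f}")
```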

Thanks again

Dear Doctor Crowell,

"Very little of human action really involves reason."

Probability learning, I would say: for every possibility X_i (i = 1 to n), regret at having chosen X_i when the payoff occurs elsewhere; and in other situations, regret at having NOT chosen X_i when the payoff does occur there.

This is the signature of a learning algorithm, evident in the Born rule when Bohm and Hiley (in The Undivided Universe) are taken to heart-- that the two sides of the simplest possible equation for the Born rule represent two different concepts.

I just adopt/adapt this and instead say that the two sides of the equation are two different ALGORITHMS.

So of course, I have to resort to game theory. Hence the probability learning game.

The quantum particle doesn't KNOW the laws of physics, so it has to LEARN them.

(Who is doing the teaching?)
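To make the probability-learning game concrete, here is a minimal sketch of my own in Python (not an algorithm taken from your essay or from Bohm and Hiley): one option pays off each round, drawn from a hidden fixed distribution, and the learner chooses in proportion to the accumulated "the payoff occurred there" signal. Its play frequencies then come to match the payoff frequencies rather than locking onto the single best option. The payoff distribution, the smoothing constant, and the update rule are all placeholder assumptions.

```python
import random

def probability_matching(payoff_probs, n_rounds=20000, seed=0):
    """Toy probability-matching learner (a sketch, not the essay's algorithm).

    Each round exactly one option pays off, drawn from the hidden, fixed
    distribution `payoff_probs`. The learner counts how often each option
    has paid off -- whether it chose it (reinforcement) or not ("regret at
    having NOT chosen X_i") -- and chooses in proportion to those counts.
    """
    rng = random.Random(seed)
    n = len(payoff_probs)
    wins = [1.0] * n            # +1 smoothing so every option stays playable
    play_counts = [0] * n

    for _ in range(n_rounds):
        choice = rng.choices(range(n), weights=wins)[0]
        play_counts[choice] += 1
        winner = rng.choices(range(n), weights=payoff_probs)[0]
        wins[winner] += 1.0     # the learning signal, chosen or not

    return [c / n_rounds for c in play_counts]

print(probability_matching([0.7, 0.2, 0.1]))   # roughly [0.7, 0.2, 0.1]
```

A learner that instead maximized would end up playing only the most rewarded option; the point of probability matching is that it keeps sampling the others.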

David Tong, in the notes for the QFT course he teaches at Cambridge, says there is a limit close to the Schwarzschild radius where some physicists believe that QFT will break down. And then there must be a different theory.

If I understand this correctly, it can't be QFT because that depends on nice results for Lorentz transformations, which would be expected to have exceptions, I guess, in the neighborhood of the Schwarzschild radius.

Then what would the new theory be, and how would QFT "emerge" from it, to use the popular term?

More generally, it seems to me there must therefore be an "infomorphism" from this other kind of theory to field(s), or field, if we respect string theory.

I have been thinking of this in terms of proper time.

And instead of another kind of field theory at that scale, I've been imagining a different kind of "particle" theory.

But instead of being an "object," I've been thinking of the particle as a "process," as in a formally specifiable computer process (an algorithm).

Then to get an infomorphism, the proper time of such a process must map to a SET of all possible (string theoretic) fields, as represented by their coordinate times.

Hence there should be a game-theoretic selection of fields, and since the non-flat ones give indeterminate readings for the number of particles created, we should expect that the field selected will be flat; otherwise, any number of particles could be created. But we are looking for a deeper theory from which such fields will emerge, and therefore it is the "particles" (processes) that determine how many of themselves there are, not the fields, which are selected and which, in this idea, do not determine the number of particles.

Here's the start of another discussion about this in the contest.

Do you agree with David Tong, as I interpret him, that the usefulness of QFT breaks down in the neighborhood of the Schwarzschild radius?

If so, how would you see the particle, not as an object but as a (computer) process?

    When it comes to my statement "Very little of human action really involves reason," I can appeal to the science fiction comedy movie "Men in Black." In it Tommy Lee Jones says to Will Smith, "A person can be rational, but people are a panicky herd of dangerous animals."

    Quantum mechanics by itself is, as far as I see, dead as a doornail. As for what might happen near the Schwarzschild radius, there is I think a twist on the Langlands S-duality. We have in physics the basic observables length [L], time [T], and mass or momentum [1/L]. Time and length are related to each other by the speed of light c. The intertwiner between momentum and length is the Planck constant ħ. However, we have a curious intertwining between mass and length, which is the Schwarzschild radius r = 2GM/c^2. By way of contrast with the Planck constant, which gives a reciprocal relationship between length and momentum (or certainly between the uncertainty spreads of the two), here we have a direct relationship.
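    To spell the contrast out in standard notation (nothing here beyond the usual textbook relations):

```latex
% Reciprocal intertwiner: hbar relates length and momentum inversely,
% e.g. through the uncertainty relation or the Compton wavelength.
\[
\Delta x\,\Delta p \gtrsim \hbar,
\qquad
\lambda_C = \frac{\hbar}{mc} \propto \frac{1}{m}.
\]
% Direct intertwiner: the Schwarzschild radius relates mass and length linearly.
\[
r_s = \frac{2GM}{c^{2}} \propto M.
\]
% The two length scales meet, up to factors of order one, at the Planck mass
% m_P = sqrt(hbar*c/G).
```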

    The context whereby complexity enters the world is, I think, due to the existence of quantum hair and its connection to the open entanglement topology of states. The connection between the structure of quantum mechanics and general relativity is through the abelian translation symmetries of the Heisenberg group and the BMS symmetry. This connects with the above linear or direct connection between momentum and position.

    There are a lot of unknowns here. We will have to see how things develop in the future. We may all be surprised by how our understanding evolves.

    Cheers LC

    The holographic principle: you can find a solution for it in my book "THE FRACTAL RAINBOW".

    According to Juan Maldacena (see his Scientific American article, January 2006):

    "HOLOGRAM theory states that a quantum theory of gravity within a space-time anti-De Sitter is equivalent to a theory of ordinary particles at the border."

    "Unfortunately not yet known any theory of boundary that results in an interior theory that includes just the four forces we observe in our universe [...] Since our universe has not a defined boundary (such as having a space of anti-De Sitter and as precise holographic theory), we are not sure how a holographic theory for Our Universe would be defined due that there is no appropriate place to put the hologram."

    One option could be to propose that the boundary of Our Universe for the HOLOGRAM theory is not situated at the largest scales (the Cosmic Horizon) but at the smallest scales (the Planck Horizon), where we could also have a 2D spatial boundary.

    This 2D "virtual" surface at Planck scale could be the boundary to be considered for the HOLOGRAM theory: the Planck Horizon (Boundary).

      On a 2-d boundary you would have a simple conformal field theory of the form originally proposed by Zamolodchikov. One can have higher-dimensional CFTs corresponding to SO(8), which has a triality condition in E8. E8, or E8×E8 ~ SO(32), is a supergravity candidate. The AdS/CFT correspondence is one aspect of a more general system of entanglement symmetries on horizons and boundaries.

      Cheers LC

      Dear Lawrence B. Crowell,

      My essay takes a completely different point of view from yours, and I am still studying to understand it. I gave you a 10 because it seems to attack things from the fundamental standpoint of processing information. I took the point of view of organisms that process information. I define life basically as an ecosystem or an entire biosphere (I don't state that in the paper; it's something from the discussions I've been having with people), which is basically like a chemical clock. As such, life began as a chemical-clock reaction that spread like wildfire in the primitive ocean. As it varied due to the different conditions it met in different niches, it evolved in complexity, yielding life as we know it, based on the cell.

      But at all scales life strives to mimic the entirety of the ecosystem, given the need to transport energy all the time. An organism, or a chemical cycle within a cell, always needs to be placed within larger and larger scales of ecosystems. So you have multicellular life and colonies as expressions of this expansion in gathering resources. The culmination of this is the use of mathematics in modern human life to organize societies, though this is a reflection of the primitive instance of chemical clocks, which work by using an inequality as a threshold in order to function as a clock. Note that even the topological shapes of organisms are organized by inequalities given by thresholds of substances.

      This is my essay:

      http://fqxi.org/community/forum/topic/2846

        Daniel,

        Thanks for the positive assessment of my essay. I do propose that the existence of complex adaptive systems is due to fundamental structure, which lies at the level of quantum gravity.

        You might want to pursue this idea of life being large-scale, or even planet-wide, early on. There are ideas about how the earliest biology, or precursor of biology, was an open system of replicating molecules. It may have been RNA-protein complexes that developed within this gemisch. The RNA was stabilized in this form, and from this the ribosomes, which are strange proteins with RNA within them, developed.

        I will take a look at your paper as soon as I can. Unfortunately I have been rather ill for the last couple of weeks, so I am moving in low gear right now.

        Cheers LC

        Lawrence,

        What I propose is closer to the idea of Chemoton Theory, but I want to go to an even more basic level: life starts merely as a chemical pacemaker that spreads. As for quantum gravity, I have an approach (I am still studying the underlying subjects) that yields everything from simple gravitational relations. It involves elliptic surfaces and control (catastrophe theory). If you wish to know more about it, send me an email. (I also need some guidance on what I should pursue mathematically.)

        Quite a lot of concepts in this one, advanced and technical. The basic premise seems quite similar to the "freebit" concept by Scott Aaronson.

        I think I saw a typo (SPT/STP), and also the abbreviation UP was not explained (I guess "Unitary Principle" from the context).

          Dear Lawrence --

          I was surprised to see your invocation of both holographic bounds, and an endorsement of hypercomputation. If there's an infinite amount of computation happening along a worldline that is nevertheless contained within a finite volume at infinity, I feel like I should have violated some kind of entropy bound.

          One way to see it is that, if hypercomputation is possible, I can compute Chaitin's Omega, the infinite binary expansion of which has an entropy rate of one bit per digit. It's completely unpredictable from the point of view of a "finite" agent, and so the standard Gibbs/Jaynesian arguments that link entropy to epistemic states of belief go right through.
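          For reference, the definition I have in mind is the standard one, for a prefix-free universal machine U:

```latex
% Chaitin's halting probability:
\[
\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}.
\]
% Its binary expansion is algorithmically random: the first n bits cannot be
% compressed below roughly n bits, which is the "one bit per digit" entropy rate.
```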

          (To my mind, the logical difficulties of hypercomputation actually argue in favor of holographic bounds, but that's just me.)

          Yours,

          Simon


            Dear Simon,

            You are right, that is why the hypercomputations are truncated. Black holes are quantum mechanical, and the finite DOF in the quantum hair on the stretched horizon prevents complete hypercomputation. As for Chaitin's Ω number, these truncated systems can't compute it, but they might be able to "guess" it or throw the dice in a way that is loaded in their favor.

            The classic situation with hypercomputations is the Zeno switch that opens and closes every 1/2^n intervals of a second as n → ∞. The energy involved in flipping the switch diverges, and the whole system becomes (in principle) a black hole before the outcome can be read. Hypercomputations are then in a sense a sort of idealization, but a pernicious one because of Löb's theorem. However, in a subtle sense it is possible to "go beyond Turing" a little bit, to make more reasonable guesses, make choices, etc., instead of being completely blind.
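            A back-of-the-envelope version of that divergence, using only a rough time-energy estimate for the cost of each flip (an order-of-magnitude argument, not a careful calculation):

```latex
% All infinitely many flips fit into a finite time:
\[
\sum_{n=1}^{\infty} \frac{1}{2^{n}}\ \text{s} \;=\; 1\ \text{s}.
\]
% But charging each flip an energy of order hbar over its duration gives a
% geometrically growing cost, so the total energy (and hence the gravitational
% radius of the device) diverges before the outcome can be read out:
\[
E_n \;\gtrsim\; \frac{\hbar}{\Delta t_n} \;=\; \hbar\, 2^{n}\ \text{s}^{-1},
\qquad
\sum_{n=1}^{\infty} E_n \;\rightarrow\; \infty.
\]
```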

            Cheers LC

            The UP is the unitary principle, and it should read symmetry-protected topological (SPT) state. I guess I am not familiar with the freebit, but Aaronson talks about the "ghost in the quantum Turing machine" as free will. That is in a way what this is.

            LC

            Thank you Lawrence. While some colleagues of mine are thinking a great deal about thermodynamics of computation, I hadn't seen them jump into these computability questions.

            I'm curious about the ways in which approximations can go wrong when it comes to uncomputable numbers. For example, I might be able to guess, but I won't be able to put any kind of probability bounds on it. Are there any non-trivial things that you can get from an approximation of an uncomputable number? My guess is that you have a story about how this happens in your truncated hypercomputation example.

            This paper is really more a way of presenting some initial work on the duality between the equivalence principle and unitarity of QM. The thought about weaving hypercomputations into this was a way of making it address the essay prompt. I have studied hypercomputation some and wrote a paper some years ago that made some reference to it.

            The truncated hypercomputation with black holes and quantum gravity in the UV should have a duality with physics in the IR, at the energy scales of our ordinary world. There is this EU initiative on bio-computing, Bio4comp, which some think might lead to better computation of a range of algorithms, or to new or different algorithms.

            Cheers LC

            Dear Lawrence,

            I think that most cell reactions can be characterized as chemical clocks. As long as there is homeostasis, that is, control parameters, there will be a clock of some kind, even if it is not regular in time; like a thermostat. Note that this is not the same as a random reaction, since in that case the chemical reaction will simply follow the 2nd law of thermodynamics and dissipate energy, whereas a chemical clock is a physical analogue of an engine. On the other hand, a clock is more akin to a Carnot cycle.
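            As a minimal sketch of what I mean by a chemical clock, here is the standard Brusselator toy model integrated with a simple Euler step (the parameters, the integrator, and the threshold "tick" detector are placeholder choices of mine, not the reaction network from my essay). With b > 1 + a^2 the concentrations oscillate indefinitely, so the system "ticks" instead of relaxing to equilibrium:

```python
# Brusselator "chemical clock": dx/dt = a - (b+1)x + x^2 y, dy/dt = bx - x^2 y.
# For b > 1 + a^2 the fixed point is unstable and a limit cycle (sustained
# oscillation) appears; a random, clock-free reaction would just relax.
def brusselator_ticks(a=1.0, b=3.0, x=1.0, y=1.0, dt=1e-3, t_max=50.0):
    t = 0.0
    ticks = []                      # times when x crosses a threshold upward
    prev_x = x
    while t < t_max:
        dx = a - (b + 1.0) * x + x * x * y
        dy = b * x - x * x * y
        x, y = x + dt * dx, y + dt * dy
        if prev_x < 2.0 <= x:       # crude "tick" detector
            ticks.append(round(t, 2))
        prev_x = x
        t += dt
    return ticks

print("clock ticks at t =", brusselator_ticks())   # roughly evenly spaced
```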

            The idea of a chemical clock came to me when I was considering the case of pH regulation in a cell, i.e. ion transport. This is the most basic active function of a cell membrane, in my view, other than the most trivial function of all, which is not to let the contents of the cell spread into the environment. The ion transport is like a little machine that is always pumping ions in order to keep the pH around a certain level.

            The idea is to consider the most primitive example of life, a kind of self-controlling cycle. Some things I had to exclude in order to get to the most basal level I could think of. I can get rid of a membrane if the elements needed for it are abundantly available in the primitive ocean.

            I can get rid of reproduction if there is no defined requirement for perfect conservation of information. The sequence of stages required for the working of a clock is itself information. I don't know what the original sequence was, but I tried to propose one that would work as such in the primitive ocean. Also, as I posted in the additional information (the BZ reaction can be thought of as composed of sub-reactions), and as I gave a certain mathematical treatment of in section 2, different cycles could superimpose into a different one. Also, due to differences in composition with the depth of the water and the environment, there should be some sort of competition over which reaction could thrive.

            In the large ocean, cycle waves would compete, like these ones in a petri dish. Just imagine that the ocean is a thin layer over a huge surface.

            https://www.youtube.com/watch?v=zDgx6n6aExE

            I also proposed that, after a long time, these reactions would become strong enough for, or adapt to, the kind of environment rich in organic material, like that in alkaline vents. The presence of stuff like lipids and RNA would select for much more complex reactions in the long term, like those which happen in cells.


            Dear Lawrence,

            I read your essay with great interest; it is of course worthy of a high rating.

            I'm glad that you have your own position:

            «The subjective existence of our consciousness and our perceived ability to act freely is in contrast to the causality paradigm of physics that processes occur by strict conservation principles and determinism. Quantum mechanics is often cited as nondeterministic, however wave function evolution is determined; measurements appears stochastic. Yet a meat puppet guided by stochastic outcomes is no more free than one governed by strict determinism.»

            «What these theorems tell us is that if there is a classical underpinning to quantum mechanics they must be nonlocal and have no observable consequence».

            You might also like reading my essay, where it is claimed that quantum phenomena also occur in the macro world due to the dynamic nature of the elements of the medium, in the form of interacting non-local de Broglie waves of electrons, in which parametric resonance occurs and solitons are formed whose operating mechanism is analogous to the principle of the heat pump.

            It

            «sets up the network system for continued emergence at lower energy, down to the level of chemistry. Emergent complex structures involving a large number of particles, a large N limit, manifest themselves from stars and a wide range of different planets to the emergence of life.»

            I wish you success in the contest.

            Kind regards,

            Vladimir