• [deleted]

Hi Ken,

I am glad to see that you also have a contribution here.

Have you seen my essay, which is also about the block universe/time? Now there are two of us.

What could be even more interesting to you is my latest paper

http://xxx.lanl.gov/abs/0811.1905

which also discusses the probabilistic interpretation of the Klein-Gordon equation in a block-universe spirit.

  • [deleted]

Ken,

You write "Thanks for your thoughts on the matter... I'll need to think about how a "sorting algorithm" might do the job. The problem with continuous, classical fields vs. classical particle paths is that the number of options to sort would seem to be much larger in the case of fields. (And don't forget, the field has to consistently solve a set of differential equations throughout the 4-volume. And those are Euler-Lagrange equations that already minimize the action, so that sort of minimization principle doesn't get you anything extra in this case.)"

Right. That's why finitely probable particle paths in a nonlocal quantum mechanical system are not the same as infinitely many possible paths in a local classical system. Suppose (and I do) that differential equations are not the only or even the best mathematics to model continuous functions, under an assumption that time is n-dimensional continuous. Then, insofar as gravity is time dependent, dissipative energy over hyperspatial manifolds in an infinitely self-similar system restricts particle paths locally to a set confined to the 4D manifold defined by your two boundary points t and t'. A sorting algorithm (which implies strongly polynomial time solutions) applies to problems in such self-assembled phenomena as protein folding, where the final configuration is known but the path of the process through space is not. Sorting the paths by energy differential between the two boundary points to the stable state is a least-energy solution (thus my suggestion of an n-dimensional, 2-point boundary value model).
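For what it's worth, the energy-sorting idea can be caricatured in a few lines of Python. Everything here is a hypothetical toy, not a claim about the physics: candidate paths between two fixed boundary points are discrete sequences, each gets a stand-in "energy," and the least-energy one is selected.

```python
def path_energy(path):
    """Sum of squared step sizes -- a stand-in 'energy' for a discrete path.
    (Purely illustrative; not a physical action.)"""
    return sum((b - a) ** 2 for a, b in zip(path, path[1:]))

def least_energy_path(candidates):
    """Sort candidate paths by energy and return the minimum-energy one."""
    return min(candidates, key=path_energy)

# Three toy paths sharing the boundary points 0.0 and 1.0
candidates = [
    [0.0, 0.5, 1.0],                 # two large steps
    [0.0, 0.9, 0.1, 1.0],            # wandering path
    [0.0, 0.25, 0.5, 0.75, 1.0],     # many small steps
]
best = least_energy_path(candidates)
```

With this particular cost, the many-small-steps path wins; a different stand-in energy would of course pick differently.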

The key concepts here, besides the assumption of an n-dimensional continuous time metric (n > 4), are 1) infinite self-similarity, which obviates a boundary between classical and quantum domains; and 2) removing the problem from the spherical volume to the flat hypersurface, where maps are mathematically simpler and better behaved.

Do hope we can continue a dialogue. Thanks.

Tom

  • [deleted]

Hi Ken. I think we use mathematical models that have effective ways to treat asymmetry, rather than insisting that there is no asymmetry.

For me, there are three layers of modeling: (1) lists of finite data of finite accuracy from experience (raw data, such as the roughly 1.5GB that Gregor Weihs can still send you from his Bell-EPR-violating experiments, which ran up to 1998); (2) statistics of that data; and (3) idealized models, which in one way or another generate probabilities, expected values, correlations, etc.

For the Weihs raw data, there is no doubt that it is presented as points in space-time. The time is listed for every data point, and the place is implied by which list a given time is found in. There is a detailed discussion of how to compare the times that are assigned at the two places. We know the experiment was done in Innsbruck, running up to 1998. We have no such raw data for experiments done in 2037.

To construct statistics, we have to identify an ensemble of data points, which requires that we decide on a way to identify regions of space-time as similar enough (but not identical, otherwise the statistics are trivial). There may be multiple ways to do this that are interesting, not all based only on identification of space-time regions.
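As a toy illustration of that ensemble-building step (the data format here is hypothetical, only loosely modeled on the timestamped two-station records described above), one can pair the stations' outcomes within a coincidence window and compute a correlation over the resulting ensemble:

```python
def correlate(list_a, list_b, window=1e-6):
    """Pair events from two stations whose timestamps fall within `window`
    of each other, then return the average product of paired outcomes.
    Outcomes are assumed to be in {-1, +1}. (Sketch only: each a-event is
    matched against at most one b-event, scanned in time order.)"""
    pairs = []
    j = 0
    for t_a, o_a in list_a:
        # advance past b-events that are too early to ever match
        while j < len(list_b) and list_b[j][0] < t_a - window:
            j += 1
        if j < len(list_b) and abs(list_b[j][0] - t_a) <= window:
            pairs.append((o_a, list_b[j][1]))
    if not pairs:
        return 0.0
    return sum(a * b for a, b in pairs) / len(pairs)

# Two toy (timestamp, outcome) records; the 4.5 s event finds no partner
station_a = [(1.0, +1), (2.0, -1), (3.0, +1)]
station_b = [(1.0000003, +1), (2.0000002, +1), (4.5, -1)]
corr = correlate(station_a, station_b)
```

The interesting freedom is exactly the one noted above: the choice of window and of which regions count as "similar enough" defines the ensemble, and different choices give different statistics from the same raw lists.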

Note that I'm not advocating a frequency interpretation of probability, only that we expect to be able to construct statistics that usefully correspond to expected values that are generated by a measure-theoretic mathematics, and that ultimately we **must** be able to construct an engineering method (which rules out ad-hoc models). There is of course a significant industry of estimation of parameters in probability models from statistics, significance testing, etc.

If I were Max Tegmark, I might claim unabashedly that there is a mathematical model that is identical to the raw data that I could gather, timelessly, if I were God, Maxwell's Demon, or a Physicist Supernatura. However, for my lifetime I suppose there will only be time-asymmetric lists of past data, while probability models will gaily predict what statistics there will be in the future.

I'm not saying that there cannot be a block world mathematical model of the sort I take you to suggest --- perhaps our world is a classical finite automaton at the Planck scale, or a classical continuous field that has absolutely no fluctuations in the ultraviolet limit, even though the field fluctuates FAPP Lorentz-invariantly above the Planck scale (I note that it is only because of my analysis in "Bell inequalities for random fields" that I can legitimately daydream anything of the sort) --- but our present inability to handle the complexity of the initial conditions of such models means that we have to deal with statistics and probabilities.

I consider that to do Physics is to stay away from the metaphysics of considering ontological structure, that Physics is to leave questions of what lies beyond the immense gaps in our knowledge to Philosophy and Religion. Of course on Philosophy days and on Sundays it's only proper for us to go there.

To do Physics should also be to show some humility about our models, to accept that future Physicists are likely to think of better ways to organize our experience, even if they aren't as smart as Physicists today, if they have enough time.

PS. Probability as substance doesn't fit with my epistemological stance.

PPS. So far, I can't see enough usefulness from using initial and final conditions to go there. I think the constraints of raw data on our models come from anywhere/when we happen to collect it. We certainly can't collect all the raw data on a space-like hyperplane. Sometimes we engineer the raw data (actually, we engineer the ensembles, we can't make events happen precisely when we like), sometimes it just happens to us.

I hope this is interesting enough to read (all of it). I walk in a fine tradition in these comment streams of writing too much already.

  • [deleted]

Hi Ken,

"I found the Feynman reference -- it was a very similar example addressed in Wheeler/Feynman's 1949 paper (not the 1945 one). Check it out -- they conclude that all these paradoxes rely on an "all-or-nothing" sort of interaction, but once you allow continuous interactions (say, a glancing blow due to a slightly-misaligned trajectory through a CTC) there's always a resolution."

The Novikov Conjecture would not be necessary if Polchinski's paradox were not a reality, i.e., if there were no self-inconsistent CTCs in GR solns. But, there are such CTCs in GR and the only resolution of cases like Polchinski's paradox, besides pointing to their highly improbable nature, is that *something* will happen to prevent violations of the principle of non-contradiction. The Novikov Conjecture is an addendum to GR. Essentially, I'm trying to find out if your approach suggests an underlying physical mechanism that would prove Novikov's conjecture.

"Now this question I'm surprised to hear coming from a "Block World Kindred Spirit"... To me, one of the biggest advantages of the block universe framework is that it *is* a consistency principle, in and of itself! Paradoxes can't happen in a block universe, by definition."

That reality is a BW doesn't rule out the existence of self-inconsistencies; it would just be a BW with self-inconsistencies. How would we describe such a reality? We don't know, because our brains are wired in accord with the principle of non-contradiction. As physicists we tacitly assume that reality doesn't contain such violations, otherwise physics fails. So, I'm with you in that belief and I'm trying to find out how GR needs to be modified.

"So GR doesn't need any new consistency principle as long as you don't impose so many boundary conditions that there's no solution, and QM wouldn't need one either if we re-build it along the lines I suggest in my essay. I would hope that adding a generic constraint on the allowed boundary conditions would naturally prevent overconstrained problems in classical GR (such as this one)."

I don't understand how self-inconsistent CTCs result from "overconstrained problems." A CTC is simply a curve on the spacetime manifold whose metric is given by some soln to Einstein's equations. If there were a problem with EEs being overly constrained in these cases, why is only one curve on the spacetime manifold affected?

"Well, everything's mysterious until you understand it... :-) They've done interference experiments with buckyballs, which are pretty darn big. Is the "force" that keeps those C60 molecules from hitting the dark fringes "mysterious" or not?"

Are you claiming that the "Bohm force," responsible for interference in the twin-slit experiment, provides the physical mechanism underwriting Novikov's Conjecture? In that case you've blurred an important distinction between screened-off particles and non-screened-off particles, e.g., macroscopic objects.

"Regardless, you wouldn't ever *feel* such a quantum effect; you would just see the end result. That's because if you are measuring one set of parameters (like a force), then Heisenberg says you're losing information about some other parameters, and in my approach it's always in the unknown parameters where the "mysterious" quantum effects would come into play. (After all, the parameters that you measure are imposed as a boundary condition on the system.)"

I still don't see how the uncertainty associated with boundary conditions for any particular trajectory would serve to *absolutely prevent* the instantiation of that trajectory. Again, that argument could be applied to ANY trajectory, so why does ANYTHING have a trajectory?

According to statistical mechanics, there is an exceedingly small probability that all the air molecules in my office might suddenly move to the other side of the room. I'm not worried about this occurrence because the probability is low, but that's not why there is no "problem." There is no "problem" because I can compute the consequences to me, the room and its other contents should that happen. The problem with self-inconsistent CTCs is that we can't compute their consequences. So, either the universe contains phenomena we can't analyze or it doesn't, in which case GR is to be corrected/amended. I'm just trying to figure out how you, via your approach, propose to amend it.

"For that last part of your question, it sounds like you're trying to back me into a corner where I have to choose between free will and the block universe. I doubt that such a corner exists, but if it did, I'd come down on the side of the block universe every time."

No, I'm not addressing the issue of free will. Again, I'm trying to find out if your approach suggests an underlying physical mechanism that would ultimately prove Novikov's conjecture.

Thanks very much for your response, Ken. I think we're almost done!

Mark

  • [deleted]

Dear Ken,

Congratulations on your nicely written tutorial on the conceptualization of time as a 4-D block! Your exposition points out the most common pitfalls in representing/understanding frozen time.

1. "Beyond Copenhagen, there are several other established interpretations, some of which are explicitly inconsistent with the block universe (one of them postulates many universes)."

I developed a "world theory" which easily provides a block view for the MWI and standard QM (except that we don't necessarily have Lorentz invariance). On the other hand, Penrose presents a general relativistic spacetime able to split into many worlds (he splits them along lightlike 3d-surfaces; if you are interested, I will look for the article).

2. Your discussion of the wavefunction's discontinuous collapse can be related to my solution, in which I replace this discontinuity with "delayed initial conditions", sending Quantum Mechanics back into the block time view.

My Smooth QM is deterministic (but compatible with free will), and, contrary to Bohm's theory, it relies only on the evolution equation (Schrödinger for purified states, Liouville-von Neumann for mixtures), and does not require hidden variables other than the initial conditions of the evolution PDE. These I called "delayed initial conditions". I find some similarity with your solution, in that both of them look like retro-causation. I understand that you solved the incompatibility between initial/final conditions by using the Klein-Gordon equation; I solve it by moving the discussion to the entanglement between the observed state and the initial measurement device. Another important difference: my delayed initial conditions are partial, and spread in spacetime, not just at the beginning and end; they are "caused" by the measurements. I do not want to detail my theories further on your discussion thread, since it is appropriate to talk about your work here. I just pointed out some connections.

Best wishes,

Cristi Stoica

"Flowing with a Frozen River",

http://fqxi.org/community/forum/topic/322

  • [deleted]

Professor Wharton,

"After all, *eventually* the future will be past, and we shouldn't have to treat those events differently in our equations once that happens. (Granted, learning about uncertain values makes them more certain, but that sort of thing equally applies to uncertain values both in the past and the future.)"

From a layman's perspective, this is the problem I see with "block time." Yes, if time is a fundamental dimension proceeding from the past into the future, block time does make sense, but the reality is the present, with time flowing by it from future potential to past circumstance. Which is more fundamental, the earth rotating, or the linear progression of days? I would argue time is a consequence of motion, rather than the basis for it.

  • [deleted]

Hrvoje, Tom, and Peter: Since your recent posts are getting a bit off-topic, let's move these discussions to email for now. (Hrvoje and Peter -- I already owe you responses to your latest emails, but probably won't get to them until next week. Tom, feel free to contact me at wharton(.at.)science.sjsu.edu.)

  • [deleted]

Mark: I don't see Novikov's self-consistency principle as being an "addition" to GR; it's just a tautology: if you apply so many constraints to a system of physical equations such that there is no solution, then there's no solution. And if the solutions correspond to "reality", then it must be impossible to impose those inconsistent constraints in the first place. This must be true for *any* physical theory; not just GR. If you ask "What keeps me from imposing that many constraints?", the answer is simply that those constraints are self-inconsistent. You might as well ask why I can't both impose a net force on an object and also impose that its velocity remains constant.

Now, if your question boils down to which *sorts* of constraints one is allowed to impose on physical equations, without overconstraining the system... you're getting into exactly the questions that I am considering. You should also read Steve Weinstein's essay in this contest (and related recent arXiv post) for some very interesting insights. (Also, thanks for your detailed response to my questions on your own essay thread; I'll get to that as soon as I can, probably next week.)

Best,

Ken

  • [deleted]

Cristi,

Thanks for your kind comments. I think you might find a lot of connections between your research and the approach of Larry Schulman (I cited his book as a reference in my essay). He also is trying to "nudge" the wavefunction into a system that can match two-time boundary conditions.

I like certain aspects of your essay very much -- particularly getting away from this "instantaneous" aspect of measurement that many people seem over-reliant on -- but I just wanted to comment that I think it's important to treat measurement and preparation on the same footing. After all, any preparation process could also serve as a non-destructive measurement of a yet-earlier preparation. So if you're going to use diagrams that make a clear distinction between the preparation and the measurement, I'd suggest showing that the final measurement might *also* have a time-duration to it, and draw the figure in a way that shows the process you envision might repeat itself over and over.

Of course, I would also urge you to consider that there might be other, hidden variables that get changed over the duration of the measurement, while the aspects that are actually measured stay constant throughout the measurement process. And going to a relativistic picture naturally gives you exactly the right number of extra parameters to make this work. See my arXiv paper (0706.4075) if you're interested in how this might work...

Cheers!

Ken

  • [deleted]

John,

You write: "...reality is the present, with time flowing by it..."

Yes, I realize that's how most people see matters. And that's exactly why I made such a concerted attempt in this essay to try to convince the reader that such a picture just doesn't make sense when it comes to physics.

But here's another point I didn't go into in much detail. The analogy of time "flowing" is dangerous because the very word "flowing" (and the general concept of motion) is meaningless without a prior concept of time. Given our primitive concepts of space and time, flow and motion both make sense. The danger comes in when one tries to make an *analogy* between, say, the flow of water and the "flow of time". I tried to make the point in the essay that it's a terrible analogy, because now instead of flow velocity = distance/time, one ends up with the nonsensical notion of "time velocity" = time/time, which isn't anything meaningful at all.

We do have primitive notions of space and time (for more on this, see "The Stuff of Thought" by Steven Pinker) -- the question is how to overcome these intuitive concepts so that we can think about physics objectively. And the best way to do this is to move to a static block universe. If my essay didn't convince you, at least give Huw Price's book a try (Time's Arrow and Archimedes' Point). It's by far the best generally-accessible, non-trivial book on time that you'll find.

Best,

Ken

  • [deleted]

Dear Ken,

"I'd suggest showing that the final measurement might *also* have a time-duration to it, and draw the figure in a way that shows the process you envision might repeat itself over and over."

Thank you for the suggestions. I totally agree with you. In fact, in a more detailed description, in the original article "Smooth Quantum Mechanics" (http://philsci-archive.pitt.edu/archive/00004199/), I specified it:

"After each observation, the quantum system gets entangled with the measurement device. Thus, even if the system is found in a precise state by the measurement, the entanglement with the measurement device makes its state to be again undetermined. The next measurement selects again an initial condition, to specify the state of the observed system. But now the system gets entangled with the measurement apparatus used for the last observation, and the cycle continues."

Unfortunately, because of the length limitation, I omitted describing this cycle in the essay, as well as in the version of my SQM paper that I cite in the essay (the second one, http://philsci-archive.pitt.edu/archive/00004344/).

And you are right, a picture with this cycle would help a lot.

Thank you,

Cristi Stoica

"Flowing with a Frozen River",

http://fqxi.org/community/forum/topic/322

  • [deleted]

Professor Wharton,

Maybe I shouldn't have used the word "flow," especially since your view of time is that it exists as a static higher dimension. So let me put this another way: does the rotation of the earth turn tomorrow into yesterday?

It's not that I view the "present" as a "point" in time, since I view time itself as an abstraction, similar to temperature. My argument is that there is simply what might best be described as "energy" and as the arrangements of this energy change, each arrangement is replaced by the next, so this progression of events goes from future potential to past circumstance. So the only "flow" is an attribute of the energy.

  • [deleted]

Dear Ken,

Thanks for your reply. I want to press one point b/c I don't know that you appreciate the "problem" I'm trying to convey.

"Mark: I don't see Novikov's self-consistency principle as being an "addition" to GR; it's just a tautology: if you apply so many constraints to a system of physical equations such that there is no solution, then there's no solution. And if the solutions correspond to "reality", then it must be impossible to impose those inconsistent constraints in the first place. This must be true for *any* physical theory; not just GR."

I just gave a lecture to our engineering students on the different rates at which Earth-based clocks and orbiting clocks run according to the Schwarzschild soln (the GR corrections are now relevant to GPS satellite technology). We can use the vacuum solutions of GR to find the trajectories of small objects where "small" means their stress-energy tns (ST tns) doesn't appreciably affect the vacuum soln, of course. When we put an object into motion along one of these vacuum trajectories, the object follows the trajectory as one would expect since the object doesn't change the spacetime curvature along the trajectory.
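As a back-of-envelope check on those lecture numbers, the two competing clock-rate corrections for a GPS satellite can be sketched in a few lines (standard constants; geoid and Earth-rotation refinements are ignored here, so the result is approximate):

```python
# Fractional clock-rate differences, satellite relative to ground
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c  = 2.99792458e8     # speed of light, m/s
R_earth = 6.371e6     # mean Earth radius, m
r_sat   = 2.6571e7    # approximate GPS orbit radius, m

# Gravitational (Schwarzschild) term: the higher clock runs fast
grav = GM / c**2 * (1.0 / R_earth - 1.0 / r_sat)

# Special-relativistic term from circular orbital speed: runs slow
v_sat = (GM / r_sat) ** 0.5
vel = v_sat**2 / (2 * c**2)

seconds_per_day = 86400
net_us_per_day = (grav - vel) * seconds_per_day * 1e6
```

The net comes out near +38 microseconds per day (roughly +46 gravitational, -7 velocity), which is the order of the correction built into the satellite clocks.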

Now, suppose I solve Einstein's eqns and find a self-inconsistent CTC in some vacuum soln (these solns exist). Do I have a soln of GR that REALLY possesses a self-inconsistent trajectory? Yes. Do I have a soln of GR that possesses SELF-INCONSISTENCY? No, because while it's true that a small object won't change the CTC in question, technically speaking, the RHS of Einstein's eqns (ST tns) must be divergence-free b/c the LHS is divergence-free (Einstein tns). So, technically speaking, no matter how small the object's ST tns, it MUST be divergence-free in order to claim that you have a soln of GR. But, of course, I can't construct the ST tns for a self-inconsistent situation (duh). So, GR does NOT possess self-inconsistency, even though it DOES possess self-inconsistent CTCs.

Hopefully you've jumped ahead of me and anticipate the question, "What if I have a soln with self-inconsistent CTCs, instantiate it physically and then place a small object on the self-inconsistent CTC?" This may or may not be a soln of GR. [Keep in mind that the object is small so it does not affect the existence or structure of the trajectory proper.] If the object is dry ice or something that evaporates before getting to the self-inconsistent region, we have a GR soln (like putting GPS satellites on free fall geodesics about Earth) and we know what will happen. If the object is a bowling ball, we don't have a GR soln and we don't know what will happen. So, what happens in that case? Do you see that the answer to that question lies outside of GR, even though GR clearly prompted it by saying this trajectory exists and gives me no reason why I can't at least START an object thereupon? [This nicely captures the clash of relativity's BW with dynamical experience.]

I'm just wondering if your approach suggests how GR might be corrected/augmented to answer this question. In the quote above I infer that you believe, as do I, that it must be impossible to realize this situation so SOMETHING must preclude it, i.e., make it impossible, not merely improbable. Per our discrete approach there is no empty spacetime, so the problematic trajectory really DOES NOT EXIST - the GR approximation is not accurate in saying that empty trajectories "exist" (although, trajectories in vacuum solns can be realized (made real, made to exist) via a small, divergence-free ST tns - but, no ST tns means no trajectory). What do you say?

Mark

  • [deleted]

Dear Ken,

Sorry to double team you here, but below is a passage from Halpern's essay which supports Mark's claim about Novikov's self-consistency principle being an add-on to GR.

"To combat such conundrums several proposals were suggested. Hawking formulated the "Chronology Protection Conjecture" as an attempt to forbid backward time travel based on the laws of physics [11]. Igor Novikov took a different tact and proposed a self-consistency principle that permitted past-directed travel as long as it was fully consistent with what already had transpired [12]."

Cheers,

Michael

7 days later
  • [deleted]

Cristi: I guess we're in agreement on most of my earlier points. I'm trying to incorporate a finite-duration measurement myself (at least for non-destructive measurements), but so far the closest I've come is to allow the exact interaction time to be part of the overall solution space, with its own probability distribution. For more details you'll have to wade through arXiv:0706.4075.

John: I'm afraid I don't understand your question. I will say that retreating from "flow" and "motion" to a more general "change" can't explain anything fundamental about time, because the very concept of "change" *relies* on our primitive notion of time to even make sense. (What could change mean without time?) I'll continue to argue that the best way to get rid of these primitive temporal notions and focus on the physics is to use a block universe framework.

Mark and Michael: Your points are well taken, and serve to remind me that I've been mentally inhabiting a block universe for so long that I forget most people don't think that way. From the traditional "time-evolve the initial boundary conditions" perspective, it's absolutely correct that something like this would appear mysterious and in need of an additional postulate to prevent paradoxes.

I guess my revised point is that *any* consistent block-universe perspective (mine or yours) can deal with this problem almost trivially, without additional postulates. To recap how my particular type of model would solve this problem, one would impose external boundary conditions on both the space-time region in question and on the particle itself, but the precision at which one can impose all of those boundaries is limited by the uncertainty principle. The probability of any given outcome is then directly related to the number of acceptable solutions to the boundary-value problem. If some particular solution (say, the particle going around the loop) isn't self-consistent, then it's not a solution, and the probability of that outcome will be exactly zero. Simple as that.
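A minimal sketch of this counting idea (purely illustrative, with a made-up consistency rule standing in for the boundary-value problem): enumerate candidate histories, keep only the self-consistent ones, and let each outcome's probability be its share of the surviving solutions. A self-inconsistent candidate is simply not a solution, so its probability is exactly zero.

```python
from itertools import product

def is_consistent(history):
    """Toy consistency constraint (hypothetical): a candidate history counts
    as a solution only if its final value matches its initial value -- a
    stand-in for a CTC-style self-consistency condition."""
    return history[0] == history[-1]

# All candidate histories of length 4 over a binary variable
all_histories = list(product((0, 1), repeat=4))
solutions = [h for h in all_histories if is_consistent(h)]

def probability(history):
    """Uniform weight over consistent solutions; non-solutions get 0."""
    return 1 / len(solutions) if history in solutions else 0.0
```

No extra "paradox-prevention" postulate appears anywhere: the inconsistent histories never enter the solution count in the first place.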

We've moved far enough off-topic here we should probably retreat to email if you're not happy with such an answer... and I'll "see" you both soon over at your own essay thread!

Ken

  • [deleted]

Ken,

Humor me for a moment and reconsider a reality in which change and motion are acceptable. The arrow of time goes from what comes first to what comes second. For the observer, past events precede future ones, so we observe time as going from past to future. On the other hand, these events are first in the future, then in the past, so their arrow goes in the opposite direction. Throughout history, in fact in the very description of the narrative construct we call history, the understanding of time is of the first arrow: that events proceed along this universal path, whether Newton's absolute time or Einstein's relative time, from past to future.

Yet the only reality ever experienced is of the present. So let's examine the consequence of viewing reality as a fixed present consisting of energy in motion, thus causing change, and as each arrangement described by this energy is replaced by the next, these events go from future potential to past circumstance. Therefore past and future do not physically exist, because the energy to manifest all such events is only manifesting one moment at a time.

So rather than a fundamental dimension, time becomes an emergent description and consequence of motion, similar to temperature. Temperature, as a scalar average of motion, doesn't exist if we only consider singular motion, but only emerges when measuring a mass of activity. So time, as a sequencing of units of motion, doesn't effectively exist if we cannot define a progression. It is just quantum fuzziness. The present can't be a dimensionless point either, since it is a description of motion and would only be dimensionless if all motion has stopped, so, like temperature, the measurement becomes fuzzy when examined closely.

Whether time proceeds along some dimension from past to future, or is caused by the progression of events from future to past, might seem a semantic question, yet consider the consequences: if time is that dimension moving toward the future, we need to explain how it deals with potentialities. Either we go with many-worlds, in which all potentials are taken, or block time, where the potentials are illusory and it is fundamentally deterministic. Now if we view it from the other direction, where time is the events moving from future potential to past circumstance, the collapsing wave of probabilities makes sense, since it is only energy in motion and time is simply an emergent description of the process, not some fundamental dimension.

What is primitive is the narrative assumption that time is a linear projection from the past into the future.

20 days later
  • [deleted]

Might it be that instead of there being a retrocausality of wave functions, they configure themselves at one time according to a future time as determined by a block structure?

Lawrence B. Crowell

  • [deleted]

John, Thanks for your comments, but I don't really have much to add to my previous response to you. I just don't see how time can be said to emerge from a picture that starts with a primitive concept of change or motion, because those concepts require an even more primitive concept of time to even make sense. Such an approach is therefore doomed to being a circular argument.

Lawrence, I think I agree with what you're trying to say, but the way that you said it is technically wrong. When you use the phrase "at one time", surely you don't literally mean "at one particular temporal coordinate", because you're talking about a block structure that spans some range of time. Instead, I'm guessing that you're talking about some wavefunction that finds a global solution in a block universe framework, which is exactly what I'm arguing for in this essay. But to say this happens "at one time" or "all at once" is flatly incorrect -- the solution spans many time coordinates, not just one. (The key is to avoid temporal language entirely when thinking in a block universe framework, or else you fall into the trap of imagining some meta-time that is *not* included in the block universe.)

Ken

  • [deleted]

Ken,

You are right that it is primitive, but physics is about understanding the basics. Motion doesn't exist without time, because time is units of motion, just as collective motion doesn't exist without temperature, since temperature is an averaging of motion.

Time as a dimension doesn't accord the physical reality of the present any precedence over the physical non-existence of the past and future. You may not have a problem with that, but I like my understanding of reality to accord with reality. Therefore I view time as the series of events which go from being in the future to being in the past, created and consumed by the process which is the present, not as a non-dynamic dimension along which we exist. Yes, time is relative: if you speed up the motion, time speeds up, just as temperature increases. It is only when you assume some fundamentally static dimension that this seems illogical.
