• [deleted]

Dear George Ellis,

Thank you for urging me to reread Feynman vol. 2. When I last looked into it, several years ago, I was only interested in the question whether it shares the lack of care of almost all other textbooks on electricity, which introduce complex calculus as an Ansatz, as if it were a given fact, instead of distinguishing step by step between reality and the different levels of modeling. While I can confirm that Feynman always used complex calculus correctly, I did not find where he devoted due attention to this issue.

When I read Feynman earlier I skipped his relativity-related material, for two reasons: I did not doubt that Einstein's relativity is correct, and I knew that it is irrelevant to electrical engineering.

Feynman's lectures are distinguished by the author's readiness to frankly reveal the often speculative basis of his reasoning. So far I have found in vol. 2 only successful efforts to incorporate relativity into electromagnetism. Maybe I will find in vol. 1 how Feynman dealt with the foundations of relativity.

Anyway, I appreciate your hint and consider it more valuable than expressions of agreement with my essay.

Thank you,

Eckard

  • [deleted]

Thanks much for the link to your paper. I was just thinking that a really annoying top-down problem is how on earth a proton can have spin 1/2 with all this stuff going on with valence and sea quarks of all kinds, not to mention gluons and photons and weak bosons. It is just amazing that there could be any simple quark model at all. Some bigger symmetry must be herding these cats. And an even weirder symmetry must require that electrons hang around these messy protons - and this is supposed to be 'the simplest atom in the universe'. I bet that a crucial ingredient in hierarchy is being able to 'chunk' 3 quarks into 1 proton, and 4 fermions into 1 atom as a 2-body problem. Parentheses do that in a fairly natural way, at least at this elementary level. Hence the urge to look at octonions. Now I must dig up my copy of Large Scale Structure - it has been quite a while.

    Dear Cristi

    Your presentation is very nicely done. I agree completely with your emphasis on the importance of global conditions, which of course is fully in agreement with my essay. I also agree about quantum theory maybe having influences into the past.

    As to wave function collapse: you state "But we can assume that the interaction with the measurement device (and the environment, as the decoherence program requires) only disturbed the unitary evolution, and the collapse is only apparently discontinuous". This is pretty close to my concept of the apparatus acting down on the particles to cause an effective collapse. "The measurement of O1 in fact refines both the initial conditions of the system \psi, and those of the apparatus \eta". I think I agree: this is a case of what I call adaptive selection (which occurs in state vector preparation).

    Where we disagree is that you want to preserve unitarity: I think it's clear you can't. The top-down action from the apparatus causes non-unitary behavior at the particle level. I think that's clear in the case of state vector preparation.

    Best wishes

    George

    Frank

    "what do you think is the legitimate and true goal of modern physics and modern science?"

    It is to understand the mechanisms whereby physical things work (the natural sciences) and how living beings exist and function (the life sciences), together with understanding the historical process whereby they came into being (the historical sciences).

    Science cannot deal with issues of aesthetics, ethics, or meaning. This is because it deals either with issues that can be tested by replicable experiments that any community of scientists should be able to reproduce, or with observations of things that exist historically, where any scientist can examine these historical remains and test the theories about them that others have proposed. The core of science is testability of proposed theories by observation or experiment.

    Things like television sets are the product of technology, which utilises science to create useful artefacts. They are the outcome of abstract thought, and exist because of top-down action from the human mind to the physical world. No scientific theory can either predict or explain the existence of television sets, because they are not predicted by Maxwell's equations, Newton's laws of motion, or any other set of fundamental equations that describe how physics works.

    George

      Yes interesting.

      To me the very important thing is how, when you chunk things in this way, you change their properties. Thus free neutrons decay with a half-life of about ten minutes, but are stable for billions of years when incorporated in a nucleus. Electrons interact by Thomson scattering when free but not when bound in an atom. A hydrogen atom is no longer a hydrogen atom when combined into water.

      If you believe that identity is described by patterns of behaviour or interaction of an entity, then context changes identity. That's why the billiard-ball model of bottom-up interaction does not work in most cases.

      The Large Scale Structure book won't help much in all this. It's about gravity and does not deal with octonions. Actually rather than octonions I'd go for geometric algebra.

      George

      Hi George,

      You wrote, "If you believe that identity is described by patterns of behaviour or interaction of an entity, then context changes identity."

      Context or multi-scale variety? I think the former assumes independence of identity and context, while the latter assumes continuous functions over multiple scales. In other words, I think the model is not bounded by context, which would imply arbitrarily chosen conditions; rather, it is self-organized, implying self-limitation.

      "Actually rather than octonions I'd go for geometric algebra."

      Not sure what link you chose, because it doesn't work -- maybe Doran and Lasenby?

      Hestenes' spacetime algebra is fully relativistic.

      I for one am convinced that these extensions of Hamilton's seminal result do conclusively restore analysis to a primary role in physical models.

      Tom

      Hi Tom

      "Context or multi-scale variety? I think the former assumes independence of identity and context, while the latter assumes continuous functions over multiple scales. In other words, I think the model is not bounded by context implying arbitrarily chosen conditions; rather, self organized and implying self-limitation."

      Well, it's hard to avoid the usual way of talking, where one says hydrogen is bonded with oxygen to give water, even though they are neither hydrogen nor oxygen once bonded. Your description is more accurate but also more difficult to follow intuitively. One hangs onto the idea that it is the components that make the object exist, even when, as your comments imply, they lose their identity when this occurs. It's really a historical statement, about where the combined entity came from.

      Yes I meant Doran and Lasenby, developing out of Hestenes' work.

      George

      George,

      I'm in complete agreement with your thesis... what's surprising is only that such a common-sense notion needs to be argued -- i.e. that higher-level structure can impose constraints on lower-level behavior.

      The underlying issue seems to be that while the rationale for reducing physics to a simple and compact set of principles is clear to everyone, there's no such understanding of how and why higher-level structures (and hence constraints) should arise. We know a great deal about the structure of atoms and molecules, but it's not clear why the lower-level physics should happen to support such stable complex systems. In our present intellectual environment that kind of question hardly seems a sensible one to ask.

      In biology, on the other hand, it's clear (at least in principle) where higher-level structures and constraints come from -- essentially, from the requirement that biological systems be able to replicate themselves within a given physical environment. This basic functional requisite underlies evolution and the vast range of diverse requirements that come to constrain living systems at many levels.

      In the conclusion of my essay, I suggest that a similar basic functional requisite might be able to account for the diversity of structural levels in physics, namely the requirement that physical information be measurable and communicable between systems. This does not depend on any specific definition of "observer" or "measurement", but on the argument that every way of observing or communicating anything requires a context in which other information is also observable. This suggests that there may be quite complex and very stringent structural constraints on any system of interaction in which information is physically definable and communicable.

      On reading your paper on contextuality in QM ("On the limits of quantum theory") I find your arguments once again eminently sensible, and I have the impression that our approaches are complementary. I've also appreciated your work on the problem of time, over the years, and you may find my essay of interest also from the standpoint of its (unfortunately abbreviated) discussion of Minkowski spacetime.

      In any event, congratulations on another meticulously clear and careful piece of work; it ought to be rated very high.

      Thanks -- Conrad

        Hi Conrad

        You say in your essay "The problem is that measurements are inherently contextual. They can't be analyzed into simple, self-contained elements without losing sight of what makes them work." Absolutely right!

        Just a comment on vision: you say "so it requires a huge amount of complex low-level processing to support the seemingly stable sensible view of the world you see as you glance around." Yes, but it also requires a huge amount of *high level* information to interpret what we see, because the light striking our retina, even though it conveys a vast amount of information, does not convey enough information to unambiguously tell us what is there. Eric Kandel explains this in detail in his book The Age of Insight: he emphasizes that it is only because of top-down processing from the cortex that we are able to form visual images.

        Yes, I agree with you about time and information (you may have seen my new paper on time). And then you say "So it's conceivable that what we're looking at is an evolutionary process operating through a kind of natural selection, analogous to biological evolution. Instead of many organisms replicating themselves, reproducing their species, here we have many local systems contributing to the reproduction of their common environment, as a body of shared, self-defining information." Yes, I agree - you will see in my quantum essay how I also emphasize that this process of adaptive selection works in physics processes such as state vector preparation. It also lies at the heart of the difference between Hamiltonian and Lagrangian dynamics, as I comment elsewhere in this thread. I think it's a key feature of how physics works, as well as of how biology works.

        Congratulations on a great essay.

        George

        Dear Professor Ellis,

        Thank you very much for commenting about my presentation.

        We can disagree about unitarity. At this point I find both the unitary and the discontinuous collapse explanations incomplete. I will take this opportunity to explain why it is not so clear that there is a discontinuous collapse. Consider, for example, the delayed-choice experiment with a Mach-Zehnder interferometer. Our choice concerning the second beam splitter seems to affect what happened to the photon at the first beam splitter. Assuming there is a discontinuous collapse, it should happen somewhere between the first and the second beam splitter. But this means that a photon which was split and travels along both arms of the interferometer suddenly collapses onto one arm. This would be strange, and conservation laws would be violated. Assuming instead that the collapse happened before the photon entered the first beam splitter, why not say as well that it never happened, or that it happened at the Big Bang?
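        For concreteness, here is a minimal numerical sketch of the interferometer algebra in question (my own illustration, not taken from the slides; the symmetric beam-splitter convention is an assumption):

        ```python
        import numpy as np

        # 50/50 beam splitter (symmetric convention): rows = output ports, columns = input ports
        BS = np.array([[1, 1j],
                       [1j, 1]], dtype=complex) / np.sqrt(2)

        photon_in = np.array([1, 0], dtype=complex)  # photon enters one input port
        after_bs1 = BS @ photon_in                   # amplitude on both arms: the 'split' photon

        # The choice is made *after* the photon has passed the first beam splitter:
        with_bs2 = BS @ after_bs1     # keep BS2: the arms recombine and interfere
        without_bs2 = after_bs1       # remove BS2: which-path detection

        print("P(detectors) with BS2:   ", np.abs(with_bs2) ** 2)     # [0. 1.] - always the same port
        print("P(detectors) without BS2:", np.abs(without_bs2) ** 2)  # [0.5 0.5] - either arm
        ```

        The same photon state shows either full interference or 50/50 which-path statistics depending on a choice made downstream of the first beam splitter, which is what makes localising any discontinuous collapse so awkward.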

        The case of preparation-measurement, which is generally considered the irrefutable proof of collapse, was explained in the slides you quote: "The measurement of O1 in fact refines both the initial conditions of the system \psi, and those of the apparatus \eta". Those slides present a possible unitary explanation of the collapse, using the entanglement with the preparation device. It is very similar to the Mach-Zehnder interferometer experiment with delayed choice: what we choose to measure determines the way the system interacted with the preparation device.

        Another argument I find convincing is that of conservation laws. They normally follow from unitary evolution - from commutation with the Hamiltonian. Is there a method to obtain the conservation laws which holds even when there is a discontinuous collapse?
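        For reference, the textbook statement behind this (standard quantum mechanics, not specific to the slides): under unitary evolution generated by H,

        ```latex
        \frac{d}{dt}\,\langle\psi(t)|\,O\,|\psi(t)\rangle
          \;=\; \frac{i}{\hbar}\,\langle\psi(t)|\,[H,\,O]\,|\psi(t)\rangle ,
        \qquad\text{so}\quad [H,O]=0 \;\Rightarrow\; \langle O\rangle \text{ is conserved.}
        ```

        A discontinuous collapse |\psi> -> P|\psi> / ||P|\psi>|| is not generated by H, so this derivation does not apply to it - which is the force of the question above.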

        I admit though that, besides these arguments and others I put in those slides, one can't find irrefutable experimental evidence for or against discontinuous collapse. If there is discontinuous collapse, it will always hide no matter how we rearrange the experiment. If the process takes place unitarily, it locally appears like serendipity, as if a disturbance distributed between the preparation device and the measurement device "accidentally" puts the system in the observed state, so that it doesn't need to collapse. But even if we allow discontinuous collapse, this kind of "accident" happens. This is in fact what makes QM contextual. So, if we have to accept strange contextual nonlocal backward-in-time behavior anyway, then the worst about unitary collapse is already accepted when we accept discontinuous collapse.

        Best wishes,

        Cristi Stoica

        • [deleted]


        Cristi, I'll have to look at this further, but just a brief response:

        "The case of preparation-measurement, which is generally considered the irrefutable proof of collapse, was explained at the slides you quote,.....Those slides present a possible unitary explanation of the collapse, by using the entanglement with the preparation device."

        Well I know many people claim entanglement solves the measurement problem; but there are many more (including me) who disagree. The key issue is that you have to be able to deal with individual measurements, not just ensembles; this is because you don't have an ensemble if you don't have individual events. Diagonalising the density matrix does not reduce all its terms except one to zero, so decoherence does not do the job of showing how individual events happen in the real world.
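        To spell out that last point, here is a toy calculation (my own sketch, not from either of our essays): tracing the apparatus out of an entangled pure state yields a diagonal reduced density matrix, but both outcomes are still present in it.

        ```python
        import numpy as np

        up, down = np.array([1, 0]), np.array([0, 1])

        # Entangled system+apparatus state: (|0>|0> + |1>|1>) / sqrt(2)
        psi = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
        rho = np.outer(psi, psi.conj())      # pure state of the joint system

        # Partial trace over the apparatus -> reduced density matrix of the system
        rho_sys = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
        print(rho_sys.real)                  # diag(0.5, 0.5): diagonal, yet no single outcome selected
        ```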

        There are essentially two ways state vector preparation takes place: (i) dissipative, as in wire polarizers, and (ii) by separation followed by selection, as in Nicol prisms and the Stern-Gerlach experiment. The first is non-unitary all the way; the second is unitary until selection takes place, which is effectively a projection. Neither can be described in a unitary way.
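        A one-line way to see why the selection step cannot be unitary (standard linear algebra, added here for completeness):

        ```latex
        P = P^2 = P^\dagger,\ P \neq I
        \;\Longrightarrow\;
        \| P|\psi\rangle \|^2 = \langle\psi|P|\psi\rangle < \langle\psi|\psi\rangle
        \quad \text{for any } |\psi\rangle \text{ with a component outside } \mathrm{ran}\,P,
        ```

        whereas a unitary map satisfies \|U|\psi\rangle\| = \||\psi\rangle\| for every state.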

        "It is very similar to the Mach-Zehnder interferometer experiment with delayed choice: what we choose to measure determines the way the system interacted with the preparation device. " yes indeed: fully in line with my proposals of the significance of top-down/contextual effects.

        Best wishes

        George

        Dear Professor Ellis,

        "Well I know many people claim entanglement solves the measurement problem; but there are many more (including me) who disagree. The key issue is that you have to be able to deal with individual measurements, not just ensembles; this is because you don't have an ensemble if you don't have individual events. Diagonalising the density matrix does not reduce all its terms except one to zero, so decoherence does not do the job of showing how individual events happen in the real world."

        I fully agree. The decoherence program interprets the same density matrix at will, sometimes as describing a statistical ensemble and sometimes as the partial trace of an entangled state. By doing this, they mask the fact that their interpretation acts back in time. My proposal is not based on this. I just claim that there are unitary solutions which satisfy apparently incompatible measurements taking place at different times, and I give some examples. Given that such unitary solutions exist, no matter how unbelievable they may seem to us, it follows that we can't know for sure whether the collapse was discontinuous or unitary all the time. My point is that a temporal description appears as selecting the initial conditions back in time, but this happens anyway even if we admit discontinuous collapse, so I don't think one should reject only unitary collapse on account of this fault. And I claim that global consistency has priority over initial conditions, and the global space+time perspective makes this more acceptable. About the examples you provide: I think I discussed the Stern-Gerlach experiment and showed how it can happen without projection. And as for dissipation in the polarizer, can one prove it is non-unitary even if we consider the full system?

        Best wishes,

        Cristi Stoica

        • [deleted]

        Correction: "found in vol.1" should read "found in vol. 2".

        An addendum concerning "quantum theory maybe having influences into the past":

        If quantum theory is a model that assumes unitarity, then even I think so. I just see a crucial difference between model and reality, and therefore I maintain the criticism I express in my Fig. 3.

        A profane top-down example: fans of a political system who experienced its breakdown may still consider it the future, and remember how they already experienced that future.

        Eckard

        • [deleted]

        About that 'geometric algebra rather than octonions': one can view complex octonions as geometric. After all, they contain the complex quaternions, which can act as a Lorentz operator, as well as being isomorphic to the Pauli algebra - thus we get to Hestenes.

        One might notice a weird fact which could be relevant to the Arrow of Time. For (-+++) the (-) is the 'volume unit' of the octonion division subalgebra, yet for (+---) the (+) is the 'volume unit' of the split-octonion subalgebra ... which is not a division algebra. One usually considers it a free choice of convention whether to use +--- or -+++ but in this case it has further consequences. One suspects that the Arrow of Time should not be divorced from the Arrow of Matter, with the universe made of Hydrogen rather than anti-Hydrogen, or the Arrow of Weak Isospin, acting only on Left Handed fermions. Perhaps this is relevant to Time Reversal in the micro-world, but not in the macro-world. The Lorentz signature is not just relevant to spacetime, but to anti-matter, sort of like using the Pauli algebra geometrically as well as for Spin and Isospin. Pauli also has a connection to a Mass-Decay matrix, which I saw in Peter Renton's book on electroweak interactions (page 456).

        Anyway, I am a bit surprised that Relativists seem to ignore complex octonions, since they automatically lead to both +--- and -+++. One might even say that Time has something to do with Color Neutrality - that 'volume unit' of the octonion subalgebra being color neutral. That connection you mention about neutrons interestingly involves isospin. One needs a notation that distinguishes the different roles the Pauli subalgebra can play - as it relates to geometry or spin or isospin. Physicists seem to just use the same Pauli matrices, which makes things confusing.
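        (As a quick check of the isomorphism claimed above - my own sketch, not part of the original post - the anti-Hermitian units -i*sigma_k multiply exactly like Hamilton's quaternion units, which is why the complex quaternions and the Pauli algebra can be identified:)

        ```python
        import numpy as np

        # Pauli matrices
        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)

        # Candidate quaternion units inside the Pauli algebra: q_k = -i * sigma_k
        qi, qj, qk = -1j * sx, -1j * sy, -1j * sz
        I2 = np.eye(2)

        # Hamilton's relations: i^2 = j^2 = k^2 = ijk = -1, and ij = k
        for q in (qi, qj, qk):
            assert np.allclose(q @ q, -I2)
        assert np.allclose(qi @ qj @ qk, -I2)
        assert np.allclose(qi @ qj, qk)
        print("quaternion relations hold inside the Pauli algebra")
        ```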

        Some time ago Hector Zenil [Sep. 23, 2012 @ 21:17 GMT] posted as follows, and I missed it. Here is a response.

        "How robust your hierarchy depicted in Table 2 for a digital computer system is in the light of Turing's universality? From Turing universality we know that for a computation S with input i we can always write another computer program S with empty input computing the same function than S for i. Also one can always decompose a computation S into S and i, so data and software are not of essential (ontological?) different nature."

        Well it's the standard decomposition in current computers. Turing addressed universality but not how to do interesting things. That requires this hierarchical structure - else we'd all have to be writing machine code if we wanted to use computers, so very few people would be using them.

        Yes indeed, there is that ambivalence of how one implements things: I emphasize that in my essay. It's the key feature of lower level equivalence classes underlying higher level function.

        "I also wonder if it isn't statistical mechanics the acknowledge that the view you are arguing against is not the general assumption in the practice of science."

        I am arguing that statistical physics is crucially limited in what it can do. It can describe unstructured systems very well. Most of the systems around us are structured in one way or another, and their essential causal properties are not statistical. Computers are an example.

        George

        Joel,

        "One suspects that the Arrow of Time should not be divorced from the Arrow of Matter, with the universe made of Hydrogen rather than anti-Hydrogen, or the Arrow of Weak Isospin, acting only on Left Handed fermions." I like the sound of that. You've studied this more than I have; I may get round to it eventually, then I'll get some advice from you.

        Just one thing: you say "One usually considers it a free choice of convention whether to use +--- or -+++ but in this case it has further consequences". I can't see that this can be a *physical* difference: it's purely a representational convention which one chooses here, and that arbitrariness will follow through to any other linked formalism one chooses. They are all arbitrary up to a sign.

        Anyhow this is all off topic. I'm not going to pursue it further here (if I did I'd want to know if there is any relation to twistors .... no don't answer that!)

        George

        Following up my post of Sept 27, 2012 @ 15:51 GMT, here is a second issue that has arisen through these discussions.

        Issue 2: Lagrangian formulation, holonomy, and non-local physics

        In a response to "The Universe Is Not a Computer" by Ken Wharton, I said:

        "You stated 'As regards the LSU formalism, this non-local approach is very interesting. You state "Instead of initial inputs (say, position and angle), Fermat's principle requires logical inputs that are both initial and final (the positions of X and Y). The initial angle is no longer an input, it's a logical output.' Yes indeed. What this Lagrangian approach does is very interesting: it puts dynamics into a framework that resembles the process of adaptive selection (the dynamics is offered a variety of choices, and selects one that it finds optimal according to some selection criterion, rejecting the others). This kind of process occurs in various contexts in physics, even though this is not widely recognised; for example it underlies both Maxwell's demon and state vector preparation (as discussed here ). I believe there may be a deep link to the dynamics you describe. This may be worth pursuing."

        This Lagrangian kind of approach leads to selection of paths between the initial and final points, and hence of velocities at the starting point, in contrast to the initial value approach where that initial velocity is given ab initio. This theme has come up again in the presentation posted by Cristinel Stoica [Sept 28, 2012 @ 06:27 GMT] and is actually implicit in the Feynman path integral approach to quantum theory, as so nicely explained in Feynman's book "QED". What happens in determining the classical limit is selection of the 'best' path, after trying all paths (Feynman and Hibbs, pages 29-31). This determination is obviously non-local, and so is influenced by the topology of the path space, as shown so clearly in the crucial Aharonov-Bohm experiment.
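        To see this selection mechanism in miniature, here is a small numerical sketch (my own toy model with made-up parameters, in the spirit of Feynman and Hibbs): summing phases over all interface crossing points, the contributions cancel except near the stationary (least-time) path, which is exactly where Snell's law holds.

        ```python
        import numpy as np

        # Light from A above an interface to B below it, crossing the x-axis at (x, 0).
        # The optical path length plays the role of the action in exp(i k * path).
        n1, n2 = 1.0, 1.5
        ax, ay = 0.0, 1.0
        bx, by = 1.0, -1.0

        def optical_path(x):
            return n1 * np.hypot(x - ax, ay) + n2 * np.hypot(bx - x, by)

        xs = np.linspace(-1.0, 2.0, 20001)
        k = 2 * np.pi / 1e-4                    # short wavelength: the classical limit
        phases = np.exp(1j * k * optical_path(xs))

        # 'Trying all paths': the sum is dominated by the stationary-phase point ...
        x_star = xs[np.argmin(optical_path(xs))]
        mask = np.abs(xs - x_star) > 0.1
        print("|sum over all paths|  =", round(abs(phases.sum()), 1))
        print("|sum far from x_star| =", round(abs(phases[mask].sum()), 1))  # much smaller

        # ... and that point is where Snell's law n1*sin(theta1) = n2*sin(theta2) holds:
        sin1 = (x_star - ax) / np.hypot(x_star - ax, ay)
        sin2 = (bx - x_star) / np.hypot(bx - x_star, by)
        print("n1*sin1 =", round(n1 * sin1, 4), "  n2*sin2 =", round(n2 * sin2, 4))
        ```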

        This of course supports the general view that what really underlies physics is parallel transport and holonomy; Yang-Mills theories fit into such a broad picture.

        One of the deepest questions underlying physics is "Why variational principles?" If the dynamics is viewed as resulting from such a process of selection of a particular path from the set of all paths, there is a glimmer of hope for an explanation of this foundational feature, based in adaptive selection. This is one of the key forms of top-down action from the context to the local system, because selection takes place on the basis of some specific predetermined selection criterion, which is therefore (because it determines the outcome) at a higher causal level than that of the system behaviour being selected for.

        At least that's an idea to consider. It needs developing to make it firm - if it works.

        George

        Addendum: Conditional branching in computer programs

        A further case of essentially the same logic occurs in the operation of digital computer programs. Complex programs are built out of simple operations by

        (1) chunking bits of code as a named unit (a module such as a subroutine, or an object in Object Oriented Languages) with local (internal) variables, so that this code can be called by reference to this name; this can be done hierarchically; and

        (2) allowing conditional branching or looping, characterised for example by "if T then X else Y" or "while T do X". The program continues on one course if the condition T is true, and on another if it is false.

        This is in essence another case of adaptive selection (see here and here): there are several options, and one is selected in preference to the others if a selection criterion (the truth or falsity of T) is fulfilled. The roads not taken are in essence discarded. The effect is to change the sequence of the underlying operations according to this selection condition.

        If the relevant variable is a global rather than local variable, then top-down causation takes place from the global context to the local module (what happens locally is determined by a global variable). This logical difference then goes on to cause different flows of electrons at the gate level (cf. the discussion of computers in my essay).
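        A minimal sketch of both mechanisms at once (illustrative code with made-up names, not from the essay):

        ```python
        THRESHOLD = 0.5   # a *global* variable: the context for the module below

        def classify(signal):                     # (1) a chunked, named unit
            level = sum(signal) / len(signal)     # local (internal) variable
            if level > THRESHOLD:                 # (2) branching: one road taken, one discarded
                return "high"
            else:
                return "low"

        # Top-down causation in miniature: the same module, with the same input,
        # does something different locally when the global context changes.
        data = [0.2, 0.7, 0.9]
        print(classify(data))   # 'high' with THRESHOLD = 0.5
        THRESHOLD = 0.9
        print(classify(data))   # 'low': the global context altered the local outcome
        ```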

        The underlying physics of course allows this logic to operate, indeed it enables it to happen. Essentially the same process of decision choice between branching possibilities happens all over the place in molecular biology (see Gilbert and Epel: Ecological Developmental Biology for details).

        George

          • [deleted]

          Dear George Ellis,

          Yesterday evening I watched your FQXi talk about existence on YouTube (Copenhagen meeting). I am impressed by your deep and tough-minded manner of analyzing and arguing against some approaches that claim to have solved the problem of time and the problem of causality in QM.

          I have now read your latest paper "Space time and the passage of time" and it confirms my impression of your working style and the results you have achieved.

          Because my own interests concerning the fundamentals of physics lie with the problem of causality in QM, QM's different interpretations, and the role time could play within those frameworks, allow me to make some remarks on the issue from my point of view:

          For me, the interpretation of the time-dependent (but also the time-independent) Schrödinger equation as a real QM-physical wave dynamics is a tricky illusion. From my point of view, this wave function can only be interpreted as an indicator for the existence of a new principle: namely the principle that QM always renders the past consistent with what takes place in the present (by "present" I mean every measurement/interaction that fixes the former future possibilities into a definite single result). In this sense QM, in my opinion, is able to change past facts ("physical retrodiction") - for example measurement devices (like double-slit apertures and so on) - to be consistent with, for example, delayed choices in delayed-choice experiments. So all physical hints/marks are rendered consistent with the formerly unknown future that was given by the uncertainty of QM as well as by the uncertainty of the possible future events that may take place in the macro-realm.

          For me, this all means that the Schrödinger wave function, if it exists at all in some physical sense, does collapse, because it cannot be continued in the new context (by "context" I mean the change of past facts, enabled at the time of an interaction via entanglement of a physical device with a certain other device that was yet unknown at the time a "particle" was entangled with the first device). The Schrödinger wave function cannot be continued in the new context because, at the moment the system gets decoupled from entanglement, the causal context has changed and the wave function, interpreted as deterministic, makes no more sense to continue. So we then need a new wave function to further describe the system's possible evolution until a new interaction/measurement occurs ... and so on.

          What my proposal/interpretation really does is to show that QM could "mimic" causality via intermediate steps of instantaneous information transfer between episodes of "deterministic evolution" of the "wave function" (surely with the help of a yet unknown "principle" determining *what choice* is best and according to what criteria - surely criteria that have something to do with achieving macrocosmic causal consistency).

          In my framework, this mechanism of "mimicking a strict determinism" has a cause - and therefore is causal - even though in a different sense than we usually use the word "causal". Usually we think of this word as denoting a whole chain of strictly deterministic events, and at the end we arrive at paradoxical results, like for example time-symmetry in QM, or the future partially influencing the present, or our brain's single steps and our thoughts being fundamentally pre-determined, and so on.

          I argue that none of those paradoxes is necessary.

          For example, in my approach the time-symmetry of QM is just an illusion, because we interpret the wave function *without* taking entanglement into account. Surely the many-worlds interpretation (Everett-style) does take entanglement into account, but the price of this interpretation is that it leads itself ad absurdum by assuming a strict determinism that at the same time can (must?) serve as a way out of the non-locality trap. By assuming a strict determinism, the door to some adventuresome conspiracy theory is opened. Surely the MW-interpretation does not necessarily incorporate a conspiracy in reference to, for example, the EPR results and Bell's theorem (because the MW-interpretation does not deny entanglement). But another adventuresome conspiracy occurs: namely, why can we humans come to consistent insights about nature's behaviour if there is overall determinism in nature? To consistently explain that we are nonetheless able to realize that logic and the behaviour of nature have some huge intersection and are consistent/coherent in many regards, one really has to assume very, very special initial conditions at the beginning of our universe. But that would indicate that QM probabilities must in fact be interpreted as physical (or meta-physical) ingredients enabling the permanent consistency of the whole universe/multiverse.

          So what I have done with my interpretation (in my humble opinion) is to eliminate the multiverse in favour of multiple "particle" interactions that allow a non-deterministic view of the future, which is open; to explain the need and purpose of entanglement for this task; to explain why the "wave function" *must* "collapse" (if it should be physically existent at all and not merely a misinterpreted mathematical statement); and to show why time really does flow and why time and space are emergent features/results of QM activities.

          I would be very happy if you could take a look at my own essay, and at the same time I want to apologize for predominantly writing about my own work here. I read your essay very thoroughly, and also your arXiv paper mentioned above, and after having watched your talk I simply was excited about your lines of reasoning, which seem - at least for me at this point in time - to be very similar to my own.

          Best wishes,

          Stefan Weckbach

            Dear Stefan

            I think yours is a sensible approach that takes the problematic issues seriously and tries to make sense of how quantum mechanics works, and in particular how, according to delayed-choice experiments, it "reaches back" into the past in an apparently acausal way.

            "QM could "mimic" causality via intermediate steps of instantaneous information transfers between the "deterministic evolution" of the "wave-function" (surely with the help of a yet unknown "principle" *what choice* is to make is best and due to what criteria - surely a criteria that has something to do with achieving the macrocosmical causal consistence)." This mechanism you sketch out is, I believe, more or less in accordance with what I have written about in my post above: Oct. 2, 2012 @ 07:24 GMT: yes it is about choice between competing possibilities. Please see that post.

            I'll try to get time to comment on your thread.

            George Ellis