Somehow, my post got messed up: the less-than and greater-than symbols I used as quotation marks broke the formatting.
So here goes again!
Hello Chris!
I really enjoyed your essay!
When I read the title, I had my doubts. But your presentation is very clear and your arguments are simple and convincing. Clearly, you have been thinking about this for twenty years, as you say. And while I call your arguments "simple and convincing", it is going to take me a while to really internalize this work!
I must say that I was delighted to read your treatment of the measurement process with the target and detector as a joint system. So often, it seems, people do not understand that the measurement occurs via a symmetric interaction. And people often forget that the phase is randomized after a measurement. My friend John Skilling and I are especially sensitive to this point, since we used this fact to help derive the Born rule in https://arxiv.org/pdf/1712.09725.pdf (although we have a cleaner derivation in the paper we are working on now).
It is fascinating how you demonstrate that this randomization of the phase is due to the joint target-detector system being asked to measure itself.
Do I understand that correctly?
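To make sure I am reading you correctly, here is the back-of-the-envelope version of the phase randomization argument that I have in mind (my own shorthand, not the notation of your essay or of our paper). Take a target in the superposition $|\psi_\theta\rangle = \alpha|0\rangle + e^{i\theta}\beta|1\rangle$ and suppose the measurement interaction leaves the relative phase $\theta$ uniformly random. Averaging the projector over the phase,
$$\frac{1}{2\pi}\int_0^{2\pi} |\psi_\theta\rangle\langle\psi_\theta|\,d\theta = |\alpha|^2\,|0\rangle\langle 0| + |\beta|^2\,|1\rangle\langle 1|,$$
the cross terms proportional to $e^{\pm i\theta}$ average to zero, and only the Born weights $|\alpha|^2$ and $|\beta|^2$ survive.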
In a paper that we are currently writing, we state:
[[
Our knowledge of the material world comes from control and observation of interactions of one object with another, traditionally "target" and "probe" when we are trying to learn about a target by probing it. However, a probe was itself manufactured through interactions and so was itself somewhat uncertain. Hence objects cannot be adequately described by the traditional single classical parameter. No matter how large or small the object, there must always remain some uncertainty. That means that at least one parameter must be added to the traditional one-number-per-property description of an object.
]]
We use this as justification for the Pair Postulate, which states that two numbers are needed to describe an object:
[[
for any object, we need at least two numbers to represent existence and uncertainty.
Existence involves numerical quantification, and uncertainty involves probability.
]]
We then go on to use symmetries, such as associativity and distributivity (as I describe in my essay), to derive the Feynman rules.
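Roughly speaking, and glossing over the symmetry arguments that single out these particular forms, the upshot is that a pair $(a,b)$ combines like the complex amplitude $a+ib$: sequential steps combine via complex multiplication, $(a_1,b_1)\cdot(a_2,b_2) = (a_1 a_2 - b_1 b_2,\; a_1 b_2 + b_1 a_2)$, while parallel alternatives combine via complex addition, $(a_1,b_1)+(a_2,b_2) = (a_1+a_2,\; b_1+b_2)$, with the probability assigned to a pair being $p = a^2 + b^2$, which is where the phase randomization argument sketched above comes in.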
One could imagine performing a measurement on the probe/detector to try to determine its state (in an attempt to remove the uncertainty). But then the second detector, used to measure the state of the first, would have its own associated uncertainty, and so on: we find ourselves in an infinite regress.
I am wondering how you might interpret the perspective I present above in terms of uncomputability.
Thank you again for sharing your wonderful insights and essay!!!
Take care,
Kevin