Dear Mr. Palmer,

Thank you for your very interesting and valuable comments.

Indeed, we agree with most of them. We have tried to show in our essay - within its quite strict length limit - that there are today several examples in the literature of genuinely emergent phenomena (i.e. not reducible to elementary interactions between single parts) and of holistic approaches. We obviously do not have a proposal for how to revolutionize science, but we have tried to show that sticking to such narrow pre-assumptions as reductionism (which, in our opinion, presupposes a certain form of ontological realism) or strict determinism is an unnecessary prejudice that limits the scope of science.

Your last comment, on the role of mathematics, is also very interesting. Indeed, this is a serious concern that is difficult to answer. In principle you are totally right: maths could well be a limiting tool for our science and prevent us from developing entirely new scientific fields. However, science is also a matter of the survival of the fittest among theories, in which the "less fit" ones are discarded, and until there is a concrete proposal that goes beyond a mathematical description, it would be very tough to speculate on this.

Thank you once more, and we wish you the best,

Flavio & Chiara

Dear Mr. Gordon,

Thank you for your comments.

I am very sorry if our essay sounds arrogant; I did not mean it that way. It is rather a reaction against the typically arrogant physicists who approach problems as if it were a given that, in the end, everything must be simple, or elegant, or that, at the very least, complex behaviour is merely the result of interactions of very many simple objects. It ain't necessarily so.

Let me then clarify a semantic misunderstanding. When I say "primitive", the adjective refers to the "approach" and not to the theories, as you have interpreted it. A theory of everything, if at all possible, should (as is conventional for all theories) have a minimal number of elements. However, minimal in this context could be arbitrarily large, and the theory arbitrarily complex. There is nothing in principle that prevents a theory from being complex.

I would also like to point out that you perhaps take me too much for an enemy of reductionism. As stated in a previous post, I do not take any particular anti-reductionist position. I only show that it may be the case that reductionism is a nice starting point, but a limiting one, an approach that prevents us from exploring entirely new theoretical directions. My essay is not a treatise against reductionism; it only takes a step back and looks for a higher form of philosophical approach to fundamental problems.

Thank you once more.

All good wishes,

Flavio

Dear Mr. Blumschein,

Thank you for your contribution.

I am not sure that I am familiar with what you point out: what do you mean by "Brukner's interpretation of Bell's inequality"?

As for causality, it is actually a major concern in my essay; I state: "Since falsificationism requires some 'cause-effect' relations to meaningfully test theories, then instantaneous signaling would break this possibility, and any meaning of the current methodology along with it."

If you then follow endnote 8, you will find a short comment on causality in Brukner's theory.

Thank you for your time.

All the best,

Flavio

Hi Eckhard,

My personal answer to your question is that, in the case of a simulation of quantum microscopic behaviour, be it in a computer or in a human mind, one first has to distinguish such a simulation from what nature really does.

The reason for this lies in correcting a huge misunderstanding. Independently of what John Bell said or thought, or did not say and did not think, in order to come to a conclusive result that could be valid for the whole scientific community, one has to present a fully fledged theory of one's underlying assumptions, which have gone into the mathematics and therefore also into the simulation.

1) If such a fully fledged alternative theory cannot predict something different and testable against the established theory, no amount of wording can decide who is right or wrong.

2) If such a fully fledged theory CAN predict something different and testable against the established theory, BUT those experiments have not yet been conducted, THEN they should be conducted, of course.

3) Until these experiments have actually been performed, one should not use wordings that suggest that the proposed theory MUST be and inevitably IS true. Nonetheless, doing so merely MIRRORS some bad habits that even professional theorists have when talking about their hypotheses, right?

4) When the mentioned experiments deliver a negative result for the new theory, this does not mean that other possible theories must necessarily also deliver negative results. We simply cannot know in advance, right?

Since the proponents of locally realistic hidden-variable theories usually not only hope that someday their own or another such theory can deliver a positive experimental result, but heavily claim it as if it were an already established truth, I think it is perfectly fair that I am allowed to claim that not one of those possibly formalizable theories will ever yield the desired positive result. In a reply to Andrew Beckwith above I described the reasons why I believe this must be the case. In my own essay, as well as in my subsequent comments on it on my essay page and on other essay pages, I argue for the possibility that nature is not fully formalizable by mathematics - in opposition to the philosophical prejudice that it nonetheless should be. If my claim were true, it would explain why a MATLAB simulation is totally inconclusive for deciding whether or not a violation of Bell's inequalities necessarily indicates non-locality and the like. If nature is indeed not totally formalizable (especially regarding its behaviour at the quantum level), then any software program that tries to capture nature's behaviour mathematically has to fail, since this behaviour would not be mathematically formalizable. Hence, the underlying maths of such a simulation has nothing to do with what is going on in nature.

As I outlined to Andrew Beckwith above, one cannot have one's cake and eat it too. If one labels oneself a Realist, one has to accept logic, and hence Gödel's results, and hence physically realized undecidability - and face the possibility that nature may not fully follow the philosophical prejudice of determinism and complete formalizability.

I think the huge misunderstanding I spoke of lies in the fact that it is possible to formulate locally realistic hidden-variable theories - which means nothing more than that one can simulate such a theory, either in a computer or in a human mind - but this in no way necessitates a priori that nature has to behave according to the formulated theories. In this sense, the fact that it is possible to formulate such theories, which may or may not disprove some of Bell's *logical* assumptions, does not suffice to establish any truth of the new theory without an experimental test, because Bell's assumptions may well be inconclusive concerning what nature really does.

Even if one were able to conclusively identify such false assumptions, be it in Bell's work or in that of others, this would be totally inconclusive regarding the question of who is right or wrong about nature's real behaviour.

You and I are both longstanding essay authors here on FQXi. We both - at least I for myself - know very well that these contests are not exclusively about TRUTH, but quite often also about opinions camouflaged as already established facts. Therefore there is no need for anyone here to argue that one is a Realist and that therefore one's own statements about nature's behaviour must inevitably be true - when one is not willing to communicate one's own approach correctly, that is *realistically*, namely as a possible truth instead of a necessary truth.

Of course, even the realist has emotions, and therefore he illogically values his approach higher than all others. But mixing the logical with the illogical regularly results in an inconsistency, the OPPOSITE OF REALISM, and hence the "Realist" uses this paraconsistent logic to self-confirm the absolute priority of his own approach. Do not misunderstand me: my words here are not addressed only to 'hidden-variable' proponents, but also to professional scientists working in departments of theoretical physics.

I know what the consequences of this comment will be. It will not be a logical answer, but an emotional one - expressed in scoring points. But I am a Realist and consider it necessary and worthwhile to comment the way I did here. Therefore there is also really no reason for Robert to be disappointed, since we know what these contests are all about: in many cases they are not primarily about stringent logical arguments, but rather about personal emotional musings, the latter not even loosely related to the contest's genuine theme. Therefore most of them can be considered 'not even wrong', to cite Pauli.

Best wishes to you - Stefan.

I truly appreciate your kind response. I hope I did not sound too arrogant or forthright in my comment. It is nice to see we have one thing in common: we do think that there is an air of arrogance among some physicists in academia. I don't blame them; they did everything right - so where did they go wrong! LOL! Smashing particles together will never reveal the answer (the fundamental ingredients).

https://www.academia.edu/27987699/_Why_Cant_the_LHC_Find_New_Math_

I understand your position that the solution to the theory of everything will likely not come through reductionism, but please consider this...

It is not necessary that "complex behaviour is merely the result of interactions of very many simple objects". The complexity of interactions stems from the complexity of complex particles. Physicists still do not realize the complexity of a particle's internal energy structure, even for the simplest elementary particles such as an electron and an up quark. (A down quark is NOT an elementary particle; it is a combinant particle of an up quark with an electron - don't worry, all the math still works out and correlates with the experimental data.) The manner in which complex particles interact is complex, considering how they are put together with other elementary particles and how they exist among all other particles and within spacetime.

The theory of everything is represented by a model that mathematically supports a linear course of events, first creating our three-spatial-dimension spacetime, then primordial photons, then particles that contain mass, and everything else that arises from there. It explains what happened to create the big bang, inflation and expansion, actually... everything. Everything can be derived from a humongous number of a component building-block entity and the energy associated with their initial alignment. I was impressed that you used the word "entity" and not particle. Particles co-exist with spacetime, and the building-block entities exist "as" spacetime.

I am not saying this because I "feel" this is so... I am saying this because I found the model and the math that shows that this is the way to go. The key is getting through the Ruby Slipper Conundrum and the hierarchy of energy - I am not asking that you believe a word I say here. I expect to be called a lot of names before this is all over... But I had to respond to your essay because you were so elegant in stating your position, a position that goes against what I consider the actual theory of everything. Reductionism did end up being the solution but the only way to reach it was through the philosophical approach.

Again - Thanks for your response. You are gentlemen and I hope you win!

Dear Ms. Cardelli and Mr. Del Santo & Alan,

May I suggest this failure and reassessment of QM has already begun!?

Please read my essay titled "A Fundamental Misunderstanding", where I show that the correlation found in the EPR experiment (currently attributed to QM entanglement) can be fully explained by Classical Physics. My results even include the latest so-called 'loophole-free' experiments, which use a Steering Inequality rather than the conventional Bell or CHSH tests.

Best Regards,

Declan Traill

Flavio, Chiara,

I largely agree with what you say about the removal* of philosophical prejudices, except for your concept of falsification, which I think does not reflect Popper's. For "...it must be possible for an empirical scientific system to be refuted by experience" to make any sense, 'experience' must be of a 'higher' generality/potency than 'empirical scientific system'. For example: the production of plastic bags was a chemically and economically viable, empirical theory. 'Experience', however, could have told us that it is not a clever idea. W. V. O. Quine knew that any theory can be made 'true' by sufficiently distorting the rest of the world.

*Hegel's aufheben (to sublate) may be a more refined term, which in German means to preserve, to cancel and to elevate.

Heinrich


    Dear Flavio and Chiara,

    Excellent work, content and expression. We agree on most points. We argue diametrically opposed views on reductionism, but I accept your grounds and think you'll accept mine, particularly as it proves productive. I think your top place is well deserved.

    More importantly, I need your help. You identify that QM's (CHSH > 2) limit has never been experimentally violated. My essay, completely unbelievably I know, reports on what may be the first (and repeatable) experiment to do so for > 2 (building on my last few finalist essays). Photographs and protocol (see end notes) are included along with assumptions and rationale.

    What's more, Declan Traill's short essay (referencing those papers) provides computer code matching the ontology and also confirms violation of the so-called 'Steering inequality' (closing the measurement loophole).

    The analog experiment is absolute simplicity! Yet, as few really understand the problem and QM's assumptions, it needs someone who does, but who isn't prejudiced by doctrine, to study and help falsify it. I hope you may qualify! I start by effectively replacing spin up/down with Maxwell's orthogonal state pairs.

    I look forward to your comments and questions.

    Very well done for yours. Top marks.

    Very best

    Peter

      Dear Mr. Luediger,

      Thank you for your comment. I am not sure that you are using the word "experience" in the same sense in which I am using it, namely in the usual sense of the philosophy of science. What I mean by experience is reliance on the empirical basis, that is, knowledge resulting from an interaction with the world, with nature, and the possibility of gaining new information from cleverly designed experiments. In your post, you seem to use "experience" as "life experience", some realization based on events that one undergoes and from which one acquires some awareness afterwards. This is not what I meant, and surely not what Popper meant. In this regard, it is curious that you accuse me of not reflecting Popper's intention while using the sentence "...it must be possible for an empirical scientific system to be refuted by experience", which is a quotation from Popper's most famous work, Logik der Forschung (The Logic of Scientific Discovery).

      In conclusion: we have bold ideas (in the form of falsifiable statements), and we go out there to interact with the "world" (experience) and we have a way to discriminate the "truth" from imagination. This is actually Popper's legacy.

      I hope the misunderstanding is now clarified.

      I wish you the best of luck with your essay (which I enjoyed reading),

      Flavio

      Dear Peter (you forgot to log in, but there is only one author with your name),

      Thank you for the very kind words.

      Just to correct an oversight in your message: the CHSH inequalities have been violated on a regular basis since 1981 (Aspect's experiments), finding a maximal value of the correlations of 2*sqrt(2) (called Tsirelson's bound). Maximally entangled bipartite states (Bell states) can indeed reach that value. It is this latter bound that sets the limit of QM, and if it were experimentally violated it would falsify QM as we know it. In principle this is totally feasible, since there are proposals for more-than-quantum correlations that still lie within the no-signalling region. I discuss this (the so-called PR box, in particular) in my endnote 21. However, there is so far no actual proposal for how to implement this in practice, i.e. how to prepare a physical state in a scenario that could implement a PR-box experiment.
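
      Just as an illustrative aside (not taken from the essay, and assuming nothing beyond the standard singlet correlation E(a,b) = -cos(a-b)), a few lines of Python reproduce the three numbers at stake: the local-realistic bound 2, the Tsirelson bound 2*sqrt(2), and the PR-box maximum 4:

import numpy as np

# Singlet-state correlation between spin measurements along angles a and b.
def E(a, b):
    return -np.cos(a - b)

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
# evaluated at the standard optimal settings.
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(abs(S))  # ~2.828, i.e. 2*sqrt(2): Tsirelson's bound

# Local realistic (hidden-variable) theories obey |S| <= 2, while a
# hypothetical PR box would reach the algebraic maximum |S| = 4
# and still respect no-signalling.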

      Thank you again.

      Best wishes,

      Flavio

      Flavio,

      Your essay is claiming: "we have a way to discriminate the "truth" from imagination. This is actually Popper's legacy."

      To me Popper's legacy includes his reportedly accepted utterance to Einstein:

      "You are a Parmenides". I am not sure whether you are entitled to generalize philosophical reasoning as prejudices.

      Stefan Weckbach distinguished between a bird's view and a frog's view. I feel I am rather a frog, who has no choice but to accept some philosophical conjectures, in particular causality and the preference for non-arbitrary references, e.g. the now as the natural one.

      I asked you to ignore your dependency on Brukner's defense of QM by backing Bell's argument, and simply tell me in what respect McEachern is wrong. While I have never dealt with QM, I would accept a computer actually based on QM as a strong argument in favour of it. However, I admittedly do not trust Hendrik van Hees' judgement much, for emotional reasons. Many years ago, it took me about a year of fierce discussion with him until he apologised. Later on I managed to illustrate my view with MATLAB programs, which were not refuted but simply ignored. That's why I feel sympathetic with McEachern, who had a similar experience. Maybe McEachern is correct, maybe he is wrong.

      For your convenience I point you to two of McEachern's papers:

      A Classical System for Producing "Quantum Correlations"

      viXra.org/abs/1609.0129

      What Went Wrong with the "interpretation" of the Quantum Theory?

      viXra.org/abs/1707.0162

      If you can, please tell me in what respect McEachern is wrong.

      I would also appreciate you refuting Alan Kadin's suspicions concerning QM.

      Only as a rule, I consider viXra less trustworthy than arXiv.

      I just learned from Kadin that Pauli (1925?) might have influenced Schrödinger, Heisenberg/Born and maybe Kramers.

      Katz made me aware of something behind Buridan, set theory and EPR.

      Curious,

      Eckard

      Dear Mr. Blumschein,

      Thank you for your reply. I surely can accept your new statement (which is very different from the criticism you leveled before): "To me Popper's legacy includes his reportedly accepted utterance to Einstein: 'You are a Parmenides'. I am not sure whether you are entitled to generalize philosophical reasoning as prejudices."

      But if you look at my essay, you will notice that we do agree: the only way we have found so far to do science is to "accept some philosophical conjectures", but then we obviously put them to the experimental test (experience, or empirical content, from my previous post). If they fail, well, we change the conjecture. In Popper's words: "conjectures and refutations".

      I am really unfamiliar with Robert McEachern's views (sorry, but I could not find much: the first link you attached does not work, and the second points to a poorly organised 41-slide file that definitely cannot be used as study material). On the other hand, I already answered you that I do not quite understand what you mean by Brukner's interpretation of Bell's inequality. He reflects pretty much the (finally) generally accepted idea that Bell's theorem discriminates between two classes of theories: those derived (or even in principle derivable) under the assumption of "local realism" (please notice that this is nothing more than my equation (1), p. 6, and nothing more) and those that are not. The QM formalism violates this. But this is definitely not enough, because the two formalisms per se could well be the result of human imagination. Therefore very many experiments have been conducted - and are being conducted in many places in the world while I am writing - and they show a violation of this condition, a violation that is however compatible with QM predictions. Does this confirm QM? No, but QM survives the evolutionary game of science. And this is not Brukner's idea; it is a well-established result, understood first by Bell himself, by Bohm, and by J.-P. Vigier, all of whom were staunch realists. From your post it looks as if I were proposing something new and suspicious, but I am not; concerning the foundations of QM, I am limiting myself to a review of important and by now very well established results on the fundamental difference between quantum and classical physics.
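
      For reference, the "local realism" assumption mentioned above is standardly written as a factorization condition of the following form (the notation of equation (1) in the essay may differ slightly):

      \[ P(a, b \mid x, y) \;=\; \int d\lambda \,\rho(\lambda)\, P(a \mid x, \lambda)\, P(b \mid y, \lambda), \]

      where a, b are the measurement outcomes, x, y the local settings, and lambda the hidden variable; Bell-type theorems show that any theory of this form obeys the CHSH bound |S| <= 2, which QM violates up to 2*sqrt(2).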

      Historically, Bell's theorem was completely overlooked and dismissed as philosophical bullshit for too many years. It is an extraordinary success of a few pockets of resistance, which strove, against the mainstream pragmatic physicists, to bring foundations back into the discourse on quantum physics. Bell's theorem is a momentous result of modern science, and its implications are profound.

      I have nothing to contribute, here and now, on alleged sensational results claiming that Bell's inequalities are pointless (i.e. that they do not say anything genuine). Not even Spekkens' model, which has recently created a crisis in our understanding of the foundations of QM (it can recover quantum superposition, for instance), is able to reproduce the features of quantum entanglement. I can try to understand what these people have done, and possibly change my mind, obviously (science is critique, and self-critique is even more important)!

      Thank you again for the interesting food for thought.

      All good wishes,

      Flavio

      Dear Flavio and Chiara,

      Unless I have misunderstood your essay, it seems to me that what you are proposing is that the methodology followed by us in science determines that which is fundamental in science.

      However, where does one draw the line between methodology and theory? Would not some portion of the methodology followed (determinism, etc.) be part of the scientific theory? Certainly, to assert, say, determinism to be true feels not much unlike asserting a new theory to be true.

      One cannot fall back onto the defense that methodology is not based on empirical knowledge while theories are, for we know that methodologies can be refuted by the appropriate empirical data, which directly implies that methodologies must be based on certain empirical data - if they go that way, they must have arrived that way.

      I have not read Karl Popper, and so I must ask you to forgive me if I am ending up blatantly ignoring some evident line of thought contradicting my position.

      Essentially, then, my query is this: Can one ever differentiate what you refer to as "philosophical prejudices" from the remaining statements of the theory? Determinism need not be a philosophical prejudice but merely an implication of classical mechanics.

      If one cannot, then other grounds need to be found for giving, say, determinism a higher status than other propositions of the theory. I have suggested one such ground in my own essay.

        Dear Aditya,

        Thanks for the comments; they are interesting, indeed.

        I think you have understood our essay quite well. The issue you point out is a real one, namely how to separate the philosophical prejudice from the rest of the theory. This is, in fact, the difficult part. However, there are ways to do it, by means of a clear, falsifiable formulation of the "prejudice". If you read the section on quantum physics, you can see what I mean by this. The Kochen-Specker and Bell theorems are two pivotal instances of this process: they found a way to put to the test ideas which had been considered a priori assumptions belonging to the philosophical background. What I also assert is that if we assume a "pre-falsificationist" methodology, namely an empiricist one, it is virtually impossible to achieve this. It is the theory that guides the experiments with which we test theories.

        Your essay surely also provides interesting views (I have commented on it and rated it very positively!), yet I am concerned with the actual practice of scientists, as Popper also partly was.

        Thank you again for your contribution.

        Good luck,

        Flavio

        Dear Flavio,

        Aristarchus' heliocentric world model was refuted on empirical grounds, namely, that the involved rotation of the Earth would cause a permanent eastern storm - which was not observed, however. Empirical observations make sense only in the widest context of theories (and this is what Kantian experience (Erfahrung) is all about). Hence Popper's dictum that sentences can only be falsified by sentences - never by (empirical) observations (see LHC, LIGO, etc.)

        Heinrich

        Flavio,

        Thanks, yes: Aspect, Weihs (with Anton Z), etc., of course, but I'm referring to a classical violation > 2, shocking and quite unbelievable I know, which is why I'd like you to check it out. My essay includes the experimental protocol (and photographs), and I identify how it corresponds to John Bell's 'guess' about how it would one day be achieved.

        It doesn't actually 'falsify QM' in toto, but it does falsify the interpretation that only 'weirdness' can produce the correlations, and it offers classical physical explanations for EVERY phenomenon within QM, including 'superposed states', apparent non-locality, non-integer spin, etc. You will appreciate that it really would cause major ructions if it is correct, so it needs rigorous falsification! I promise it's worth the time to look.

        Also see Declan Traill's code, which gives the same results as my (cheaply repeatable) experiment and ontology.

        You may also wish to check out my (top community-placed) 'Red/Green Sock trick' essay from two years ago.

        Very best and thanks in anticipation

        Peter

        Chiara and Flavio

        Thanks for a very good article. It is very much in agreement with mine. I have described how there were many errors before Lorentz, and that these have been covered by more and more errors, so that the method of correcting errors can no longer work.

        This means that we should instead focus on finding the FIRST error. I think you would be interested if you took a look at my article.

        Good luck and regards from John-Erik Persson

          Hi Flavio, I found your essay very readable and sensible. I do agree with you that falsification in science is extremely important. You have focused on experimental falsification, by comparison of hypotheses to the 'real world'. There are many other important ways science can be evaluated and potentially falsified: on logic, on mathematical correctness, on methodology (such as using appropriate controls and replication), and on statistical correctness and statistical significance.

          Example: Einstein's light-clock thought experiment is built on an incorrect assumption about light. Light is a periodic phenomenon, and it is a mathematical fact that periodic motion is invariant under translation. So the conclusion that moving clocks run slow cannot be drawn from that experiment, as set out.

          (Although the moving clock will be seen to run slow, because light must travel further from the clock to the stationary observer over the duration of a tick in order to be seen. That is the important difference in what is happening, rather than the incorrect longer path between mirrors.)

          The clock itself is not affected by how it is seen, and a co-moving observer could see the un-slowed tick just the same as when the clock is stationary. This leads to a strange situation where a real-life experiment is carried out purporting to support the 'moving clocks run slow' conclusion, based on the incorrect assumption about light. I don't think the number of replications has been sufficient. Also, since the clock should not have been slowed, because of the fact about periodic motion, perhaps there have been some unaccounted-for factors affecting the frequency-matching timing. This may seem irrelevant, but I'm trying to show that reliance on experimental falsification alone may not be adequate, and that it may not be necessary if other kinds of analysis identifying error have been done.

          Another issue is that there is reward for publishing, but not for identifying one's own errors or omissions, which delays or halts work, nor for negative findings, which are not considered worth publishing. There is also the problem of what happens when work is falsified or discredited by others. It can affect the credibility of science as a whole, as well as reputations and livelihoods. So the best part, the one that allows true progress, is also not without problems.

          Kind regards Georgina

            Hi Flavio, actually the mistake I wrote about is probably better described as a physics error that leads to the wrong calculation, rather than a mathematical error (which would be just getting the maths wrong). I think the fact that the thought experiment involves mathematics gives the false impression that the conclusion must be correct because the calculation is correct, there being a kind of bias in physics in favour of mathematics because of its precision and objectivity. However, that precision and objectivity does not make its use infallible. There can be correct mathematics for an incorrect theory, hypothesis, thought experiment or model. I thought that an interesting bias worth mentioning, as in your essay you talk about biases, such as the one in favour of reductionism.

            Hi Flavio:

            I highly appreciate your statement "Commonly accepted views on foundations of science, of fundamental entities are here rejected"; see also my proposals in my introduction:

            Neil Turok said recently: "And so we have to go back and question those founding principles and find whatever it is, whatever new principle will replace them." Cheers, Leo