Dear Erik Hoel,

Your essay is quite interesting to me because it discusses the emergence of new causal powers at higher hierarchical levels. Your identification of error correction as the core concept is striking. It seems to indicate that nature, and especially the feature of time, is a consequence of nature's metalaw to maintain the consistency of propositional logic. Therefore, according to my own analysis (see my essay), it follows that a merely reductionistic description of nature must remain essentially incomplete.

Thanks for a thought-provoking essay!

Best wishes,

Stefan Weckbach

    Dear Erik,

    This is a great essay, tackling key issues of the theme topic head-on by taking emergence seriously. It is an excellent companion to my own essay, which looks at the microphysical structures that make such emergence possible.

    Just one comment: multiple realisability is indeed fundamental to real emergence, as you clearly state. With Auletta and Jaeger I have argued that this is a key marker of top-down causation, which is in my view a key feature enabling the genuine emergence you discuss. This is developed further in my book that you mention.

    Regards

    George Ellis

      Hi Erik,

      This is a very well-written and interesting essay. I enjoyed your writing on the clear distinction between the micro- and macro-scale (and it reminded me to read more about causal emergence). It will take me a few readings to fully take in the equations for effective information, etc., but I agree with you on most points. I have your paper on macro beating micro open; the result sounds very intriguing. I too study agents using stochastic Markov finite-state models and found the section on teleology very encouraging.

      I have a submission titled 'Intention is Physical', which I think you might enjoy, especially the part on prediction and error-correction. Take a look if you have the chance; any comments/questions/feedback are always welcome.

      Cheers

      Natesh

        So this is how I see it. You provide some necessary conditions, probably not all of them. Natesh (who commented below) provided some sufficient conditions for learning, though he might have been a bit too strict (maybe there can be agency with less). We are not quite there yet, but we seem to be narrowing it down. In turn, Sophia Magnusdottir discussed the role of learning about the environment and about ourselves in consciousness, and I tried to describe the act of learning to observe. That's it for now; hopefully I'll manage to link even more essays, without making the connections too wishful...

        best!

        ines.

        Thanks Ines - good call, I'll check out all those essays.

        Best,

        Erik

        Dear Erik,

        This is a nice essay. I particularly like your example in which the "lag time" is greater than the "turnover rate of the microscopic building blocks". While writing my essay I also thought about this property of typical agents, namely that they continuously replace their microscopic components, but I decided to ignore this aspect. Do you think that this property is necessary for a macroscopic entity to become an agent?

        I require that entities not be "too rigid" (yes, I only vaguely define this notion) if they are to become agents. Being too rigid (in this sense) would in particular exclude the replacement of micro-elements, though that was not my primary reason for imposing this condition.

        Cheers, Stefan

          Thank you so much, Stefan. I just read (and voted for) your essay, and I love that you're also discussing macrostates. I particularly like your analogy between how entropy (or irreversibility) arises only at the macroscale and how goals arise solely at the macroscale. In my view, the corollary analogy would be in terms of error-correction: error-correction over causal relationships can occur only at the macroscale.

          Bringing rigidity into this, as you do in your essay, is an interesting avenue - I didn't explore what multiple realizability means for physical properties (only for the causal/informational ones). It makes me think it's worth investigating whether there are certain physical requirements (necessities) for causal emergence to occur.

          All the best!

          Erik P Hoel

          Erik,

          This is a brilliant exposition of the irreducibility of goal-oriented behavior. But as a rationalization of agent causal emergence it inevitably fails. Romeo's goal-oriented desire isn't contained in the combined deterministic behavior of agent and environment; the teleology, the final cause, is presupposed by the determined causal steps he takes to achieve his end.

          Think of a robotic vacuum. Its relentless roaming around the floor can be observed as a fully determined causal system. But its actions presuppose the teleology of its designer, who aims to provide a product for customers, and the teleology of its owner, who wants a clean floor.

            Whoops! Replied to the wrong Stefan above -> Lots of Stefans in this contest (all with good essays - correlation or causation?)

            Thank you so much, Stefan. I just read (and voted for) your essay, and I love that you're also discussing macrostates. I particularly like your analogy between how entropy (or irreversibility) arises only at the macroscale and how goals arise solely at the macroscale. In my view, the corollary analogy would be in terms of error-correction: error-correction over causal relationships can occur only at the macroscale.

            Bringing rigidity into this, as you do in your essay, is an interesting avenue - I didn't explore what multiple realizability means for physical properties (only for the causal/informational ones). It makes me think it's worth investigating whether there are certain physical requirements (necessities) for causal emergence to occur.

            All the best!

            Erik P Hoel

            Hey Stefan! Sorry I accidentally posted this comment to you - it's addressed to the other Stefan below (lots of Stefans, all with good essays!)

            Just finished and voted for your essay, 'In Search of the Meaning of Meaning.' I agree with your setup of the problem. I certainly agree that one of the big missing ingredients is consciousness, and we don't exactly know what a theory of consciousness would look like right now (although check out Integrated Information Theory for the best one yet, in my biased opinion). You clearly argue that the eliminativist position on consciousness entails the elimination of goals and meaning, which I would generally agree with. Although I wouldn't agree that we need to bring god into the equation - I think consciousness is mysterious enough! I'd like to see your statement about propositional logic worked out in more detail, although I agree that it's possible some things only really exist at the macroscale.

            Sorry again for the Stefan-related mixup - thanks for the comment and the essay!

            Hey James - thanks so much for the compliment.

            I think you're right that the case of Romeo isn't explored enough, although I disagree that it fails as an example of a certain type of causal emergence.

            If I had to sum up my point with Romeo's brain, it's that causal relationships don't always inhere locally in the system itself: if you considered the system in isolation, you'd miss causal relationships that nonetheless hold *within* that system (or between parts of it). In the language of analytic philosophy, I'd say that causal relationships that don't supervene locally (of which Romeo's brain is an example) are those we should call teleological. So in this sense, the causal path between his desire to kiss and the act of kissing *is* deterministic (if you trigger the desire to kiss, Romeo inevitably makes his way to his Juliet). I'm not sure exactly what you mean in saying that the relationship presupposes all the causal steps he takes to achieve his end; I don't think it has to, because the causal steps are precisely multiply realizable (the path is variable). I suspect this (reasonable) disagreement may come down to semantics: should we really call this teleology, or just the appearance of it?

            Thanks so much for your thoughtful comment James!


            Hi George - so great to talk to you and so nice of you to comment.

            You're right that your essay makes a very nice complement - I just read and voted for it. I particularly liked your focus on biomolecules as logic gates.

            I was interested in something you said in the comments on your own essay: that you no longer use the term top-down causation. I talk a bit about this in the technical notes: I agree that the term top-down causation can be confusing. Most people think of it as: if x supervenes on y at time t, then x determines (or influences) the state of y at that same time point t. But this is a logical impossibility. So I think the layering analogy is more apt for describing what's really going on. The challenge to the layer-cake hypothesis (that causal structure is spread across different spatiotemporal scales) is making sure that gerrymandered or redundant scales aren't included: that entities aren't multiplied beyond necessity.

            All the best - thanks so much for the read and the essay,

            Erik P Hoel

            Not sure why I got logged out and posted as anonymous (I keep having technical problems with my FQXi comments haha) but that's really me!

            https://en.m.wikipedia.org/wiki/Complex_system

            Two questions:

            Why are the terms linear/nonlinear absent from the text, and why use 'information', which is more useful for digital representation, rather than signal/noise, which is more adequate to the analog world we inhabit (along with concepts like negative feedback, transfer functions, etc.)?

            The low-level description obeys linear equations, where superposition rules are strict. The higher levels are nonlinear by nature or by construction; for instance, hysteresis, losses, and noise are non-reversible. Any transistor, diode, flip-flop, magnet, etc. is an example. DNA is digital and is coded/decoded with transcription errors included (I take it as a law: a huge amount of data always carries a large quantity of errors/noise). The components of all digital devices are, at the low level, analog nonlinear components.

            About the goal: is it real or apparent? Does a roly-poly toy, or a navy ship, want to stay upright? It appears so, but in reality they are forced to stay upright by construction. The same can be said of floating icebergs: no designer, no goal...

            https://en.m.wikipedia.org/wiki/Roly-poly_toy

            My viewpoint is based on IT and electronic engineering foundations, by training and practice.

              Hey Natesh! Thanks for getting in touch. So excellent to hear that you're interested in causal emergence - I totally agree that working with simple, definable systems (like Markov processes) is the way to go. We can all wave our hands about emergence until the end of time, but until you really drill down and give proof-of-principle examples, I think it's always going to be wishy-washy. So I really appreciate the rigorous approach in your own essay (which I just read and voted for). It's going to take me a few reads to grok all the math (I've been meaning to get more into Friston too; your essay is a nice complement to his ideas).

              I was especially interested in your statement that "We can view the upper levels of the hierarchical model in the brain as the source of only intentions and make a strong case that intention is physical." I would like to see that worked out directly: looking at upper vs. lower levels and seeing how dissipation occurs at each scale.

              All the best - glad you got in contact, and thanks for the read and the essay,

              Erik P Hoel

              Hi Erik,

              First and foremost: congratulations on an excellent essay! I was unaware of your work on causal emergence; I will proceed to read it posthaste. I had also had Judea Pearl's monograph on causality on my reading list for quite a while; I have now bumped it up.

              My only criticism, though it may be due to something I have misread, is that you equate purposive agency with causal emergence but never clearly state it. It feels as if you have the answer to the initial question but stop just short of stating it explicitly (even though it can be rather readily inferred).

              In my (less learned) essay I also rely on mutual information to see agency emerging through its intersection with purpose as a process, but I do so within the framework of the Information Theory of Individuality (which claims notably that the levels can be detected without a priori knowledge of their existence). I would be curious to hear your thoughts as to how ITI relates to causal emergence.

              Finally, you say that "Struck by Cupid's arrow, Romeo will indefinitely pursue his goal of kissing Juliet, and to the experimenter's surprise Sd will inexorably, almost magically, always lead to Sk." Would you conclude that this indefinite pursuance of love constitutes Romeostasis? (Sorry. Really, really sorry.)

              Thanks a lot!

                Hi Robin - thanks for commenting. Never apologize for a pun, I loved it.

                In terms of purposive behavior and causal emergence: causal emergence can occur without purposive behavior, but I think purposive behavior couldn't exist without accompanying causal emergence. I hope it's clear that I think agents causally emerge, assisted by their purposive behavior.

                I just read and voted for your essay, and I think it's actually a great overview of some really serious issues. Good to see Smolin, Krakauer, and Braitenberg all tied together in one essay. As to your question about ITI (which I had not heard of until now, so thank you): I remember meeting Krakauer in 2016, and he briefly said it was impossible for there to be any extra information at the macroscale. If you're only considering macroscales as zipped compressions, that's definitely true; however, the theory of causal emergence points out that macroscales can be encodings, not just compressions. So I know he didn't have causal emergence in mind when defining ITI. That said, I do think ITI sounds useful for defining the boundaries of systems (another choice is its anagram, IIT: Integrated Information Theory).

                Thanks so much for your comment and your essay!

                Erik P Hoel

                Hi Helder!

                Good question about why I'm using discrete and finite systems to formalize causal emergence, rather than analog concepts (like feedback, etc.). The first reason is that this allows supervening scales to be easily defined and modeled. For instance, one can generate the full space {S} of possible supervening descriptions of any particular system and then search across that space, as we did in Hoel et al. (2013), "Quantifying causal emergence." Another reason is that information-theoretic quantities, such as mutual information, are most often defined between two finite, discrete variables. A third is that the causal calculus of Pearl is also often represented in terms of Markov chains. So showing how these can all be synthesized is much more direct in these types of systems (applicable to things like cellular automata, etc.).
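
                To make that concrete, here's a quick toy sketch of the kind of calculation involved - just an illustration I'm improvising here in Python; the transition matrices and the helper function are invented for this comment, not taken from the 2013 paper. It computes the effective information (EI) of a micro-scale Markov chain and of a coarse-grained macro version of it, where EI is (roughly) the mutual information between a uniform intervention distribution over the states and the resulting effect distribution:

                ```python
                import numpy as np

                def effective_information(tpm):
                    """Effective information (in bits) of a row-stochastic transition matrix:
                    the average KL divergence of each row from the effect distribution that
                    results when every state is intervened on with equal probability."""
                    effect = tpm.mean(axis=0)  # effect distribution under a uniform do() over states
                    ei = 0.0
                    for row in tpm:
                        mask = row > 0
                        ei += np.sum(row[mask] * np.log2(row[mask] / effect[mask]))
                    return ei / tpm.shape[0]

                # Hypothetical micro scale: the first three states scatter uniformly among
                # themselves, the fourth state is a fixed point.
                micro = np.array([
                    [1/3, 1/3, 1/3, 0.0],
                    [1/3, 1/3, 1/3, 0.0],
                    [1/3, 1/3, 1/3, 0.0],
                    [0.0, 0.0, 0.0, 1.0],
                ])

                # Macro scale: group the three scattering micro states into one macro state.
                macro = np.array([
                    [1.0, 0.0],
                    [0.0, 1.0],
                ])

                print(effective_information(micro))  # ~0.81 bits
                print(effective_information(macro))  # 1.0 bit -> the macro beats the micro
                ```

                The coarse-graining is lossy, yet the macro transition matrix is more deterministic and less degenerate, so its effective information comes out higher - that's the sense in which a macroscale description can be an encoding that beats the micro.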

                But this doesn't mean that linearity/nonlinearity and related concepts don't come into play; they just weren't addressed in this essay. See Hoel (2016), "When the map is better than the territory," for a discussion of how symmetry breaking is critical for causal emergence.

                Thanks so much for reading!

                Erik P Hoel

                Dear Erik,

                Great essay! The way you address physical entities, called agents, seems to be related to what I call operators, as defined in my essay:

                http://fqxi.org/community/forum/topic/2846

                Both are defined with autopoietic functions in mind, though I isolate a specific type of reaction which I believe gave birth to life on Earth: cases where chemical clocks regulate themselves. They would have had the whole oceans to themselves, and they would have evolved at first by struggling to remain stable against perturbation.

                I'd like to know your view, so that I can build a positive feedback loop.

                  Hi Erik

                  Thanks for that.

                  Well, I have been persuaded that it may be better to talk about causation as horizontal, emergence as bottom-up, and realisation as top-down. Partly this is to do with the three different (interrelated) aspects of emergence: evolutionary, developmental, and functional. The first two are diachronic and the last synchronic. It is in the third case that the issue of supervenience arises.

                  However, what remains important is that it is the higher levels that decide what will be done and the lower levels that carry out the work, which your group has discussed in terms of higher levels having greater causal powers than lower levels. That is a key aspect.

                  Best regards

                  George

                  (they log you out after a while I think and you have to log in again)