Hey Stefan! Sorry I accidentally posted this comment to you - it's addressed to the other Stefan below (lots of Stefans, all with good essays!)

Just finished and voted for your essay: "In Search of the Meaning of Meaning." I agree with your setup of the problem. I certainly agree that one of the big missing ingredients is consciousness, and we don't exactly know what a theory of consciousness would look like right now (although check out Integrated Information Theory for the best one yet, in my biased opinion). You clearly argue that the eliminativist position on consciousness entails the elimination of goals and meaning, which I would generally agree with. Although I wouldn't agree that we need to bring god into the equation: I think consciousness is mysterious enough! I'd like to see your statement about propositional logic worked out more fully, although I agree that it's possible that some things only really exist at the macroscale.

Sorry again for the Stefan-related mixup - thanks for the comment and the essay!

Hey James - thanks so much for the compliment.

I think you're right that the case of Romeo isn't explored enough, although I disagree that it fails as an example of a certain type of causal emergence.

If I had to sum up my point with Romeo's brain, it's that causal relationships don't always inhere locally in the system itself: if you considered the system in isolation, you'd totally miss that there are causal relationships *within* that system (or between parts of it). In the language of analytic philosophy, I'd say that causal relationships that don't supervene locally (of which Romeo's brain is an example) are those we should call teleological. So in this sense, the causal path between his desire to kiss and the act of kissing *is* deterministic (if you trigger the desire to kiss, Romeo inevitably makes his way to his Juliet). I'm not sure exactly what you mean in saying that the relationship presupposed all the causal steps he takes to achieve his end. I don't think it had to, because the causal steps are precisely multiply realizable (the path is variable). I suspect that this (reasonable) disagreement may come down to semantics: should we really call this teleology, or just the appearance of it?

Thanks so much for your thoughtful comment James!


Hi George - so great to talk to you and so nice of you to comment.

You're right that your essay makes a very nice complement - just read and voted for it now. I particularly liked your focus on biomolecules as logic gates.

I was interested in something you said in the comments of your own essay, which is that you no longer use the term top-down causation. I talk a bit about this in the technical notes: I agree that the term top-down causation can be confusing. Most people think of it as: if x supervenes on y at time t, then x determines (or influences) the state of y at that same time point t. But this is a logical impossibility. So I think the layering analogy is more apt for describing what's really going on. The challenge to the layer cake hypothesis (all causal structure is spread across different spatiotemporal scales) is making sure that gerrymandered or redundant scales aren't included: that entities aren't multiplied beyond necessity.

All the best - thanks so much for the read and the essay,

Erik P Hoel

Not sure why I got logged out and posted as anonymous (I keep having technical problems with my FQXi comments haha) but that's really me!

https://en.m.wikipedia.org/wiki/Complex_system

Two questions:

Why are the terms linear/nonlinear absent from the text, and why use 'information' (more useful in digital representation) rather than signal/noise (more adequate to the analog world we inhabit), along with concepts like negative feedback, transfer functions, etc.?

The low-level description obeys linear equations, where superposition rules are strict. The higher levels are nonlinear by nature or by construction; e.g., hysteresis, losses, and noise are non-reversible. Any transistor, diode, flip-flop, magnet, etc. is an example. DNA is digital and coded/decoded with transcription errors included (I take it as a law: a huge amount of data always contains a large quantity of errors/noise). The components of all digital devices are, at the low level, analog nonlinear components.

About the goal: is it real or apparent? A roly-poly toy, or navy ships: do they want to stay upright? It appears so, but in reality they are forced to stay up by construction. The same can be said about floating icebergs: no designer, no goal...

https://en.m.wikipedia.org/wiki/Roly-poly_toy

My viewpoint is based on IT and electronic engineering foundations, by training and practice.

    Hey Natesh! Thanks for getting in touch. So excellent to hear that you're interested in causal emergence - I totally agree that working with simple, definable systems (like Markov processes) is the way to go. We can all wave our hands about emergence until the end of time, but until you really drill down and give proof-of-principle examples I think it's always going to be wishy-washy. So I really appreciate the rigorous approach in your own essay (which I just read and voted for). It's going to take me a few reads to grok all the math (I've been meaning to get more into Friston too; your essay is a nice complement to his ideas).

    I was especially interested in your statement that "We can view the upper levels of the hierarchical model in the brain as the source of only intentions and make a strong case that intention is physical." I would like to see that worked out directly: looking at upper vs. lower levels and seeing how dissipation is being done at each scale.

    All the best - glad you got in contact, and thanks for the read and the essay,

    Erik P Hoel

    Hi Erik,

    first and foremost: congratulations on an excellent essay! I was unaware of your work on causal emergence; I will proceed to reading it posthaste. I had also had Judea Pearl's monograph on causality on my reading list for quite a while; I have now bumped it up.

    My only criticism, but it may be due to something I have misread, is that you equate purposive agency with causal emergence but never clearly state it. It feels as if you have the answer to the initial question but stop just short of explicitly stating it (even though it can be rather readily inferred).

    In my (less learned) essay I also rely on mutual information to see agency emerging through its intersection with purpose as a process, but I do so within the framework of the Information Theory of Individuality (which claims notably that the levels can be detected without a priori knowledge of their existence). I would be curious to hear your thoughts as to how ITI relates to causal emergence.

    Finally, you say that "Struck by Cupid's arrow, Romeo will indefinitely pursue his goal of kissing Juliet, and to the experimenter's surprise Sd will inexorably, almost magically, always lead to Sk." Would you conclude that this indefinite pursuance of love constitutes Romeostasis? (Sorry. Really, really sorry.)

    Thanks a lot!

      Hi Robin - thanks for commenting. Never apologize for a pun, I loved it.

      In terms of purposive behavior and causal emergence; causal emergence can occur without purposive behavior. But I think purposive behavior couldn't exist without there being accompanying causal emergence. I hope it's clear that I think agents causally emerge, assisted by their purposive behavior.

      I just read and voted for your essay, and I think it is actually a great overview of some really serious issues. Good to see Smolin, Krakauer, and Braitenberg all tied together in one essay. As to your question about ITI (which I had not heard of until now, so thank you): I remember meeting Krakauer in 2016, and he briefly said it was impossible for there to be any extra information at the macroscale (which, if you're only considering macroscales as zipped compressions, is definitely true; however, the theory of causal emergence points out that they can be encodings, not just compressions), so I know he didn't have causal emergence in mind in defining ITI. However, I do think ITI sounds useful for defining the boundaries of systems (another choice is its anagram, IIT: Integrated Information Theory).

      Thanks so much for your comment and your essay!

      Erik P Hoel

      Hi Helder!

      Good question about why I'm using discrete and finite systems to formalize causal emergence, rather than analog concepts (like feedback, etc.). The first reason is that this allows supervening scales to be easily defined and modeled. For instance, one can generate the full space {S} of possible supervening descriptions of any particular system, and then search across that space, as we did in Hoel et al. (2013), "Quantifying causal emergence." Another reason is that information-theoretic quantities, such as mutual information, are most often defined between two finite and discrete variables. A third is that the causal calculus of Pearl is also often represented in terms of Markov chains. So showing how these all can be synthesized is much more direct in these types of systems (applicable to things like cellular automata, etc.).
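To illustrate concretely what "generating the full space of supervening descriptions" can mean for a finite system, here is a minimal sketch (my own illustration, not code from the paper): every coarse-graining of a discrete state space is a set partition of its states, so for a small system you can enumerate all of them and treat each as a candidate macro scale to search over.

```python
def partitions(states):
    """Recursively generate every set partition of `states`;
    each partition is one candidate supervening (macro) description."""
    if not states:
        yield []
        return
    first, rest = states[0], states[1:]
    for part in partitions(rest):
        # Put `first` in its own new macro state...
        yield [[first]] + part
        # ...or merge it into each existing macro state in turn.
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

micro_states = [0, 1, 2, 3]
space = list(partitions(micro_states))
print(len(space))  # 15 candidate descriptions (the Bell number B_4)
```

For four micro states this space has only 15 members, but it grows very quickly (Bell numbers), which is part of why restricting to finite, discrete systems keeps the search tractable.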

      But this doesn't mean that linearity/nonlinearity and related concepts don't come into play; they just weren't addressed in this essay. See Hoel (2016), "When the map is better than the territory," for a discussion of how symmetry breaking is critical for causal emergence.

      Thanks so much for reading!

      Erik P Hoel

      Dear Erik,

      Great essay! The way you address physical entities, called agents, seems to be related to what I called operators, as I defined in my essay:

      http://fqxi.org/community/forum/topic/2846

      Both of them are defined with autopoiesis functions in mind, though I isolate a specific type of reaction which I believe gave birth to life on earth: benchmarks where chemical clocks regulate themselves. They'd have the whole oceans to themselves, and they'd evolve at first by struggling to stay stable against perturbation.

      I'd like to know your view, so that I can build a positive feedback.

        Hi Erik

        Thanks for that.

        Well, I have been persuaded that it may be better to talk about causation as horizontal, emergence as bottom-up, and realisation as top-down. But partly it's to do with the three different (interrelated) aspects of emergence: evolutionary, developmental, and functional. The first two are diachronic and the last synchronic. It is in the third case that the issue of supervenience arises.

        However, what is important is still that it is the higher levels that decide what will be done and the lower levels that carry out the work, which your group have discussed in terms of higher levels having greater causal powers than lower levels. That is a key aspect.

        Best regards

        George

        (they log you out after a while I think and you have to log in again)

        Dear Erik,

        Excellent essay; I liked it very much, both how it is written and the ideas. The result that open emergent systems can win the fight with the underlying, more fundamental level (which would apparently give us every reason to think they would be very unstable) seems to me a breakthrough, a long-awaited answer to an important question. Congratulations! If I understand well, this solves the tension between fundamental lower levels and emergent levels, without the latter having to break the causality of the possible microstates of the former, by using loops that include the environment. A similar tension, not necessarily related to agents with goals, happens between the classical level and the quantum level, where the quantum level determines the classical level but at the same time is constrained by it. Unfortunately, in this case it seems there is no way to resolve the tension without the quantum level giving up in the face of the classical level, via wavefunction collapse (I think this has some problems, e.g. it breaks the conservation laws, but there is another way, which I explained in this older essay).

        Best regards,

        Cristi Stoica

        The Tablet of the Metalaw

          Dear Erik,

          I really enjoyed Part 4, "Teleology as breaks in the internal causal chain," so thank you.

          Is it essentially an account of how causes can cause us to think there is purpose in causes? If our brain were in a vat, we wouldn't have teleology, much like we wouldn't know about stuff far away from the vat?

          Thanks Jack

          http://fqxi.org/community/forum/topic/2722

            Thanks so much Cristi - so glad you found it enjoyable. You immediately hit on one of the most interesting questions of this research: how do we relate the causal work of the microstates to that of the macrostates? We don't want to multiply entities beyond necessity and have things be overdetermined. There are a few different options - you're right that when it comes to teleological causation (as outlined here) there's less conflict. In general, causal overfitting and underfitting are nice schemas that outline how it may be non-overlapping in some cases. I give two further options in the endnotes: supersedence (macro entirely constrains or controls micro) or layering (macro contributes what it does above and beyond the micro, but micro also contributes). Both of these are viable positions: we argued supersedence in the first paper on causal emergence (Hoel et al. 2013) and I argued layering in the second (Hoel 2016).

            I just read and greatly enjoyed your own essay - great explanation of how to "zoom" in and out of the different scales and what that means in terms of coarse-grains and thermodynamics. At some point the research on causal emergence should be connected to thermodynamics, given exactly what you're talking about.

            Thanks again!

            Erik P Hoel

            Thanks Daniel, I appreciate it.

            I made a comment on your essay so we can have the discussion there - thanks for linking!

            Erik P Hoel

            Dr. Hoel,

            You have composed a very impressive discussion about the workings of agents and the role they play in pursuing goals and intentions. Your hierarchy of science, in some ways, parallels the line of thought I chose to develop in my essay.

            One thing that I seem to miss (although it may be there and I am just not aware of it) is that your central theme is 'How agents causally emerge from their underlying microphysics,' but you never really address the theme. You never say how they emerge; at least, I did not see it in the essay. In fact, in your abstract you state,

            "I argue that agents, with their associated intentions and goal-oriented behavior, can actually causally emerge from their underlying microscopic physics,"

            but in Section 5 of your essay you say,

            "Ultimately, this means that attempting to describe an agent down at the level of atoms will always be a failure of causal model fitting."

            The two statements appear contradictory to me. I will concede that I am not knowledgeable in this field, and the consistency may either escape me or be beyond me.

            Can you say in a brief summary paragraph how agents causally emerge from their underlying microphysics?

            Regards,

            Bill Stubbs.

              Dear Erik,

              Beautiful essay, congratulations.

              Your agents emerge from the quantum state; my reality (including agents) emerges from the state below the quantum scale, behind the wall of Planck.

              So the quantum scale is also an emergent phenomenon.

              I wonder what your valued thoughts are on my contribution "The Purpose of Life".

              best regards

              Wilhelmus de Wilde

                Thanks very much Jack.

                To answer your question: I wouldn't say that the brain in a vat has no teleology. I'd say it does; it's just that its causal relationships don't locally supervene (you couldn't find them no matter how hard you looked). So it's precisely by comparing brains in vats to brains in bodies that you see the causal structure of the brain is actually much richer than it first appears in isolation. It's those non-locally supervening causal relationships that are teleological (or, if you want to hedge, merely appear teleological).

                Will check out your essay post-haste,

                All the best,

                Erik P Hoel

                Thanks for the read Bill! I'm not sure why those two things would be contradictory in your mind. But I suspect it might have something to do with how I'm using the word "emergence" and what you associate with it.

                It's worth noting that emergence can be used semantically in two ways (above you can see George Ellis's comment about this as well). You can use it to say "the patterns emerged from the simple chemical interaction." In this manner it usually means getting something complex from something very simple: it's fundamentally historical. This isn't the usage herein.

                There is another way to talk about emergence. For instance, if I had a bunch of NOR logic gates, and I hooked them up to make a complicated circuit that enacts many different kinds of logic functions (like ORs and ANDs and NOTs), you would say that the circuit and other logic functions "emerged" from the underlying NOR gates. This is the way I use emergence in the paper.
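As a minimal illustration of this second sense of emergence (my own sketch, not from the essay): NOR is functionally complete, so NOT, OR, and AND all "emerge" from nothing but wired-together NOR gates.

```python
def NOR(a, b):
    """The single primitive gate at the 'micro' level."""
    return not (a or b)

def NOT(a):
    # One NOR gate with both inputs tied together.
    return NOR(a, a)

def OR(a, b):
    # NOR followed by an inversion (a second, identical NOR).
    return NOT(NOR(a, b))

def AND(a, b):
    # De Morgan: a AND b == NOR(NOT a, NOT b).
    return NOR(NOT(a), NOT(b))

# The composed gates reproduce the standard truth tables.
for a in (False, True):
    for b in (False, True):
        assert OR(a, b) == (a or b)
        assert AND(a, b) == (a and b)
        assert NOT(a) == (not a)
```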

                Since you asked, here's a brief summary of causal emergence (takes in a deep breath): the causal structure of a system can be treated mathematically as a communication channel over which states are sent over time (much like sending messages), and it turns out that describing/observing/intervening upon the system at a higher scale can actually make the channel transmit more information, because these higher-scale descriptions are a form of channel coding. Whoof!
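To make the channel analogy concrete, here is a minimal sketch of effective information (EI) in the spirit of Hoel et al. (2013): EI is the mutual information between a uniform (maximum-entropy) intervention distribution over states and the resulting next-state distribution. The particular four-state toy system below is my own illustration, not an example from the essay; its noisy micro level scores lower than a deterministic coarse-graining of it.

```python
import numpy as np

def effective_information(tpm):
    """EI = I(X; Y) where X ~ uniform over states (do(X) = maxent
    interventions) and Y is the next state drawn from the TPM rows."""
    n = tpm.shape[0]
    p_x = np.full(n, 1.0 / n)
    p_y = p_x @ tpm                      # marginal next-state distribution
    h_y = -np.sum(p_y[p_y > 0] * np.log2(p_y[p_y > 0]))
    h_y_given_x = 0.0
    for i in range(n):                   # average row entropy (noise)
        row = tpm[i][tpm[i] > 0]
        h_y_given_x += p_x[i] * -np.sum(row * np.log2(row))
    return h_y - h_y_given_x

# Micro scale: states {00, 01, 10} hop uniformly among themselves
# (noisy), while state {11} maps to itself deterministically.
micro = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Macro scale: group {00, 01, 10} -> "off" and {11} -> "on";
# the coarse-grained dynamics become fully deterministic.
macro = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
])

print(effective_information(micro))  # ≈ 0.81 bits
print(effective_information(macro))  # = 1.0 bit: the macro "code" transmits more
```

The grouping acts like an error-correcting code over the channel: the macro description averages away the micro noise, so interventions at the macro scale carry more information about what happens next.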

                Thanks very much Wilhelmus. I didn't explore anything down at the quantum level for my own essay. To me it seems that relying on quantum effects to explain agents is like trying to solve a single mystery by combining two mysteries, which generally just makes everything even more mysterious, but I'm eager to read your essay and find out. I will check it out there.

                All the best,

                Erik P Hoel