Dear Erik,

I really enjoyed Part 4, "Teleology as breaks in the internal causal chain," so thank you.

Is it essentially an account of how causes can cause us to think there is purpose in causes? If our brain were in a vat, we wouldn't have teleology, much like we wouldn't know about things far away from the vat?

Thanks Jack

http://fqxi.org/community/forum/topic/2722

    Thanks so much Cristi - so glad you found it enjoyable. You immediately hit on one of the most interesting questions of this research: how do we relate the causal work of the microstates to that of the macrostates? We don't want to multiply entities beyond necessity and have things be overdetermined. There are a few different options - you're right that when it comes to teleological causation (as outlined here) there's less conflict. In general, causal overfitting and underfitting are nice schemas that outline how it may be non-overlapping in some cases. I give two further options in the endnotes: supersedence (macro entirely constrains or controls micro) or layering (macro contributes what it does above and beyond the micro, but micro also contributes). Both of these are viable positions: we argued for supersedence in the first paper on causal emergence (Hoel et al. 2013) and I argued for layering in the second (Hoel 2016).

    I just read and greatly enjoyed your own essay - great explanation of how to "zoom" in and out of the different scales and what that means in terms of coarse-grains and thermodynamics. At some point the research on causal emergence should be connected to thermodynamics, given exactly what you're talking about.

    Thanks again!

    Erik P Hoel

    Thanks Daniel, I appreciate it.

    I made a comment on your essay so we can have the discussion there - thanks for linking!

    Erik P Hoel

    Dr. Hoel,

    You have composed a very impressive discussion about the workings of agents and the role they play in pursuing goals and intentions. Your hierarchy of science, in some ways, parallels the line of thought I chose to develop in my essay.

    One thing that I seem to miss (although it may be there and I am just not aware of it) is that your central theme is 'How agents causally emerge from their underlying microphysics,' but you never really address that theme. You never say how they emerge; at least, I did not see it in the essay. In fact, in your abstract you state,

    "I argue that agents, with their associated intentions and goal-oriented behavior, can actually causally emerge from their underlying microscopic physics,"

    but in Section 5 of your essay you say,

    "Ultimately, this means that attempting to describe an agent down at the level of atoms will always be a failure of causal model fitting."

    The two statements appear contradictory to me. I will concede that I am not knowledgeable in this field, and the consistency may either escape me or be beyond me.

    Can you say in a brief summary paragraph how agents causally emerge from their underlying microphysics?

    Regards,

    Bill Stubbs.

      Dear Erik,

      Beautiful essay, congratulations.

      Your agents emerge from the quantum state; my reality (including agents) emerges from the state below the quantum scale, behind the wall of Planck.

      So the quantum scale is also an emergent phenomenon.

      I wonder what your valued thoughts are on my contribution, "The Purpose of Life."

      Best regards,

      Wilhelmus de Wilde

        Thanks very much Jack.

        To answer your question: I wouldn't say that the brain in a vat has no teleology. I'd say it does; it's just that its causal relationships don't locally supervene (you couldn't find them no matter how hard you looked). So it's precisely by comparing brains in vats to brains in bodies that you see the causal structure of the brain is actually much richer than it first appears in isolation. It's those non-locally supervening causal relationships that are teleological (or, if you want to hedge, merely appear teleological).

        Will check out your essay post-haste,

        All the best,

        Erik P Hoel

        Thanks for the read Bill! I'm not sure why those two things would be contradictory in your mind. But I suspect it might have something to do with how I'm using the word "emergence" and what you associate with it.

        It's worth noting that emergence can be used semantically in two ways (above you can see George Ellis's comment about this as well). You can use it to say "the patterns emerged from the simple chemical interaction." In this manner it usually means getting something complex from something very simple: it's fundamentally historical. This isn't the usage herein.

        There is another way to talk about emergence. For instance, if I had a bunch of NOR logic gates, and I hooked them up to make a complicated circuit that enacts many different kinds of logic functions (like ORs and ANDs and NOTs), you would say that the circuit and other logic functions "emerged" from the underlying NOR gates. This is the way I use emergence in the paper.

        Since you asked, here's a brief summary of causal emergence (takes in a deep breath): the causal structure of systems can be treated mathematically as a communication channel over which states are sent over time (much like sending messages), and it turns out that describing/observing/intervening upon the system in terms of a higher scale can actually make the channel transmit more information, because these higher-scale descriptions are a form of channel coding. Whoof!
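        To make that concrete, here's a minimal sketch in Python. The transition matrix is my own illustrative toy (in the style of the noisy examples from the 2013 paper), not a system from the essay itself; it computes effective information (the mutual information between states under uniform interventions) at two scales:

```python
import numpy as np

def effective_information(tpm):
    """I(S_t; S_t+1) when the system is intervened upon uniformly,
    i.e. every state is injected with equal probability."""
    p_out = tpm.mean(axis=0)  # output distribution under uniform input

    def entropy(p):
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    # EI = H(output) - average entropy of each state's transition row
    return entropy(p_out) - float(np.mean([entropy(row) for row in tpm]))

# Micro: 8 states. States 0-6 hop uniformly among themselves (pure noise);
# state 7 maps to itself deterministically.
micro = np.zeros((8, 8))
micro[:7, :7] = 1 / 7
micro[7, 7] = 1.0

# Macro: group {0..6} into one state, {7} into another; both deterministic.
macro = np.eye(2)

print(effective_information(micro))  # ~0.54 bits
print(effective_information(macro))  # 1.0 bit: the macro beats the micro
```

        The coarse-grained description transmits more information than the microscale it supervenes on, which is the "higher-scale descriptions as channel codes" point in miniature.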

        Thanks very much Wilhelmus. I didn't explore anything down at the quantum level for my own essay. To me it seems that relying on quantum effects to explain agents is like trying to solve a single mystery by combining two mysteries, which generally just makes everything even more mysterious, but I'm eager to read your essay and find out. I will check it out there.

        All the best,

        Erik P Hoel

        Thanks George. I definitely agree that emergence can be taken in a historical sense (evolution, development, complexity from simplicity) and a level sense (function, scale, causation). It's only in the latter sense that it involves issues of supervenience.

        I had a question about your comment: what exactly do you mean by "it is the higher levels that decide what will be done and the lower levels that carry out the work?" Can you explain that a bit more? It may illuminate some of our differences in approach. Because if the lower levels are carrying out the causal work in the system, didn't the lower levels really make that previous higher level decision?

        What we are saying is in a sense the opposite of this: causal emergence is only when a macroscale outstrips the microscale in terms of information and causal work, so proving that causal emergence occurs involves directly assessing the causal structure at both the microscale and the macroscale and comparing the two. I think it's really helpful to use simple but well-defined systems like Markov chains for this exact reason: you can derive all the possible supervening scales along with the full causal structure.
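        As an illustration of what deriving a supervening scale can look like in such simple, well-defined systems, here's a hypothetical sketch (the matrix and partition are my own illustrative choices, not taken from the essay) of coarse-graining a micro Markov chain into a macro one:

```python
import numpy as np

def coarse_grain(tpm, partition):
    """Macro TPM induced by a partition of micro states: average the
    micro rows within each group (uniform weighting over the group),
    then sum the resulting probabilities over each output group."""
    k = len(partition)
    macro = np.zeros((k, k))
    for i, group in enumerate(partition):
        avg_row = tpm[list(group)].mean(axis=0)
        for j, out_group in enumerate(partition):
            macro[i, j] = avg_row[list(out_group)].sum()
    return macro

# Micro: 4 states. States 0 and 1 mix noisily among themselves,
# as do states 2 and 3.
micro = np.array([[0.5, 0.5, 0.0, 0.0],
                  [0.5, 0.5, 0.0, 0.0],
                  [0.0, 0.0, 0.5, 0.5],
                  [0.0, 0.0, 0.5, 0.5]])

# Group {0,1} -> A and {2,3} -> B: the macro chain is fully deterministic.
print(coarse_grain(micro, [(0, 1), (2, 3)]))  # identity matrix
```

        Enumerating partitions like this is what lets you lay out all the possible supervening scales of a small system alongside its full causal structure.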

        All the best,

        Erik P Hoel

        Dear Erik,

        Thank you for your well written and detailed essay. I thoroughly enjoyed reading it. As others have commented, I particularly agree with your analysis on emergence across multiple levels of scale and particularly liked the way you tied them all together in your conclusion, "purposeless microscale descriptions are like a low dimensional slice of a high dimensional object". I voted on your essay a few days ago, but just thought I'd give you a more detailed reply on how much I enjoyed it.

        Regards,

        Robert

          Thanks so much Robert! Very nice to hear it. Just finished your essay - I greatly enjoyed your breakdown of Maxwell's Demon, and it got me thinking about the long history of attempts to get something from nothing. In Maxwell's case, it's a violation of the 2nd law. I think for a long time people have thought of emergence in that way - it's almost like getting something from nothing, because how could you possibly gain any information or causal work by going up to a macroscale? It seems like squeezing something from nothing, and I think this is the intuitive force behind the "exclusion argument." But there are a few cases where people have figured out how to squeeze something from nothing (metaphorically, obviously). One of those is Claude Shannon's noisy-channel theorem. At first it seems that a really noisy channel can only transmit very small amounts of information. Then Shannon showed that through channel coding the information can be radically increased - without altering the channel! By saying that causal emergence comes from treating a system's causal structure as a channel, and that macroscales are encodings for the channel, I'm piggy-backing on Shannon's "something from nothing" proof. So causal emergence is kind of like getting something from nothing (without altering the system).
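          A toy version of that point (my own illustrative sketch, not Shannon's full block-coding proof; it only shows the narrower fact that how you use a fixed channel changes how much information it transmits) uses a Z-channel, where a transmitted 1 can flip to 0 but a 0 always arrives intact:

```python
import numpy as np

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def mutual_information(p_x1, flip):
    """I(X;Y) for a Z-channel: 0 -> 0 always; 1 -> 0 with prob `flip`."""
    p_y1 = p_x1 * (1 - flip)  # only the surviving 1s arrive as 1
    return binary_entropy(p_y1) - p_x1 * binary_entropy(flip)  # H(Y) - H(Y|X)

flip = 0.5
naive = mutual_information(0.5, flip)  # use the inputs uniformly
# Re-encode: sweep input distributions while the channel itself stays fixed.
best = max(mutual_information(q, flip) for q in np.linspace(0.0, 1.0, 1001))
print(naive, best)  # best > naive: a better encoding transmits more
```

          The channel never changes; only the encoding does, and the transmitted information goes up, which is the sense in which macroscales-as-encodings can "squeeze" more out of the same underlying system.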

          Anyways, just wanted to let you know your essay inspired me to think about it with a new analogy.

          Thanks so much!

          Erik P Hoel

          Dear Erik Hoel,

          just rated your essay and gave it a high score. Your concept of causal emergence is intriguing and you should further investigate it. It also poses interesting teleological questions.

          Best wishes,

          Stefan Weckbach

            Thanks so much Stefan, I'm so glad you found it intriguing. I definitely plan on investigating it further (when I find the time!). In terms of teleology, I think you're right. However, I'm always wary of those kinds of words, so my own personal stance is to try to explain what looks like teleology (apparent teleology) without coming to overt metaphysical conclusions.

            Thanks so much for reading and rating!

            Erik P Hoel

            Hi Erik P. Hoel,

            I offer a complementary suggestion. In addition to pushing along trying to elaborate on the usual assumptions, you might also pause and see what challenges those underlying assumptions. I have an essay that introduces some of the challenges facing the scientific vision of life,

            http://fqxi.org/community/forum/topic/2783

            If nothing else it might introduce some additional puzzles to mull over.

            I hope things are going well for you.

            Ted Christopher

              Thanks so much George! Actually, Larry was on my PhD thesis committee at UW-Madison. He does excellent work.

              There's a handful of analytic philosophers who have thought about these issues, starting with Yablo. There's also List and Menzies, as well as Shapiro and Sober. All these people do incredible work and have all touched on issues related to causal emergence at some point or another, although most are focused more on problems of mental causation. None have, as far as I know, argued explicitly for the theory laid out here and elsewhere.

              One constant problem that I have with this research is the consequence of framing it in terms of the exclusion problem. It's a good way to frame it because it hammers the problem home, but it's a bad way because the exclusion argument is a well-known philosophical issue and people then immediately assume this is a philosophical solution to a philosophical problem. But as I indicate in the essay, I'm using the exclusion argument as a stand-in for a more general issue concerning causal structure, information, and model choice.

              Ultimately, I think this requires a scientific (or mathematical) theory, composed of: A) formalizing supervenience as changes in scale or as highlighting only subsets of the system's state-space; B) some sensitive measure of causation and/or information (I've used information theory and Pearl's causal calculus) that can handle things like noise, is proven to be related to various important causal properties, doesn't give nonsensical answers for simple scenarios, etc.; C) actually checking and proving that B can be higher across the various scales made with A; D) explaining why it's theoretically even possible that the macro can beat the micro; and E) hopefully some applications.

              Originally, in 2013, we argued for D that macroscales reduce the noise in the system (over both the past and the future), and that's how causal emergence occurs. I think there's another interesting way of framing it, which is that macroscales can be thought of as codes (as I argue here and elsewhere), and the macro can beat the micro because of Shannon's noisy-channel coding theorem. Hopefully both these explanations help with E: actual applications.

              Dear Dr. Erik P Hoel,

              Please excuse me for I have no intention of disparaging in any way any part of your essay.

              I merely wish to point out that "Everything should be made as simple as possible, but not simpler." Albert Einstein (1879 - 1955) Physicist & Nobel Laureate.

              Only nature could produce a reality so simple, a single cell amoeba could deal with it.

              The real Universe must consist only of one unified visible infinite physical surface occurring in one infinite dimension, that is always illuminated by infinite non-surface light.

              A more detailed explanation of natural reality can be found in my essay, SCORE ONE FOR SIMPLICITY. I do hope that you will read my essay and perhaps comment on its merit.

              Joe Fisher, Realist

              Hi Ted - thanks so much for stopping by. I strongly agree that we all make assumptions.

              I checked out your essay and was very glad to see you mention Rafael Yuste - he's my principal investigator here at Columbia University. I did want to say that, while I disagree that some of your examples seriously challenge contemporary neuroscience, I absolutely agree with you that little attention has been paid in neuroscience to the consequences of hydrocephalus. If it's true that people are operating normally but have drastically reduced gray matter (such as 10 to 20%), we're going to need to drastically rethink some things. However, if I remember correctly, recent research has questioned these numbers.

              http://blogs.discovermagazine.com/neuroskeptic/2015/07/26/is-your-brain-really-necessary-revisited/#.WMVsVRLyuRs

              Thanks so much for reading!

              Erik P Hoel

              Hi Erik

              Your essay is awesome, you basically crushed it. The point about the kinetics of the system and the signal propagation time setting scales for "identity" was the first point that I found truly insightful. It reminded me of the idea that if we wanted to imagine something crazy like the universe being one big mind, then it would never be able to actually finish a thought, because it is expanding faster than it is possible to send signals back and forth across its entirety, and probably can't cross the percolation threshold for correlations as a result.

              The other point I loved was about needing to include the environment in Romeo's causal structure. In the phenomenological tradition, it has been clear since Husserl that "consciousness" can only ever be "consciousness of phenomena"; it's only since philosophy of mind took a very ahistorical turn against "reductionism", I feel, that we lost sight of this fact.

              All of your other claims are correct, lucid, and I think should be uncontroversial for anyone familiar with modern neuroscience, information theory, or stochastic dynamics. But it's exceptionally well-argued and clear. If you have a moment to look at my own entry at any point, I think we make many of the same arguments, though I wrote in a somewhat different style. In any case I'd appreciate any feedback you could offer.

              Again, total slayer of an essay.

              Joe

              Thanks so much Joe - highly appreciate it. Although I wouldn't say any of this is "uncontroversial"! Try getting funding for it hahaha.

              I just finished your essay, which I enjoyed, particularly your writing. There's lots of stuff going on in there, but I just wanted to mention the part that relates to my own essay here, which is your segment on higher-level explanations. You say: "In this way, we compress our explanations of phenomena, with the useful result that they can be communicated and shared with fewer bits, thus requiring less work to understand."

              I completely agree - this is totally necessary for human communication, or something like science where we communicate facts or data to one another. What I argue in my essay is that there's another possibility for an information-theory metaphor beyond just compression for these types of cases: coding. So higher-level explanations aren't always *merely* compressions; sometimes they are also codes. In addition to being compressed, they also error-correct, meaning they can, in theory, carry more information than whatever underlies them.

              Thanks for commenting, and I enjoyed your essay,

              Erik P Hoel