Essay Abstract

In goal-directed behavior, a large number of possible initial states end up in the pursued goal. The accompanying information loss implies that goal-oriented behavior is in one-to-one correspondence with an open subsystem whose entropy decreases in time. Yet ultimately, the laws of physics are reversible, so entropy variations are necessarily a consequence of the way a system is described. In order to reconcile different levels of description, systems capable of yielding goal-directed behavior must transfer the information about initial conditions to other degrees of freedom outside the boundaries of the agent. To operate steadily, they must consume ordered degrees of freedom provided as input, and dispose of disordered outputs that act as waste from the point of view of the aimed objective. Hence, broadly speaking, goal-oriented behavior requires metabolism, even if conducted by non-living agents. Here I argue that a physical system may or may not display goal-directed behavior depending on what exactly is defined as the agent. The borders of the agent must be carefully tailored so as to entail the appropriate information balance sheet. In this game, observers play the role of tailors: they design agents by setting the limits of the system of interest. Their computation may be iterated to produce a hierarchy of ever more complex agents, aiming at increasingly sophisticated goals, as observed in Darwinian evolution. Brain-guided subjects perform this creative observation task naturally, implying that the observation of goal-oriented behavior is a goal-oriented behavior in itself. Minds evolved to cut out pieces of reality and endow them with intentionality, because ascribing intentionality is an efficient way of modeling the world and making predictions. One most remarkable agent whose goal-pursuing attitude we have indisputable evidence of is the self. Notably, this agent is simultaneously the subject and the object of observation.

Author Bio

Ines Samengo has a PhD in physics, after which she switched to computational neuroscience, with an HFSP postdoc with Prof. Alessandro Treves (Trieste) and then an Alexander von Humboldt fellowship with Prof. Andreas Herz (Berlin). She presently works in Bariloche, Argentina, as a researcher at CONICET, applying information-theoretical tools and dynamical-systems theory to the analysis of neural activity in behaving animals, aiming to disclose the relevant features in the encoding and transmission of sensory information. She is also a professor at Instituto Balseiro, in charge of "Probability and Stochastic Processes" and "Information Theory" in Telecommunications Engineering.


Professor Samengo,

I truly enjoyed reading your essay. I suspect that you would be interested in "apoptosis" in the olfactory system, which is always learning new smells.

This system depends on the breath, begins with the scent of self, involves neurons, involves consciousness, involves intention. For example, a person whose olfactory system has not become habituated will have the goal of purchasing a home that is not downwind of smelly factories. Prey learn the scent of the wolf. Animals learn the scent of death.

The authors of the following papers, which include the mathematician Keith Devlin, describe a "two-sorted logic." On the LHS of their equations we see models supporting languages of physics, involved with other-- and "higher"-- level models which in the limit support languages of intention, goals, the self. All with respect to the example of the olfactory system.

Clearly, you speak many such languages. The papers are here and here.

I also suspect you might like the inverse relationship principle of "informationalism":

"The main idea of informationalism is to take the inverse relationship between information and possibility as a guiding tenet. The Inverse Relationship Principle: Whenever there is an increase in available information there is a corresponding decrease in possibilities, and vice versa." Jon Barwise, Information and Impossibilities.

When I read your essay, I imagined a reader considering the above statement after almost every paragraph wherein you write about decreasing entropy.
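As a toy illustration of the Inverse Relationship Principle (my own sketch, not from Barwise's paper, and the 16-state setup is entirely hypothetical): each bit of information received about an unknown state rules out half of the remaining possibilities, so the entropy of the candidate set falls one bit at a time.

```python
import math

# Hypothetical illustration: the unknown "state" is one of 16 equally
# likely possibilities, and the true state happens to be 5.
possibilities = list(range(16))
true_state = 5

def entropy(options):
    """Shannon entropy (bits) of a uniform distribution over `options`."""
    return math.log2(len(options))

# Each observed bit of information rules out the possibilities
# inconsistent with it: information up, possibilities down.
for k in range(4):
    bit = (true_state >> k) & 1  # one new bit of information
    possibilities = [s for s in possibilities if (s >> k) & 1 == bit]
    print(f"after {k + 1} bit(s): {len(possibilities)} possibilities, "
          f"entropy = {entropy(possibilities):.0f} bits")
```

After four bits the candidate set has collapsed to the single true state and the remaining entropy is zero, which is the "decrease in possibilities" the principle describes.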

But I can't say I agree with you on every point you make. I will say that I have no wish to argue-- I just want to make a friendly comment. So where you say:

"The construction of knowledge can be argued to be the essence of mind."

I would say-- but only to myself:

"The construction of beliefs can be argued to be the essence of mind. But in my experience, the essence of the self is to know."

I've attempted to apply the inverse relationship principle to physics here.

Very Best Regards,

Lee Bloomquist

    Thanks for your comments, I will follow the trails you mention about olfaction and informationalism. Best!

    Dear Professor Samengo,

    Thanks for your well written and insightful essay. Wonderful to read the perspectives of one in your field of research.

    In your essay, you emphasize the importance of the observer in prescribing agency. However, among all of the examples you mention, only the self-driving car is not itself an animate entity. But I think that a strong argument can be made that the self-driving car is an extension of human agency. That is, it's not that we simply ascribe agency to a machine; the machine is a complex expression of our own agency. My point, then, is that rather than agency being determined by an observer, agency is an intrinsic characteristic of any animate entity. That characteristic is a unique level of complexity that might be described as 'regulation' or even 'control'. Regulation of what?...of inner and outer conditions (i.e. body and environment). It is this non-material expression of 'regulation' that defines intrinsic agency, and it is such only because it does not depend on being observed. Would you disagree?

    Thanks again for your essay, and if you get a chance, I would appreciate your thoughts on my own.

    http://fqxi.org/community/forum/topic/2790

    Yours,

    William Ekeson

      Hi, thanks for your thoughts! In my essay, I focused on systems exhibiting agency (those where entropy decreases) in general terms. I avoided focusing on animate systems because, outside religious beliefs, I would not truly know how to define them. Where to draw the line? A virus can be thought of as a type of robot, albeit not constructed by us: a small machine devoted to well-programmed actions (self-replication). And from viruses to humans there seems to be a continuum of animate-ness. Man-made robots, moreover, do not always express our own agency; they can be complex enough to have emergent behaviour we never intended, not only in the bad sense (by malfunctioning), but also in the good sense, surpassing our expectations. Think of computers that play chess better than humans. We do not even know how they do it, because they learn from their own experience and their own mistakes. Their strategy gets hard-wired into their plastic architecture, but not in a way we can explicitly read out. I do not believe, however, that they are animate, not yet at least, and I dare not speak of the future. I therefore believe it is not a bad idea to first try to understand agency, because I believe it is a simpler problem than consciousness.

      So did you go for the big question? I will surely read your essay in due time!

      Thanks again,

      ines.

      Hi Ines,

      Thanks for your reply.

      Hmm. I had never thought of the term 'animate' as having religious connotations...I should check my bible more often ;) However, I do like your way of describing them as systems where entropy decreases...at least to an extent.

      That is, don't viruses then qualify as systems exhibiting agency? Clearly, their action expresses regulation/control over their inner and outer environs. I don't have a problem defining them as animate entities, albeit very simple ones. For me, another requirement for true agency is that their main objective is intrinsic survival (both individually and generationally). That is, their intrinsic inner and outer surroundings must be directly involved in the process of evolution...only this way can their entropy truly decrease. Thus, the virus qualifies but the man-made machine does not. That is, although the machine can perform functions that decrease entropy, it cannot do so such that its inner and outer surroundings are fundamentally oriented towards its own perpetuation. My brother drives a backhoe. Such machines can do things that no human being can do, but that does not make them animate beings. In the same way, we can make machines that might 'think' much better than ourselves, but I don't think they have any more agency than a backhoe. Although, I think such agency could become possible in a man-made cyber-universe where some cyber-equivalent to a relationship between inner and outer environs gains the capacity to regulate those environs. What the status of such a cyber-agency would be in our material universe will be a question for folk who are younger than I. I would love to hear your response to the above.

      Cheers,

      William


      Professor Samengo,

      Quite a cogent piece that raises deep questions about self, goals, and the relations of the animate and inanimate. "Physics does not make sense, observers make sense of it." and "Life may not even be fundamentally different from non-life." Your opening paragraph intermixes images of life (dogs, owls, soccer players) and non-life (self-driving cars), though the latter is a construct of humans with a programmed goal. Your following paragraphs tie the examples together quite nicely.

      "The notion of goal-oriented behavior that is used here always brings about an entropy reduction" Generally, living organisms seek order, but as we age we lose order. Reproduction is a way of sustaining a replacement order for your DNA which provides a solid foundation for storing and exhibiting order?

      If life becomes extinct on Earth, we transfer meaning to other intelligent life residing in far away planets?

      Thanks for a thought-provoking essay.

      Jim Hoover

        I guess you focus the goal on self-perpetuation, thereby imposing some qualitative constraints, whereas I work with just the quantitative constraint of entropy reduction. I do so because I have no clear notion of what the ultimate requirements to be alive are, so I am not sure how to define survival. Is a single RNA molecule that self-replicates alive? Ideas and cultural traits can self-replicate; they are born, and they sometimes disappear. Computers surely do self-replicate (both the software and the hardware), and they can be created and destroyed. Feeling insecure in this realm, I guess I prefer to impose no qualitative restrictions...

        Thanks for your thoughts!

        ines.

        Hi, James,

        > Generally, living organisms seek order, but as we age we lose order

        I sadly agree...

        > Reproduction is a way of sustaining a replacement order for your DNA which

        > provides a solid foundation for storing and exhibiting order?

        I would say yes, and add that we can not only reproduce biologically (making children), but also culturally: raising children, interacting with other people, discussing essays...

        > If life becomes extinct on Earth, we transfer meaning to other intelligent

        > life residing in far away planets?

        I wish I knew... I just keep hoping so! Another alternative is that we evolve into something else that can make something useful with what we have constructed thus far.

        Thanks for your comments!

        inés.

        Professor Samengo,

        Hope you check out my essay.

        Jim

        Ines,

        GOOOOOAAAAAALLLLLLLL!

        Many artists attempt to draw self-portraits. So they draw themselves. Then they draw themselves drawing themselves and they try to do this recursively. You have observed yourself observing yourself. Bravissima!

        A concept I had not adequately considered is the ability of the observer to define the system boundaries ... as a chemical engineer, I have used this concept thousands of times to solve heat and material balances, but I had never thought about it in the way that you describe. It seems to be a very central concept.

        Best Regards and Good Luck,

        Gary Simpson

          dear ines,

          "Therefore, entropy reduction and goal-oriented behavior are in a one-to-one correspondence."

          Hooray! It is a relief to find someone else who has this same premise as part of their essay. I also love that you use Maxwell's demon: it's such a well-known and simple model that helpfully demonstrates how entropy can seemingly be beaten with simple rules.
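          For what it's worth, the demon's "simple rules" are easy to simulate. Below is a rough toy sketch (my own construction, not taken from the essay; all names, particle counts, and the 0.5 speed threshold are arbitrary choices): the demon sorts fast and slow particles into separate chambers, lowering the gas's configurational disorder, while its record of measurements keeps growing, which is where the second law collects its due.

```python
import random

random.seed(0)  # deterministic toy run

# Particles with random "speeds" start mixed across two chambers.
left = [random.random() for _ in range(50)]   # speeds in the left chamber
right = [random.random() for _ in range(50)]  # speeds in the right chamber
memory = []                                   # the demon's measurement record

def sorted_fraction(left, right, cut=0.5):
    """Fraction of particles on the 'correct' side: slow-left, fast-right."""
    ok = sum(v < cut for v in left) + sum(v >= cut for v in right)
    return ok / (len(left) + len(right))

# The demon's simple rule: open the trapdoor only to let fast particles
# pass left-to-right and slow particles pass right-to-left.
for _ in range(10000):
    side, gas = random.choice([("L", left), ("R", right)])
    if not gas:
        continue
    v = gas[random.randrange(len(gas))]  # particle arriving at the door
    memory.append(v >= 0.5)              # demon measures: fast or slow?
    if side == "L" and v >= 0.5:
        gas.remove(v); right.append(v)   # fast particle goes right
    elif side == "R" and v < 0.5:
        gas.remove(v); left.append(v)    # slow particle goes left

print(f"sorted fraction: {sorted_fraction(left, right):.2f}")
print(f"bits recorded by the demon: {len(memory)}")
```

          The gas ends up almost perfectly sorted by a rule a few lines long, but only because the demon has accumulated thousands of recorded bits; erasing that memory is what restores the second law's books, in the spirit of Landauer.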

          I was, however, hoping that you would be able to connect these things to answer the essay's main question (or a variant of it), "how may mathematical laws give rise to aims and intentions", and would ask whether, in retrospect since your essay's submission, you have had any further insights on this question?

            Ha, ha! Touché! You caught me in an infinite loop of recursive narcissism! I will look into your essay as soon as I find a bit of time...

            best! ines.

            I will certainly do so, just give me a bit of time, it's hard to keep up. Best! ines.

            Professor Ines,

            A very interesting essay. I enjoyed reading it a lot. I do agree with you on the importance of observers and the observer's demarcation of what the system is. I thought the following line was very interesting: "What is interesting in goal-directed behavior if the observer is allowed to engineer the very definition of the agent, in order to get the desired result? Plants grow because what we define as a plant is the stuff that grows every spring, and not the dirt left on the ground every autumn". I think we will both agree that information theory has tremendous potential to provide the missing links.

            I will have to ponder the idea of ascribing goals to any entropy reduction in a system. I am wondering if that is too narrow a definition. After all, a (conscious) observer should be capable of describing a system as performing a computation (and hence as having the goal of performing that computation), even with no entropy change(?)

            I do think we agree on some fundamental ideas. I describe emergence of goal and intentions in physical systems as a tradeoff between dissipation and complexity. In the spirit of Landauer, I have a submission titled 'Intention is Physical'. Due to lack of space, I did not have the chance to talk about the role of observers, but I have a few ideas on how such observers who ascribe goals might emerge in the first place (to be left for future work). I would love your comments and feedback if you have the chance to read it.

            Cheers

            Natesh

              Hi Natesh, I actually read your essay, and liked it a lot. Congratulations, good work!

              "Religions put belief first." by Lee Bloomquist.

              Lee: There is an elegance in your brevity.

              Perpetual attempts to discover our ignorance in every successful step are the best way to evolve towards ontological reality. However, we betray ourselves when we fall in love with a success path that appears to validate a theory, while ignoring that experimental evidence is only the limited response of the chosen interactants in our setup. This eventually leads us to develop an abiding "belief" in a "working" theory!

              ChandraSekhar Roychoushuri