Hi Larissa,

I was pleasantly surprised reading your essay. It reminded me of "Vehicles" by Valentino Braitenberg, only with the vehicles replaced by animats, which are much more interesting goal-directed creatures.

Many other scientists would be very tempted to settle the essay question simply by saying that the MUH (Mathematical Universe Hypothesis) is true. And I was completely surprised by: "While we cannot infer agency from observing apparent goal-directed behavior, by the principle of sufficient reason, something must cause this behavior (if we see an antelope running away, maybe there is a lion). On a grander scale, descriptions in terms of goals and intentions can hint at hidden gradients and selection processes in nature, and inspire new physical models."

I believe your agenda is something like: let us pursue this concept of agency and see where it takes us. This is the essence of science.

Thanks for your excellent essay,

Don Limuti

Question: Is there a way to "play" with your animats online?

    Dear Don,

    Thank you for your nice comment. The artificial evolution of the animats takes quite a bit of computational power, so there is no easy way yet to play around with them. However, there is a little video of one evolution and the behavior of one animat on http://integratedinformationtheory.org/animats.html

    There is, however, an online interface to calculate the integrated information of little systems of logic gates: http://integratedinformationtheory.org/calculate.html

    Best regards,

    Larissa

    Hi Larissa,

    I wrote you a longer e-mail that I just sent, but in general I found your essay well-written and extremely stimulating. I'm still not entirely convinced that you've answered your own question concerning whether or not systems can have "goals." You suggest that perfect fitness is a goal, but to me, a goal is an internal thing whereas it would seem to me that perfect fitness is largely a response to external stimuli (and by external, I include things like viruses and illness since I'm thinking of goals as related to consciousness here). But maybe I'm wrong. Who knows. Nice essay, though.

    Ian

      Thank you.

      Making a choice based on internal memory, as opposed to being triggered by external events, is certainly a step towards autonomy, but again you need some internal trigger that induces you to look up that good or bad experience in your memory, compare it with the current situation, and decide how to (re)act. You mention that 'doing something for no reason' - perhaps the perfect form of agency - could be achieved with just a little noise inside the system. I also thought about this. You mention it cursorily, but I wonder whether this couldn't in fact be the key to implementing agency. Quantum fluctuations have already been envisaged (e.g. by Lloyd) as the random generators at the basis of the computational universe edifice: maybe they also play a role in triggering reactions that otherwise appear as self-triggered, spontaneous actions.

      Best regards

      Tommaso

      Dear Tommaso,

      Noise could play an important role for innovation, exploration, and creativity. Yet, if you take autonomy to be the causal power of the system itself, noise would not count, since it doesn't actually come from within the system but literally out of nowhere. The causal power of the system itself would go down with noise, just as it would decrease through external inputs that drive the system. But I think the divide is just that we have two different views on autonomy (paralleled by the different possible views on free will). One emphasizes the 'free' part: 'being able to act otherwise', making choices without reason. The other emphasizes the 'will' part: 'being determined by oneself as opposed to by outside stimuli'. A willed decision would be one that strongly depends on you, your memories, and your internal structure, and your best friend could easily predict your choice. This latter sense of autonomy is possible in a deterministic world.

      Best regards,

      Larissa

      Dear Larissa,

      this is a nice summary of some of your own and related work. Now I want to learn more about integrated information theory. Thank you!

      After reading many essays here I start seeing crosslinks everywhere...

      When you wrote "Think of a Markov Brain as a finite cellular automaton with inputs and outputs. No mysteries." it immediately reminded me of Joe Brisendine's description of bacterial chemotaxis.

      And later, when you wrote "one might ask whether, where, and how much information about the environment is represented in the animat's Markov Brain" I had to think of Sofia Magnúsdóttir's essay, which qualitatively analyzes the role of the models an agent must have of its environment.

      I'd love to replace (in my essay) my clumsy conditions of being "sufficiently rigid and sufficiently flexible" by something less vague; maybe concepts from integrated information theory could help.

      Cheers, Stefan

      Hi Ian,

      Thanks for your comment. I'll be answering your email shortly. For the discussion here, I agree with you that having goals is necessarily intrinsic. That's why I put 'goal' in quotes any time that I referred to it as 'apparently having goals as ascribed to the agent by some outside observer'. The essay tries to make the point that neither of the animats actually has the goal of perfect fitness intrinsically, although an outside observer would be tempted to describe their behavior as 'having the goal to catch and avoid blocks'.

      I then give a necessary condition for having any kind of intrinsic information, namely being an integrated system that is to some extent causally autonomous from the environment. I moreover claim that the only way to find intrinsic goals is to look at the agents' intrinsic cause-effect structure, and that correlations with the environment won't get us there. What kind of cause-effect structure would correspond to having a goal intrinsically I cannot answer (yet). But there is hope that it is possible, since we know that humans have goals intrinsically.

      Best,

      Larissa

      Dear Larissa,

      I read your essay with interest but found the technical descriptions of the animats beyond my comprehension, although I am very interested in cellular automata (CA), which seem to resemble Markov Brains. Anyway, you have certainly attempted a serious answer to the essay question.

      My Beautiful Universe Model is a type of CA.

      I was interested that you were a sleep researcher. I have recently been interested in how the brain generates and perceives dreams, and have noted some interesting observations on the threshold of waking up, when I saw ephemeral geometrical patterns superposed on faint patterns in the environment, as if the brain were projecting templates to fit to the unknown visual input.

      Another, more severe experience along these lines was the 'closed eye' hallucinations I experienced due to surgical anesthesia, which I documented here. The anesthesia seems to have suspended the neural mechanisms that separate dreams from perceived reality, and I could see both alternately while the experience lasted.

      I wish you the best in your researches. It is probably beyond your interest, but do have a look at my FQXi essay.

      Cheers

      Vladimir

        Larissa,

        We are Borg. Species a1-c3, you will be assimilated. We are Borg. Resistance is futile:-)

        Many thanks for an essay that was both enjoyable and enlightening. I wonder if the animats figure out that they are size 2?

        Are there any simulations where the animats of size 1 and size 3 also evolve using similar rules? BTW, what would an animat of size 1 eat? Are there any simulations where the animats can cooperate to attack larger animats? Maybe I run from a lion but me and my buddies will attack a lion if we've got some weapons ..... and have been drinking some courage:-)

        You clearly present the meaning of useful information and the difference between information and being ... that is a key concept that many of the essays do not present.

        Best Regards and Good Luck,

        Gary Simpson

          Dear Larissa,

          thanks for a genuinely insightful essay. At several points, I was afraid you'd fall for the same gambit that's all too often pulled in this sort of discussion---namely, substituting meaning that an external observer sees in an agent's behaviour for meaning available to the agent itself. At each such juncture, you deftly avoided this trap, pointing out why such a strategy just won't do. This alone would have made the essay a very worthwhile contribution---it's all too often that, even in scholarly discussion on this issue, people seem insufficiently aware of this fallacy, and (often inadvertently) try to sell mere correlation---say, the covariance of some internal state with an external variable---as being sufficient for representation.

          But you go even further, giving an argument why the presence of integrated information signals the (causal) unity of a given assemblage. Now, it's not quite clear to me why, exactly, such causal unity ought to bestow meaning available to the agent. I agree with your stipulation that intrinsic meaning can't arise from knowing: that simply leads to vicious regress (the homunculus fallacy).

          Take the above example of correlated internal states and external variables: in order to represent an external variable by means of an internal state, their covariance must, in some sense, be known---in the same way that (my favorite example) one lamp lit at the tower of the Old North Church means 'the British will attack by land' only if whoever sees this lamp also knows that 'one if by land, two if by sea'. Without this knowledge, the mere correlation between the number of lamps and the attack strategy of the British forces does not suffice to decipher the meaning of there being one lamp. But such knowledge itself presupposes meaning, and representation; hence, any analysis of representation in such terms is inevitably circular.

          But it's not completely clear to me, from your essay, how 'being' solves this problem. I do agree that, if it does, IIT seems an interesting tool to delineate boundaries of causally (mostly) autonomous systems, which then may underlie meaningful representations. I can also see how IIT helps 'bind' individual elements together---on most accounts, it's mysterious how the distinct 'parts' of experience unify into a coherent whole; to take James' example, how from ten people thinking of one word of a sentence each an awareness of the whole sentence arises. But that doesn't really help getting at those individually meaningful units to be bound together, at least, not that I can see...

          Anyway, even though I don't quite understand, on your account, how they work, I think that the sort of feedback structures you identify as being possible bearers of meaning are exactly the right kinds of thing. (By the way, a question, if I may: does a high phi generally indicate some kind of feedback, or are there purely feedforward structures achieving high scores?)

          The reason I think so is that, coming from a quite different approach, I've homed in on a special kind of feedback structure that I think serves at least as a toy model of how to achieve meaning available to the agent itself (albeit perhaps an unnecessarily baroque one): that of a von Neumann replicator. Such structures are bipartite, consisting of a 'tape' containing the blueprint of the whole assembly, and an active part capable of interpreting and copying the tape, thus making them a simple model of self-reproduction (whose greatest advantage is its open-ended evolvability). In such a structure, the tape influences the active part, which in turn influences the tape---a change in the active part yields a change in the tape, through differences introduced in the copying operation, while the changed tape itself leads to the construction of a changed active part. Thus, the two elements influence one another in a formally similar way to the two nodes of your agents' Markov Brains.
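          (To make the tape/constructor loop concrete, here is a toy sketch in Python. It is purely illustrative, my own throwaway construction rather than code from either essay, and the particular copying rule is an arbitrary assumption.)

          # Toy sketch of the tape/constructor loop: the "tape" is a list of bits
          # acting as a blueprint; the "constructor" is built from the tape and in
          # turn copies the tape (with a possible copying error), so changes feed
          # back between the two parts across generations.
          import random

          def build_constructor(tape):
              # Interpret the blueprint: here the tape fixes the copy fidelity.
              error_rate = tape.count(1) / (10 * len(tape))   # arbitrary toy rule
              def copy(t):
                  return [bit ^ 1 if random.random() < error_rate else bit for bit in t]
              return copy

          def replicate(tape):
              constructor = build_constructor(tape)   # tape -> active part
              child_tape = constructor(tape)          # active part -> (possibly changed) tape
              return child_tape

          tape = [0, 1, 0, 1, 1, 0, 0, 1]
          for generation in range(5):
              tape = replicate(tape)
              print(generation, tape)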

          What may be interesting is that I arrive at this structure from an entirely different starting point---namely, trying to exorcize the homunculus mentioned above by creating symbols whose meaning does not depend on external knowledge, but which are instead meaningful, in some sense, to themselves.

          But that's enough advertisement for my essay; I didn't actually want to get into that so much, but as I said, I think that there may be some common ground both of our approaches point towards. Hence, thanks again for a very thought-provoking essay that, I hope, will go far in this contest!

          Cheers,

          Jochen

            Hi Gary,

            Thank you for your time and the fun comment.

            We are looking at social tasks where more than one animat interacts in the same environment. There are interesting distinctions that need to be explored further. Something like swarming behavior may require very little integration, as it can be implemented by very simple rules that only depend on the current sensory input. Real interaction, by contrast, increases context dependency and thus on average leads to higher integration. All work in progress.
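            To illustrate the distinction (just a schematic sketch, not our actual task code; the rules are made up): a purely reactive rule is a fixed function of the current input, whereas a context-dependent rule can respond differently to the same input depending on the agent's internal state.

            # Schematic contrast between a reactive rule and a context-dependent rule.
            def reactive_policy(sensor):
                # e.g. a simple swarming-style rule: the output is a fixed
                # function of the current sensory input alone
                return 1 if sensor > 0 else 0

            def contextual_policy(sensor, internal_state):
                # the same input can lead to different actions depending on the
                # internal state, so internal elements must constrain one another
                action = sensor if internal_state["engaged"] else 0
                internal_state["engaged"] = bool(sensor) or internal_state["engaged"]
                return action

            state = {"engaged": False}
            for s in [0, 1, 1, 0, 1]:
                print(reactive_policy(s), contextual_policy(s, state))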

            Best regards,

            Larissa

            Dear Vladimir,

            Thank you for your comment and for taking the time to read my essay. Indeed, Markov Brains are closely related to cellular automata; the only differences are that each element can have a different update function and that the Markov Brain has inputs from and outputs to an environment (but this could also be seen as a section of a cellular automaton within a larger system).
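            In rough code terms (a minimal sketch, not the actual animat implementation; the element names and gate functions are made-up examples), the idea looks like this:

            # A Markov Brain as a finite CA in which each element may use its own
            # update rule, with a sensor element coupling it to an environment.
            def and_gate(inputs):
                return int(all(inputs))

            def xor_gate(inputs):
                return int(sum(inputs) % 2 == 1)

            # Unlike a classical CA, each element has its own rule and wiring.
            ELEMENTS = {
                "hidden": {"update": xor_gate, "inputs": ["sensor", "hidden"]},
                "motor":  {"update": and_gate, "inputs": ["sensor", "hidden"]},
            }

            def step(state, sensor_value):
                # advance the brain one time step given the current sensor reading
                state = dict(state, sensor=sensor_value)
                return {name: spec["update"]([state[i] for i in spec["inputs"]])
                        for name, spec in ELEMENTS.items()}

            state = {"hidden": 0, "motor": 0}
            for sensor in [1, 0, 1, 1]:        # a toy stream of environmental input
                state = step(state, sensor)    # the motor value would drive behavior
                print(state)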

            I am very sympathetic to the idea that the universe is in some ways a giant CA. Partly because it would make the connection between my own work and fundamental physics very straightforward, and partly because of the underlying simplicity and beauty.

            I am not really a sleep researcher myself. Yet, dreams are an important part of consciousness research. You might find the following work by my colleagues of interest: http://biorxiv.org/content/early/2014/12/30/012443.short

            It shows that the responses to seeing a face while dreaming, for example, are very similar to those of actually seeing a face while awake. Being awake can in this view be seen as a "dream guided by reality". At least some hallucinations are then a mixture of the two states.

            All the best,

            Larissa

            Dear Peter,

            Thank you very much for your insightful comment. I have now had the time to read your essay too and liked it a lot. I completely agree that there is a fundamental problem of how selection can arise in the first place; I hope I made this clear at the very beginning of my essay. In my work, I program selection into the world. What I want to demonstrate is that even if there is a clear-cut selection algorithm for a specific task, this doesn't necessarily lead to fit agents that have intrinsic goals. As you rightly point out, it is a big question where such selection mechanisms arise from in nature.

            Best regards,

            Larissa

            Dear Jochen,

            Thank you for reading and for the nice comment. I have finally had the time to look at your essay, and indeed I think we very much start from the same premise that meaning must be intrinsic. First, to your question: feedforward structures have no integrated information (by definition), because there are always elements that lack causes or effects within the system, no matter how the system boundaries are drawn.

            I think the role that the replicators take in your essay is taken by a mechanism's cause-effect repertoire in IIT. By being a set of elements in a state, these elements constrain the past and the future of the system, because they exclude all states that are not compatible with their own current state. The cause-effect repertoire is an intrinsic property of the mechanism within the system. It's what it is. However, by itself, a mechanism and its cause-effect repertoire do not mean anything yet. It is the entire structure of all mechanisms as a whole that results in intrinsic meaning. For example, if there is a mechanism that correlates with 'apples' in the environment, by itself it cannot mean apples. This is because the meaning of apples requires a meaning of 'fruit', 'not vegetable', 'food', 'colors', etc. Importantly, also things that are currently absent in the environment contribute to the meaning of the stuff that is present. The entire cause-effect structure is what the system 'is' for itself.
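            As a very rough illustration of the 'excluding incompatible states' part (only a sketch that glosses over the actual IIT calculus of partitions and repertoires; the two-gate system is a made-up example): for a small deterministic system, one can simply enumerate which past system states are compatible with a mechanism's current state.

            # Toy system: node A is an AND of (A, B); node B is an OR of (A, B).
            from itertools import product

            def update(state):
                a, b = state
                return (a & b, a | b)

            def compatible_past_states(mechanism_nodes, mechanism_state):
                # keep the past system states whose update matches the
                # mechanism's observed current state
                keep = []
                for past in product((0, 1), repeat=2):
                    nxt = update(past)
                    if all(nxt[i] == s for i, s in zip(mechanism_nodes, mechanism_state)):
                        keep.append(past)
                return keep

            # A = 1 excludes every past state except (1, 1):
            print(compatible_past_states([0], [1]))
            # B = 1 is less selective: (0, 1), (1, 0) and (1, 1) remain.
            print(compatible_past_states([1], [1]))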

            What is left to be demonstrated by IIT is that it is indeed possible to 'lock in' the meaning of a mechanism through the other mechanisms in the cause-effect structure. There is work in progress to demonstrate how this might work for spatial representation.

            Best regards,

            Larissa

            Dear Larissa,

            thanks for your answer! I will definitely keep an eye on the further development of IIT. Is there some convenient review material on its latest version to get me up to speed?

            Your mention of 'absences' as causally relevant evokes the ideas of Terrence Deacon, I wonder if you're familiar with them? He paints an interesting picture on the emergence of goal-directedness ('teleodynamics', as he calls it) from underlying thermodynamic and self-organizing ('morphodynamic') processes via constraints---thus, for instance, self-organization may constrain the thermodynamic tendency towards local entropy maximization, leading instead to stable structures. These constraints are then analyzed in terms of absences.

            Cheers,

            Jochen

            Dear Larissa Albantakis,

            Thank you for your wonderfully readable and equally rigorous essay on the Tale of Two Animats. The depth of your analysis of artificial "intelligence" is impressive, and I also appreciate the point you make that "physics, as a set of mathematical laws governing dynamical evolution, does not distinguish between an agent and its environment." I have not seen that particular perspective before. Thank you for the fresh insight and the entertaining read; I have also in the meantime rated your essay.

            Regards,

            Robert

            Larissa,

            A clever presentation, with perhaps a human-paralleled animat development. Do we assume the same primitive neurological processes in the animats' beginnings as in humans 1.5 million years ago (use of fire; how does reproduction fit in)?

            In my essay, I say this about AI systems: "artificially intelligent systems humans construct must perceive and respond to the world around them to be truly intelligent, but are only goal-oriented based on programmed goals patterned on human value systems." Not being involved in evolutionary neuroscience, I doubt the truly causally autonomous capabilities of animats, but perhaps that will come in the future. I know we should never judge future events based on current technology and understanding -- a type 0 civilization that we are.

            Your causal analysis and metaphorical venture in AI evolution are forward thinking and impressive.

            I too try to apply metaphor -- the amniotic fluid of the universe to human birth and dynamics: I speculate about discovering dark matter in a dynamic galactic network of complex actions and interactions of normal matter with the various forces -- gravitational, EM, weak, and strong -- interacting with orbits around SMBH. I propose that researchers wiggle free of labs, lab assumptions, and static models.

            Hope you get a chance to comment on mine.

            Jim Hoover

            Thank you, Larissa, for your response and references. It is amazing how much information brain imaging has provided, and yes, dreams and reality are inextricably linked by the neural mechanisms that perceive them; the details of how that actually works out are of interest. In the half-awake perceptions I have mentioned, with eyes wide open and the mind alert, I can actually see ephemeral geometrical shapes that the mind seems to throw at, say, a patch of light on the ceiling, as if it is trying to identify or classify it in some way.

            I suspect that in normal vision incoming signals are constantly being studied in the same way as perception takes its course. This could be a whole field of experimental study, using dark-adapted subjects shown very faint images and seeing whether such visual inputs (or outputs?) are perceived. Have you come across anything like this elsewhere?

            Best wishes

            Vladimir

            Larissa,

            Interesting experiment, findings, and analysis, well presented. More a review than an essay perhaps, but I do value real science over just opinion. The findings also agree with my own analysis, so my cognitive architecture is bound to marry with it!

            You covered a lot but your plain English style made it easy to follow. Bonus points for both!

            My own essay agrees on many points: "...one of two (or several) possible states, and which state it is in must matter to other mechanisms: the state must be 'a difference that makes a difference'", and the 'feedback' mechanism from the results of 'running scenarios' drawn from input and memory (imagination).

            You also identify that: "What is left to be demonstrated by IIT is that it is indeed possible to 'lock in' the meaning of a mechanism through the other mechanisms in the cause-effect structure. There is work in progress to demonstrate how this might work for spatial representation." Do you think that combining the feedback loops with the hierarchically 'layered' architecture of propositional dynamic logic (PDL) might not well serve this purpose? A higher-level decision then served by cascades of lower-level decisions?

            Is the conclusion "...a system cannot, at the same time, have information about itself in its current state and also other possible states" your own analysis or adopted theory? Might not 'self-awareness' be the recognition of 'possible states' and even the current state (i.e. "OK, I'm in a bad mood/hurry/overexcited etc., sorry")? Or do you refer just to the causal mechanisms?

            You seemed to shy away from the role of maths, which I think was sensible. Let me know if I'm wrong in inferring that maths has the role of a useful abstracted 'tool' rather than any causal foundation. I also certainly agree with your (or the?) conclusions, and thank you for the valuable input into the topic and a pleasant and interesting read.

            I hope you'll review and comment on mine. Don't be put off by the word 'quantum' in the title and last sections as many are. Your brain seems to work well in 'analytical mode' so you should follow the classical causal mechanism just fine. (Do also see the video/s of the 3D dynamics if you've time - links above).

            Well done, thanks, and best of luck in the contest.

            Peter

              Dear Peter,

              Thank you for your time and the nice comment. A hierarchically layered architecture is certainly the way to go for increasingly invariant concepts based on more specific lower-level features. For example, a grid of interconnected elements may be sufficient to intrinsically create a meaning of locations, but invariant concepts like a bar or a pattern will definitely require a hierarchy of levels.

              As for the statement about information of the system about itself in its current state: this is simple logic and has certainly been voiced before, I think with respect to Turing machines and cellular automata; Seth Lloyd also mentioned a similar idea, but in terms of prediction (that a system can never predict its entire next state). Note that I meant the entire current state, not just part of it. Yes, the system can have memory, of course. But it is important to realize that any memory the system has must be physically instantiated in its current physical state. So all there is at any given moment is the current state of the system, and any measure that compares multiple such states is necessarily not intrinsic.
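              A tiny sketch of that last point (just an illustration; the latch rule is a made-up example): two different input histories that leave a system in the same current state are indistinguishable from then on, because the 'memory' is nothing but a bit of the present state.

              # The "memory" is just a bit in the current state vector.
              def step(state, sensor):
                  latch, motor = state
                  latch = latch | sensor   # the latch "remembers" ever having seen a 1
                  motor = latch            # behavior depends only on the current latch bit
                  return (latch, motor)

              def run(history):
                  state = (0, 0)
                  for sensor in history:
                      state = step(state, sensor)
                  return state

              # Different pasts, same present state, hence the same future behavior:
              print(run([1, 0, 0]))   # (1, 1)
              print(run([0, 0, 1]))   # (1, 1)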

              Best regards,

              Larissa