Dear Larissa,

I read your essay with interest, but found the technical descriptions of the animats beyond my comprehension, although I am very interested in cellular automata (CA), which seem to resemble Markov Brains? Anyway, you have certainly attempted a serious answer to the essay question.

My Beautiful Universe Model is a type of CA.

I was interested that you were a sleep researcher - I have recently been interested in how the brain generates and perceives dreams, and have noted some interesting observations on the threshold of waking up, when I saw ephemeral geometrical patterns superposed on faint patterns in the environment, as if the brain were projecting templates to fit to the unknown visual input.

Another, more severe, experience along these lines was the 'closed eye' hallucinations I experienced under surgical anaesthesia, which I documented here. The anaesthesia seems to have suspended the neural mechanisms that separate dreams from perceived reality, and I could see both alternately while the experience lasted.

I wish you the best in your researches. It is probably beyond your interest, but do have a look at my fqxi essay.

Cheers

Vladimir

    Larissa,

    We are Borg. Species a1-c3, you will be assimilated. We are Borg. Resistance is futile:-)

    Many thanks for an essay that was both enjoyable and enlightening. I wonder if the animats figure out that they are size 2?

    Are there any simulations where the animats of size 1 and size 3 also evolve using similar rules? BTW, what would an animat of size 1 eat? Are there any simulations where the animats can cooperate to attack larger animats? Maybe I run from a lion but me and my buddies will attack a lion if we've got some weapons ..... and have been drinking some courage:-)

    You clearly present the meaning of useful information and the difference between information and being ... that is a key concept that many of the essays do not present.

    Best Regards and Good Luck,

    Gary Simpson

      Dear Larissa,

      thanks for a genuinely insightful essay. At several points, I was afraid you'd fall for the same gambit that's all too often pulled in this sort of discussion---namely, substituting meaning that an external observer sees in an agent's behaviour for meaning available to the agent itself. At each such juncture, you deftly avoided this trap, pointing out why such a strategy just won't do. This alone would have made the essay a very worthwhile contribution---it's all too often that, even in scholarly discussion on this issue, people seem insufficiently aware of this fallacy, and (often inadvertently) try to sell mere correlation---say, the covariance of some internal state with an external variable---as being sufficient for representation.

      But you go even further, giving an argument why the presence of integrated information signals the (causal) unity of a given assemblage. Now, it's not quite clear to me why, exactly, such causal unity ought to bestow meaning available to the agent. I agree with your stipulation that intrinsic meaning can't arise from knowing: that simply leads to vicious regress (the homunculus fallacy).

      Take the above example of correlated internal states and external variables: in order to represent an external variable by means of an internal state, their covariance must, in some sense, be known---in the same way that (my favorite example) one lamp lit at the tower of the Old North Church means 'the British will attack by land' only if whoever sees this lamp also knows that 'one if by land, two if by sea'. Without this knowledge, the mere correlation between the number of lamps and the attack strategy of the British forces does not suffice to decipher the meaning of there being one lamp. But such knowledge itself presupposes meaning, and representation; hence, any analysis of representation in such terms is inevitably circular.

      But it's not completely clear to me, from your essay, how 'being' solves this problem. I do agree that, if it does, IIT seems an interesting tool to delineate boundaries of causally (mostly) autonomous systems, which then may underlie meaningful representations. I can also see how IIT helps 'bind' individual elements together---on most accounts, it's mysterious how the distinct 'parts' of experience unify into a coherent whole; to take James' example, how from ten people thinking of one word of a sentence each an awareness of the whole sentence arises. But that doesn't really help getting at those individually meaningful units to be bound together, at least, not that I can see...

      Anyway, even though I don't quite understand, on your account, how they work, I think that the sort of feedback structures you identify as being possible bearers of meaning are exactly the right kinds of thing. (By the way, a question, if I may: does a high phi generally indicate some kind of feedback, or are there purely feedforward structures achieving high scores?)

      The reason I think so is that, coming from a quite different approach, I've myself homed in on a special kind of feedback structure that I think serves at least as a toy model of how to achieve meaning available to the agent (albeit perhaps an unnecessarily baroque one): that of a von Neumann replicator. Such structures are bipartite, consisting of a 'tape' containing the blueprint of the whole assembly, and an active part capable of interpreting and copying the tape, thus making them a simple model of self-reproduction (whose greatest advantage is its open-ended evolvability). In such a structure, the tape influences the active part, which in turn influences the tape---a change in the active part yields a change in the tape, through differences introduced in the copying operation, while the changed tape itself leads to the construction of a changed active part. Thus, the two elements influence one another in a formally similar way to the two nodes of your agents' Markov Brains.
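
      In code, a toy version of this replication loop might look as follows (a deliberately crude sketch of my own; the one-bit tape, the mutation model, and the function names are all just illustrative):

```python
import random

def copy_tape(tape, rng, mutation_rate=0.05):
    """The active part copies the tape; copying errors introduce changes."""
    return [(1 - s) if rng.random() < mutation_rate else s for s in tape]

def construct_active_part(tape):
    """The tape is interpreted as the blueprint of the next active part.
    Here the 'active part' is just a number read off the blueprint."""
    return sum(tape)

def replicate(tape, rng):
    """One replication cycle: tape -> new tape -> new active part.
    A change introduced in the copy propagates to the constructed part."""
    new_tape = copy_tape(tape, rng)
    return new_tape, construct_active_part(new_tape)
```

      The point of the sketch is only the mutual influence: the active part (the copier/constructor) changes the tape, and the changed tape changes the next active part.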

      What may be interesting is that I arrive at this structure from an entirely different starting point---namely, trying to exorcize the homunculus mentioned above by creating symbols whose meaning does not depend on external knowledge, but which are instead meaningful, in some sense, to themselves.

      But that's enough advertisement for my essay; I didn't actually want to get into that so much, but as I said, I think that there may be some common ground both of our approaches point towards. Hence, thanks again for a very thought-provoking essay that, I hope, will go far in this contest!

      Cheers,

      Jochen

        Hi Gary,

        Thank you for your time and the fun comment.

        We are looking at social tasks where more than one animat interacts in the same environment. There are interesting distinctions that need to be explored further. Something like swarming behavior may require very little integration, as it can be implemented by very simple rules that depend only on the current sensory input. Real interaction, by contrast, increases context dependency and thus on average leads to higher integration. All work in progress.
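
        To illustrate the distinction (just a toy sketch, not our actual simulation code): a purely reactive swarming rule maps the current sensory input directly to the next output, with no internal state to integrate:

```python
def swarm_step(own_heading, neighbor_headings):
    """Reactive alignment rule: the next heading depends only on the current
    sensory input (the neighbors' headings). There is no memory or feedback,
    which is why such behavior can get by with very little integration."""
    if not neighbor_headings:
        return own_heading
    return sum(neighbor_headings) / len(neighbor_headings)
```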

        Best regards,

        Larissa

        Dear Vladimir,

        Thank you for your comment and for taking the time to read my essay. Indeed, Markov Brains are closely related to cellular automata; the only differences are that each element can have a different update function, and that the Markov Brain has inputs from and outputs to an environment (though it could also be seen as a section of a cellular automaton within a larger system).
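
        As a toy sketch of the difference (my own illustration, not actual Markov Brain code): a cellular automaton applies one shared rule to every cell, while a Markov Brain gives each element its own update function and its own wiring, which may include sensor nodes:

```python
def ca_step(state, rule):
    """Elementary CA: the same local rule applied to every cell
    (periodic boundary)."""
    n = len(state)
    return [rule(state[(i - 1) % n], state[i], state[(i + 1) % n])
            for i in range(n)]

def markov_brain_step(state, sensors, update_fns, wiring):
    """Markov Brain: element i applies its own function to its own inputs.
    `wiring[i]` holds indices into the concatenated (state + sensors) vector,
    so elements can read from the environment as well as from each other."""
    full = state + sensors
    return [fn([full[j] for j in wiring[i]])
            for i, fn in enumerate(update_fns)]
```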

        I am very sympathetic to the idea that the universe is in some ways a giant CA. Partly because it would make the connection between my own work and fundamental physics very straightforward, and partly because of the underlying simplicity and beauty.

        I am not really a sleep researcher myself. Yet, dreams are an important part of consciousness research. You might find the following work by my colleagues of interest: http://biorxiv.org/content/early/2014/12/30/012443.short

        It shows that the responses to seeing a face while dreaming for example are very similar to those of actually seeing a face while awake. Being awake can in this view be seen as a "dream guided by reality". At least some hallucinations then are a mixture of the two states.

        All the best,

        Larissa

        Dear Peter,

        Thank you very much for your insightful comment. I now had the time to read your essay too and liked it a lot. I completely agree that there is a fundamental problem of how selection can arise in the first place. I hope I made this clear in my essay at the very beginning. In my work, I program selection into the world. What I want to demonstrate is that even if there is a clear-cut selection algorithm for a specific task, this doesn't necessarily lead to fit agents that have intrinsic goals. As you rightly point out, it is a big question where such selection mechanisms arise from in nature.

        Best regards,

        Larissa

        Dear Jochen,

        Thank you for reading and for the nice comment. I have finally had the time to look at your essay, and indeed I think we very much start from the same premise: that meaning must be intrinsic. First, to your question: feedforward structures have no integrated information (by definition), because there are always elements that lack causes or effects within the system, no matter how the system boundaries are drawn.
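
        The graph-theoretic side of this is easy to check (a toy sketch only; the actual integrated information computation is of course much more involved): a system is purely feedforward exactly when its connectivity graph contains no directed cycle:

```python
def has_feedback(adj):
    """True if the directed graph (dict: node -> list of targets) has a cycle.
    A purely feedforward network has none, so a topological sort consumes
    every node; any leftover nodes must lie on a feedback loop."""
    indegree = {v: 0 for v in adj}
    for targets in adj.values():
        for w in targets:
            indegree[w] += 1
    frontier = [v for v in adj if indegree[v] == 0]
    visited = 0
    while frontier:
        v = frontier.pop()
        visited += 1
        for w in adj[v]:
            indegree[w] -= 1
            if indegree[w] == 0:
                frontier.append(w)
    return visited < len(adj)
```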

        I think the role that the replicators take in your essay is played by a mechanism's cause-effect repertoire in IIT. By being a set of elements in a state, these elements constrain the past and the future of the system, because they exclude all states that are not compatible with their own current state. The cause-effect repertoire is an intrinsic property of the mechanism within the system. It's what it is. However, by itself, a mechanism and its cause-effect repertoire do not mean anything yet. It is the entire structure of all mechanisms as a whole that results in intrinsic meaning. For example, if there is a mechanism that correlates with 'apples' in the environment, by itself it cannot mean apples. This is because the meaning of apples requires a meaning of 'fruit', 'not vegetable', 'food', 'colors', etc. Importantly, things that are currently absent in the environment also contribute to the meaning of the things that are present. The entire cause-effect structure is what the system 'is' for itself.
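
        In miniature (a toy sketch only; a real cause-effect repertoire is a probability distribution over states, not just a set): a mechanism in its current state excludes exactly those past input states that could not have produced it:

```python
from itertools import product

def compatible_past_states(update_fn, n_inputs, current_state):
    """All binary past input states compatible with the mechanism's current
    state; everything else is excluded by the mechanism being what it is."""
    return [s for s in product((0, 1), repeat=n_inputs)
            if update_fn(s) == current_state]
```

        An OR gate currently in state 1, for instance, excludes only the all-zero past; in state 0 it pins the past down completely.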

        What is left to be demonstrated by IIT is that it is indeed possible to 'lock in' the meaning of a mechanism through the other mechanisms in the cause-effect structure. There is work in progress to demonstrate how this might work for spatial representation.

        Best regards,

        Larissa

        Dear Larissa,

        thanks for your answer! I will definitely keep an eye on the further development of IIT. Is there some convenient review material on its latest version to get me up to speed?

        Your mention of 'absences' as causally relevant evokes the ideas of Terrence Deacon, I wonder if you're familiar with them? He paints an interesting picture on the emergence of goal-directedness ('teleodynamics', as he calls it) from underlying thermodynamic and self-organizing ('morphodynamic') processes via constraints---thus, for instance, self-organization may constrain the thermodynamic tendency towards local entropy maximization, leading instead to stable structures. These constraints are then analyzed in terms of absences.

        Cheers,

        Jochen

        Dear Larissa Albantakis,

        Thank you for your wonderfully readable and equally rigorous essay on the Tale of Two Animats. The depth of your analysis of artificial "intelligence" is impressive, and I also appreciate the point you make that "physics, as a set of mathematical laws governing dynamical evolution, does not distinguish between an agent and its environment." I have not seen that particular perspective before. Thank you for the fresh insight and entertaining read; I have also in the meantime rated your essay.

        Regards,

        Robert

        Larissa,

        A clever presentation, with perhaps a human-paralleled animat development. Do we assume the same primitive neurological processes in the animats' beginnings as in humans 1.5 million years ago (use of fire; how does reproduction fit in)?

        In my essay, I say this about AI systems: "artificially intelligent systems humans construct must perceive and respond to the world around them to be truly intelligent, but are only goal-oriented based on programmed goals patterned on human value systems." Not being involved in evolutionary neuroscience, I doubt the truly causally autonomous capabilities of animats, but perhaps the future will bring them. I know we should never judge future events based on current technology and understanding -- type 0 civilization that we are.

        Your causal analysis and metaphorical venture in AI evolution are forward thinking and impressive.

        I too try to apply metaphor -- from the amniotic fluid of the universe to human birth and dynamics: I speculate about discovering dark matter in a dynamic galactic network of complex actions and interactions of normal matter with the various forces -- gravitational, EM, weak and strong -- interacting with orbits around SMBHs. I propose that researchers wiggle free of labs and lab assumptions and static models.

        Hope you get a chance to comment on mine.

        Jim Hoover

        Thank you, Larissa, for your response and references. It is amazing how much information brain imaging has provided, and yes, dreams and reality are inextricably linked by the neural mechanisms that perceive them - the details of how that actually works are of interest. In the half-awake perceptions I have mentioned, with eyes wide open and the mind alert, I can actually see ephemeral geometrical shapes that the mind seems to throw at, say, a patch of light on the ceiling, as if it is trying to identify or classify it in some way.

        I suspect that in normal vision incoming signals are constantly being studied in the same way as perception takes its course. This could be a whole field of experimental study: showing dark-adapted subjects very faint images and seeing whether such visual inputs (or outputs?) are seen. Have you come across anything like this elsewhere?

        Best wishes

        Vladimir

        Larissa,

        Interesting experiment, findings and analysis, well presented. More a review than an essay perhaps, but I do value real science over mere opinion. The findings also agree with my own analysis, so my cognitive architecture is bound to marry with it!

        You covered a lot but your plain English style made it easy to follow. Bonus points for both!

        My own essay agrees on many points: "...one of two (or several) possible states, and which state it is in must matter to other mechanisms - the state must be 'a difference that makes a difference'" - and the 'feedback' mechanism from the results of 'running scenarios' drawn from input and memory (imagination).

        You also identify that: "What is left to be demonstrated by IIT is that it is indeed possible to 'lock in' the meaning of a mechanism through the other mechanisms in the cause-effect structure. There is work in progress to demonstrate how this might work for spatial representation." Do you think that combining the feedback loops with the hierarchically 'layered' architecture of propositional dynamic logic (PDL) might well serve this purpose? A higher-level decision then served by cascades of lower-level decisions?

        Is the conclusion "...a system cannot, at the same time, have information about itself in its current state and also other possible states" your own analysis or adopted theory? May not 'self-awareness' be the recognition of 'possible states' and even the current state (i.e. "OK, I'm in a bad mood/hurry/overexcited etc., sorry")? Or do you refer just to the causal mechanisms?

        You seemed to shy away from the role of maths, which I think was sensible. Let me know if I'm wrong in inferring that maths has the role of a useful abstracted 'tool' rather than any causal foundation. I also certainly agree with your (or the?) conclusions, and thank you for the valuable input into the topic and a pleasant and interesting read.

        I hope you'll review and comment on mine. Don't be put off by the word 'quantum' in the title and last sections, as many are. Your brain seems to work well in 'analytical mode', so you should follow the classical causal mechanism just fine. (Do also see the video/s of the 3D dynamics if you've time - links above.)

        Well done, thanks, and best of luck in the contest.

        Peter

          Dear Peter,

          Thank you for your time and the nice comment. A hierarchically layered architecture is certainly the way to go for increasingly invariant concepts based on lower-level specific features. I.e., a grid of interconnected elements may be sufficient to intrinsically create a meaning of locations. Invariant concepts like a bar or a pattern will definitely require a hierarchy of levels.

          As for the statement about information of the system about itself in its current state: this is simple logic and has certainly been voiced before, I think with respect to Turing machines and cellular automata; Seth Lloyd also mentioned a similar idea, but in terms of prediction (that a system can never predict its entire next state). Note that I meant the entire current state, not just part of it. Yes, the system can have memory, of course. But it is important to realize that any memory the system has must be physically instantiated in its current physical state. So all there is at any given moment is the current state of the system, and any measure that compares multiple such states is necessarily not intrinsic.

          Best regards,

          Larissa

          Dear Larissa,

          I appreciate your essay. You spent a lot of effort to write it. If you believed in Descartes' principle of the identity of space and matter, your essay would be even better. It is not geometric space that is movable, but physical space. These are different concepts.

          I inform all participants that I use an online translator; therefore, my essay is written badly. I participate in the contest to familiarize English-speaking scientists with New Cartesian Physics, the basis of which is the principle of the identity of space and matter. By combining space and matter into a single essence, New Cartesian Physics is able to integrate modern physics into a single theory. Let FQXi be the starting point of this association.

          Don't let New Cartesian Physics disappear! I ask not for myself, but for Descartes.

          New Cartesian Physics has great potential for understanding the world. To show this potential, I risked giving my essay the title "The way of the materialist explanation of the paranormal and the supernatural".

          Visit my essay and you will find something in it about New Cartesian Physics. After you post in my topic, I shall do the same in yours.

          Sincerely,

          Dizhechko Boris

          Larissa,

          Yes, I understand. Well explained, thanks.

          I see your score has now slipped down again! Too many 'trolls' applying 1s (mine has had 11, but I refuse to respond). Scoring normally gets crazy in the last few hours!

          I hope you get to read, score and comment on mine (not long now!). I think you could bring a good perspective to the hypotheses, which I think are complementary to your analysis.

          Very Best wishes

          Peter

          Dear Larissa,

          Very interesting essay. I wrote my PhD thesis about physical models of evolution, including the evolution of networks. Evolution is goal-oriented: there are two processes, mutation and selection. Mutation produces new information (= species), and selection is a global interaction among the species, giving a goal to the process. In a more refined model of co-evolution, the selection itself is formed by the interaction between the species, so again you get a direction or goal. So, I think from this point of view, your model fits perfectly.
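
          In the simplest caricature (my own sketch here, not one of the models from my thesis): mutation injects undirected variation, and selection alone gives the process its direction:

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50, seed=0):
    """Minimal mutation-selection loop: selection keeps the fitter half,
    mutation flips one random bit in each survivor's offspring."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # selection: global ranking
        survivors = pop[:pop_size // 2]
        offspring = []
        for g in survivors:                   # mutation: new variation
            child = g[:]
            i = rng.randrange(genome_len)
            child[i] = 1 - child[i]
            offspring.append(child)
        pop = survivors + offspring
    return max(pop, key=fitness)
```

          With fitness = number of ones, the population is driven toward the all-ones genome, even though mutation itself is completely undirected.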

          Maybe I have one question: you are an expert in networks, and I wrote about the brain network and its dynamics (using methods from math and physics). Please could you have a look at my essay?

          Thanks in advance and good luck in the contest (I gave you the highest rating)

          All the best

          Torsten

          Dear Sirs!

          The physics of Descartes, which existed prior to the physics of Newton, has returned as New Cartesian Physics and promises to be a theory of everything. To tell you this good news I use «spam».

          New Cartesian Physics is based on the identity of space and matter. It shows that the formula of mass-energy equivalence comes from the pressure of the Universe, the flow of force of which on a corpuscle is equal to the product of Planck's constant and the speed of light.

          New Cartesian Physics has great potential for understanding the world. To show it, I ventured to give my essay the title "Materialistic explanations of the paranormal and supernatural".

          Visit my essay and you will find there the New Cartesian Physics; make a short entry, "I believe that space is matter", and I will answer you in return. You can give me a 1.

          Sincerely,

          Dizhechko Boris

          Dear Larissa,

          I've read your essay with amused interest. It's a fine way to convey valuable concepts.

          I also love computer simulations of automata as a way to understand complexity, which in fact could be the result of a few very simple rules enacted by a multiplicity of individuals.

          If you have time to have a look at my paper, you may find it interesting.

          Best regards,

          Claudio

          Larissa,

          You attempt to model the generation of improved fitness. The overall animat model system is given a highest-level ruling algorithm and the equivalent of initial values. Each animat model has controlling "Markov brain" logic-gate algorithms, and probably another higher-level algorithm controlling the "Markov brain" algorithm.

          But it is an invalid assumption to consider that algorithms must already exist in primitive living things, so your model cannot be considered a model of actual reality.

          It is unfortunate that you conclude so much from so little evidence.

          Lorraine

          3 months later