Dear Vladimir,

Thank you for your comment and for taking the time to read my essay. Indeed, Markov Brains are closely related to cellular automata; the only differences are that each element can have a different update function and that a Markov Brain has inputs from and outputs to an environment (though this could also be seen as a section of a cellular automaton within a larger system).
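
In code, the contrast might look roughly like this (a minimal sketch with made-up wiring and update tables; not the actual implementation used in our work):

```python
# Minimal sketch contrasting a cellular automaton with a Markov Brain.
# All wiring and update tables here are illustrative assumptions.

def ca_step(state, rule):
    """Cellular automaton: every cell applies the SAME local update rule."""
    n = len(state)
    return [rule((state[(i - 1) % n], state[i], state[(i + 1) % n]))
            for i in range(n)]

def markov_brain_step(state, sensors, update_tables, wiring):
    """Markov Brain: each element has its OWN update function (a lookup
    table over its inputs), and inputs may come from the environment
    (sensors) as well as from other elements."""
    full = sensors + state  # environment feeds into the brain
    return [update_tables[e][tuple(full[i] for i in inputs)]
            for e, inputs in enumerate(wiring)]

# A 2-element brain: element 0 computes AND(sensor 0, element 1's previous
# state); element 1 computes NOT(element 0's previous state).
wiring = [(0, 3), (2,)]  # indices into sensors + state
update_tables = [
    {(a, b): a & b for a in (0, 1) for b in (0, 1)},  # AND gate
    {(a,): 1 - a for a in (0, 1)},                    # NOT gate
]
print(markov_brain_step([1, 0], [1, 1], update_tables, wiring))  # -> [0, 0]
```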

I am very sympathetic to the idea that the universe is in some ways a giant CA, partly because it would make the connection between my own work and fundamental physics very straightforward, and partly because of the underlying simplicity and beauty.

I am not really a sleep researcher myself. Yet, dreams are an important part of consciousness research. You might find the following work by my colleagues of interest: http://biorxiv.org/content/early/2014/12/30/012443.short

It shows that the responses to seeing a face while dreaming, for example, are very similar to those of actually seeing a face while awake. Being awake can, in this view, be seen as a "dream guided by reality". At least some hallucinations, then, are a mixture of the two states.

All the best,

Larissa

Dear Peter,

Thank you very much for your insightful comment. I have now had the time to read your essay too, and I liked it a lot. I completely agree that there is a fundamental problem of how selection can arise in the first place; I hope I made this clear at the very beginning of my essay. In my work, I program selection into the world. What I want to demonstrate is that even if there is a clear-cut selection algorithm for a specific task, this doesn't necessarily lead to fit agents that have intrinsic goals. As you rightly point out, it is a big question where such selection mechanisms in nature arise from.

Best regards,

Larissa

Dear Jochen,

Thank you for reading and for the nice comment. I have finally had the time to look at your essay, and indeed I think we very much start from the same premise: that meaning must be intrinsic. First, to your question: feedforward structures have no integrated information (by definition), because there are always elements that lack causes or effects within the system, no matter how the system boundaries are drawn.

I think the role that the replicators play in your essay is taken by a mechanism's cause-effect repertoire in IIT. By being a set of elements in a state, these elements constrain the past and the future of the system, because they exclude all states that are not compatible with their own current state. The cause-effect repertoire is an intrinsic property of the mechanism within the system. It's what it is. However, by itself, a mechanism and its cause-effect repertoire do not mean anything yet. It is the entire structure of all mechanisms as a whole that results in intrinsic meaning. For example, if there is a mechanism that correlates with 'apples' in the environment, by itself it cannot mean apples. This is because the meaning of 'apples' requires a meaning of 'fruit', 'not vegetable', 'food', 'colors', etc. Importantly, things that are currently absent in the environment also contribute to the meaning of what is present. The entire cause-effect structure is what the system 'is' for itself.

What is left to be demonstrated by IIT is that it is indeed possible to 'lock in' the meaning of a mechanism through the other mechanisms in the cause-effect structure. There is work in progress to demonstrate how this might work for spatial representation.

Best regards,

Larissa

Dear Larissa,

Thanks for your answer! I will definitely keep an eye on the further development of IIT. Is there some convenient review material on its latest version to get me up to speed?

Your mention of 'absences' as causally relevant evokes the ideas of Terrence Deacon; I wonder if you're familiar with them? He paints an interesting picture of the emergence of goal-directedness ('teleodynamics', as he calls it) from underlying thermodynamic and self-organizing ('morphodynamic') processes via constraints---thus, for instance, self-organization may constrain the thermodynamic tendency towards local entropy maximization, leading instead to stable structures. These constraints are then analyzed in terms of absences.

Cheers,

Jochen

Dear Larissa Albantakis,

Thank you for your wonderfully readable and equally rigorous essay on the Tale of Two Animats. The depth of your analysis of artificial "intelligence" is impressive, and I also appreciate the point you make that "physics, as a set of mathematical laws governing dynamical evolution, does not distinguish between an agent and its environment." I have not seen that particular perspective before. Thank you for the fresh insight and entertaining read; I have also, in the meantime, rated your essay.

Regards,

Robert

Larissa,

A clever presentation, with perhaps a human-paralleled animat development. Do we assume the same primitive neurological processes in the animats' beginnings as in humans 1.5 million years ago (use of fire; how does reproduction fit in)?

In my essay, I say this about AI systems: "artificially intelligent systems humans construct must perceive and respond to the world around them to be truly intelligent, but are only goal-oriented based on programmed goals patterned on human value systems." Not being involved in evolutionary neuroscience, I doubt the truly causally autonomous capabilities of animats, but perhaps the future will bring them. I know we should never judge future events based on current technology and understandings -- a type 0 civilization that we are.

Your causal analysis and metaphorical venture in AI evolution are forward thinking and impressive.

I too try to apply metaphor -- the amniotic fluid of the universe to human birth and dynamics: I speculate about discovering dark matter in a dynamic galactic network of complex actions and interactions of normal matter with the various forces -- gravitational, EM, weak, and strong -- interacting with orbits around SMBHs. I propose that researchers wiggle free of labs and lab assumptions and static models.

Hope you get a chance to comment on mine.

Jim Hoover

Thank you, Larissa, for your response and references. It is amazing how much information brain imaging has provided, and yes, dreams and reality are inextricably linked by the neural mechanisms that perceive them; the details of how that actually works out are of interest. In the half-awake perceptions I have mentioned, with eyes wide open and the mind alert, I can actually see ephemeral geometrical shapes that the mind seems to throw at, say, a patch of light on the ceiling, as if it is trying to identify or classify it in some way.

I suspect that in normal vision incoming signals are constantly being studied in the same way as perception takes its course. This could be a whole field of experimental study: showing dark-adapted subjects very faint images and testing whether such visual inputs (or outputs?) are perceived. Have you come across anything like this elsewhere?

Best wishes

Vladimir

Larissa,

Interesting experiment, findings, and analysis, well presented. More a review than an essay perhaps, but I do value real science over mere opinion. The findings also agree with my own analysis, so my cognitive architecture is bound to marry with it!

You covered a lot but your plain English style made it easy to follow. Bonus points for both!

My own essay agrees on many points: "...one of two (or several) possible states, and which state it is in must matter to other mechanisms: the state must be 'a difference that makes a difference'", and the 'feedback' mechanism from results of 'running scenarios' drawn from input and memory (imagination).

You also identify that "What is left to be demonstrated by IIT is that it is indeed possible to 'lock in' the meaning of a mechanism through the other mechanisms in the cause-effect structure. There is work in progress to demonstrate how this might work for spatial representation." Do you think that combining the feedback loops with the hierarchically 'layered' architecture of propositional dynamic logic (PDL) might well serve this purpose? A higher-level decision then served by cascades of lower-level decisions?

Is the conclusion "...a system cannot, at the same time, have information about itself in its current state and also other possible states" your own analysis or adopted theory? May not 'self-awareness' be the recognition of 'possible states', and even of the current state (i.e. "OK, I'm in a bad mood/hurry/overexcited etc., sorry")? Or do you refer just to the causal mechanisms?

You seemed to shy away from the role of maths, which I think was sensible. Let me know if I'm wrong in inferring that maths has the role of a useful abstracted 'tool' rather than any causal foundation. I also certainly agree with your (or the?) conclusions, and thank you for the valuable input into the topic and a pleasant and interesting read.

I hope you'll review and comment on mine. Don't be put off by the word 'quantum' in the title and last sections as many are. Your brain seems to work well in 'analytical mode' so you should follow the classical causal mechanism just fine. (Do also see the video/s of the 3D dynamics if you've time - links above).

Well done, thanks, and best of luck in the contest.

Peter

    Dear Peter,

    Thank you for your time and the nice comment. A hierarchically layered architecture is certainly the way to go for increasingly invariant concepts based on more specific lower-level features. For example, a grid of interconnected elements may be sufficient to intrinsically create a meaning of locations, but invariant concepts like a bar or a pattern will definitely require a hierarchy of levels.

    As for the statement about information of the system about itself in its current state: this is simple logic and has certainly been voiced before, I think with respect to Turing machines and cellular automata; Seth Lloyd also mentioned a similar idea, but in terms of prediction (that a system can never predict its entire next state). Note that I meant the entire current state, not just part of it. Yes, the system can have memory, of course. But it is important to realize that any memory the system has must be physically instantiated in its current physical state. So all there is at any given moment is the current state of the system, and any measure that compares multiple such states is necessarily not intrinsic.

    Best regards,

    Larissa

    Dear Larissa,

    I appreciate your essay. You spent a lot of effort to write it. If you believed in Descartes' principle of the identity of space and matter, your essay would be even better. Geometric space is not movable, whereas physical space is movable. These are different concepts.

    I inform all participants that I use an online translator; therefore, my essay is written badly. I participate in the contest to familiarize English-speaking scientists with New Cartesian Physic, the basis of which is the principle of the identity of space and matter. By combining space and matter into a single essence, New Cartesian Physic is able to integrate modern physics into a single theory. Let FQXi be the starting point of this association.

    Don't let New Cartesian Physic disappear! I do not ask for myself, but for Descartes.

    New Cartesian Physic has great potential in understanding the world. To show this potential, I took a risk: "The way of the materialist explanation of the paranormal and the supernatural" is the name of my essay.

    Visit my essay and you will find something in it about New Cartesian Physic. After you post in my topic, I shall do the same in yours.

    Sincerely,

    Dizhechko Boris

    Larissa,

    Yes, I understand. Well explained, thanks.

    I see your score has now slipped down again! Too many 'trolls' applying 1's (mine has had 11, but I refuse to respond). Scoring normally gets crazy in the last few hours!

    I hope you get to read, score, and comment on mine (not long now!). I think you could bring a good perspective to the hypotheses, which are complementary to your analysis.

    Very Best wishes

    Peter

    Dear Larissa,

    Very interesting essay. I wrote my PhD thesis about physical models of evolution, including the evolution of networks. Evolution is goal-oriented: there are two processes, mutation and selection. Mutation produces new information (= species), and selection is a global interaction among the species that gives the process a goal. In a more refined model of co-evolution, the selection itself is formed by the interaction between the species, so again you get a direction or goal. So, I think from this point of view your model fits perfectly.

    Maybe I have one question: you are an expert in networks, and I wrote about the brain network and its dynamics (using methods from math and physics). Could you please have a look at my essay?

    Thanks in advance and good luck in the contest (I gave you the highest rating)

    All the best

    Torsten

    Dear Sirs!

    The physics of Descartes, which existed prior to the physics of Newton, has returned as New Cartesian Physic and promises to be a theory of everything. To tell you this good news I use «spam».

    New Cartesian Physic is based on the identity of space and matter. It shows that the formula of mass-energy equivalence comes from the pressure of the Universe: the flow of force on a corpuscle is equal to the product of Planck's constant and the speed of light.

    New Cartesian Physic has great potential for understanding the world. To show it, I ventured to give materialistic explanations of the paranormal and the supernatural; that is the title of my essay.

    Visit my essay and you will find New Cartesian Physic there. Make a short entry, "I believe that space is matter", and I will answer you in return. You can even give me a 1.

    Sincerely,

    Dizhechko Boris

    Dear Larissa,

    I've read your essay with amused interest. It's a fine way to convey valuable concepts.

    I also love computer simulations of automata as a way to understand complexity, which in fact can be the result of a few very simple rules enacted by a multiplicity of individuals.

    If you have time to take a look at my paper, you might find it interesting.

    Best regards,

    Claudio

    Larissa,

    You attempt to model the generation of improved fitness. The overall animat model system is given a highest-level ruling algorithm and the equivalent of initial values. Each animat model has controlling "Markov brain" logic-gate algorithms, and probably another higher-level algorithm controlling the "Markov brain" algorithm.

    But it is an invalid assumption to consider that algorithms must already exist in primitive living things, so your model cannot be considered a model of actual reality.

    It is unfortunate that you conclude so much from so little evidence.

    Lorraine


    Dear Larissa,

    First, let me congratulate you on winning the essay contest. Unfortunately, I did not get to see your essay prior to the result. Even belatedly, let me try to understand the idea of the essay. Your statements are marked by the '>' symbol, and my comments by '=>'.

    > this essay discusses necessary requirements for intrinsic information, autonomy, and meaning.

    => While a specific requirement has been discussed, we are not given a picture of how information becomes abstract, or of what exactly the meaning is. The essay largely dealt with the probability aspect of information, not the actual semantics of information.

    > only the integrated one forms a causally autonomous entity above a background of external influences.

    => In fact, if natural causation remains entirely deterministic, all outcomes are pre-set and autonomy is not really there; only pre-determined looping of signals and states occurs. It seems indeterminism, even if limited, is a necessary requirement.

    > Any form of representation is ultimately a correlation measure between external and internal states.

    => While it is entirely agreeable that 'representation is ultimately a correlation', it is not a correlation between the external and internal states alone. In fact, this is hardly the case, as 'representation is a correlation with emerged semantics, which includes state descriptions (a projection of reality) as well as all possible abstractions of semantics that we are capable of'.

    > The state must be "a difference that makes a difference".

    => I am not sure, but is it said in the sense that observed states are relative measures that make a difference?

    > A mechanism M has inputs that can influence it and outputs that are influenced by it. By being in a particular state m, M constrains the possible past states of its inputs.

    => Since M is a mechanism, not a physical entity, m is a particular instance of the mechanism, which should not be treated as a state of matter. In several places it is used in a manner that has this dual meaning.

    > We can quantify the cause and effect information of a mechanism M in its current state m_t within system Z as the difference D between the constrained and unconstrained probability distributions over Z's past and future states.

    => Z is a system, I suppose a Markov Brain, which may have many physical elements. So, what is Z's state? Is it a particular instance of specific states for its elements? And how do we understand the state m_t, since M is a mechanism -- a logical or relational component of the connectivity? May I suppose m_t is a specific set of probabilistic or deterministic Logic Gates? And if so, then p(Z_{t-1} | m_t) would be the probability of finding the elements of Z in a particular specification of states at time t-1, given a specific configuration of Logic Gates (LG). And p(Z_{t-1}) would be the LG-independent probability of the same set of states. Furthermore, as per the quoted statement, the difference between the two probabilities constitutes the information of the cause and effect. That is, one is not talking about the meaning (semantics) of the information, but only of the probability of their occurrence (causal connection).
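
    In symbols, the quoted definition might be written as follows (my own reconstruction; the essay leaves the distance D generic, while the IIT literature typically uses the earth mover's distance), with ci and ei denoting cause and effect information:

    $ci(m_t) = D\big(p(Z_{t-1} \mid m_t) \,\|\, p(Z_{t-1})\big), \qquad ei(m_t) = D\big(p(Z_{t+1} \mid m_t) \,\|\, p(Z_{t+1})\big)$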

    > All causally relevant information within a system Z is contained in the system's cause-effect structure.

    => Indeed, but in this essay it is only information about the physical states. If you think of intention: intention is not just a state of matter. In any case, you have also concluded, "What kind of cause-effect structure is required to experience goals, and which environmental conditions could favor its evolution, remains to be determined."

    > If the goals that we ascribe to a system are indeed meaningful from the intrinsic perspective of the system, they must be intrinsic information, contained in the system's cause-effect structure.

    => In my view, the assertion is accurate. In this essay, though, the 'intrinsic perspective' is ascribable only to the 'intrinsic correlation' with the states of matter, not to the semantics (meaning) of the 'intention of goals'. It is pertinent to note that all physical entities, as they evolve, follow the path of 'least action' (usual units: Energy x Time). So, if we ascribe the 'goal of performing least action', all mechanical entities naturally do that.

    > This is simply because a system cannot, at the same time, have information about itself in its current state and also other possible states.

    => Information is what a state correlates with, emerging from observed relative state descriptions of interacting entities as per natural causation. Therefore, a state of an entity is not correlating information in itself. Furthermore, there is no unique definition of state; a state is what is observed by another interacting entity (an observer). Therefore, a system can never be said to have information about its current state, since the state is not defined unless observed. In my reckoning too, it naturally requires a recurrent system to know what state it was in a while ago, but the new state is yet another state. For example, temporal relation being one central element of state description, the current state is always a new state. By the way, the resultant state is also a consequence of the relation among the observed states of the context, which is what sets the basis for the emergence of abstract semantics. Scientists usually talk about the emergence, but never lay down the formal mechanism of abstraction.

    > Any memory the system has about its past states has to be physically instantiated in its current cause-effect structure.

    => I suppose this is what won you the day. But it need not be re-instantiated as long as the present state is causally dependent on the past state configuration and relation.

    > The cause-effect structure of a causally autonomous entity describes what it means to be that entity from its own intrinsic perspective.

    => There is a big leap here. It is unclear 'what it means to be that entity'. Though it may be so, why and how it is so is not worked out in this essay. Considering that all an element correlates with is the states of other connected elements, the constrained element may have state information about other elements. Furthermore, while the process of unification is outlined by stating that each element affects every other in turn, how the semantic unification takes place is left out.

    > In this view, intrinsic meaning might be created by the specific way in which the mechanisms of an integrated entity constrain its own past and future states.

    => The expression of mere possibility, "intrinsic meaning might be created by the specific way", is right, since the actual emergence of such a meaning is not detailed in the essay.

    This is an essay which, I presumed, I could read without external help. Publications cited in this essay serve not as evidence or as suggestive further links, but as background. I could not follow the methods of operation of the Markov Brain, since the rules of evolution, the formation of connectivity, and the parameter values are not defined here. For example, why would they develop connections at all, unless the rule is specially coded? This turned out to be the hardest read for me, and yet I am not fully confident I followed it. I had to learn the rules of the methods from the Wiki and the cited texts. Similarly, I could not form any idea of how you calculated the value of R to be 0.6 at one point, even though I understood the idea of Eqn. 1.

    Hearty congratulations again on winning the contest -- a remarkable feat indeed!

    Rajiv


      Dear Rajiv,

      Thank you very much for your thorough reading of my essay!

      Let me try to address some of the issues you raised. Many have to do with (1) the difference between the 'intrinsic information' that I outlined, which is a measure of causal constraint, and the classic notion of information, Shannon information; (2) the distinction between causal constraint and predictability; and (3) taking the intrinsic perspective of the system itself vs. the typical extrinsic perspective of an observer of the system (the common perspective in physics).

      Causal autonomy and (in)determinism:

      Of course, in a deterministic system, given the full state of the system at some point in time, the system's future evolution is entirely predictable. What happens happens. However, performing this prediction takes a "god's" perspective of the entire system.

      The notion of causal autonomy defined in my essay applies from the intrinsic perspective of the system, which may be an open subsystem S of a larger system Z. What is measured are the mechanistic constraints of S on the direct past and future states of the subsystem S itself, using a counterfactual notion of causation. Roughly, this notion of causation is about what within the entire state of the system actually constrains what (not all mechanisms constrain each other all the time). So locally, right now, constraints on the system can come from within the system S and/or from outside it. If all parts of the system S constrain each other mechanistically, above a background of constraints from outside of the system, S is causally autonomous to some degree (which can be measured).

      Indeterminism within the system will actually only result in weaker intrinsic constraints.

      Representation:

      I think we can connect here in the sense that what I am arguing in my essay is that any emerged semantics have to come from the intrinsic cause-effect structure of the system itself. The crucial point is that whatever intrinsic meaning there is does not mean what it means because of a correlation with the outside world. There may be (and should be) a correlation between the world and the intrinsic cause-effect structure of the system; however, the intrinsic semantics (what it means for the system itself) must arise purely from the intrinsic structure, and cannot arise from the correlation. Only external observers can make use of a correlation between something external and something internal to the system, not the system itself.

      Mechanism and system states:

      The way a mechanism is defined here is specifically as a set of elements within a physical system (with cause-effect power). A mechanism must have at least two states. So M is a physical entity (it can be observed and manipulated) and m is its current state. A neuron or a logic gate, for example, could be a mechanism.

      Z is a system, let's say 4 binary logic gates {ABCD}, which at any moment is in a state, say ABCD = 1011.

      Intrinsic information and semantics:

      You wrote:

      "p(Zt-1 | mt) would be the probability of finding elements of Z in a

      particular specification of states at a time t-1 given a specific

      configuration of Logic Gates (LG). And p(Zt-1) would be the LG

      independent probability of the same set of states irrespective.

      Furthermore, as per the referred statement, the differences between

      the two probabilities constitutes the information of the cause and

      effect."

      This is correct. Let's say m_t is an AND gate 'A' in state ON, with 2 inputs ('B' and 'C'). p(BC_{t-1} | A=1) is 1 for state BC = 11 and 0 for all other states (BC = 00, 01, 10). Now the crucial point is that this is what it means to be an AND gate in state '1'. So the shape of the distribution is the semantics of the mechanism in its state. If the AND gate is OFF ('0'), it will specify a different distribution over BC. That means that the cause-effect structure of the system with the AND gate ON will be different from the cause-effect structure with the AND gate OFF. Of course this is a very impoverished notion of semantics, and it remains to be shown that in a sufficiently complex system, 'interesting' semantics can be constructed from compositions of these probability distributions (cause-effect repertoires). What I'm arguing is that the composition of all cause-effect repertoires is the only kind of information that is available to the system itself, so if there is intrinsic meaning, it must come from this intrinsic information (the system's cause-effect structure). It can't come from anywhere else (like a correlation with the external world). Certainly, though, my essay does not give a satisfying answer as to where and how exactly the semantics can be found in the cause-effect structure.
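
      Here is a tiny numerical sketch of this AND-gate example (my own illustrative code, with a uniform prior assumed over the past input states; the actual IIT calculus involves considerably more machinery):

      ```python
      from itertools import product

      # Cause repertoire of a deterministic AND gate A over its inputs B and C,
      # assuming a uniform unconstrained prior over the past input states.
      def cause_repertoire(a_state):
          """Return p(BC at t-1 | A = a_state at t) as a dict over input states."""
          states = list(product((0, 1), repeat=2))
          compatible = [bc for bc in states if (bc[0] & bc[1]) == a_state]
          return {bc: 1 / len(compatible) if bc in compatible else 0.0
                  for bc in states}

      print(cause_repertoire(1))  # all probability on BC = (1, 1)
      print(cause_repertoire(0))  # 1/3 each on (0, 0), (0, 1) and (1, 0)
      ```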

      Intrinsic goals:

      You wrote:

      "It is pertinent to note, all

      physical entities, as they evolve, they follow the path of 'least

      action', (usual units Energy x Time). So, if we ascribe 'goal of

      performing least action, all mechanical entities naturally do that."

      Precisely. Something like "goal of least action" would not be an intrinsic goal. It's like reaching perfect fitness in the task. The animat may do that without it possibly being a meaningful concept for the animat itself. Both the feedforward and the integrated animat evolved to perfect fitness. Yet, the feedforward one cannot even be seen as an intrinsic agent, a causally autonomous entity, in the first place.

      "knowing" vs. "being"

      You wrote:

      "... Therefore, a state of an entity is not a correlating information in itself. ..."

      This is crucial and is what I intended to express in the last paragraph of section III. A system, from the intrinsic perspective, does not "have" information about itself, it does not "know" about itself. Instead it specifies information by being what it is, at the current moment: the system (with all its mechanisms) in a specific state. And this is all there is.

      From the essay: "The cause-effect structure of a causally autonomous entity describes what it means to be that entity from its own intrinsic perspective."

      You wrote:

      => There is a big leap here. It is unclear 'what it means to be that entity'. Though it may be so, why and how it is so is not worked out in this essay. Considering that all an element correlates with is the states of other connected elements, the constrained element may have state information about other elements. Furthermore, while the process of unification is outlined by stating that each element affects every other in turn, how the semantic unification takes place is left out.

      I completely agree. My argument is merely that a) something like intrinsic meaning obviously exists for us humans, and b) the only way it could possibly arise (in a non-dualist framework) is from the system's cause-effect structure. And I hope that I have given convincing arguments that the only intrinsic information there is, is the system's cause-effect structure, while something like correlations with the world can only be meaningful for an external observer.

      References:

      The word limit unfortunately didn't allow for more detail about the evolutionary algorithm. However, in some sense this is irrelevant for the rest of the argument. That the animats in Fig. 2 are two solutions that actually evolved via selection and adaptation merely makes them more convincing as models of natural systems with seeming agency. While I'm very happy about your interest in animat evolution, the references to the actual scientific papers should rather be seen as proof that I didn't just make things up; they are not crucial for the argument made. The same goes for the R = 0.6 value: the only relevant point is that the R that was measured is too low to possibly allow for an intrinsic representation of all task-relevant features (less than 1 bit), even in animats that perform the task well.

      Thank you again for your insightful comments!

      Best,

      Larissa


      Dear Larissa Albantakis,

      You wrote: "I hope that I have given convincing arguments that the only intrinsic information there is, is the system's cause-effect structure, while something like correlations with the world can only be meaningful for an external observer."

      I am ashamed for having overlooked your essay so far due to its cryptic title.

      I still see you as somewhat attacking open doors, at least from my perspective. Admittedly, I share in part the opinion of Ritz (though of course not his preference for emission theory) in his dispute with Einstein that ended in the famous agreement to disagree.

      Since you are with Templeton World Charity Foundation, I don't expect you to accept in public my criticism of unreasonable humanity as endangering mankind.

      Eckard Blumschein
