Dear Simon,

Good to hear from you. Your comment made my day, as you indeed captured the essence of my essay. The animats are such a great model system because they force one to consider how proposed solutions to intrinsic meaning, based on "information processing", "models about the environment", etc., would actually be implemented. Most of the time these ideas are presented abstractly, sound really great, and resonate with many people, but on closer consideration fail to pass the implementation test.

With respect to the question of dynamics vs. computation, and whether the gas in the seminar room performs a computation, David Chalmers addressed a similar point here: Chalmers, D.J. (1996). Does a rock implement every finite-state automaton? Synthese 108, 309-333. It's about mapping any kind of computation onto a system that can assume various states. I think the conclusion is that in order to say that two systems perform the same computation, it is not sufficient for them to have a dynamical sequence of states that can be mapped onto each other. Instead, there has to be a mapping of all possible state transitions, which basically means the same causal structure, i.e. a mapping of the causal implementation.

Along these lines, computation, in my view, requires knowing all counterfactuals. That is, to know that an AND gate is an AND gate and performs the AND computation, it is not sufficient to know that it transitions from 11 -> 1. One needs to know all possible input states (all possible counterfactuals) and their resulting output states.
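To make this concrete, here is a small toy script in Python (just an illustration I put together for this discussion, nothing from the essay). It shows that the single observed transition 11 -> 1 is compatible with several different gates, and that only the full set of counterfactual input-output pairs singles out the AND computation:

# Each gate is specified by its full truth table, i.e. by all counterfactuals.
AND  = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
OR   = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
XNOR = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}

observed = {(1, 1): 1}  # the only transition we happened to observe: 11 -> 1

# All three gates are consistent with that single observation ...
print(all(gate[inp] == out
          for gate in (AND, OR, XNOR)
          for inp, out in observed.items()))    # True

# ... but their complete truth tables (all possible input states) differ.
print(AND == OR, AND == XNOR)                   # False False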

With respect to game theory, I know that Chris Adami and Arend Hintze successfully applied the animats to solve games such as the prisoner's dilemma, but we haven't measured their integrated information in such environments yet. Memory does play a crucial role for evolving integration. Games that can be solved merely by "reflexes" based on current sensory inputs will produce mostly feedforward systems. Evaluating the animats on multiple-game versions with different pay-off matrices should indeed be interesting. Thank you for bringing that up! Relatedly, we are currently evaluating "social" animats that can sense other agents and mostly replicated the past results.

Best regards,

Larissa

Dear Larissa Albantakis,

Nice essay on animats.

Your ideas and thinking are excellent; for example:

By examining the informational and causal properties of artificial organisms ("animats") controlled by small, adaptive neural networks (Markov Brains), this essay discusses necessary requirements for intrinsic information, autonomy, and meaning.

Some of the animats even lack the conditions to be separate causal entities from their environment. Yet, observing their behavior affects our intrinsic mechanisms. For this reason, describing certain types of directed behaviors as goals, in the extrinsic sense, is most likely useful to us from an evolutionary perspective.

A good idea, I fully agree with you.

At this point I want to ask you to please have a look at my essay, where the reproduction of Galaxies in the Universe is described. The Dynamic Universe Model is another mathematical model of the Universe. Its mathematics shows that the movement of masses has a purpose or goal, and that different Galaxies will be born and die (quench), etc. Just have a look at my essay, "Distances, Locations, Ages and Reproduction of Galaxies in our Dynamic Universe", where the UGF (Universal Gravitational Force), acting on each and every mass, creates a direction and purpose of movement.

I think intention is inherited from the Universe itself by all biological systems.

For your information, the Dynamic Universe Model is totally based on experimental results. Here, in the Dynamic Universe Model, space is space and time is time, at the cosmological level or at any other level. In classical general relativity, space and time are convertible into each other.

Many papers and books on the Dynamic Universe Model have been published by the author on unsolved problems of present-day physics. Examples include 'Absolute Rest frame of reference is not necessary' (1994), 'Multiple bending of light ray can create many images for one Galaxy: in our dynamic universe', papers about 'SITA' simulations, 'Missing mass in Galaxy is NOT required', 'New mathematics tensors without Differential and Integral equations', 'Information, Reality and Relics of Cosmic Microwave Background', 'Dynamic Universe Model explains the Discrepancies of Very-Long-Baseline Interferometry Observations', and, in 2015, 'Explaining Formation of Astronomical Jets Using Dynamic Universe Model', 'Explaining Pioneer anomaly', 'Explaining Near luminal velocities in Astronomical jets', 'Observation of super luminal neutrinos', 'Process of quenching in Galaxies due to formation of hole at the center of Galaxy, as its central densemass dries up', and 'Dynamic Universe Model Predicts the Trajectory of New Horizons Satellite Going to Pluto'. Four books were also published. Book 1 shows that the Dynamic Universe Model is singularity-free and free of body-to-body collisions, Books 2 and 3 explain the equations of the Dynamic Universe Model, and Book 4 deals with the prediction and finding of blue-shifted Galaxies in the universe.

With axioms like... No Isotropy; No Homogeneity; No Space-time continuum; Non-uniform density of matter (Universe is lumpy); No singularities; No collisions between bodies; No Blackholes; No wormholes; No Bigbang; No repulsion between distant Galaxies; Non-empty Universe; No imaginary or negative time axis; No imaginary X, Y, Z axes; No differential and integral equations mathematically; No General Relativity, and the Model does not reduce to General Relativity under any condition; No creation of matter like in the Bigbang or steady-state models; No many mini Bigbangs; No Missing Mass; No Dark matter; No Dark energy; No Bigbang-generated CMB detected; No Multi-verses; etc.

Many predictions of the Dynamic Universe Model came true, like blue-shifted Galaxies and no dark matter. The Dynamic Universe Model gave many results that are otherwise difficult to explain.

Have a look at my essay on the Dynamic Universe Model, and at its blog, where all my books and papers are available for free download...

http://vaksdynamicuniversemodel.blogspot.in/

Best wishes for your essay.

For your blessings please.

=snp. gupta

Dear James,

Thank you for your comment and taking the time to read my essay! Indeed, in these artificial evolution experiments, some kind of selection bias has to be assumed that leads to certain systems being preferred over others. In the absence of biased selection, causal structure may emerge, but will not be stable for more than a couple of generations.

I read your essay about spontaneity with much interest. A possible connection could be that, in the described causal analysis, we assume that any element within the system that is not being constrained is at maximum entropy, and the cause-effect power of a mechanism is also evaluated against maximum entropy. Certainly, though, my analysis starts by assuming physical elements with at least two states that can causally constrain each other, and thus leaves room for more fundamental concepts.

The point I want to make with the essay is actually quite similar to Searle's Chinese Room argument, but aims at least at a partial solution. The two animats perform the same task, but in the feedforward case there is no system that could possibly have any understanding of the environment (or anything else), as there is no system from the intrinsic perspective in the first place. This animat would correspond to the lookup tables. The other animat does have a small but nevertheless integrated core that constrains itself and thus at least forms a minimal system that exists from the intrinsic perspective above a background of influences from the environment.

Best regards,

Larissa

Dear Larissa,

nice and dense essay! One of the aspects that intrigued me most and that, I believe, adds much originality to your work, is the attempt to tackle goal-oriented behaviour under the perspective of the 'intrinsic' features of the agent - beyond what appears to the external observer. However, I'm still trying to understand clearly the sense in which the use of internal cause-effect information, based on conditional state distributions and the IIT tools, should yield a 'more internalised' notion of goal-oriented behaviour for an open subsystem than, say, the plain detection of a local entropy decrease. In which sense is the former more internal? Does it refer to an issue of internal interconnection architecture, high Phi values, and ultimately growing consciousness?

One of the most attractive (at least to me) hard questions related to the 2017 Essay Contest is the difference between re-acting and acting: when and how does the ability to act spontaneously, as opposed to reacting (to, say, the arrival of pieces of different sizes) arise in artificial or natural systems? As far as I have seen, none of the essays has tackled this issue directly. What (new?) information-theoretic 'trick' is required for obtaining an animat that starts doing something autonomously and for no reason, i.e., not as a reaction to some external stimulus? In your opinion, is it conceivable to characterize (and synthesize) this skill just in the framework of IIT [... yielding an animat that stops catching pieces and says "Larissa, give me a break!" :-] ?

Another small question: in the simulation of [8] it seems that fitness increases visibly, while Phi doesn't. In general, shouldn't one expect them to roughly grow together?

Thank you!

Tommaso

http://fqxi.org/community/forum/topic/2824

    Dear Tommaso,

    Thank you very much for your comment and insightful questions. In contrast to something like measures of local entropy decrease, the IIT formalism does not just yield a quantity (integrated information) but also a characterization of the system: its cause-effect structure, the set of all system mechanisms that constrain the past and future states of the system itself. The mechanisms specify everything within the system that makes a difference to the system itself. In this way I don't just find out whether there is an intrinsic system in the first place, but also get a picture of its capacity for 'understanding', of what matters to the system and what cannot possibly matter to it because it doesn't have the right mechanism to pick it up. I hope this helps. Characterizing more precisely how intrinsic meaning could arise from the cause-effect structure is work in progress in our lab.

    I completely agree on your point regarding 'acting' vs. 'reacting'. In fact, this is basically the topic of my fellowship project for the next 3 years. Our goal is to quantify when and how strongly an action was caused by the system as opposed to the environment. Autonomous action here means that the system's action is not entirely driven by its current sensory inputs from the environment. Making a choice based on memory, however, would count as autonomous. If you look at the website (ref [8]) under the 'task simulation' tab and set it to trial 55, for example, you can see that the animat already shows a little bit of autonomous behavior in that sense. It first follows the block, then goes in the other direction, then follows again. This means that its behavior didn't just depend on the sensory inputs, but is context-dependent on its own internal state. This is a little different than your definition of autonomy ('doing something for no reason'). That could be achieved with just a little noise inside the system.

    As for your last question: the issue with a trial-by-trial correlation of Phi and fitness is that an animat can always have more Phi than is necessary, as there is no real cost to being more integrated than needed, the way the simulation is set up. Moreover, fitness can also increase due to, e.g., a connection from a sensor to a motor (a reflex), which would not increase integration. In practice, for complex tasks, there should be a lower limit on the amount of integration required for a given task, given constraints on the number of elements, connections, and time available to perform the computations, as integrated systems are more economical than feedforward systems.

    Best regards,

    Larissa

    Dear Larissa,

    I carefully read your essay. Your approach and mine are radically different, but this precisely could be a sufficient reason to have a good discussion.

    Your essay has a great merit. You honestly describe the constraints a given system has to master so that we can ascribe goals to the system in question: "A system can only 'process' information to the extent that it has mechanisms to do so." And: "The cause-effect structure of a system in a state specifies the information intrinsic to the system, as opposed to correlations between internal and external variables. If the goals that we ascribe to a system are indeed meaningful from the intrinsic perspective of the system, they must be intrinsic information, contained in the system's cause-effect structure (if there is no mechanism for it, it does not matter to the system)." Finally: "Yet, the system itself does not 'have' this intrinsic information. Just by 'processing' information, a system cannot evaluate its own constraints. This is simply because a system cannot, at the same time, have information about itself in its current state and also other possible states."

    Shortly speaking, for the concept "goal" related to any system to have a meaning, the system in question must be equipped with a lot of natural or artificial devices, whereas the set of the latter is supposed to be configured in an exactly determined way.

    Suggesting that the foregoing is easy to say, but much less easy to realize, and even to model, you are absolutely right.

    Well, but do you not agree that the problem is much more fundamental?

    To specify the information intrinsic to any system, the required internal causal structure of this system must be "able to specify" information, and this "ability" presupposes other "abilities" like the "ability" to recognize information before and after being specified. So, the more fundamental question is: where do these "abilities" come from?

    Yes, "by 'processing' information, a system cannot evaluate its own constraints", but the very fact of evoking systems "'processing' information" already implies the presence of "information processors" within these systems, and once again, we have to ask the more fundamental question is: where do these "information processors" come from?

    And so on.

    These "more fundamental" questions which until further notice have no answers converge to the problem of generalized irreversibility. In a classical manner going back to Clausius, generalized irreversibility can be formulated as follows: For any system S apparently violating irreversibility, there is a "wider" system S' "comprising" S, so that at the level of S', irreversibility is reestablished. In the classical formulation, notions like "wider systems" or "systems 'comprising' other systems" are rather vague, and so not really appropriated for taking into account intrinsic information or integrated information you are focusing on.

    Now, in order to touch the essential without overly formal developments, let us consider the good old Maxwell's Demon operating on its Boltzmannian gas. In your eyes, Maxwell's Demon perhaps belongs to ancient history, and most authors, for diverging motivations going from Landauer's "principle" to whatever it may be, believe that the Demon is not able to accomplish its mission. But on the other hand, the Demon represents an intuitive means of grasping the more fundamental problem behind all the superstructure problems concerning integrated information. So let us proceed as if Maxwell's Demon could do its job.

    Operating in the well-known way, the Demon pushes the gas back to its ordered initial state. Under single-step selection conditions, the improbability of the transition would be tremendously high. Considered alone, the gas expresses a genuine violation of irreversibility. In fact, the gas is not alone, because of the Demon's presence. Here the "wider system" reestablishing irreversibility is to be interpreted as a system with integrated information, and so all the questions arising with regard to information integration arise again. It is easy to "imagine" - like Maxwell - the existence of the Demon. By contrast, it would be hard - infinitely hard - to equip a mesoscopic, perhaps I should say microscopic, device so that it is able to detect instantaneously the motion state - velocity, acceleration, direction - of each molecule present in the neighborhood of the gate, knowing that in the sole neighborhood of the gate you find an unimaginable number of molecules. Further, the microscopic Demon has to be able to take the right decision instantaneously. And then, the Demon must be conditioned to be a serious, conscientious scientist, meticulously respecting the experimental protocol, and not a trouble-maker misusing its quasi-infinite intelligence to make bad jokes or something else, and this point presupposes moral qualities. And so on. Yet, the foregoing is not just an easy caricature. A task like re-ordering a disordered gas - a simple task in comparison with other tasks implying aims and/or intentions - needs information integration that we cannot master, neither technologically nor intellectually. I think you agree.

    But now we arrive at the essential point: beyond integration problems, there remains the more fundamental problem of generalized irreversibility. Even if the Demon, against Landauer, Szilard, Brillouin, Costa de Beauregard ..., actually managed to "generate work by entropy reduction", generalized irreversibility would not be violated: the transition of the gas from maximal disorder to initial order under single-step selection conditions is tremendously improbable, yes, but the "emergence" of the Demon under the same single-step selection conditions is infinitely more improbable. So, as in any case of generalized irreversibility, the apparent violation of irreversibility by the gas is "paid for" by a still higher improbability at the level of the "wider" system consisting of the gas and the Demon.

    As long as the devices required by information integration are given, information integration is hard to formalize, hard to realize, but at least we can conceive it to some extent.

    By contrast, in a context like evolution where the devices required by information integration are not given, we have to ask where they come from, and at this level of analysis we merely are lost.

    So, in my own paper Daring Group-theoretic Foundations of Biological Evolution despite Group Theory I try to tackle the problem at source, at the fundamental level concerning irreversibility.

    Just because of the differences between your really interesting paper and mine, a discussion about both papers would be a pleasure for me.

    All the best; good luck

    Peter

      Hi Larissa,

      I was pleasantly surprised reading your essay. It reminded me of "Vehicles" by Valentino Braitenberg, only with the vehicles replaced by animats, which are much more interesting goal-directed creatures.

      Many other scientists would be very tempted to answer the essay question simply by saying that the MUH (Mathematical Universe Hypothesis) is true. And I was completely surprised by: "While we cannot infer agency from observing apparent goal-directed behavior, by the principle of sufficient reason, something must cause this behavior (if we see an antelope running away, maybe there is a lion). On a grander scale, descriptions in terms of goals and intentions can hint at hidden gradients and selection processes in nature, and inspire new physical models."

      I believe your agenda is something like: let us pursue this concept of agency and see where it takes us. This is the essence of science.

      Thanks for your excellent essay,

      Don Limuti

      Question: Is there a way to "play" with your animats online?

        Dear Don,

        Thank you for your nice comment. The artificial evolution of the animats takes quite a bit of computational power, so there is no easy way yet to play around with them. However, there is a little video of one evolution and the behavior of one animat on http://integratedinformationtheory.org/animats.html

        There is, however, an online interface to calculate the integrated information of little systems of logic gates: http://integratedinformationtheory.org/calculate.html
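        For anyone who prefers a script to the point-and-click interface, the same kind of calculation can also be run with the openly available PyPhi Python package. The following is only a minimal sketch added for illustration; exact function names and conventions may differ between PyPhi versions:

        import numpy as np
        import pyphi

        # Build a state-by-node transition probability matrix for a tiny network of
        # three binary logic gates:  A' = OR(B, C),  B' = AND(A, C),  C' = XOR(A, B).
        # Rows follow PyPhi's default little-endian state ordering (A varies fastest).
        def next_state(a, b, c):
            return (int(b or c), int(a and c), int(a != b))

        tpm = np.array([next_state(a, b, c)
                        for c in (0, 1) for b in (0, 1) for a in (0, 1)])

        network = pyphi.Network(tpm, node_labels=('A', 'B', 'C'))
        state = (1, 0, 0)                       # current state of (A, B, C)
        subsystem = pyphi.Subsystem(network, state)

        print(pyphi.compute.phi(subsystem))     # integrated information (big Phi)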

        Best regards,

        Larissa

        Hi Larissa,

        I wrote you a longer e-mail that I just sent, but in general I found your essay well-written and extremely stimulating. I'm still not entirely convinced that you've answered your own question concerning whether or not systems can have "goals." You suggest that perfect fitness is a goal, but to me, a goal is an internal thing whereas it would seem to me that perfect fitness is largely a response to external stimuli (and by external, I include things like viruses and illness since I'm thinking of goals as related to consciousness here). But maybe I'm wrong. Who knows. Nice essay, though.

        Ian

          Thank you.

          Making a choice based on internal memory, as opposed to being triggered by external events, is certainly a step towards autonomy, but again you need some internal trigger that induces you to look up that good or bad experience in your memory, compare with the current situation, and decide how to (re)act. You mention that 'doing something for no reason' - perhaps the perfect form of agency - could be achieved with just a little noise inside the system. I also thought about this. You mention it cursorily, but I wonder whether this couldn't in fact be the key to implement agency. Quantum fluctuations have already been envisaged (e.g. by Lloyd) as the random generators at the basis of the computational universe edifice: maybe they play a role also in triggering reactions that appear otherwise as self-triggered, spontaneous actions.

          Best regards

          Tommaso

          Dear Tommaso,

          Noise could play an important role for innovation, exploration, and creativity. Yet, if you take autonomy to be causal power of the system itself, noise would not count since it doesn't actually come from within the system but literally out of nowhere. The causal power of the system itself would go down with noise, just as it would decrease through external inputs that drive the system. But I think the divide is just that we have two different views on autonomy (paralleled by the different possible views on free will). One emphasizes the 'free' part: 'being able to act otherwise', making choices without reason. The other emphasizes the 'will' part: 'being determined by oneself as opposed to outside stimuli'. A willed decision would be one that strongly depends on you, your memories, and internal structure, and your best friend can easily predict your choice. This latter sense of autonomy is possible in a deterministic world.

          Best regards,

          Larissa

          Dear Larissa,

          this is a nice summary of some of your own and related work. Now I want to learn more about integrated information theory. Thank you!

          After reading many essays here I start seeing crosslinks everywhere...

          When you wrote "Think of a Markov Brain as a finite cellular automaton with inputs and outputs. No mysteries." it immediately reminded me of Joe Brisendine's description of bacterial chemotaxis.

          And later, when you wrote "one might ask whether, where, and how much information about the environment is represented in the animat's Markov Brain" I had to think of Sofia Magnúsdóttir's essay, which qualitatively analyzes the role of the models an agent must have of its environment.

          I'd love to replace (in my essay) my clumsy conditions of being "sufficiently rigid and sufficiently flexible" by something less vague; maybe concepts from integrated information theory could help.

          Cheers, Stefan

          Hi Ian,

          Thanks for your comment. I'll be answering your email shortly. For the discussion here, I agree with you that having goals is necessarily intrinsic. That's why I put 'goal' in quotes any time I referred to it as 'apparently having goals, as ascribed to the agent by some outside observer'. The essay tries to make the point that neither of the animats actually has the goal of perfect fitness intrinsically, although an outside observer would be tempted to describe their behavior as 'having the goal to catch and avoid blocks'.

          I then give a necessary condition for having any kind of intrinsic information: being an integrated system that is, to some extent, causally autonomous from the environment. I moreover claim that the only way to find intrinsic goals is to look at the agents' intrinsic cause-effect structure, and that correlations with the environment won't get us there. What kind of cause-effect structure would correspond to having a goal intrinsically I cannot answer (yet). But there is hope that it is possible, since we know that humans have goals intrinsically.

          Best,

          Larissa

          Dear Larissa,

          I read your essay with interest but found the technical descriptions of the animats beyond my comprehension, although I am very interested in cellular automata (CA), which seem to resemble Markov Brains? Anyway, you have certainly attempted a serious answer to the essay question.

          My Beautiful Universe Model is a type of CA.

          I was interested that you were a sleep researcher - I have recently been interested in how the brain generates and perceives dreams, and noted some interesting observations on the threshold of waking up, when I saw ephemeral geometrical patterns superposed on faint patterns in the environment. As if the brain were projecting templates to fit to the unknown visual input.

          Another, more severe, experience along these lines was the 'closed eye' hallucinations I experienced due to surgical anesthesia, which I documented here. The anesthesia seems to have suspended the neural mechanisms that seem to separate dreams from perceived reality, and I could see both alternately while the experience lasted.

          I wish you the best in your research. It is probably beyond your interest, but do have a look at my FQXi essay.

          Cheers

          Vladimir

            Larissa,

            We are Borg. Species a1-c3, you will be assimilated. We are Borg. Resistance is futile:-)

            Many thanks for an essay that was both enjoyable and enlightening. I wonder if the animats figure out that they are size 2?

            Are there any simulations where the animats of size 1 and size 3 also evolve using similar rules? BTW, what would an animat of size 1 eat? Are there any simulations where the animats can cooperate to attack larger animats? Maybe I run from a lion but me and my buddies will attack a lion if we've got some weapons ..... and have been drinking some courage:-)

            You clearly present the meaning of useful information and the difference between information and being ... that is a key concept that many of the essays do not present.

            Best Regards and Good Luck,

            Gary Simpson

              Dear Larissa,

              thanks for a genuinely insightful essay. At several points, I was afraid you'd fall for the same gambit that's all too often pulled in this sort of discussion---namely, substituting meaning that an external observer sees in an agent's behaviour for meaning available to the agent itself. At each such juncture, you deftly avoided this trap, pointing out why such a strategy just won't do. This alone would have made the essay a very worthwhile contribution---it's all too often that, even in scholarly discussion on this issue, people seem insufficiently aware of this fallacy, and (often inadvertently) try to sell mere correlation---say, the covariance of some internal state with an external variable---as being sufficient for representation.

              But you go even further, giving an argument why the presence of integrated information signals the (causal) unity of a given assemblage. Now, it's not quite clear to me why, exactly, such causal unity ought to bestow meaning available to the agent. I agree with your stipulation that intrinsic meaning can't arise from knowing: that simply leads to vicious regress (the homunculus fallacy).

              Take the above example of correlated internal states and external variables: in order to represent an external variable by means of an internal state, their covariance must, in some sense, be known---in the same way that (my favorite example) one lamp lit at the tower of the Old North Church means 'the British will attack by land' only if whoever sees this lamp also knows that 'one if by land, two if by sea'. Without this knowledge, the mere correlation between the number of lamps and the attack strategy of the British forces does not suffice to decipher the meaning of there being one lamp. But such knowledge itself presupposes meaning, and representation; hence, any analysis of representation in such terms is inevitably circular.

              But it's not completely clear to me, from your essay, how 'being' solves this problem. I do agree that, if it does, IIT seems an interesting tool to delineate boundaries of causally (mostly) autonomous systems, which then may underlie meaningful representations. I can also see how IIT helps 'bind' individual elements together---on most accounts, it's mysterious how the distinct 'parts' of experience unify into a coherent whole; to take James' example, how from ten people thinking of one word of a sentence each an awareness of the whole sentence arises. But that doesn't really help getting at those individually meaningful units to be bound together, at least, not that I can see...

              Anyway, even though I don't quite understand, on your account, how they work, I think that the sort of feedback structures you identify as being possible bearers of meaning are exactly the right kinds of thing. (By the way, a question, if I may: does a high phi generally indicate some kind of feedback, or are there purely feedforward structures achieving high scores?)

              The reason I think so is that, coming from a quite different approach, I've homed in on a special kind of feedback structure that I think serves at least as a toy model of how to achieve meaning available to the agent myself (albeit perhaps an unnecessarily baroque one): that of a von Neumann replicator. Such structures are bipartite, consisting of a 'tape' containing the blueprint of the whole assembly, and an active part capable of interpreting and copying the tape, thus making them a simple model of self-reproduction (whose greatest advantage is its open-ended evolvability). In such a structure, the tape influences the active part, which in turn influences the tape---a change in the active part yields a change in the tape, through differences introduced in the copying operation, while the changed tape itself leads to the construction of a changed active part. Thus, the two elements influence another in a formally similar way to the two nodes of your agents' Markov Brains.

              What may be interesting is that I arrive at this structure from an entirely different starting point---namely, trying to exorcize the homunculus mentioned above by creating symbols whose meaning does not depend on external knowledge, but which are instead meaningful, in some sense, to themselves.

              But that's enough advertisement for my essay; I didn't actually want to get into that so much, but as I said, I think that there may be some common ground both of our approaches point towards. Hence, thanks again for a very thought-provoking essay that, I hope, will go far in this contest!

              Cheers,

              Jochen

                Hi Gary,

                Thank you for your time and the fun comment.

                We are looking at social tasks where more than one animat interacts in the same environment. There are interesting distinctions that need to be explored further. Something like swarming behavior may require very little integration, as it can be implemented by very simple rules that only depend on the current sensory input. Real interaction, by contrast, increases context dependency and thus, on average, leads to higher integration. All work in progress.

                Best regards,

                Larissa

                Dear Vladimir,

                Thank you for your comment and for taking the time to read my essay. Indeed, Markov Brains are closely related to cellular automata; the only differences are that each element can have a different update function and that the Markov Brain has inputs from and outputs to an environment (but this could also be seen as a section of a cellular automaton within a larger system).
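                As a toy sketch (just an illustration for this discussion, not the actual animat code), one update of a Markov Brain can be written much like a CA step, except that every element has its own little truth table and some elements are wired to sensors and motors:

                # Toy Markov Brain: each element has its own inputs and its own update
                # table; sensor states are set by the environment, motors are read out.
                wiring = {
                    'H1': (('S1', 'H2'), {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}),  # OR-like
                    'H2': (('S2', 'H1'), {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}),  # AND-like
                    'M1': (('H1', 'H2'), {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}),  # copies H2
                }

                def step(state, sensors):
                    """One synchronous update of all non-sensor elements."""
                    state = dict(state, **sensors)   # the environment sets the sensor states
                    return {elem: table[tuple(state[i] for i in inputs)]
                            for elem, (inputs, table) in wiring.items()}

                state = {'S1': 0, 'S2': 0, 'H1': 0, 'H2': 0, 'M1': 0}
                print(step(state, {'S1': 1, 'S2': 0}))   # -> {'H1': 1, 'H2': 0, 'M1': 0}

                A classical cellular automaton would then be the special case in which every element uses the same table and there are no sensor or motor elements.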

                I am very sympathetic to the idea that the universe is in some ways a giant CA. Partly because it would make the connection between my own work and fundamental physics very straightforward, and partly because of the underlying simplicity and beauty.

                I am not really a sleep researcher myself. Yet, dreams are an important part of consciousness research. You might find the following work by my colleagues of interest: http://biorxiv.org/content/early/2014/12/30/012443.short

                It shows that the responses to seeing a face while dreaming, for example, are very similar to those when actually seeing a face while awake. Being awake can, in this view, be seen as a "dream guided by reality". At least some hallucinations, then, are a mixture of the two states.

                All the best,

                Larissa

                Dear Peter,

                Thank you very much for your insightful comment. I have now had the time to read your essay too, and I liked it a lot. I completely agree that there is a fundamental problem of how selection can arise in the first place; I hope I made this clear at the very beginning of my essay. In my work, I program selection into the world. What I want to demonstrate is that even if there is a clear-cut selection algorithm for a specific task, this doesn't necessarily lead to fit agents that have intrinsic goals. As you rightly point out, it is a big question where such selection mechanisms come from in nature.

                Best regards,

                Larissa