-- "'self=(self)' (I interpret this as a kind of set notation?)"

Yes! It's the language of "non-well-founded sets", where a set need not be "founded" on objects distinct from itself.

-- "The homunculus regress is vicious, because it needs to be completed before, so to speak, the first element of the hierarchy is done---i.e. before a given representation has meaning to the lowest-order homunculus..."

In "self = (self)" there is no hierarchy of order between "selves." There is only one "self": "self = (self)."But I do think that hierarchy is relevant. In "The Knowledge Level Hypothesis," there is a hierarchy of analyses-- One could analyze the system in terms of the wave functions of the electrons in circuit components; or in terms of the voltage and amp levels at circuit components; or in terms of microcode in the processor; or in terms of the assembly language; or in terms of the higher level language used (e.g. C++, Pharoh); or in terms of the formal specification of the algorithms involved; or finally, in terms of the "knowledge level" where there are knowledge, *goals,* and actions. The knowledge level hypothesis says there is no higher level than this useful for analysis.

-- "...an algorithm that calls itself before producing a certain output: no matter how long the algorithm is run, the output is never produced."

As I understand it, that's the classic "halting problem." Pragmatically, in a real-world computer the called routine would never return to the memory address of execution. But I mean something different: "self = (self)" will terminate when all its possibilities are zeroed. But during its lifetime, its possibilities are not all zeroed!

Dear Stefan Weckbach,

thank you for reading my essay, and especially for your comments. I think one thing I must clarify right off the bat: I don't claim that my von Neumann minds are a model of full-fledged consciousness, by which I mean especially phenomenal consciousness---the 'what-it's-likeness' of being in a conscious state.

But I think this problem can be usefully separated from the problem of intentionality---that is, from the question of how mental states come to be about things external to them. So, while I remain silent on the issue of consciousness per se, I try to at least outline a possible solution to the question of how mental representations can in fact come to represent something.

To this end, I draw an analogy with von Neumann replicators in a CA-environment: they contain information by simply being shaped, evolutionarily, by that environment; they can access their own information, and generate appropriate behaviour. In this sense, they're like a picture that can look at itself, thus getting rid of the homunculus.

So the way a mental representation arises is roughly this: an organism with a CA-brain encounters a certain environment, and receives certain stimuli; these stimuli set up certain conditions within the CA-brain; the CA-brain contains some population of von Neumann replicators, and, via a selection process, this population will eventually come to be shaped by, or adapted to, those CA-conditions---and with them, the environment that set them up (if only indirectly---but then, all access to the world is ultimately indirect).

In this way, replicators surviving the selection process contain information about the environment. Moreover, this information is accessible to themselves, for e.g. building a copy. But equally well, the information may be used to guide behaviour. Thus, the dominant replicator (whatever, exactly, that may mean) comes to be in a position to guide behaviour (it gets put into the driving seat), and then steers the organism in accord with the information it retrieves about itself, and by extension, the environment.
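To make the selection picture a bit more tangible, here is a deliberately crude sketch---not the actual CA construction from the essay (the replicators there are patterns in a cellular automaton, not bit strings), and all names in it are made up purely for illustration. Replicator 'genomes' are bit strings, the CA-conditions set up by the stimuli are summarized as a target string, and a replicator's fitness is simply how well it matches those conditions:

```python
import random

GENOME_LENGTH = 16
# Stand-in for the CA-conditions set up by the environment via the senses.
ca_conditions = [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    """How well a replicator is adapted to the current CA-conditions."""
    return sum(g == c for g, c in zip(genome, ca_conditions))

def mutate(genome, rate=0.05):
    """Copying is imperfect: each bit may flip with a small probability."""
    return [1 - b if random.random() < rate else b for b in genome]

# Start from a random population and let selection do its work.
population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(50)]

for generation in range(200):
    # Fitter replicators are more likely to place (imperfect) copies of
    # themselves into the next generation.
    weights = [fitness(g) + 1 for g in population]
    population = [mutate(random.choices(population, weights)[0])
                  for _ in population]

dominant = max(population, key=fitness)
# The dominant replicator now 'contains information about' the conditions
# that shaped it: its genome largely mirrors ca_conditions.
print(fitness(dominant), "/", GENOME_LENGTH)
```

The only point of the toy is the final comment: nothing ever 'tells' the replicator what the environment looks like; it simply ends up mirroring it by virtue of having been selected under it.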

None of this, as I said, entails that there's anything it is like to be such a CA-brained organism. I think the problem of phenomenology, the hard problem of consciousness, lies elsewhere, and will probably require entirely new tools for its solution. In fact, I share your suspicion that data processing does not suffice to create any kind of inner world---but note that my approach shouldn't be construed as 'only' data processing: while one can use cellular automata in this way, a cellular automaton pattern is just a concrete physical entity, like a stone or a telephone; and it's really in this sense that I conceive of them, as entities in their own right, rather than just data structures. But again, I see no reason to believe that even this ought to suffice for phenomenal experience.

Dear Lawrence,

thank you for commenting. I'm not sure the selection mechanism you outline really works---I see a danger of hidden circularity: how do I select for a 'better match' to the environment, without already having a representation of the environment in hand? Whatever tells me that a given replicator matches the exterior well already contains the information that I want to evolve within the replicator, so I could just use that instead. Or am I misunderstanding you?

Regarding Dennett, yes, there is some similarity to his multiple drafts: as different replicators come to dominance, different 'versions' of experience---or at least, of beliefs about the world---arise in the agent, as in the example where an agent mistakes a jacket in a dark room for a stranger. There, the difference between both is clear, but there might also be cases where the earlier set of beliefs is erased, such that there is no introspective evidence of having believed otherwise, but where we can extract it by behavioural experiments---much as in Dennett's model.

Your suggestion towards a maximum entropy principle is interesting. Indeed, in some sense, we should be able to arrive at the 'most sensible' set of beliefs of an agent about the world by maximizing entropy---that is, we should find the set of beliefs with maximum entropy subject to the constraints set up by the environment. I wonder if this is possible with a sort of genetic/evolutionary approach?

Dear Harry,

thank you for your comment. The topic of this essay contest is 'Wandering Towards a Goal - How can mindless mathematical laws give rise to aims and intentions?'.

To me, the key words here are goal, aims, and intentions: in order to have any of these, agents need the capacity for intentionality---that is, they need to be capable of having internal mental states directed at, or about, things (or events, or occurrences) in the world. To have the goal of climbing Mount Everest, say, you need to be able to have thoughts about Mount Everest; to intend an action, you need to be able to plan that action, for which you again need to be able to think about it, and so on.

Consequently, it is this aboutness---the aforementioned intentionality---that is the prerequisite for all goal-directed behaviour; my model then proposes a way in which such intentionality might arise in a natural world. Agents within which something equivalent to this model is implemented are able to represent the outside world (or parts thereof) to themselves, and consequently, to formulate goals, have aims, and take intentional action. Thus, the model is a proposal for how goals, aims, and intentions may come about in a natural world governed by 'mindless mathematical laws'.

Does this help answer your concern?

Hello Mr Szangolies,

Thanks for sharing your work. Congratulations also. It is an interesting approach, considering von Neumann's work on structure. It is always a question of hard drive and memory, and of input and output, with of course an arithmetical method of translation, logic, and a checking unit that is also logical in its generality. But adding to this a unity of codes when considering the mind and intentions seems really difficult, considering that the main gravitational codes are different from photons and binary information. That is why an AI is possible with the von Neumann structure, but not a free mind like us humans, because I believe that gravitation and souls are linked. We cannot approach the main singularities, which are personal. Like all singularities, in fact.

Best Regards

    It is true. How can we define what a meaning is, and how can we quantify the importance of a meaning for the synchronisations and sortings of codes and information? Nature seems to utilise spherical volumes and rotations. Lawrence is right in saying that selections by environments are important. How to rank them with a universal logic, in fact?

    Hi to both of you,

    Lawrence, what is this maximum entropy? A maximum Shannon entropy? Because if it is the maximum thermodynamical entropy or the maximum gravitational entropy, it is different. Could you tell me more, please?

    If we consider that information and the Shannon entropy can reach infinity, that is more logical than a maximum. The potential is simply infinite, as for the electromagnetic and gravitational information when we superimpose or add this information. A machine imitating the universe could be relevant for the sortings and synchronisations of codes. The evolutionary point of view is always relevant.

    A maximum entropy in information theory, then, is when we have all probabilities without constraints for the message, the signals. But I don't see how this concept could be derived? For what aims? Could you explain it to me, please?

    Yes, and "Wandering towards a goal" in the context of mathematical physics suggests to me the "fixed point problem."

    Say that in a shopping mall your goal is the sporting goods store. So you stand in front of the map of the mall.

    What enables you to plan a path towards your goal is that there is, on the map in front of you, the point "You are here." Which is where you are actually standing in the mall.

    Without this "fixed point" you would be faced with a random walk towards the goal (If, like me most times, you are unwilling to ask strangers).

    The fixed point-- "You are here"-- enables you more efficiently and effectively to move towards your goal.

    So to me, the key in an effective use of a map for moving towards a goal is FIRST to know where you are. (First understand "self.")

    After "self" is identified both in the real world and on the map, then a goal can be located on the map and a route planned towards that goal in the real world.

    But before that-- you have to know where the "self" is, and where it is imaged on the map.

    There may be a useful mnemonic for this: when moving towards a goal, first know what the self is so you can locate it in both places-- in other words, "Know thyself."

    Well, the basic idea of maximum entropy methods is that you should always choose the probability distribution with the maximal amount of entropy (information-theoretic Shannon entropy) that is compatible with your data. In this way, you guarantee that you haven't made any unwarranted assumptions about anything not covered by your data (this is very informal, but I hope the point will get across).

    So in a sense, it just says that in an environment about which you have incomplete information, the optimal (in some Bayesian sense) strategy is to assume the maximum uncertainty compatible with the data you already have.
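    As a toy illustration of that principle (the classic 'Brandeis dice' example, nothing specific to the essay): suppose all you know about a die is that its average roll is 4.5. The maximum entropy prescription then picks, among all distributions over the six faces compatible with that single datum, the least committal one. A minimal numerical sketch, with purely illustrative names:

```python
import numpy as np
from scipy.optimize import minimize

# Toy 'Brandeis dice' example: among all distributions over the faces
# 1..6 whose mean equals 4.5, find the one with maximal Shannon entropy.
values = np.arange(1, 7)
target_mean = 4.5

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)     # guard against log(0)
    return np.sum(p * np.log(p))   # minimizing this maximizes entropy

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},           # probabilities sum to 1
    {"type": "eq", "fun": lambda p: p @ values - target_mean},  # the only datum we have
]

result = minimize(neg_entropy, x0=np.full(6, 1 / 6),
                  bounds=[(0, 1)] * 6, constraints=constraints)
print(result.x)  # weights tilted exponentially towards the higher faces
```

    The result is the familiar exponentially tilted distribution: nothing beyond the stated mean constraint has been assumed.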

    Joe Fisher, thanks for your comment. I will have a look at your essay.

    Lee Bloomquist, it's interesting that you mention fixed points---in some abstract sense, the self-reproduction I study is a fixed point of the construction relation: the output is the same as the input.

    In fact, you can more stringently formulate von Neumann's construction as a fixed point theorem, as shown by Noson Yanofsky in his eminently readable paper "A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points". It essentially elaborates on, and serves as an accessible introduction to, Lawvere's classic "Diagonal arguments and Cartesian Closed Categories", showing how to bring Gödel's theorems, the unsolvability of the halting problem, the uncountability of the real numbers, and von Neumann's construction under one and the same scheme.
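    Perhaps the flattest illustration of such a fixed point in program form---only an analogy, not the lattice construction itself---is a quine: a program whose output is its own source code, i.e. a fixed point of the 'run this program' map, much as the von Neumann replicator is a fixed point of the construction relation. For instance, in Python:

```python
# A two-line quine: running it prints exactly the two code lines below
# (comments aside), making it a fixed point of the 'execute' map.
s = 's = %r\nprint(s %% s)'
print(s % s)
```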

    Note that the hierarchy of homunculi is something very different from the hierarchy of knowledge you propose. In the latter, you can, in a sense, go as far as you like---each new level being a sort of 'coarse-graining' of the level below; in many cases, in fact, the hierarchy will terminate, because eventually, there's nothing left to coarse-grain.

    The hierarchy of homunculi, however, is necessarily infinite. Picture a person looking at a screen. We'd say that they understand what's happening on the screen, that they have knowledge about it, and so on. For instance, the screen might show an apple; the person will identify the apple, and thus, recognize that picture as a picture of an apple. In this way, the picture comes to represent an apple.

    But if we attempt to cash out all representation in this way, we become trapped in the infinite regress: if, in order to recognize the picture as being of an apple, the person possesses some internal mental representation of it---an inner picture of that picture of an apple---they need likewise an inner observer recognizing that second-order picture as such, in order to make it into a representation of the picture seen by the person.

    But this never bottoms out: we're left with ever higher-level homunculi, and, just as the picture of an apple can only be recognized as such if the first-level homunculus recognizes the internal representation as representing a picture of an apple, the interpretation of the representation at the nth level depends on the interpretation at the (n+1)st; thus, we would have to climb the whole infinite hierarchy just to generate, on the lowest rung, the recognition of the picture of an apple as a picture of an apple. Since we generally consider such infinite tasks impossible, it follows that representation---intention, meaning, having aims and goals---cannot be explained by such a homuncular theory.

    Now, not every theory of intentionality need be homuncular. Yours may not be, for instance. I try to circumvent the homunculus problem by the von Neumann construction, which allows me to construct a sort of internal representation that is itself its own user---that 'looks at itself', recognizing itself as representing something. But very many theories, in practice, are---as a guideline, whenever you read sentences such as 'x represents y', 'x means z', and so on, you should ask: to whom? And if there is an entity, implicitly or explicitly, that needs to be introduced in order to use a given representation as representing something, then you can discard the theory: it contains an (often unacknowledged) homunculus.

    Regarding the halting problem, the example I gave is actually one in which it is solvable (while in general, it's not): the algorithm that calls itself will never terminate. But you are right to link infinite regress and self-referential problems, such as the halting problem: when I draw a map of an island, and the island includes the map (say, it's installed at some specific point), then the map must refer to itself; and if it's infinitely detailed, then the map must contain a copy of the map must contain a copy of the map... And so on.
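    For concreteness, the example I had in mind is no more involved than the following sketch (Python, purely for illustration): since the recursive call comes before any output is produced, no output is ever produced, and for this particular program the halting question is trivially decidable---it never halts (on a real machine, the call simply blows the stack instead of returning).

```python
def call_self_first():
    # The self-call precedes any output, so no output is ever produced;
    # in practice, Python aborts with a RecursionError once the call
    # stack overflows, rather than ever returning to the caller.
    call_self_first()
    return "output"   # unreachable

# call_self_first()  # uncommenting this never yields "output"
```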

    Dear Jochen - thank you so much for the essay. It's cogent and well-put together, and it's definitely hitting upon an interesting line of thought and thus is very stimulating.

    I think actually what you said above to one of the other comments is the best paraphrase of your position:

    "an organism with a CA-brain encounters a certain environment, and receives certain stimuli; these stimuli set up certain conditions within the CA-brain; the CA-brain contains some population of von Neumann replicators, and, via a selection process, this population will eventually come to be shaped by, or adapted to, those CA-conditions---and with them, the environment that set them up."

    As you point out, this is pretty reminiscent of neural darwinism. I honestly think you (and Edelman) are correct: fundamentally, this is how brains are set up, much like the immune system. However, I don't think it by itself solves the symbol grounding problem that you're concerned with (particularly as it applies to intentions), as this approach runs into several problems.

    The first is that I don't think this truly solves the problem of error. You say that errors in representation are just when the internal replicators are "not the most well-adapted to the actual environmental conditions, becoming eventually replaced by one fitting them better."

    But what if they are never replaced by anything better? There doesn't seem to be any relation that you've described that actually fixes the representation. Rather, it allows for good-enough fits, or only approximates the representation sometimes. For instance, in the dark-room example of mistaking a shirt for a person, one might merely peek into the room and never return, never updating the replicator.

    The second problem is that the internal replicators might be selected by multiple different things in the world that lead to the same structure, or by two merely correlated but different things. Which of these does the internal replicator represent? I think that on further amendments the account will break down into a form of pragmatism, which essentially holds that there are no true representations, but merely ones that are useful or not. That doesn't solve the problem of intentionality, although it is a very elegant (or at least, highly interesting) view on how brains work.

    And this is without even bringing up the nuclear option for the symbol grounding problem: Putnam's twin earth.

      Jochen,

      Applying the mathematical idea of "fixed point" to the process of traveling towards a goal works not only for the map of a shopping mall, but also for the classic, well known story of a journey told many centuries ago by Parmenides (friend of Zeno, who devised his "paradoxes" as an attempt to help Parmenides). The goal in the story is to enter a hidden world. Applying the mathematical idea of "fixed point" to the map of a mall and also to this well known story demonstrates a way for goals to enter otherwise "mindless" mathematics. I posted it in the essay titled "Theoretical proof..."

      Best Regards!

      Thanks for these explanations. It is a lot of computing and simulations. It is a beautiful work about information. Now of course there are the vectors and scalars with geometrical algebras like Hopf, Clifford, or Lie. But if I may, it is always about how we utilise these vectors, operators, tensors, domains, finite groups... it is always about how we utilise the tool. I play guitar and piano. If you do not put in the harmonic parameters, you can never have harmonic music. The tools are one thing, the domains and laws another. The strategy in von Neumann's theory of games always tends towards the points of equilibrium. Like the deterrence of arms and weapons, due to the forces and energy reached: that implies the deterrence. It is my strategy, the quiet harmonic entropic road. Well, I will come back and ask you for some details about your method. We are going to create an AI :) by utilising the right arithmetic series. The probabilities and the distribution must always be rational, after all. Best, and until soon.

      Dear Erik,

      thanks for your comments! I'm glad you see some value in my thoughts. I should, perhaps, note that to me, my model is more like a 'proof of concept' than a serious proposal for how intentionality works in actual biological organisms (to point out the obvious difference, none of us has a CA in their brain).

      So the bottom line is that if the model works as advertised, what it does is to show that there is logical space between eliminativism and mysticism when it comes to intentionality---i.e. neither does the acceptance of a world that's composed 'only' of the physical force us to deny the reality of intentionality, nor does the acceptance of that reality force us to believe that there is some mysterious form of original intentionality that we just have to accept as a brute fact about the world. There's a possibility here of both having your cake and eating it (if, again, things work as I want them to).

      Regarding the issues you raise, I think the most important thing to realize is that ultimately, reference in my model isn't grounded in the outside world, but rather, in the 'environmental conditions' set up in the cellular automaton via the environment's influence, mediated by the senses. So in some sense, we don't really have access to the outside world---but then again, we knew that already: all it takes to fool us is to insert the right electrochemical signals into our sensory channels, whether that's done by an evil demon or a mad scientist having your brain on their desk in a jar.

      So, for the problem of error, this means that, in the case you're describing, we simply don't notice the error---so the replicator wasn't perfectly adapted to the CA environment, and was never replaced; then, things are just going to proceed as if that replicator was a faithful representation of the CA environment. I might, for instance, run out of my room upon seeing the 'stranger', straight to the police, and report a break-in. I'm wrong about that, of course, there never was a stranger in my room, or a break-in---but this seems to be a common enough sort of occurrence.

      Similarly, you are right that there isn't necessarily a one-to-one correspondence between objects in the world and replicators. But then, what of it? It just means that we'll have the same beliefs, and show the same behaviors, in the presence of either---that is, we just can't distinguish between them.

      I don't think this necessarily reduces the approach to a pragmatist one---in the end, all we ever are aware of, or have beliefs about, are really classes of things, and not individual things themselves. For instance, the chair in my office certainly contains today a couple of different atoms than it did yesterday; yet, I consider it to be the same chair, and my beliefs and other propositional attitudes toward it aren't influenced by this difference. Some differences just don't make a difference to us.

      This then also suggests a reply to the Twin Earth case: on my account, 'water' doesn't refer to either H2O or XYZ; it refers to some set of CA-conditions set up by being in the presence of some sufficiently similar liquids. My meanings are all in the head.

      This also accounts for the possibility of a divergence in meaning, once additional facts come to light: suppose Earth science (and with it, Twin Earth science) becomes sufficiently advanced to tell the difference between H2O and XYZ. Then, the inhabitants of Earth could redefine water as 'that liquid whose chemical composition is H2O', while Twin Earthlings could say instead that water is 'that liquid whose chemical composition is XYZ'. This difference will be reflected in a difference between the CA-conditions set up in an observer of water and their twin: the knowledge of water's chemical composition allows different replicators to prosper.

      Furthermore, an inhabitant of Earth transported to Twin Earth will mistake XYZ for H2O; but then, upon further analysis---i.e. looking really hard, just as it might take looking harder to distinguish between a stranger and a jacket in a dark room---will learn of the difference. In the future, he then simply won't know whether he's presented with a glass of H2O or a glass of XYZ without doing the analysis---but that's not any more problematic than not knowing whether something is water or vodka without having had a taste.

      Good essay sir...

      I have a small further doubt; I hope you will analyze it for me. Now, taking your apple example....

      One apple will not be the same as another apple. Each single apple will be different. Each apple will present a different picture from a different direction.

      -How will this homunculus algorithm recognize that it is an apple?

      -How will it recognize that it is a different apple?

      -How will it recognize that it is a different view?