Dear Lee Bloomquist,

thank you for the interesting response! I'll have to take a look at both the book you cite (hopefully the university library has a copy), and your essay before I can comment more in depth, but I think we have broadly similar concerns---a circularity need not automatically be vicious. In some sense, my whole essay is an attempt at removing the vicious circularity in the idea that 'a mind is a pattern perceived by a mind', which seems to bear some relation to your 'self=(self)' (I interpret this as a kind of set notation?).

The homunculus regress is vicious, because it needs to be completed before, so to speak, the first element of the hierarchy is done---i.e. before a given representation has meaning to the lowest-order homunculus, all the representations on the higher levels must have meaning.

In contrast, acoustic feedback, or a control loop, isn't vicious---in a feedback loop, we have an acoustic signal being generated, which is then recorded by a microphone, amplified, re-emitted, re-recorded, and so on. This may be undesirable, but there's nothing fundamentally problematic about it. It would be different if, in order to emit sound on the first 'level', the whole infinite tower of recording-amplifying-emitting had to be traversed: in that case, the production of a sound is simply logically impossible, and likewise the production of meaning in a homuncular setup.

The same goes for an algorithm that calls itself before producing a certain output: no matter how long the algorithm is run, the output is never produced.
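To make the contrast concrete, here is a minimal sketch (in Python, purely for illustration; the names are mine): the feedback loop produces an output at every step and can be stopped at any point, whereas the self-calling procedure has to complete its own infinite descent before returning anything, and so never produces an output.

```python
# A feedback loop: each pass produces an output before feeding it back in.
# Nothing has to be completed 'in advance'; we can stop whenever we like.
def feedback_loop(signal, gain=1.1, steps=10):
    outputs = []
    for _ in range(steps):
        outputs.append(signal)   # sound is emitted at this step...
        signal = gain * signal   # ...then re-recorded and amplified
    return outputs

# A 'homuncular' procedure: it must call itself *before* it can produce any
# output, so the output is never produced (the recursion never bottoms out).
def homuncular_meaning(representation):
    interpretation = homuncular_meaning(representation)  # never returns
    return interpretation

print(feedback_loop(1.0))        # works fine, step by step
# homuncular_meaning("apple")    # would recurse forever (RecursionError in practice)
```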

Anyway, I'm going to go have a look at your essay!

Dear Jack,

thanks for your kind words! I'm not quite sure I understand your questions correctly, though. I don't intend to put forward a modern form of idealism in the traditional sense---i.e. that everything is ultimately mental at the bottom. In some sense, I suppose one could argue that in my model, ideas are shaped only by certain mental background conditions, and hence, properly speaking, only refer to those---but I still intend for these background conditions (providing the fitness landscape) to be essentially provided by the outside, physical world. You could think of a robot with a cellular automaton for a brain, in which ideas 'evolve' according to the conditions created by the impact of the outside world.

Regarding your second question, are you asking about how self-reproduction arose in biological systems, or how it got started within the minds of biological creatures? If the former, I'm hardly an expert---but the basic idea is that there exist certain autocatalytic reactions, which then, over time, grow more efficient at creating copies of certain molecules. I think something like that may also have occurred in the brain: organisms prosper if they can adapt to a wide variety of circumstances, and as I outlined in my essay, the evolution of stable structures mirroring the outside world within the brain may be a general-purpose way of coping with near-limitless variation in the outside world.

Thus, creatures with a simple sort of self-replicating mechanism in the brain did better than creatures without, and this simple mechanism then got gradually refined via natural (and perhaps, also cultural) selection.

Did that address your questions at all?

Dear Jochen Szangolies,

your essay is interesting and thought-provoking, at least for me. You attempt to model meaning in terms of algorithmic encodings. Your attempt is based on the assumption that brains are cellular automata, exhibiting patterns that can be encoded by the cellular automaton itself. You define CA patterns to be mental representations, thereby exorcising the homunculus problem. These patterns use themselves as symbols. The reason you introduce CA patterns as capable of being 'mentally' accessible is, as I understood it, that those patterns are algorithmically compressible.

As you wrote, a mental inner world is populated with a large number of combined facts, which all have their own specific meaning (coffee in the mug), so the CA must be able to produce new symbols every time a new combinatorial fact is imposed on it by the environment. Although I do not doubt that arbitrary symbols can be encoded by arbitrary symbols, I did not grasp how this could lead to the emergence of mental representations.

Taking your attempt seriously, it came about by the very process it describes. This may be consistent for a fully-fledged conscious being, but I think it is not the whole story, because in order to give such an account, you had to carefully manipulate many concepts ('symbols') already established in your mental inner world. Although I assume that your account does, at a certain level of the brain, meet reality by constituting some parts of a stable cognition, I cannot see how mere data processing can ever produce the slightest piece of a mental inner world. Data processing surely can shape an already existent inner world.

You seem to take it as guaranteed that the brain can be sufficiently defined as a neural network and/or a CA, one that is also capable of *producing* the needed mentality in the first place, in order for data processing to be able to shape this mental inner world. So far, I doubt that a simulation of such a network on a computer results in a conscious entity. I would be more convinced if some of the projects modelling the brain as a neural network / CA indicated strong evidence that neural networks / CAs give rise to mental inner worlds. But this does not prevent your attempt from exploring the consequences of such a positive result. In this sense, your essay was an interesting piece to read. Still, so far I don't think that data processing is the key to explaining consciousness. It is nonetheless surely important for generating and compressing meaningful symbols within a consciousness: shortcuts to reduce complexity and to organize an already existing inner mental world.

    Your model has a definite selection mechanism to it. The more precise emulation of the exterior world by the tripartite system is a sort of self-correcting system. Is this similar in ways to Dennett's heterophenomenology idea, where there might be several competing systems, one of which ends up having the best outcome? Further, could this be derived within something like maximum entropy?

    LC

      -- "'self=(self)' (I interpret this as a kind of set notation?)"

      Yes! It's the language of "non-wellfounded sets" where the set need not be "founded" in different objects.

      -- "The homunculus regress is vicious, because it needs to be completed before, so to speak, the first element of the hierarchy is done---i.e. before a given representation has meaning to the lowest-order homunculus..."

      In "self = (self)" there is no hierarchy of order between "selves." There is only one "self": "self = (self)."But I do think that hierarchy is relevant. In "The Knowledge Level Hypothesis," there is a hierarchy of analyses-- One could analyze the system in terms of the wave functions of the electrons in circuit components; or in terms of the voltage and amp levels at circuit components; or in terms of microcode in the processor; or in terms of the assembly language; or in terms of the higher level language used (e.g. C++, Pharoh); or in terms of the formal specification of the algorithms involved; or finally, in terms of the "knowledge level" where there are knowledge, *goals,* and actions. The knowledge level hypothesis says there is no higher level than this useful for analysis.

      -- "...an algorithm that calls itself before producing a certain output: no matter how long the algorithm is run, the output is never produced."

      As I understand it, that's the classic "halting problem." Pragmatically, in a real-world computer the called routine would never return to the memory address of execution. But I want to mean something different. "self = (self)" will terminate when all its possibilities are zeroed. But during its lifetime, its possibilities are not all zeroed!

      Dear Stefan Weckbach,

      thank you for reading my essay, and especially for your comments. I think one thing I must clarify right off the bat: I don't claim for my von Neumann minds to be a model of full-fledged consciousness, by which I mean especially phenomenal consciousness---the 'what-it's-likeness' of being in a conscious state.

      But I think this problem can be usefully separated from the problem of intentionality---that is, from the question of how mental states come to be about things external to them. So, while I am mute on the issue of consciousness, per se, I try and at least outline a possible solution to the question of how mental representations can in fact come to represent something.

      To this end, I draw an analogy with von Neumann replicators in a CA-environment: they contain information by simply being shaped, evolutionarily, by that environment; they can access their own information, and generate appropriate behaviour. In this sense, they're like a picture that can look at itself, thus getting rid of the homunculus.

      So the way a mental representation arises is roughly this: an organism with a CA-brain encounters a certain environment, and receives certain stimuli; these stimuli set up certain conditions within the CA-brain; the CA-brain contains some population of von Neumann replicators, and, via a selection process, this population will eventually come to be shaped by, or adapted to, those CA-conditions---and with them, the environment that set them up (if only indirectly---but then, all access to the world is ultimately indirect).

      In this way, replicators surviving the selection process contain information about the environment. Moreover, this information is accessible to themselves, for e.g. building a copy. But equally well, the information may be used to guide behaviour. Thus, the dominant replicator (whatever, exactly, that may mean) comes to be in a position to guide behaviour (it gets put into the driving seat), and then steers the organism in accord with the information it retrieves about itself, and by extension, the environment.
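      If it helps, here is a deliberately crude sketch of that selection story in code (a toy bit-string model of my own devising, not the von Neumann/CA construction itself): the 'conditions' set up by the stimuli act directly on a population of self-copying bit strings, occasionally destroying replicators that conflict with them. No part of the system compares the replicators against a stored picture of the environment; the culling is done by the conditions themselves, and any surviving replicator ends up carrying information about them that it can read off itself.

```python
import random

random.seed(0)
N = 16
# 'Conditions' set up within the CA-brain by environmental stimuli
conditions = [random.randint(0, 1) for _ in range(N)]

def copy_with_mutation(replicator, rate=0.02):
    # replicators reproduce themselves, with occasional copying errors
    return [1 - b if random.random() < rate else b for b in replicator]

def survives(replicator):
    # each condition independently risks destroying a replicator whose
    # corresponding bit conflicts with it; no global 'match score' is computed
    return all(b == c or random.random() > 0.2
               for b, c in zip(replicator, conditions))

population = [[random.randint(0, 1) for _ in range(N)] for _ in range(200)]
for _ in range(300):
    offspring = [copy_with_mutation(r) for r in population for _ in range(4)]
    population = [r for r in offspring if survives(r)][:200]

survivor = population[0]
print("conditions:          ", conditions)
print("surviving replicator:", survivor)  # has largely come to mirror the conditions
```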

      None of this, as I said, entails that there's anything it is like to be such a CA-brained organism. I think the problem of phenomenology, the hard problem of consciousness, lies elsewhere, and will probably require entirely new tools for its solution. In fact, I share your suspicion that data processing does not suffice to create any kind of inner world---but note that my approach shouldn't be construed as 'only' data processing: while one can use cellular automata in this way, a cellular automaton pattern is just a concrete physical entity, like a stone or a telephone; and it's really in this sense that I conceive of them, as entities in their own right, rather than just data structures. But again, I see no reason to believe that even this ought to suffice for phenomenal experience.

      Dear Lawrence,

      thank you for commenting. I'm not sure the selection mechanism you outline really works---I see a danger of hidden circularity: how do I select for a 'better match' to the environment, without already having a representation of the environment in hand? Whatever tells me that a given replicator matches the exterior well already contains the information that I want to evolve within the replicator, so I could just use that instead. Or am I misunderstanding you?

      Regarding Dennett, yes, there is some similarity to his multiple drafts: as different replicators come to dominance, different 'versions' of experience---or at least, of beliefs about the world---arise in the agent, as in the example where an agent mistakes a jacket in a dark room for a stranger. There, the difference between both is clear, but there might also be cases where the earlier set of beliefs is erased, such that there is no introspective evidence of having believed otherwise, but where we can extract it by behavioural experiments---much as in Dennett's model.

      Your suggestion towards a maximum entropy principle is interesting. Indeed, in some sense, we should be able to arrive at the 'most sensible' set of beliefs of an agent about the world in terms of maximizing the entropy---that is, we should find the set of beliefs with maximum entropy subject to the constraints set up by the environment. I wonder if this is possible with a sort of genetic/evolutionary approach?

      Dear Harry,

      thank you for your comment. The topic of this essay contest is 'Wandering Towards a Goal - How can mindless mathematical laws give rise to aims and intentions?'.

      To me, the key words here are goal, aims, and intentions: in order to have any of these, agents need the capacity for intentionality---that is, they need to be capable of having internal mental states directed at, or about, things (or events, or occurrences) in the world. To have the goal of climbing Mount Everest, say, you need to be able to have thoughts about Mount Everest; to intend an action, you need to be able to plan that action, for which you again need to be able to think about it, and so on.

      Consequently, it is this aboutness---the aforementioned intentionality---that is the prerequisite to all goal-directed behaviour; my model then proposes a way of how such intentionality might arise in a natural world. Agents within which something equivalent to this model is implemented are able to represent the outside world (or parts thereof) to themselves, and consequently, to formulate goals, have aims, and take intentional action. Thus, the model is a proposal for how goals, aims, and intentions may come about in a natural world governed by 'mindless mathematical laws'.

      Does this help answer your concern?

      Hello Mr Szangolies,

      Thanks for sharing your work. Congratulations also. It is an interesting approach, considering the works on the von Neumann structure. It is always a question of hard drive and memory, and input and output, with of course an arithmetic method of translation, logic, and a checking unit that is also logical in its generality. But adding to this unit of codes when considering the mind and intentions seems really difficult, considering the main gravitational codes, which are different from photons and binary information. That is why an AI is possible with the von Neumann structure, but not a free mind like us humans, because I believe that gravitation and souls are linked. We cannot approach the main singularities, which are personal. Like all singularities, in fact.

      Best Regards

        It is true. How can we define what a meaning is, and how to quantify the importance of a meaning for the synchronizations and sortings of codes and information? Nature seems to utilise spherical volumes and rotations. Lawrence is right in saying that selections with environments are important. How to rank with a universal logic, in fact.

        Hi to both of you,

        Lawrence, what is this maximum entropy? A maximum Shannon entropy? Because if it is the maximum thermodynamical entropy or the maximum gravitational entropy, it is different. Could you tell me more, please?

        If we consider that information and the Shannon entropy can reach infinity, that is more logical than a maximum. The potential is simply infinite, like for the electromagnetic and gravitational information when we superimpose or add this information. A machine imitating the universe could be relevant for the sortings and synchronizations of codes. The evolutionary point of view is always relevant.

        A maximum entropy, then, in information theory, is when we have all probabilities without constraints on the message, the signals. But I don't see how this concept could be derived, or for what aims? Could you explain it to me, please?

        Yes, and "Wandering towards a goal" in the context of mathematical physics suggests to me the "fixed point problem."

        Say that in a shopping mall your goal is the sporting goods store. So you stand in front of the map of the mall.

        What enables you to plan a path towards your goal is that there is, on the map in front of you, the point "You are here." Which is where you are actually standing in the mall.

        Without this "fixed point" you would be faced with a random walk towards the goal (If, like me most times, you are unwilling to ask strangers).

        The fixed point-- "You are here"-- enables you more efficiently and effectively to move towards your goal.

        So to me, the key in an effective use of a map for moving towards a goal is FIRST to know where you are. (First understand "self.")

        After "self" is identified both in the real world and on the map, then a goal can be located on the map and a route planned towards that goal in the real world.

        But before that-- you have to know where the "self" is, and where it is imaged on the map.

        There may be a potentially useful mnemonic for this: When going towards a goal, first know what the self is so you can locate it in both places-- in other words, "Know thyself."

        Well, the basic idea of maximum entropy methods is that you should always choose the probability distribution with the maximal amount of entropy (information-theoretic Shannon entropy) that is compatible with your data. In this way, you guarantee that you haven't made any unwarranted assumptions about anything not covered by your data (this is very informal, but I hope the point will get across).

        So in a sense, it just says that in an environment about which you have incomplete information, the optimal (in some Bayesian sense) strategy is to assume the maximum uncertainty compatible with the data you already have.
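        To make that slightly more concrete, here is a small numerical sketch (the die and the observed mean are just an invented illustration, not anything from the essay or the comments above): among all distributions over the faces of a die that reproduce a given mean, we pick the one with the largest Shannon entropy.

```python
import numpy as np
from scipy.optimize import minimize

# Maximum-entropy sketch: of all distributions over a die's faces whose mean
# equals the observed value, find the one maximizing the Shannon entropy.
faces = np.arange(1, 7)
observed_mean = 4.5

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)      # avoid log(0)
    return np.sum(p * np.log(p))    # minimizing this maximizes the entropy

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},                   # normalization
    {"type": "eq", "fun": lambda p: np.dot(p, faces) - observed_mean},  # known mean
]
p0 = np.full(6, 1 / 6)              # start from the uniform distribution

result = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * 6, constraints=constraints)
print(np.round(result.x, 4))        # tilted toward high faces; nothing else is assumed
```

        Every face keeps some weight, but the higher faces get more, since that is the least committal way of honouring the constraint.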

        Joe Fisher, thanks for your comment. I will have a look at your essay.

        Lee Bloomquist, it's interesting that you mention fixed points---in some abstract sense, the self-reproduction I study is a fixed point of the construction relation: the output is the same as the input.

        In fact, you can more stringently formulate von Neumann's construction as a fixed point theorem, as shown by Noson Yanofsky in his eminently readable paper "A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points". It essentially elaborates on, and provides an accessible introduction to, Lawvere's classic "Diagonal arguments and Cartesian Closed Categories", showing how to bring Gödel's theorems, the unsolvability of the halting problem, the uncountability of the real numbers, and von Neumann's construction under the same scheme.
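        For reference, the lemma at the heart of that scheme can be stated roughly as follows (my paraphrase, with notation that is mine rather than Yanofsky's or Lawvere's):

```latex
% Rough paraphrase of Lawvere's fixed-point lemma (notation mine):
\textbf{Lemma.} In a cartesian closed category, suppose
$\varphi \colon A \to Y^{A}$ is point-surjective: every map
$g \colon A \to Y$ equals $\varphi(a)$ for some point $a$ of $A$.
Then every endomorphism $t \colon Y \to Y$ has a fixed point,
i.e.\ a point $y$ of $Y$ with $t(y) = y$.
% Cantor-, Gödel- and Turing-style diagonal arguments arise from the
% contrapositive: if some $t \colon Y \to Y$ has no fixed point, then no
% such point-surjective $\varphi$ can exist.
```

        Von Neumann's self-reproducer then shows up as a fixed point of a suitable construction map, in the sense mentioned above: the output equals the input.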

        Note that the hierarchy of homunculi is something very different from the hierarchy of knowledge you propose. In the latter, you can, in a sense, go as far as you like---each new level being a sort of 'coarse-graining' of the level below; in many cases, in fact, the hierarchy will terminate, because eventually, there's nothing left to coarse-grain.

        The hierarchy of homunculi, however, is necessarily infinite. Picture a person looking at a screen. We'd say that they understand what's happening on the screen, that they have knowledge about it, and so on. For instance, the screen might show an apple; the person will identify the apple, and thus, recognize that picture as a picture of an apple. In this way, the picture comes to represent an apple.

        But if we attempt to cash out all representation in this way, we become trapped in the infinite regress: if, in order to recognize the picture as being of an apple, the person possesses some internal mental representation of it---an inner picture of that picture of an apple---they need likewise an inner observer recognizing that second-order picture as such, in order to make it into a representation of the picture seen by the person.

        But this never bottoms out: we're left with ever higher-level homunculi, and, like the way the picture of an apple can only be recognized as a picture of an apple if the first-level homunculus recognizes the internal representation as representing a picture of an apple, the interpretation of the representation at the nth level depends on the interpretation of the representation at the (n+1)st; thus, we have to climb the whole infinite hierarchy in order to generate the recognition of the picture of an apple as being a picture of an apple on the lowest rung. Since we generally consider such infinite tasks impossible, it follows that representation---intention, meaning, having aims and goals---cannot be explained by such a homuncular theory.

        Now, not every theory of intentionality need be homuncular. Yours may not be, for instance. I try to circumvent the homunculus problem by the von Neumann construction, which allows me to construct a sort of internal representation that is itself its own user---that 'looks at itself', recognizing itself as representing something. But very many theories, in practice, are---as a guideline, whenever you read sentences such as 'x represents y', 'x means z', and so on, you should ask: to whom? And if there is an entity, implicitly or explicitly, that needs to be introduced in order to use a given representation as representing something, then you can discard the theory: it contains an (often unacknowledged) homunculus.

        Regarding the halting problem, the example I gave is actually one in which it is solvable (while in general, it's not): the algorithm that calls itself will never terminate. But you are right to link infinite regress and self-referential problems, such as the halting problem: when I draw a map of an island, and the island includes the map (say, it's installed at some specific point), then the map must refer to itself; and if it's infinitely detailed, then the map must contain a copy of the map must contain a copy of the map... And so on.

        Dear Jochen - thank you so much for the essay. It's cogent and well-put together, and it's definitely hitting upon an interesting line of thought and thus is very stimulating.

        I think actually what you said above to one of the other comments is the best paraphrase of your position:

        "an organism with a CA-brain encounters a certain environment, and receives certain stimuli; these stimuli set up certain conditions within the CA-brain; the CA-brain contains some population of von Neumann replicators, and, via a selection process, this population will eventually come to be shaped by, or adapted to, those CA-conditions---and with them, the environment that set them up."

        As you point out, this is pretty reminiscent of neural darwinism. I honestly think you (and Edelman) are correct: fundamentally, this is how brains are set up, much like the immune system. However, I don't think it by itself solves the symbol grounding problem that you're concerned with (particularly as it applies to intentions), as this approach runs into several problems.

        The first is that I don't think this truly solves the problem of error. You say that errors in representation are just when the internal replicators are "not the most well-adapted to the actual environmental conditions, becoming eventually replaced by one fitting them better."

        But what if they are never replaced by anything better? There doesn't seem to be any relation that you've described that actually fixes the representation. Rather it allows for good-enough fits, or only approximates the representation sometimes. For instance, in the dark room example of confusing a shirt for a person, one might merely peek into the room and never return, never updating the replicator.

        The second problem is that the internal replicators might be selected by multiple different things in the world, leading to the same structure, or merely two correlated but different things. Which of these does the internal replicator represent? I think on further amendments the account will break down into a form of utilitarianism, which essentially holds there are no true representations, but merely those that are useful or not. That doesn't solve the problem of intentionality, although it is a very elegant (or at least, highly interesting) view on how brains work.

        And this is without even bringing up the nuclear option for the symbol grounding problem: Putnam's twin earth.

          Jochen,

          Applying the mathematical idea of "fixed point" to the process of traveling towards a goal works not only for the map of a shopping mall, but also for the classic, well known story of a journey told many centuries ago by Parmenides (friend of Zeno, who devised his "paradoxes" as an attempt to help Parmenides). The goal in the story is to enter a hidden world. Applying the mathematical idea of "fixed point" to the map of a mall and also to this well known story demonstrates a way for goals to enter otherwise "mindless" mathematics. I posted it in the essay titled "Theoretical proof..."

          Best Regards!