Note that the hierarchy of homunculi is something very different from the hierarchy of knowledge you propose. In the latter, you can, in a sense, go as far as you like---each new level being a sort of 'coarse-graining' of the level below; in many cases, though, the hierarchy will in fact terminate, because eventually, there's nothing left to coarse-grain.

The hierarchy of homunculi, however, is necessarily infinite. Picture a person looking at a screen. We'd say that they understand what's happening on the screen, that they have knowledge about it, and so on. For instance, the screen might show an apple; the person will identify the apple, and thus, recognize that picture as a picture of an apple. In this way, the picture comes to represent an apple.

But if we attempt to cash out all representation in this way, we become trapped in an infinite regress: if, in order to recognize the picture as being of an apple, the person possesses some internal mental representation of it---an inner picture of that picture of an apple---then they likewise need an inner observer recognizing that second-order picture as such, in order to make it into a representation of the picture seen by the person.

But this never bottoms out: we're left with ever higher-level homunculi. Just as the picture can only be recognized as a picture of an apple if the first-level homunculus recognizes the internal representation as representing one, the interpretation of the representation at the nth level depends on the interpretation at the (n+1)st; thus, we would have to climb the whole infinite hierarchy just to generate, on the lowest rung, the recognition of the picture as a picture of an apple. Since we generally consider such infinite tasks impossible, it follows that representation---intention, meaning, having aims and goals---cannot be explained by such a homuncular theory.

Now, not every theory of intentionality need be homuncular. Yours may not be, for instance. I try to circumvent the homunculus problem by the von Neumann construction, which allows me to construct a sort of internal representation that is itself its own user---that 'looks at itself', recognizing itself as representing something. But very many theories, in practice, are---as a guideline, whenever you read sentences such as 'x represents y', 'x means z', and so on, you should ask: to whom? And if there is an entity, implicitly or explicitly, that needs to be introduced in order to use a given representation as representing something, then you can discard the theory: it contains an (often unacknowledged) homunculus.

Regarding the halting problem, the example I gave is actually one in which it is solvable (while in general, it's not): an algorithm that simply calls itself will never terminate, and we can tell as much just by inspecting it. But you are right to link infinite regress and self-referential problems, such as the halting problem: when I draw a map of an island, and the island includes the map (say, it's installed at some specific point), then the map must refer to itself; and if it's infinitely detailed, then the map must contain a copy of the map, which must contain a copy of the map, which must contain... and so on.
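
To make that first point concrete, here is a minimal sketch in Python (my own illustration, not anything from the essays): a procedure that unconditionally calls itself can never halt, and for this particular program the halting question is decidable simply by inspection, even though no general algorithm decides it for all programs.

def calls_itself():
    # Unconditional self-call: every invocation immediately makes another,
    # so the recursion never bottoms out (in practice, Python would raise
    # RecursionError once the call-stack limit is hit).
    return calls_itself()

def halts_calls_itself():
    # For this *specific* program, halting can be decided by inspection:
    # there is no branch that returns without recursing, so it never halts.
    return False

print(halts_calls_itself())  # False---this one instance of the halting problem is easy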

Dear Jochen - thank you so much for the essay. It's cogent and well put together, and it definitely hits upon an interesting line of thought, which makes it very stimulating.

I think actually what you said above to one of the other comments is the best paraphrase of your position:

"an organism with a CA-brain encounters a certain environment, and receives certain stimuli; these stimuli set up certain conditions within the CA-brain; the CA-brain contains some population of von Neumann replicators, and, via a selection process, this population will eventually come to be shaped by, or adapted to, those CA-conditions---and with them, the environment that set them up."

As you point out, this is pretty reminiscent of neural darwinism. I honestly think you (and Edelman) are correct: fundamentally, this is how brains are set up, much like the immune system. However, I don't think it by itself solves the symbol grounding problem that you're concerned with (particularly as it applies to intentions), as this approach runs into several problems.

The first is that I don't think this truly solves the problem of error. You say that errors in representation are just when the internal replicators are "not the most well-adapted to the actual environmental conditions, becoming eventually replaced by one fitting them better."

But what if they are never replaced by anything better? There doesn't seem to be any relation that you've described that actually fixes the representation. Rather, it allows for good-enough fits, or fits that only sometimes approximate the representation. For instance, in the dark room example of confusing a shirt for a person, one might merely peek into the room and never return, never updating the replicator.

The second problem is that the internal replicators might be selected for by multiple different things in the world that lead to the same structure, or by two correlated but different things. Which of these does the internal replicator represent? I think that, on further amendment, the account will break down into a form of pragmatism, which essentially holds that there are no true representations, merely ones that are more or less useful. That doesn't solve the problem of intentionality, although it is a very elegant (or at least, highly interesting) view of how brains work.

And this is without even bringing up the nuclear option for the symbol grounding problem: Putnam's twin earth.

    Jochen,

    Applying the mathematical idea of "fixed point" to the process of traveling towards a goal works not only for the map of a shopping mall, but also for the classic, well known story of a journey told many centuries ago by Parmenides (friend of Zeno, who devised his "paradoxes" as an attempt to help Parmenides). The goal in the story is to enter a hidden world. Applying the mathematical idea of "fixed point" to the map of a mall and also to this well known story demonstrates a way for goals to enter otherwise "mindless" mathematics. I posted it in the essay titled "Theoretical proof..."

    Best Regards!

    Thanks for these explanations. It is a lot of computing and simulations, and it is a beautiful work about information. Now, of course, there are the vectors and scalars with geometrical algebras like Hopf, Clifford, or Lie. But, if I may, it is always about how we utilise these vectors, operators, tensors, domains, finite groups,... it is always how we utilise the tool. I play guitar and piano: if you do not put in the harmonic parameters, you can never have harmonic music. The tools are one thing, the domains and laws another. The strategy in von Neumann's theory of games always tends towards the points of equilibrium, like the deterrence of arms and weapons due to the forces and energy reached; that implies the deterrence. It is my strategy, the quiet harmonic entropic road. Well, I will come back and ask you some details about your method. We are going to create an AI :) by utilising the good arithmetic series. The probabilities and the distribution must always be rational, after all. Best, and until soon.

    Dear Erik,

    thanks for your comments! I'm glad you see some value in my thoughts. I should, perhaps, note that to me, my model is more like a 'proof of concept' than a serious proposal for how intentionality works in actual biological organisms (to point out the obvious difference, none of us has a CA in their brain).

    So the bottom line is that if the model works as advertised, what it does is show that there is logical space between eliminativism and mysticism when it comes to intentionality---i.e. neither does the acceptance of a world composed 'only' of the physical force us to deny the reality of intentionality, nor does the acceptance of that reality force us to believe that there is some mysterious form of original intentionality that we just have to accept as a brute fact about the world. There's a possibility here of both having your cake and eating it (if, again, things work as I want them to).

    Regarding the issues you raise, I think the most important thing to realize is that ultimately, reference in my model isn't grounded in the outside world, but rather, in the 'environmental conditions' set up in the cellular automaton via the environment's influence, mediated by the senses. So in some sense, we don't really have access to the outside world---but then again, we knew that already: all it takes to fool us is to insert the right electrochemical signals into our sensory channels, whether that's done by an evil demon or a mad scientist having your brain on their desk in a jar.

    So, for the problem of error, this means that, in the case you're describing, we simply don't notice the error---so the replicator wasn't perfectly adapted to the CA environment, and was never replaced; then, things are just going to proceed as if that replicator was a faithful representation of the CA environment. I might, for instance, run out of my room upon seeing the 'stranger', straight to the police, and report a break-in. I'm wrong about that, of course, there never was a stranger in my room, or a break-in---but this seems to be a common enough sort of occurrence.

    Similarly, you are right that there isn't necessarily a one-to-one correspondence between objects in the world and replicators. But then, what of it? It just means that we'll have the same beliefs, and show the same behaviors, in the presence of either---that is, we just can't distinguish between them.

    I don't think this necessarily reduces the approach to a pragmatist one---in the end, all we ever are aware of, or have beliefs about, are really classes of things, and not individual things themselves. For instance, the chair in my office certainly contains today a couple of different atoms than it did yesterday; yet, I consider it to be the same chair, and my beliefs and other propositional attitudes toward it aren't influenced by this difference. Some differences just don't make a difference to us.

    This then also suggests a reply to the Twin Earth case: on my account, 'water' doesn't refer to either H2O or XYZ; it refers to some set of CA-conditions set up by being in the presence of some sufficiently similar liquids. My meanings are all in the head.

    This also accounts for the possibility of a divergence in meaning, once additional facts come to light: suppose Earth science (and with it, Twin Earth science) becomes sufficiently advanced to tell the difference between H2O and XYZ. Then, the inhabitants of Earth could redefine water as 'that liquid whose chemical composition is H2O', while Twin Earthlings could say instead that water is 'that liquid whose chemical composition is XYZ'. This difference will be reflected in a difference between the CA-conditions set up in an observer of water and their twin: the knowledge of water's chemical composition allows different replicators to prosper.

    Furthermore, an inhabitant of Earth transported to Twin Earth will mistake XYZ for H2O; but then, upon further analysis---i.e. looking really hard, just as it might take looking harder to distinguish a stranger from a jacket in a dark room---he will learn of the difference. In the future, he then simply won't know whether he's presented with a glass of H2O or a glass of XYZ without doing the analysis---but that's not any more problematic than not knowing whether something is water or vodka without having had a taste.

    Good essay sir...

    I have a small doubt; I hope you will analyze it for me. Now, taking your apple example....

    One apple will not be the same as another apple. Each single apple will be different. Each apple will present a different picture from a different direction.

    - How will this homunculus algorithm recognize that it is an apple?

    - How will it recognize that it is a different apple?

    - How will it recognize that it is a different view?

      Dear Jochen - thanks so much for your detailed response!

      I agree with most everything you say, I just disagree that this solves the actual issue you bring up of intentional reference.

      The initial problem you set up is this one: "the notion of reference: when we interpret the word 'apple' to refer to an apple, a reasonable suggestion seems to be that the word causes an appropriate mental representation to be called up--that is, a certain kind of mental symbol that refers to said apple."

      Then, after giving your account, you admit that there are still things like reference-error, Twin Earth problems, etc., and answer them by saying "My meanings are all in the head." In analytic philosophy this is called a "narrow content" view of representation. But once one takes a narrow content view, why specify this tripartite structure and use the analogy of the CA-replicators?

      For instance, one could give a more general answer that takes the same form as your proposal, and just say that through development, learning, and evolution, our internal brain structure correlates to the outside world. But when pressed about errors in reference, twin earth, etc, the more general proposal just says "well, sure, all that's true. But the meanings are in the head anyways!"

      In other words, if you admit that meanings are all in the head anyway, can have errors, and don't have a fixed content in terms of referencing the outside world, I'm not sure what further work needs to be done in terms of the analogy to von Neumann machines. The traditional problem that narrow-content views run into is that of underdetermination: there are many possible interpretations of some brain states (or CA-states) in terms of what they're representing, and I'm not sure how the analogy gets you out of that.

      Btw I know I sound critical here - but it's only because it's so advanced as an essay that we can even have this discussion.

      EPH

      I found your small paragraph at the top of page 5:

      "The key to shake the agent's mind free from empty, self-referential navel-gazing is the design's evolvability. Assume that the agent is subject to certain environmental stimuli. These will have some influence upon its CA brain: they could, for instance, set up a certain pattern of excitations. As a result, the evolution of patterns within the CA brain will be influenced by these changes, which are, in turn, due to the environmental stimuli."

      as interesting. This is similar to what I argue with the necessity of the open world. The open world, or the environmental stimuli that are not predictable in a closed world, is what cuts off the self-referential endless looping. I discuss this in my essay, at the end, with respect to MH spacetimes and black holes. I don't necessarily think black holes are conscious entities, but the fact that they have an open quantum structure means they are not complete self-referential quantum bit systems. In my essay I also invoke the MERA logic, which has a cellular automata nature.

      The randomizing influence of the environment is crucial, I think, to keep the duplicator from running into the universal Turing machine problem. The duplicator duplicates the object and blueprint, but in doing so duplicates the blueprint encoding a copy of itself, which leads to this infinite regress. This is why there is no UTM; there is the need for the UTM to emulate itself emulating all TMs, and itself emulating TMs, which then ... . It leads to an uncountably infinite number of needed copies that runs into Cantor's diagonalization problem.

      Great essay! LC

        Dear Erik,

        please, don't apologize for being critical---if the idea is to have any chance at all, it must be able to withstand some pressure. So every issue you point out is helping me, and I'm grateful for having the opportunity of discussing my idea.

        Regarding the problem you see, do you think that narrow content is just in principle not capable of solving the problem of reference, or do you think that if one believes in narrow content at all, then there's really no additional problem left to solve---i.e. that one then is committed to a kind of eliminativist stance?

        To me, the problem to solve is how any sort of mental state refers at all---i.e. how it comes to have any kind of content. For instance, I disagree with the idea that a thermostat refers, simply by virtue of being in a particular state, to the temperature (or to its being 'too low' or 'too high'). There's no salient difference between that thermostat and any other system with two possible states---it will always depend on the environment which state the system is in, and thus, the representational content could at most be 'I am in the state I am when the environment dictates that I evolve into this state'---which is basically empty, and doesn't really say more than that the system is in one of two possible states.

        Reference, semantic content, etc., needs more than that. For instance, consider the example of one versus two lights being lit in the steeple of the Old North Church. If that's all you see, it has no representational content; but if you further know that 'one if by land, two if by sea', then one lantern being lit has content, and it represents the English attacking by land.
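
        In code, one might caricature the point like this (just a toy sketch of my own, nothing more): the 'meaning' of the signal lives entirely in the decoder's convention table, not in the lanterns themselves.

# Toy sketch: the lantern signal carries no content by itself; its 'meaning'
# exists only relative to a decoder that already holds the convention.
CONVENTION = {1: "attack by land", 2: "attack by sea"}  # 'one if by land, two if by sea'

def decode(lanterns_lit, convention):
    # Without the convention table, lanterns_lit is just a number.
    return convention.get(lanterns_lit, "no agreed meaning")

print(decode(1, CONVENTION))  # 'attack by land'---content derived from the decoder
print(decode(1, {}))          # 'no agreed meaning'---same signal, no interpreter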

        The trouble is, of course, that the intentionality thus bestowed to the lantern is derived from that of you, who knows that 'one if by land, two if by sea'. Since you already possess intentionality, trying to explain the intentionality of your own mental states---i.e. how they come to refer to anything---in the same terms as we have just explained the intentional, referential nature of the one lantern burning in the Old North Church will run into the homunculus problem.

        This is independent of whether mental content is narrow or wide---the important thing is that it represents something; whether that something is, say, the apple out there in the world, or the conditions caused within the brain by the presence of that apple, or even the phenomenal experience of that apple is immaterial.

        And it's there that my model comes in (if things work as I think they do): by collapsing the symbol and the entity using it as a symbol---the representation and the homunculus using it; you and the lantern at the Old North Church---into a single entity, I think it's possible to get rid of the regress. So, a von Neumann replicator evolved in conditions caused by the presence of an apple uses itself as a symbol for these conditions, reads the information it has stored and causes the body to perform actions appropriate to the presence of said apple. One might call this an 'autotelic symbol', because it derives its referent not from something external to it, but rather, from its own form (and because I just learned the word 'autotelic').

        Thank you for the compliment! I suppose you're essentially asking about how there come to be different categories of objects that can be represented. I.e., what makes a different apple an instance of the category 'apple'? What makes a peach not an instance of the same category?

        In a sense, this harks back to the problem of universals, with all the attendant baggage that would take too long to even review, much less address, here.

        But I think that, from a more modern perspective, one can draw an interesting analogy to a hash function. A hash function, used, e.g., in cryptography, is a function that maps many different inputs to the same output, thus 'grouping' inputs into distinguishable sets.

        Thus, we get a partitioning of a certain domain into different classes---like, e.g., the domain 'fruit' is partitioned into 'apples', 'peaches', and so on. So, one possible response here would be that two different replicators represent different instances of the same sort of object if they are mapped to the same hash code. This doesn't have to be explicit; for instance, when the replicator guides behavior, it might be that only certain of its properties are relevant for a given action---this ensures that the reaction to 'apple A' will be the same as to 'apple B', but different from 'peach X'.
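
        As a rough illustration (purely my own sketch, not part of the model proper), one can think of a coarse 'hash' that keeps only the action-relevant properties, so that distinct apples collide onto the same category while a peach does not:

# Purely illustrative: a coarse 'hash' that keeps only the behavior-relevant
# properties of a representation, so distinct instances of the same kind of
# object collide onto the same category code.
def category_hash(features):
    # Ignore exact size, blemishes, viewing angle, etc.
    return (features["shape"], features["taste"])

apple_a = {"shape": "round", "taste": "sweet-tart", "size_cm": 7.5, "blemish": True}
apple_b = {"shape": "round", "taste": "sweet-tart", "size_cm": 8.1, "blemish": False}
peach_x = {"shape": "round-fuzzy", "taste": "sweet", "size_cm": 7.0, "blemish": False}

print(category_hash(apple_a) == category_hash(apple_b))  # True: same category, 'apple'
print(category_hash(apple_a) == category_hash(peach_x))  # False: different category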

        Alternatively, one can think about this more loosely in terms of Wittgensteinian 'family resemblances': if there is a resemblance between objects, there will be a resemblance in the replicators, and consequently, a resemblance in actions taken upon encountering these objects (such as saying, 'that's an apple').

        However, I think that this is an issue whose detailed treatment will have to wait until the model is more fully developed, and one can start applying it to real-world situations.

        I found the essay a bit hard to read and a bit waffly.

        It seems to me you are overcomplicating the problem.

        We can easily see how a robot could build another copy of itself and then install the software that it is itself using into the new robot - job done! No problems about infinite regress, etc.

        In nature, creatures that are unable to reproduce will die out; so, given enough time and different types of creatures being formed due to essentially random changes, those that have formed the ability to copy themselves will continue to exist - those that don't, won't.

        Declan T

          Thanks for your comment, and the compliment! I've already had a preliminary look at your essay, but I'll hold off on commenting until I've had time to digest it somewhat (there's quite a lot there to be digested).

          I'm thus not quite sure we mean the same thing by an 'open world'. It's true that I use the evolvability of my replicators in order to cope with the limitless possibilities that an agent in the world is presented with---that's why something like an expert system simply won't do: it's essentially a long, nested chain of 'if... then... else' conditionals, which the real world will always exhaust after some time (and given the limitations of feasibility of such constructions, usually after a rather short time).

          It may be that something like this intrinsically non-delimitable nature is what you have in mind with the concept of openness, which you then more concretely paint as the existence of long-range entanglement between arbitrary partitions of a system, defining a topological order. But I'll have to have another look at your essay.

          Thank you for your comment. I'm sorry to hear you found my essay hard to read; I tried to be as clear as I could. One must, however, be careful in treating this subject: it is easy to follow an intuition, and be led down a blind alley. Hence, I simultaneously tried to be as scrupulous in my formulations as possible---perhaps excessively so.

          Take, for instance, your example of the self-reproducing robot: at first sight, it seems to be a nice, and simple, solution to the problem. Likewise, a machine that just scans itself, and then produces a copy, seems perfectly adequate.

          But neither actually solves the problem, as can be seen with a little more thought. For the self-scanning machine, this is described in my essay; for your robot, the key question is how it copies its own software. The first thing to note is that the robot itself is controlled by that software; hence, all its actions are actions guided by the software. So, too, is the copying: consequently, the software must actually copy itself into the newly created robot body.

          But this is of course just the problem of reproduction again: how does the software copy itself? So all your robot achieves is to reduce the problem of robot-reproduction to software-reproduction. Consequently, it's an example of just the kind of circularity my essay tries to break up.

          So I don't think I'm overcomplicating the problem; it's just not that easy a problem (although as von Neumann has shown us, it is also readily solvable, provided one is a little careful).
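
          As an aside, a familiar programming curiosity makes von Neumann's trick vivid (this is just an analogy of mine, not part of the essay's construction): a quine is a program that prints its own source without ever 'scanning' itself---it carries a description of itself that it uses twice, once as uninterpreted data to be copied, and once as a template for the instructions to be carried out.

# A quine: the string s plays the role of the blueprint. It is used twice---
# once as uninterpreted data (inserted verbatim via %r) and once as the
# template describing the program's instructions---so the program reproduces
# its own source without ever having to 'scan' itself.
s = 's = %r\nprint(s %% s)'
print(s % s)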

          This definition of an open world is with respect to entanglement swapping in the framework of ER = EPR. With cosmology there is no global means to define a time direction. A time direction is really a local structure, as there does not exist a timelike Killing vector. Energy is the quantity in a Noether framework that is conserved by time translation symmetry. So if you have cosmologies with entangled states across ER black hole bridges (non-traversable wormholes), the only means by which one can define an open world is via entanglement exchanges. For instance, the right timelike patch in a Penrose diagram may share EPR pairs with the left patch. In general this can be with many patches, or the so-called multiverse. There can then be a sort of swapping of entanglement.

          I then use this to discuss the MH spacetimes and the prospect that this sets up the universe to permit open systems capable of intelligent choices. Your paper takes off from there to construct a possible way this can happen.

          Cheers LC

          Hi Jochen,

          You began by observing that "a stone rolls downhill because of the force of gravity, not because it wants to reach the bottom." In fact, life is almost defined by its ability to work its will against gravity. One might ask how this happens.

          But your paper, on the homunculus fallacy is excellent. The main problem of representations 'using' themselves [thus somehow invoking 'intentionality'] is two-fold. First, there is usually an infinite regress hiding somewhere, and second, as you note in your essay, in the absence of one replicator, "it is not clear how the dominant replicator is selected in order to guide behavior." This is clearly a major problem.

          Along the way quite strong assumptions are introduced:

          "Suppose the system is simply capable of scanning itself, producing a description that then enables it to construct an exact copy." [Maybe, for strings of bits, but how scan ones 3D self?] Svozil addresses this. Even so, past the DNA level, it's difficult to envision "mapping all possible responses of an automaton to binary strings...".

          Then one assumes producing "images of the world that are capable of looking at themselves - representations that are their own users." You "create mental representations (CA patterns) that are their own homunculi, using themselves as symbols." This strikes me as easier said than done!

          I love automata. My PhD dissertation, The Automatic Theory of Physics dealt with how a robot could derive a theory of physics, [see my Endnotes] but, significantly, the goal was supplied from outside, leaving only the problem of recognizing patterns and organizing Hilbert-like feature-vectors. I made no attempt to have the robot formulate the dominant goal on its own.

          You then ask that we "imagine the symbol to be grabbing for the apple." Although you presume "employing a replicating structure that interprets itself as something different from itself" [??], I have trouble imagining the symbol doing so. You've lost me. This is how you achieve "the symbol becomes itself a kind of homunculus."

          The core of the problem, as I see it, is the concept of "the internal observer, the homunculus." In other words, an internal system must both model itself and understand itself. Your treatment of this problem is masterful.

          May I suggest a different approach? In my essay I note that there are experiential grounds for speculating that there is a universal consciousness field, a physically real field, that interacts with matter. This can be developed in detail [I have done so], but for purposes of discussion, why don't you willingly suspend your disbelief and ask how this solves your problem?

          It allows a homunculus to model or "represent" itself (as pattern recognizers and neural nets can do) while not demanding that the device understand itself, or even be aware of itself. All infinite regress problems disappear, as does the need to explain how consciousness 'emerges' from the thing itself.

          I hope you will read my essay and comment in this light.

          Thanks for an enjoyable, creative, well thought out essay.

          Best regards,

          Edwin Eugene Klingman

            Hi Edwin,

            thank you for your kind words, and for giving my essay a thorough reading! I'll have to have a look at yours, so that I can comment on some of the issues you raise.

            Regarding the selection problem, I think this is something my model can only hope to address in some further developed stage. Right now, my main concern is to show, in a kind of 'proof-of-principle'-way, that a pattern, or a state of mind, having meaning to itself isn't in conflict with a natural, physical world governed by 'mindless mathematical laws', as the contest heading stipulates (although I myself tend to think of laws rather as descriptions than as active governing agencies).

            Furthermore, the 'self-scanning' system is introduced as an example of what will not work: I (following Svozil) demonstrate that this assumption leads to absurdity. So, your intuition is right: there is no system (well, no 'sufficiently complex' system) that could simply scan itself in order to produce its own description. It would've made my life a whole lot easier if there were!

            Rather, the impossibility of this particular solution is what forces me to introduce the von Neumann structure, of a system with a clearly delineated syntactic and semantic aspect---copying and interpreting its own coded description. So there's a system that simply has its own description available to itself; and if this description is now shaped, as I propose, by an evolutionary process whose fitness function depends on the 'outside world', then the description likewise contains information about the outside world.

            Consequently, we have a symbol that has access to information that it itself represents, and that information is about the outside world (by mediation of sensory data setting up certain conditions within the internal CA-universe). In this sense, it is a representation that is its own user.
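
            If it helps, here is a deliberately crude toy of the shaping process I have in mind (a sketch of my own for this discussion; nothing in it is meant to capture the actual CA dynamics): a population of bit-string 'replicators' is selected against a pattern of 'CA conditions' set up by a stimulus, and the surviving string ends up carrying information about those conditions simply by virtue of its own form.

import random

random.seed(0)

# Deliberately crude toy: the 'CA conditions' set up by a stimulus are
# represented as a target bit pattern; replicators are bit strings
# selected for how well their form matches those conditions.
CONDITIONS = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(replicator):
    # How well the replicator's form matches the conditions it evolved in.
    return sum(r == c for r, c in zip(replicator, CONDITIONS))

def mutate(replicator, rate=0.1):
    # Flip each bit with a small probability.
    return [bit ^ (random.random() < rate) for bit in replicator]

# Random initial population of candidate replicators.
population = [[random.randint(0, 1) for _ in CONDITIONS] for _ in range(20)]

for generation in range(50):
    # Selection: keep the better-matching half, refill with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(best, fitness(best))  # the dominant replicator now mirrors CONDITIONS

            The point of the toy is only that the selected string ends up being 'about' the conditions without any external observer reading it off; in the full model, that same string would additionally be the thing that gets copied and interpreted, à la von Neumann.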

            Now, intentionality is contagious: your own purposeful behavior translates into purposeful behavior of, say, the car you drive. The car makes a left turn because you want to take a left turn. In the same way, if a replicator becomes dominant, it gets to control an organism's behavior---where I fully acknowledge that how it comes to be dominant, and how exactly this behavior-controlling works, don't yet have a satisfying answer in my model.

            But suppose this works (and I don't believe that there are any other than technical problems in realizing this). Then, we have a symbol that contains information that is meaningful to itself, and that uses this information in order to guide movement---say, grabbing for an apple. That is, the goal-directedness of the action is due to the information the evolutionary process has imbued the replicator with---because it has a certain form, so to speak, it produces a certain action.

            Does this help?

            I'm going to have a look at your essay (but it might take me some time to comment).

            Cheers,

            Jochen

            Thanks for the clarifications Jochen. It's clearer to me what your account is addressing now.

            I think maybe the best way to phrase it is that there are two separate problems: the homunculus problem and the problem of reference. The problem of error that you talk about in the paper is a subproblem of the problem of reference. I don't think your account actually addresses it (beyond your advocating a narrow-content view). However, I do see how you're trying to address the homunculus problem of mental content in an interesting way. It might be clearer to separate those out in the future, so that you can drill down on this notion of "autotelic symbols."

            Thanks for the interesting read!

            Erik

            Hmm, I don't really think these two problems can be usefully separated. Rather, the homunculus problem is a problem that arises in trying to solve the problem of reference---namely, trying to solve it by means of an internal representation immediately implies the question of who uses that representation as a representation.

            Consequently, such a naive representational account doesn't work; but if the homunculus problem didn't arise, then the account could do its job, and solve the problem of reference. Likewise, if the homunculus regress could actually be completed---i.e. if we could traverse the entire infinite tower of homunculi---the account would work, giving an answer to how reference works.

            But we typically don't believe such 'supertasks' can be performed; and that's where my construction comes in, replacing the homunculus with my self-reading symbols. If they now do the same work, which I argue they do, then this solves the problem of reference just as well as traversing an infinite tower of homunculi would have.

            Dear Jochen Szangolies

            Nice reply and analysis... have a look at my essay also please....

            Best wishes for your essay

            =snp.gupta