Sara Walker,
This is a fantastic essay on the emergence of purpose. It is one of the best I have read, right up there with John Ellis's.
Here comes the amateur lumbering into the conversation, way over his head. All of the following statements are guesses. They should be prefaced with: "Could it be that..." I haven't even begun to parse it all, if I ever will. I see a lot of additional homework ahead. A deeper understanding of information theory is definitely in order. I see a lot of good leads in your list of references. Can you recommend a good reference book on Boolean networks?
First off, I think we can both take for granted the need for top-down causality in any sensible description of goal-oriented behavior. I really like your statement: 'it is not a goal if it is not internally programmed.' This line of thinking nicely decouples the word goal from my notion of teleological bias, which requires sentience and places it earlier in the evolution of the causal chain.
I find it useful to slightly redefine several of the most common words used in these discussions. These are phenomenal definitions: a sentient being is nothing more than an individuated organism that is connected to, and reacts to, variations in its environment by way of receptor and proprioceptor nerve endings. By this definition a worm can be sentient. Intelligence is the quantitative and qualitative capacity to process and organize information. By this definition, the computer Watson is highly intelligent. Consciousness is the subjective phenomenal experience of the qualia of sentience as a first-person observation of the present moment. An agenda somehow comes out of this and presents itself directly to the subject. I generally try to avoid the word intentional, as it can be confused with the less descriptive philosophical term of art denoting the content or object of consciousness. That philosophical usage is unfortunate. So much of our language developed around the requirement of encoding our dealings with other living things in our environment. Consequently it has acquired all of the teleological trappings of directed, volitional, goal-oriented behavior, which then get applied to the behavior of inanimate objects in an attempt to explain them. Nature abhors a vacuum.
From my remarks to John Ellis on his essay:
Purpose is something we see within ourselves and see in others. As embodied minds, we take its existence for granted as part of the requirement for the evolution of life. And like consciousness, it seems to resist a reductionistic explanation. Existence, sentience, consciousness and the nature and mechanism behind the collapse of the wavefunction remain elusive and mysterious.
It would occur to us in retrospect that the veracity, completeness and therefore the predictive power of this internalized observation of reality would serve an organism well. But this raises the question: how, on the evolutionary trail, did an organism's acquisition of an agenda to extract meaningful and relevant information for survival arise? Somehow, it must be connected to existential threat. But how does the organism come to sense that existential threat? My simplistic answer is that an organism's nerve endings, no matter how primitive, provide the initial feedback. All sentient beings have skin in the game. But there still remains the problem of how that feedback might be converted into consciousness and the sensation of jeopardy. {Insert hand waving here} Once the sense of jeopardy has been detected, the obvious back reaction would be a teleological bias to fulfill the dual agendas: stay in the energy flux and avoid destruction. This would go for the tubeworms living near a hydrothermal vent or, as more neural circuitry is thrown at the problem in service of this agenda, an investment banker competing for her share of the billions in bonuses available to maintain herself far from equilibrium.
Now back to your teleologically more neutral idea of 'goal,' which can occur within any intelligent information-processing system, sans sentience. This brings me closer to spanning the explanatory gap between the goal-seeking capabilities of a single biological structure and the entire organism. If goal-seeking can be had just in the structure, it might provide the bridge needed to carry it to its final destination within the organism.
There is a wonderful description of microtubules in The Quantum Brain by Jeffrey Satinover, where alternating rows of alpha and beta dimers join at a seam which is offset by three rows. This configuration would lend itself to the progressive evolution of cellular automata winding their way down and continuing their deterministic evolution at the seam, just as programmed cellular automata go off the screen on one side only to appear again on the other side. And no, I am not advocating for Orch-OR (orchestrated objective reduction). The decoherence times associated with ambient temperatures would seem to rule that possibility out. But at the same time, I would not totally rule it out.
Best regards,
Jim Stanfield