Thanks for your accolades, and for the reference to Pattee, of whom I didn't know. I thought I might respond (also to Edwin) with some thoughts on information.
On the one hand, information refers to real structure; on the other, to some extent structure is in the eye of the beholder. This complicates any analysis that is not in the third-person terms preferred by science. Epistemically, however, there is always both subject and object. Experience in general is necessarily a product of both--an interaction of the epistemic subject with its environment. Yet even to go this far embroils us in circularity or regression. For this division of subject and object--and the boundary between them--is itself determined conjointly by the epistemic subject and its environment or object of thought. In science, the putative boundary between observer and observed is supposed to be a clear separation. No doubt the conceptual problems of quantum physics (entanglement, the measurement problem, etc.) are related to this "entanglement" of subject and object. So must be the problem of the ontological status of 'information'.
It is usually admitted that information as defined by Shannon must be distinguished from meaning or content. This seems to give it an objective reality since it is freed from particular meanings and particular agents. It describes the general case, as mathematics describes the most general features of the world (properties such as integrity, additivity, identity, etc.). This does not mean there is no input from 'mind', which can slice up structure in the world in diverse ways. Information should reflect this flexibility and the agent's participation. It cannot be objective in an absolute sense, and I do not believe it should be objectified as a causal agent when we are the causal agents involved.
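Shannon's separation of information from content can be made concrete with a small sketch (in Python, purely illustrative; the messages and function names here are my own invention): the entropy of a message depends only on the statistics of its symbols, never on what those symbols mean to any agent.

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete distribution: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def char_distribution(msg):
    """Relative frequency of each distinct character in the message."""
    return [msg.count(c) / len(msg) for c in set(msg)]

# Two messages with entirely different 'meanings' but identical symbol
# statistics carry exactly the same Shannon information.
msg_a = "abab"  # hypothetical message 1
msg_b = "nono"  # hypothetical message 2

h_a = shannon_entropy(char_distribution(msg_a))
h_b = shannon_entropy(char_distribution(msg_b))
# Both come out to 1.0 bit per symbol: the measure is blind to content.
```

The point of the sketch is only that the quantity is defined over probabilities of symbols, so any interpretive act--any assignment of meaning--lies outside the formalism, just as the paragraph above suggests.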
I do not mean to ignore other agents. From what I could tell by a cursory look, Pattee's work seems to concern the general case in regard to the use of information by non-human agents, as does the work of Maturana and Varela (Autopoiesis and Cognition, c.1973?). Information for such agents may not be the same as for human agents. One can only pretend to an objective handle on the information content of something--or on its structure, for that matter.
Information should reflect reality, of course; but it also reflects the agent involved. 'Meaning' thus remains implicit, and need not be made explicit, just as Shannon kept message transmission capacity separate from the content of messages. But it would be a mistake to objectify information as ontologically real just because it reflects real structure, without considering also how it reflects the intentions of the agent.
Of course, similar arguments could be applied to all the concepts of physics. They too are human-centric (e.g. force) or involve agent participation in spite of addressing 'objective reality'. (I believe even the concept of reality is agent-related, in that it refers to the evolutionary need to take the external world seriously.) This is something a mature science must come to grips with.
Information is a necessary concept to explain cognition, but not a sufficient one. It should not be objectified as 'physical' in a way that divorces it from the meanings and uses of agents. It cannot, therefore--on its own--constitute a causal domain or serve as an ultimate ontological basis for 'reality'. On the other hand, it does bridge the gap between mind and matter/energy in that it overlaps both domains. Since meanings must be carried on physical signals, information is both mental and physical, and one can assert (as Herbert Feigl did in the mid-20th century) that 'the mental is the physical'.
The question remains, how does mind "emerge" from matter? Well, I backtrack to say that matter itself emerges through processes of self-organization (autopoiesis). With the emergence of life, there are then agents to whom various events and structures in the physical world have significance. Brains evolve as specialized organs to handle not only self-organization but specifically the organism's relations with the external world. In this way 'mind' (considered behaviorally, from a third-person perspective) emerges through an evolutionary contest. Meaning refers to the urgency of survival, because physical events bear significance for the fate of the organism and therefore are meaningful to it.
Yet, (as Chalmers and many others have insisted) there still persists a gap between first-person experience and third-person description, leaving us wondering how 'consciousness' fits into a physicalist description of nature. I have some ideas about that, which I will summarize here (more can be found on my website http://bruiger.leftfieldpress.com/).
To put the question bluntly, how can a physical system have 'experience' or a 'point of view'? Or: how can these be explained in physical terms? One must relate 1st-person and 3rd-person perspectives.
We might realize right from the outset that the 3rd-person is a fiction, at least for self-conscious organisms. All experience is necessarily 1st-person. 3rd-person description arises from the convention of ignoring the 1st-person perspective, and all that it implies epistemically. We want a transparent window on the world, unclouded by personal considerations--including the mechanics of nervous systems. This served well in physics before the discoveries of relativity and quantum mechanics. Now the presence and circumstances of the observer must be taken into consideration, if only because of the finite values of c and h.
This ideal transparent window corresponds to the perspective of a naive epistemic agent. Once self-consciousness enters, the situation is characterized rather by Gödel's theorems applied to knowledge generally.
But I have not yet addressed the gap between 1st- and 3rd-person perspectives--Chalmers' "hard problem". In a sense, it must be admitted that there never can be a definitive solution to the mind-body problem, because of the very nature of consciousness as self-transcendent. The problem is too far upstream from any ground we can stand on. For the same reason, there will never be universal agreement among philosophers, who are always free to prefer either materialism or idealism. In another sense, however, I think the problem can at least be understood more clearly. I tentatively suggest the following.
The self-luminous character of conscious experience arises through intentional assignation, in much the way that the meanings of language arise. A symbol is assigned by convention to represent something. While signal transmission has to do with causal processes, meaning has to do with intentional rather than causal connections. Consciousness is a language, so to speak, that the brain speaks to itself through such connections. Just as the inchoate sounds of speech acquire meaning to the infant (or to the adult learning a foreign language), so the "babble" of the senses acquires meaning through immersion in the world. The specific quality or "feel" of sensation, or of any other form of cognition, derives from its significance to the organism (which may be studied and understood in behavioral terms). A simple example is pain. The meaning of pain as an experience is contained in the behavioral response of aversion to the harmful stimulus. The experience of fear is aversion to a stimulus suspected to be harmful. Pleasure impels us toward a beneficial stimulus, etc. The meaning of the experience is contained in the action or judgment, the recognition of real consequence. This is how the 'symbol' acquires meaning. So much is clear for primitive responses. The picture becomes more complicated when considering the distance senses and the mind's dedication to an objective modeling of the world. One may nevertheless speculate that the very perceived objectivity of the world refers to the need of the organism to navigate space, avoiding obstacles and predators, and seeking nourishment and reproductive partners, etc. 'Reality' refers to consequence for the organism.
To further explain the comparative (behavioral) neutrality of perceptual qualities like color, shape, etc., requires introducing another fundamental aspect of mind, besides the organism's ability to respond immediately to stimuli. And that is the ability to not respond to them. A 'picture', from which one is overall detached, is built up essentially from these primitive responses. It thereby retains the "luminous" or self-evident quality that derives from intentional assignations, but does not necessarily compel us to action. This is something like the relationship between concrete terms, which evoke clear images as their referents, and the general flow of "sense" that occurs in speaking or listening, or the sense evoked by more abstract terms.
At any rate, that's my "just so" story about the mind-body problem, the best I can do here. I doubt it would satisfy Chalmers. My main point is to characterize information as a function of subject as well as object.