Dear Larissa,
First, let me congratulate you on winning the essay contest. Unfortunately, I did not get to see your essay prior to the result. Even if belatedly, let me try to understand the idea of the essay. Your statements are marked with the '>' symbol, my comments with '=>'.
> this essay discusses necessary requirements for intrinsic information, autonomy, and meaning.
=> While a specific requirement has been discussed, we are not given a picture of how information becomes abstract, or of what exactly the meaning is. The essay largely deals with the probabilistic aspect of information, not with the actual semantics of information.
> only the integrated one forms a causally autonomous entity above a background of external influences.
=> In fact, if natural causation remains entirely deterministic, all outcomes are pre-set; autonomy is not really there, only a pre-determined looping of signals and states. It seems that indeterminism, even if limited, is a necessary requirement.
> Any form of representation is ultimately a correlation measure between external and internal states.
=> While it is entirely agreeable that 'representation is ultimately a correlation', it is not a correlation between external and internal states alone. In fact, that is hardly the case: 'representation is a correlation with emergent semantics, which includes state descriptions (a projection of reality) as well as all the abstractions of semantics that we are capable of'.
> The state must be "a difference that makes a difference".
=> I am not sure, but is this said in the sense that observed states are relative measures that make a difference?
> A mechanism M has inputs that can influence it and outputs that are influenced by it. By being in a particular state m, M constrains the possible past states of its inputs.
=> Since M is a mechanism, not a physical entity, m is a particular instance of the mechanism and should not be treated as a state of matter. In several places it is used in a manner that carries this dual meaning.
> We can quantify the cause and effect information of a mechanism M in its current state mt within system Z as the difference D between the constrained and unconstrained probability distributions over Z's past and future states.
=> Z is a system, I suppose a Markov Brain, which may have many physical elements. So, what is Z's state? Is it a particular instance of specific states for its elements? And how do we understand the state mt, since M is a mechanism -- a logical or relational component of the connectivity? May I suppose that mt is a specific set of probabilistic or deterministic Logic Gates? If so, then p(Zt-1 | mt) would be the probability of finding the elements of Z in a particular specification of states at time t-1, given a specific configuration of Logic Gates (LG), and p(Zt-1) would be the LG-independent probability of the same set of states. Furthermore, as per the quoted statement, the difference between the two distributions constitutes the cause and effect information. That is, one is not talking about the meaning (semantics) of the information, but only about the probability of its occurrence (the causal connection).
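To make sure I have understood the construction, here is a toy sketch of my own (not taken from the essay): a single deterministic AND gate as the mechanism, with the "cause information" D taken, for illustration only, as the total-variation (L1) distance between the constrained distribution p(Zt-1 | mt) and the unconstrained p(Zt-1). The choice of D and all names here are my assumptions.

```python
from itertools import product

def and_gate(inputs):
    """A deterministic mechanism: output 1 only if both inputs are 1."""
    a, b = inputs
    return a and b

# All possible past states Z_{t-1} of the two input elements.
past_states = list(product([0, 1], repeat=2))

# Unconstrained distribution: uniform over the four past states.
p_unconstrained = {s: 1.0 / len(past_states) for s in past_states}

# Constrained by observing the gate ON at time t (m_t = 1): with a
# uniform prior and a deterministic mechanism, Bayes leaves only the
# past states compatible with that output.
m_t = 1
compatible = [s for s in past_states if and_gate(s) == m_t]
p_constrained = {s: (1.0 / len(compatible) if s in compatible else 0.0)
                 for s in past_states}

# D as a total-variation (L1) distance -- my stand-in for the essay's
# unspecified difference measure.
D = 0.5 * sum(abs(p_constrained[s] - p_unconstrained[s])
              for s in past_states)
print(D)  # -> 0.75: the AND gate in state 1 pins Z_{t-1} to (1, 1)
```

On this reading, D quantifies only how strongly the gate's current state constrains its possible causes -- exactly the probabilistic, non-semantic notion of information I describe above.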
> All causally relevant information within a system Z is contained in the system's cause-effect structure.
=> Indeed, but in this essay it is only information about the physical states. Consider intention: intention is not just a state of matter. In any case, you have also concluded, "What kind of cause-effect structure is required to experience goals, and which environmental conditions could favor its evolution, remains to be determined."
> If the goals that we ascribe to a system are indeed meaningful from the intrinsic perspective of the system, they must be intrinsic information, contained in the system's cause-effect structure.
=> In my view, the assertion is accurate. In this essay, though, the 'intrinsic perspective' is ascribable only to the 'intrinsic correlation' with the states of matter, not to the semantics (meaning) of the 'intention of goals'. It is pertinent to note that all physical entities, as they evolve, follow the path of 'least action' (usual units: energy x time). So, if we ascribe the 'goal' of performing least action, all mechanical entities naturally pursue it.
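For clarity, the least-action remark above refers to the standard textbook principle (not anything specific to the essay):

```latex
% Action functional over a trajectory q(t), with Lagrangian L:
S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt, \qquad \delta S = 0
% Stationarity of S yields the Euler--Lagrange equation:
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
```

That is, the 'goal' is satisfied automatically by the dynamics; no intrinsic perspective is needed for it.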
> This is simply because a system cannot, at the same time, have information about itself in its current state and also other possible states.
=> Information is what a state correlates with, emerging from the observed relative state descriptions of interacting entities as per natural causation. Therefore, a state of an entity is not correlating information in itself. Furthermore, there is no unique definition of a state; a state is what is observed by another interacting entity (an observer). Therefore, a system can never be said to have information about its current state, since the state is not defined unless observed. In my reckoning too, it naturally requires a recurrent system to know what state it was in a while ago, but the new state is yet another state. For example, with the temporal relation being one central element of state description, the current state is always a new state. By the way, the resultant state is also a consequence of the relations among the observed states of the context, which is what sets the basis for the emergence of abstract semantics. Scientists usually talk about emergence, but never lay down the formal mechanism of abstraction.
> Any memory the system has about its past states has to be physically instantiated in its current cause-effect structure.
=> I suppose this is what won you the day. But the memory need not be re-instantiated as long as the present state is causally dependent on the past configuration of states and their relations.
> The cause-effect structure of a causally autonomous entity describes what it means to be that entity from its own intrinsic perspective.
=> There is a big leap here. It is unclear 'what it means to be that entity'. Though it may be so, why and how it is so is not worked out in this essay. Considering that all an element correlates with are the states of the other connected elements, the constrained element may have state information about those elements. Furthermore, while the process of unification is outlined by stating that each element affects every other in turn, how the semantic unification takes place is left out.
> In this view, intrinsic meaning might be created by the specific way in which the mechanisms of an integrated entity constrain its own past and future states.
=> The expression of mere possibility, "intrinsic meaning might be created by the specific way", is apt, since the actual emergence of such a meaning is not detailed in the essay.
This is an essay which, I presumed, I could read without external help. The publications cited in it serve not as evidence or as suggestive further links, but as required background. I could not follow the methods of operation of the Markov Brain, since the rules of evolution, the formation of connectivity, and the parametric values are not defined here. For example, why would they develop connections at all, unless that rule is specially coded? This turned out to be the hardest read for me, and yet I am not fully confident. I had to learn the rules of the methods from the Wiki and the cited texts. Similarly, I could not see how you calculated the value of R to be 0.6 at one point, even though I understood the idea of eqn. 1.
Hearty congratulations again on winning the contest -- a remarkable feat indeed!
Rajiv