Dear Larissa,
Thanks for a genuinely insightful essay. At several points, I was afraid you'd fall for the same gambit that's all too often pulled in this sort of discussion---namely, substituting the meaning that an external observer sees in an agent's behaviour for meaning available to the agent itself. At each such juncture, you deftly avoided this trap, pointing out why such a strategy just won't do. This alone would have made the essay a very worthwhile contribution: even in scholarly discussions of this issue, people often seem insufficiently aware of this fallacy, and (sometimes inadvertently) try to sell mere correlation---say, the covariance of some internal state with an external variable---as being sufficient for representation.
But you go even further, giving an argument why the presence of integrated information signals the (causal) unity of a given assemblage. Now, it's not quite clear to me why, exactly, such causal unity ought to bestow meaning available to the agent. I agree with your stipulation that intrinsic meaning can't arise from knowing: that simply leads to vicious regress (the homunculus fallacy).
Take the above example of correlated internal states and external variables: in order to represent an external variable by means of an internal state, their covariance must, in some sense, be known---in the same way that (my favorite example) one lamp lit at the tower of the Old North Church means 'the British will attack by land' only if whoever sees this lamp also knows that 'one if by land, two if by sea'. Without this knowledge, the mere correlation between the number of lamps and the attack strategy of the British forces does not suffice to decipher the meaning of there being one lamp. But such knowledge itself presupposes meaning, and representation; hence, any analysis of representation in such terms is inevitably circular.
But it's not completely clear to me, from your essay, how 'being' solves this problem. I do agree that, if it does, IIT seems an interesting tool for delineating the boundaries of causally (mostly) autonomous systems, which may then underlie meaningful representations. I can also see how IIT helps 'bind' individual elements together---on most accounts, it's mysterious how the distinct 'parts' of experience unify into a coherent whole; to take James' example, how, from ten people each thinking of one word of a sentence, an awareness of the whole sentence arises. But that doesn't really help in getting at those individually meaningful units to be bound together, at least, not that I can see...
Anyway, even though I don't quite understand how they work on your account, I think that the sort of feedback structures you identify as possible bearers of meaning are exactly the right kinds of thing. (By the way, a question, if I may: does a high phi generally indicate some kind of feedback, or are there purely feedforward structures achieving high scores?)
The reason I think so is that, coming from a quite different approach, I've myself homed in on a special kind of feedback structure that I think serves at least as a toy model of how to achieve meaning available to the agent (albeit perhaps an unnecessarily baroque one): that of a von Neumann replicator. Such structures are bipartite, consisting of a 'tape' containing the blueprint of the whole assembly, and an active part capable of interpreting and copying the tape, making them a simple model of self-reproduction (whose greatest advantage is its open-ended evolvability). In such a structure, the tape influences the active part, which in turn influences the tape: a change in the active part yields a change in the tape, through differences introduced in the copying operation, while the changed tape itself leads to the construction of a changed active part. Thus, the two elements influence one another in a way formally similar to the two nodes of your agents' Markov Brains.
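Just to make that mutual influence concrete (and this is only my own minimal sketch, not von Neumann's actual construction---the 'copy fidelity' rule below is an illustrative stand-in for a real constructor):

```python
import random

def build(tape):
    """Tape -> active part: 'interpret' the blueprint. Here the
    active part is just a copy-fidelity derived from the tape."""
    return {"copy_fidelity": tape.count(1) / len(tape)}

def copy_tape(tape, active, rng):
    """Active part -> tape: the active part copies the tape, and its
    own state determines how faithfully (errors flip bits)."""
    return [bit if rng.random() < active["copy_fidelity"] else 1 - bit
            for bit in tape]

def generation(tape, rng):
    active = build(tape)                     # tape shapes active part
    return copy_tape(tape, active, rng)      # active part shapes tape

rng = random.Random(0)
tape = [1, 1, 1, 0, 1, 0, 1, 1]
for _ in range(5):
    tape = generation(tape, rng)
```

A change anywhere in the tape alters the constructed active part, which in turn alters how the next tape is copied---the same two-node loop, in caricature.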
What may be interesting is that I arrive at this structure from an entirely different starting point---namely, trying to exorcize the homunculus mentioned above by creating symbols whose meaning does not depend on external knowledge, but which are instead meaningful, in some sense, to themselves.
But that's enough advertisement for my essay; I didn't actually want to get into that so much, but as I said, I think that there may be some common ground both of our approaches point towards. Hence, thanks again for a very thought-provoking essay that, I hope, will go far in this contest!
Cheers,
Jochen