Dear Carlo,
one of the (many) merits of your essay is, I believe, to draw attention to a concept - meaning, or meaningful information - that was left somewhat in the shadow in the Contest Guidelines (the keywords there being long-term goals, intentions, agency, etc.), and to convincingly argue that this concept could indeed represent a first crucial step on the path from the physical to the mental, which is of course highly relevant to the contest objectives.
Once the entropy-based notion of useful correlation between internal and external variables is in place, it is easy to see how the human brain, endowed with memory and the ability to model the external world, can exploit these correlations even in a conscious manner, e.g. by running simulations internally before triggering external actions.
But I would be very interested in the opposite extreme: how far down can your idea be pushed?
In a world conceived as a bunch of atoms of spacetime, or a causal set, rather than a network of cells or animals, when and how could I start spotting meaningful mutual information at work?
Prerequisites include the emergence of sufficiently persistent regions (X, Y, ...) with an inside and an outside, and of macro-variables (x, y, ...) defined on top of the available micro-levels, which make entropy notions applicable. Talking about correlations between variables x and y also requires many instances of their value pairs in 'time' and/or in 'space' (thus, persistent or multiple copies of X and Y).
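To make that last prerequisite concrete, here is a minimal sketch (my own toy construction, not taken from your essay, with binary macro-variables assumed purely for simplicity): given many sampled value pairs (x_t, y_t), the plug-in estimator below computes the empirical mutual information I(X;Y) on which a definition of meaningful information can build.

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in bits from sampled (x, y) value pairs."""
    n = len(pairs)
    joint = Counter(pairs)                    # empirical joint p(x, y)
    px = Counter(x for x, _ in pairs)         # empirical marginal p(x)
    py = Counter(y for _, y in pairs)         # empirical marginal p(y)
    # I(X;Y) = sum over (x, y) of p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    return sum(c / n * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

# Toy data: an 'inside' variable x that tracks an 'outside' variable y
# correctly 90% of the time.
random.seed(0)
ys = [random.randint(0, 1) for _ in range(10000)]
xs = [y if random.random() < 0.9 else 1 - y for y in ys]
print(mutual_information(list(zip(xs, ys))))  # close to 1 - H(0.1), i.e. about 0.53 bits
```

Of course the estimator says nothing yet about usefulness; it only quantifies the raw correlation that persistent copies of X and Y must first make available.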
I am much attracted by the search for the most elementary formal systems -- possibly intended as models of a (young) universe -- that support your definition of meaningful mutual information and its fruitful operation in an evolutionary sense; a toy illustration of what I have in mind follows below. I would be grateful if you could share your opinion on this issue.
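Purely as an illustration of the kind of minimal system I am thinking of (again my own toy construction, with hypothetical choices of rule, regions and coarse-graining): an elementary cellular automaton in which a small region X is read as an 'inside' and a neighbouring region Y as the 'outside'. Tracking the coarse-grained pair (x_t, y_t) over time and applying the estimator sketched above, one can ask when, and under which rules, non-trivial mutual information between the two regions appears.

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    # Same plug-in estimator as in the previous sketch.
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton on a ring."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

random.seed(1)
cells = [random.randint(0, 1) for _ in range(200)]
X, Y = range(0, 10), range(10, 20)        # adjacent 'inside' and 'outside' regions
pairs = []
for _ in range(20000):
    cells = step(cells)
    x = sum(cells[i] for i in X) % 2      # macro-variables: parity of each region
    y = sum(cells[i] for i in Y) % 2
    pairs.append((x, y))
print(mutual_information(pairs))          # plug-in estimates are biased upward,
                                          # so only clearly positive values count
```

Whether such a correlation can ever become meaningful in your sense -- i.e. be selected for because it contributes to the persistence of X -- is exactly the question I would love to see addressed.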
Thanks!
Tommaso