Dear Erik,
Thanks for your comments! I'm glad you see some value in my thoughts. I should perhaps note that, to me, my model is more a 'proof of concept' than a serious proposal for how intentionality works in actual biological organisms (to point out the obvious difference, none of us has a CA in their brain).
So the bottom line is that if the model works as advertised, what it does is show that there is logical space between eliminativism and mysticism when it comes to intentionality---i.e. neither does the acceptance of a world that's comprised 'only' of the physical force us to deny the reality of intentionality, nor does the acceptance of that reality force us to believe that there is some mysterious form of original intentionality that we just have to accept as a brute fact about the world. There's a possibility here of both having your cake and eating it (if, again, things work as I want them to).
Regarding the issues you raise, I think the most important thing to realize is that ultimately, reference in my model isn't grounded in the outside world, but rather in the 'environmental conditions' set up in the cellular automaton via the environment's influence, mediated by the senses. So in some sense, we don't really have access to the outside world---but then again, we knew that already: all it takes to fool us is to insert the right electrochemical signals into our sensory channels, whether that's done by an evil demon or a mad scientist with your brain in a jar on their desk.
So, for the problem of error, this means that, in the case you're describing, we simply don't notice the error: the replicator wasn't perfectly adapted to the CA environment, but it was never replaced, so things just proceed as if it were a faithful representation of that environment. I might, for instance, run out of my room upon seeing the 'stranger', straight to the police, and report a break-in. I'm wrong about that, of course---there never was a stranger in my room, or a break-in---but this seems a common enough sort of occurrence.
Similarly, you are right that there isn't necessarily a one-to-one correspondence between objects in the world and replicators. But then, what of it? It just means that we'll have the same beliefs, and show the same behaviors, in the presence of either---that is, we just can't distinguish between them.
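To make the two points above concrete, here is a minimal toy sketch (my own illustration, not anything from your model's actual machinery---the bit-vectors, the `sense` projection, and the mutation scheme are all invented for the example). Distinct 'world states' can project onto the same CA-conditions, so the senses can't tell them apart; and selection of replicators against those conditions may stall short of a perfect match, which is the sketch's stand-in for persistent, unnoticed error:

```python
import random

random.seed(0)

def sense(world_state):
    # Hypothetical 'senses': project the world onto a coarser vector of
    # environmental conditions inside the CA. Distinct world states can
    # collapse onto the same conditions (the jacket vs. the stranger).
    return tuple(bit for i, bit in enumerate(world_state) if i % 2 == 0)

def fitness(replicator, conditions):
    # A replicator 'represents' the conditions to the degree it matches them.
    return sum(r == c for r, c in zip(replicator, conditions))

def evolve(conditions, generations=50):
    # Start from a random replicator; keep a mutant only if it matches the
    # CA-conditions at least as well as the incumbent. Nothing guarantees a
    # perfect match within the time given -- a surviving mismatch is the
    # toy analogue of an error we simply never notice.
    current = tuple(random.randint(0, 1) for _ in conditions)
    for _ in range(generations):
        i = random.randrange(len(conditions))
        mutant = current[:i] + (1 - current[i],) + current[i + 1:]
        if fitness(mutant, conditions) >= fitness(current, conditions):
            current = mutant
    return current

jacket   = (0, 1, 1, 0, 1, 1, 0, 0)   # world state A
stranger = (0, 0, 1, 1, 1, 0, 0, 1)   # world state B
print(sense(jacket) == sense(stranger))  # True: the senses conflate them
```

Since both world states set up the same conditions, one and the same replicator prospers in the presence of either, which is just the many-to-one correspondence you point out.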
I don't think this necessarily reduces the approach to a pragmatist one---in the end, all we are ever aware of, or have beliefs about, are really classes of things, and not individual things themselves. For instance, the chair in my office certainly contains a few different atoms today than it did yesterday; yet I consider it to be the same chair, and my beliefs and other propositional attitudes toward it aren't influenced by this difference. Some differences just don't make a difference to us.
This then also suggests a reply to the Twin Earth case: on my account, 'water' doesn't refer to either H2O or XYZ; it refers to some set of CA-conditions set up by being in the presence of some sufficiently similar liquids. My meanings are all in the head.
This also accounts for the possibility of a divergence in meaning once additional facts come to light: suppose Earth science (and with it, Twin Earth science) becomes sufficiently advanced to tell the difference between H2O and XYZ. Then the inhabitants of Earth could redefine water as 'that liquid whose chemical composition is H2O', while Twin Earthlings could say instead that water is 'that liquid whose chemical composition is XYZ'. This difference will be reflected in a difference between the CA-conditions set up in an observer of water and their twin: the knowledge of water's chemical composition allows different replicators to prosper.
Furthermore, an inhabitant of Earth transported to Twin Earth will mistake XYZ for H2O; but then, upon further analysis---i.e. looking really hard, just as it might take looking harder to distinguish between a stranger and a jacket in a dark room---he will learn of the difference. In the future, he then simply won't know whether he's presented with a glass of H2O or a glass of XYZ without doing the analysis---but that's no more problematic than not knowing whether something is water or vodka without having had a taste.