Dear Conrad,
I don't know how I missed your reply earlier---sorry for that. And thank you for your kind words!
I agree that representationalism isn't necessarily the only way to get meaning out of a system; one could, for instance, also think in terms of subsymbolic approaches. Representationalism's main virtue, to me, is that if it works, it's completely clear how it works: some vehicle simply stands in for some object or state of affairs. But of course, this direct route is blocked by the homunculus problem; hence my attempt to patch things up. If that turns out not to work, it might be necessary to abandon representationalism altogether and move on to something else; but since, to me, this seems to entail a certain loss of intuitiveness and clarity, I'm going to keep digging on this ground until I'm absolutely certain I'll never strike gold.
I'll certainly have a look at your essay; maybe I'll find something interesting to say about it.
However, a point of clarification: I don't understand my model as being mainly computational; in fact, I'm skeptical of computational models. I know that CAs are usually thought of as computational systems, but that just means they are systems that can be used to compute, not that they are intrinsically computational. To me, what's more important is the pattern, which is a physically real thing (analogous to the pattern of neuron firings in a brain), and its properties. The meaning I see is the semantic information the pattern contains about itself and about its environmental conditions. But that's not a point I wanted to put too much emphasis on in the present essay.
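To make the pattern talk a bit more concrete (just an illustrative sketch of my own choosing, not the specific CA from the essay): think of a glider in Conway's Game of Life. The update rule is purely local and never mentions gliders, yet the five-cell pattern persists and propagates, and it's the pattern, not any particular cell, that carries the regularity:

import numpy as np

def step(grid):
    """One synchronous Game of Life update on a toroidal grid."""
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Born with exactly 3 live neighbors; survives with 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# Place a standard glider in an otherwise empty 8x8 world.
grid = np.zeros((8, 8), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for t in range(5):
    print(f"t={t}\n{grid}\n")
    grid = step(grid)
# After four updates the same five-cell shape reappears, shifted one
# cell down and one to the right: what persists is the pattern itself,
# not the states of any fixed set of cells.

Obviously nothing in my model hinges on this particular example; it's just meant to illustrate the sense in which the pattern, rather than the computation, does the work.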
Anyway, thanks again for your comment!
Cheers,
Jochen