Dear Noson,
I'm very happy you found the time to have a look at my essay, and even more happy you found something to like about it!
As for strong AI, it depends on how you understand the thesis. I do believe that conscious machines are possible---and that, indeed, we are just such machines. The notion of dualism is superficially attractive, but so far, I simply have never found a good explanation of how two substances can causally interact without effectively becoming unified: after all, we know the physical world only via cause and effect; whatever leaves some imprint on our measuring devices, usually via a fairly long chain of proxies unraveled by inference, we call 'physical'. But anything that can causally influence something physical is in principle detectable by a suitable measuring instrument; so what could make it non-physical?
If, however, you take strong AI to mean that machines are conscious in virtue of computation, then I would be inclined to disagree: while I believe that model-building, and consequently the mind's explanatory capacities, are indeed computational, I also think that this computation has to be grounded in something non-computational to avoid infinite regress. So consciousness is not merely the right program running on some appropriate hardware.
You're very right to point to Rorty, I think. But in a sense, my account is not wholly model-based: the connections between the model and the world, in a way the 'clay' from which the models are built, are not themselves part of the model, but something more fundamental that underlies it. That we do build models in cognition is, I think, hard to deny: for instance, when I picture how to perform a task, say, tying my shoes, I can visualize it as a series of steps---a kind of algorithm.
But we can model the world under different aspects, with models still cut from the same cloth: for instance, when listening to somebody speak, we can pay attention to what is being said---understand the meaning of the words---or we can pay attention to how it is being said---register inflection, tone, rhythm, the uhms and uhs that one usually does not consciously perceive.
Both these views pertain to the same phenomenal experience, however: the same sounds reach our ears. We just attend to that experience in different ways---which is roughly what Ned Block calls 'access consciousness' (as opposed to 'phenomenal consciousness'). This is what my models really pertain to (although I formulated this somewhat more strongly in the essay).
As for an analogy to the completeness theorem, in a sense, I already use it in the essay: in my discussion of the zombie argument, I claim that, since phenomenal experience is in some sense 'undecidable', one can imagine both that a physical system (like a brain) possesses it and that it fails to (in which case it would be a zombie).
The completeness theorem, applied to a formal system F subject to Gödelian incompleteness, basically tells us that there exists a model of the system extended by its Gödel sentence G, as well as a model of the system extended by its negation, ~G. Applied to the zombie issue, this would entail a 'possible (or perhaps, imaginable) world' in which there are zombies, and a possible world in which brains possess phenomenal experience.
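The chain of reasoning here is the standard textbook one (nothing specific to the essay), and it may help to spell it out; a sketch in logical notation:

```latex
% Let $F$ be a consistent, effectively axiomatized theory containing
% enough arithmetic, and let $G$ be its G\"odel sentence.
%
% Incompleteness: $F$ proves neither $G$ nor its negation,
\[
  F \nvdash G \quad\text{and}\quad F \nvdash \neg G,
\]
% so both extensions of $F$ are consistent:
\[
  \mathrm{Con}(F + G) \quad\text{and}\quad \mathrm{Con}(F + \neg G).
\]
% Completeness: every consistent theory has a model, hence
\[
  \exists\, \mathcal{M}_1 \;\, \mathcal{M}_1 \models F + G
  \quad\text{and}\quad
  \exists\, \mathcal{M}_2 \;\, \mathcal{M}_2 \models F + \neg G.
\]
```

On the metaphor, the two models $\mathcal{M}_1$ and $\mathcal{M}_2$ would play the role of the zombie world and the phenomenally conscious world, respectively.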
Of course, this talk is somewhat metaphorical at best. But it's very interesting to think about!
Thanks, again, for your comment, and your good wishes.
Cheers,
Jochen