Hi Stefan, glad I could help some. You're right that it would be mysterious to first characterize qualia as noncomputational, and then turn around on a dime to claim a computational mind after all (if I understand you correctly, and that is what you're saying). But that's not my intention. Rather, seemingly in some agreement with you, I think that qualia are noncomputational phenomena, and that our problems with explaining them ultimately stem from the fact that reason, being based on models, is computational.
So in a sense, the mind can do more than it can explain. You might think of qualia as supplying the link between our models of the world and the stuff in the world they model: they underwrite our 'mental simulations' (i.e. computations) of the world, but are not themselves computational.
You need something different in kind from the way computation works in order to ward off infinite regress. I think this is best explained by considering a simplified version of the 'language of thought' hypothesis: when we encounter, say, a Chinese text, we may translate it into a language we already understand, say English, to grasp its meaning.
But how, then, is the English text understood? The language of thought hypothesis says that there is a native language our brains speak, called mentalese, into which the English text is translated.
Now, clearly, mentalese can't itself be understood in the same way, or else we would face an infinite regress, always needing yet another language to translate into. So our minds must be capable of some other process for uncovering the meaning of mentalese, one that ultimately grounds our ability to understand any language at all.
The same now goes for computation (or model-building). A computation is ultimately just a sequence of computational states, which then gets mapped to, for instance, the states of some physical system, or derivations in some formal system, or the like. This mapping is our translation, and in general, we can use a computation to implement the mapping itself (in computer science terms, this is called a 'reduction').
However, we face the same problem as above: we may interpret, say, the lights blinking on some early computer as a numerical value, and thus take the computer to implement a certain calculation, by mapping its states to abstract mental symbols, the equivalent of mentalese. But how are those symbols themselves understood? The process grounding this understanding can't itself be computational, or we just run into the regress again. Consequently, some noncomputational process must be at work to ground our mental models. It is this process that I propose to identify with qualia.
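To make the mapping step concrete, here's a minimal sketch of the blinking-lights interpretation; all names and the specific encoding are just hypothetical illustrations, not anything anyone actually claims the brain does:

```python
# "Physical" side: successive lamp configurations of an imagined early computer.
# "Abstract" side: the numbers we read off them. The interpretation map is
# itself a computable function, which is exactly why it can't stop the regress.

def interpret(lights):
    """Map a tuple of on/off lamps to a number (binary, most significant first)."""
    value = 0
    for lamp in lights:
        value = value * 2 + (1 if lamp else 0)
    return value

# A "run" of the physical system: three lamp configurations in sequence...
physical_states = [
    (False, False, True),
    (False, True, False),
    (False, True, True),
]

# ...which, under this interpretation, implements counting 1, 2, 3.
abstract_states = [interpret(s) for s in physical_states]
print(abstract_states)  # [1, 2, 3]
```

The point of the sketch is that `interpret` only pushes the question back: it maps one set of symbols onto another, and is itself just more computation, so something of a different kind has to ground the final step.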