Note that the hierarchy of homunculi is something very different from the hierarchy of knowledge you propose. In the latter, you can, in a sense, go as far as you like, each new level being a sort of 'coarse-graining' of the level below; but nothing forces you to go on forever: in many cases, the hierarchy will in fact terminate, because eventually there's nothing left to coarse-grain.
The hierarchy of homunculi, however, is necessarily infinite. Picture a person looking at a screen. We'd say that they understand what's happening on the screen, that they have knowledge about it, and so on. For instance, the screen might show an apple; the person will identify the apple, and thus recognize the picture as a picture of an apple. In this way, the picture comes to represent an apple.
But if we attempt to cash out all representation in this way, we become trapped in an infinite regress: if, in order to recognize the picture as being of an apple, the person must possess some internal mental representation of it---an inner picture of that picture of an apple---then they likewise need an inner observer who recognizes that second-order picture as such, in order to make it a representation of the picture the person sees.
But this never bottoms out: we're left with ever higher-level homunculi, and just as the picture of an apple can only be recognized as a picture of an apple if the first-level homunculus recognizes the internal representation as representing such a picture, the interpretation of the representation at the nth level depends on the interpretation of the representation at the (n+1)st; thus, we would have to climb the whole infinite hierarchy in order to generate, on the lowest rung, the recognition of the picture of an apple as a picture of an apple. Since we generally consider such infinite tasks impossible, it follows that representation---intention, meaning, having aims and goals---cannot be explained by such a homuncular theory.
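To make the structure of the regress vivid, here is a deliberately naive sketch (the function and its names are only my illustration, not a serious model of recognition): interpreting a representation at one level is made to depend on interpreting another representation one level up, so the procedure can never return an answer.

```python
def recognize(representation, level=0):
    """Toy rendering of the homuncular regress: to interpret the
    representation at level n, we must first interpret a higher-order
    representation at level n + 1, and so on without end."""
    inner_picture = f"inner-picture-of({representation})"
    # The interpretation at this level is deferred to the next homunculus...
    return recognize(inner_picture, level + 1)  # never bottoms out

# recognize("picture of an apple")
# In practice this simply exhausts the call stack (RecursionError);
# conceptually, the point is that no level ever delivers a recognition.
```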
Now, not every theory of intentionality need be homuncular. Yours may not be, for instance. I try to circumvent the homunculus problem by means of the von Neumann construction, which allows me to construct a sort of internal representation that is itself its own user---that 'looks at itself', recognizing itself as representing something. But very many theories, in practice, are homuncular---as a guideline, whenever you read sentences such as 'x represents y', 'x means z', and so on, you should ask: to whom? And if there is an entity, implicit or explicit, that needs to be introduced in order to use a given representation as representing something, then you can discard the theory: it contains an (often unacknowledged) homunculus.
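As a loose analogy for how such a self-applying representation can get off the ground (a quine is not the von Neumann construction itself, but it shares the crucial trick of using a self-description in two ways, once interpreted as instructions and once merely copied as data, with no outside reader required):

```python
# A quine: the string s is the program's description of itself, used twice,
# once *interpreted* (formatted into the template) and once *copied*
# verbatim (passed in as data). Running it prints its two code lines back
# out exactly as written.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```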
Regarding the halting problem, the example I gave is actually one in which it is solvable (while in general, it's not): the algorithm that unconditionally calls itself will never terminate, and we can tell so just by inspecting it. But you are right to link infinite regress and self-referential problems such as the halting problem: when I draw a map of an island, and the island includes the map (say, it's installed at some specific spot), then the map must refer to itself; and if it's infinitely detailed, then the map must contain a copy of the map, which must contain a copy of the map, which must contain... and so on.
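Rendered as code, the example is nothing more elaborate than this (a minimal sketch; the function name is just for illustration):

```python
def calls_itself():
    # Unconditionally calls itself, so it never terminates. For this
    # particular program the halting question is trivially decidable
    # (answer: it does not halt), even though no general algorithm can
    # decide halting for arbitrary programs.
    return calls_itself()

# calls_itself()  # in practice Python aborts with a RecursionError once
#                 # the call stack overflows, but conceptually the
#                 # computation simply never returns
```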