Dear Torsten,
You propose an intriguing new way of looking at neural networks in terms of topologically stable structures, namely feedback loops. I'm not sure, however, that I fully understand it. I would see an analogy to (topological) error correction here: the whole network, i.e. the detailed states (firing or not firing) of all the neurons, yields the codespace, while only the topologically protected structures encode information; that way, the information is robust to small, random fluctuations within the network, i.e. noise. Is this somewhat close?
I'm also not sure what exactly you mean by the 'strength' and 'phase' of a signal. Are you referring to the signal carried by a given feedback loop? If so, is the strength related to the rate of firing, and the phase to the timing?
In the end, it's an interesting proposal, though I feel it could benefit from a more in-depth treatment (which, of course, is hard to do within the length constraints of this essay contest).
Hope you do well!
Cheers,
Jochen