Dear Sabine,
Thanks for an engaging and well-written essay. It's good to see a skeptical point of view on the applicability of limitative metamathematical theorems to physics, or at least one tinged with a healthy dose of realism.
However, I find your initial argument somewhat spurious. While it's true that, no matter how much we like a certain mathematical formalism, this doesn't tell us anything about whether nature actually avails herself of that formalism, perhaps much to our chagrin, the theorems of Gödel, Turing et al. are not limited to some particular formalism; rather, they are metamathematical theorems that apply to wide classes of formal systems.
In this sense, they are rather like the theorems of Bell and Kochen-Specker: once a mathematical formalism fulfills the requisite conditions (essentially, allowing for the possibility of universal computation), it must be subject to these theorems. We're thus not facing the question of whether a certain piece of mathematics applies to the real world, but whether a certain kind of mathematics does, with a negative answer essentially reducing the physical universe to the equivalent of some finite-state machine.
Furthermore, I'm not quite sure I agree with you that these questions only apply in infinitary contexts. It seems to me that we can become convinced that they apply to the world in the same sense we can become convinced of anything in science: by inference to the best explanation. This is fallible knowledge, but so is all scientific knowledge.
As such, let's assume that the world is described by some noncomputable function, and have an agent try to find out that fact about it. One typical argument runs: the agent only ever has access to finitely many observations, and so will never be able to make that determination conclusively, since any finite body of data can be reproduced by some finite machine. That's of course entirely true.
However, in general, a finite machine will only be able to reproduce that data by essentially restating it; that is, it will not significantly compress it. In particular, it is unlikely to predict any further observations.
On the other hand, every noncomputable function can be decomposed into a finite algorithm plus an infinite random string of bits. This decomposition may, in general, both compress the original data and enable future predictions, at the cost of interleaving occasional random events.
As an example, consider the function that outputs, at each timestep n, the square of n, except when n is prime, in which case it outputs the nth digit of the halting probability of some universal Turing machine. This function is clearly noncomputable. Yet any agent observing its output will, if they're sufficiently clever, eventually deduce that they can predict the output at all times n such that n is not prime; at all other times, the output will appear random. That is, they can decompose the evolution into a deterministic law plus random events.
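To make the toy model concrete, here is a minimal sketch in Python; the names (omega_bit, world) are just illustrative. Since the digits of the halting probability are noncomputable, and to any observer look algorithmically random, a seeded random bit source stands in for them in this simulation:

    import random

    def is_prime(n):
        # Trial division; good enough for a sketch.
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    # Stand-in for the nth binary digit of the halting probability.
    # The true digits are noncomputable; here a seeded random source
    # plays their role, purely as a modelling assumption.
    _oracle = random.Random(0)
    _memo = {}

    def omega_bit(n):
        if n not in _memo:
            _memo[n] = _oracle.randint(0, 1)
        return _memo[n]

    def world(n):
        # Deterministic law at non-prime times, a "random" digit at primes.
        return omega_bit(n) if is_prime(n) else n * n

    print([world(n) for n in range(1, 16)])
    # Squares at n = 1, 4, 6, 8, 9, 10, ..., oracle bits at n = 2, 3, 5, 7, 11, 13.

An agent watching this stream can eventually compress it to the simple rule above plus a list of unpredictable bits, which is exactly the decomposition I mean.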
If they are unable to find a better explanation (and they won't), they'll accept that this is as good as it gets; and so, from entirely finite data, and with the same degree of reasonableness with which any conclusion is reached in science, they'll conclude that the underlying function is noncomputable. Hence, these issues can be brought within the purview of ordinary scientific reasoning.
Anyway, I greatly enjoyed your essay, especially the connection between the real butterfly effect and the breakdown of reductionism, or at least of reducibility. I wish you the best of luck in the contest!
Cheers
Jochen