Dear Gemma,
I'm glad you found something worth your while in my essay! Your summary, I think, is accurate: in the general sense, there exists a bound on the information obtainable about any given system (which, in the full sense, requires an appeal to Chaitin's quantified incompleteness theorem given in the Foundations paper), and once one has reached that limit, 'old' information must become obsolete---as it does when we try to expand our horizon by walking west (on the spherical Earth, or, well, any planet will do), losing sight of what lies to the east.
I'm not completely sure I get the gist of your question regarding quantum computation (etc.) right. Are you asking whether my proposal implies that quantum computers should be capable of beyond-Turing computation? If so, then the answer is kind of an 'it depends': any functional model of quantum computation will be able to compute only those functions that a classical Turing machine can compute. But still, if quantum mechanics is genuinely (algorithmically or Martin-Löf) random, then obviously, we can use quantum resources to do something no classical machine can do, namely, output a genuinely random number! Hence, as Feynman put it, "it is impossible to represent the results of quantum mechanics with a classical universal device."
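Just to make that 'quantum coin flip' concrete, here is a little sketch of my own (plain Python/NumPy, nothing official) of the Born-rule probabilities behind it; note that a classical computer can only pseudo-randomly sample the resulting distribution, which is precisely the point.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                            # |0>
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                # |+> = (|0> + |1>)/sqrt(2)
probs = np.abs(state) ** 2      # Born rule: [0.5, 0.5]

# A classical machine can only *pseudo*-randomly sample this distribution;
# a physical qubit (if quantum mechanics is genuinely random) would deliver
# an irreducibly random outcome.
outcome = np.random.choice([0, 1], p=probs)
print(probs, outcome)
```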
So in a sense, we need to be careful with our definitions here---any way of implementing a finite, fixed procedure, whether classically or with quantum resources, will yield a device with no more power than a classical Turing machine (regarding the class of functions that can be computed, if not the complexity hierarchy). The reason for this is that simply outputting something random does not afford any kind of 'useful' hypercomputation: one could always eliminate the randomness by taking a majority vote over repeated runs (provided the probability of being correct is bounded above 1/2), and hence do the same with a deterministic machine.
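To see why the majority-vote argument works, here's a toy sketch of my own (plain Python, with a made-up 'noisy oracle' just for illustration): an answer that is correct with some fixed probability p > 1/2 gets boosted to near-certainty by repetition and voting.

```python
import random

def noisy_oracle(truth: bool, p: float = 0.6) -> bool:
    """Answers correctly with fixed probability p > 1/2 (hypothetical oracle)."""
    return truth if random.random() < p else not truth

def majority_vote(truth: bool, runs: int, p: float = 0.6) -> bool:
    """Repeat the oracle and go with the majority of its answers."""
    votes = sum(noisy_oracle(truth, p) for _ in range(runs))
    return votes > runs / 2

# The error of the vote shrinks rapidly as the number of runs grows, so the
# randomness buys nothing a deterministic machine couldn't get by working
# out the same statistics.
trials = 1000
for runs in (1, 11, 101):
    correct = sum(majority_vote(True, runs) for _ in range(trials))
    print(runs, correct / trials)
```

With p = 0.6, a single run is right about 60% of the time, but 101 runs with a majority vote are almost always right.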
In a way, the story with quantum mechanics and computability is like that with non-signalling: it somehow seems like the classical constraint ought to be violated---but then, the quantum stops just a hair's breadth short of actually breaking through.
As for the Deutsch et al. paper, I can't really offer an exhaustive analysis. I'm somewhat sympathetic to the notion of 'empiricizing' mathematics (indeed, Chaitin has made the point that the undecidability results mean that, at least to a certain degree, there are mathematical facts 'true for no reason'), but I think that notions such as those in Putnam's famous paper 'Is Logic Empirical?' go a step too far. In a way, different logics are like different Turing machines---you can do all the same things using a computer based on a three-valued logic (the Russian 'Setun' being an example) as you can using one based on the familiar binary logic, so in that sense, there can't be any empirical fact about which logic is 'better'.
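In case the Setun reference is obscure: it computed in balanced ternary, with digits -1, 0 and +1. Here's one more little sketch of my own (plain Python, purely illustrative) showing that any integer round-trips through that representation just as it does through binary, which is at least a hint of why the choice of logic or radix makes no difference to what can be computed.

```python
def to_balanced_ternary(n: int) -> list[int]:
    """Digits (least significant first) drawn from {-1, 0, +1}."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:            # write 2 as -1 and carry one
            r = -1
        n = (n - r) // 3
        digits.append(r)
    return digits

def from_balanced_ternary(digits: list[int]) -> int:
    return sum(d * 3**i for i, d in enumerate(digits))

# Every integer survives the round trip, e.g. 2 <-> [-1, 1] (i.e. -1 + 3).
for n in range(-100, 101):
    assert from_balanced_ternary(to_balanced_ternary(n)) == n
```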
But again, I'm not familiar enough with the paper to really offer a balanced view.
Cheers
Jochen