People with insufficient training in mathematics stumble over the continuous-vs.-discrete dichotomy far too often, and I'm afraid David Tong shows some innocent immaturity here. Real analysis is a degenerate case of discrete analysis, and he is entertaining a false dichotomy; the clearer dichotomy concerns the parts vs. the whole. Continuity leads to serious self-contradictions, and I encourage Tong to read Solomon Feferman's work on Cantorian mathematics and its relation to physics (note: countable infinity is as guilty as uncountable).
Tong is simply engaging in a circular form of reasoning: he assumes the illogical notion of completed infinity and then accuses computers of not understanding it (note: Wen has done work on chiral fermions, but the gauge community is neglectful in its refereeing). The calculus really uses finite arguments in disguise (my computer can do it with ease), and it is still debated in the mathematical community whether the Cauchy regime consists of anything but trite non sequiturs (it is not based on classical logic). We only ever see integers (decimals are just scaled integers) in any empirical setting, and NOT ONE purely real number has ever been recorded or observed.
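To make the "finite arguments in disguise" point concrete, here is a toy sketch of my own (not from Tong): the derivative of a polynomial falls out of purely finite integer algebra. Expanding the difference quotient (f(x+h) - f(x))/h with the binomial theorem yields a polynomial in h, and its constant term is exactly the power-rule derivative; no limit is ever taken.

```python
# Toy finitist calculus: differentiate a polynomial by finite algebra alone.
# Expanding (f(x+h) - f(x))/h with the binomial theorem gives a polynomial
# in h; its h^0 term for each monomial c*x^k is k*c*x^(k-1).  No limits used.

def derivative(coeffs):
    # coeffs[k] is the integer coefficient of x^k
    return [k * c for k, c in enumerate(coeffs)][1:]

print(derivative([5, 0, 3]))  # f(x) = 3x^2 + 5  ->  [0, 6], i.e. f'(x) = 6x
```

Every step here is a finite manipulation of integer lists, which is the whole point.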
We do have "buy one, get one free" Banach-Tarski paradoxes and genuine contradictions debated by philosophers; for a start, I suggest Norman Wildberger's article "Real fish, real numbers, real jobs" in The Mathematical Intelligencer, March 1999, Volume 21, Issue 2, pp. 4-7. Tong shows little knowledge of the range of views in the mathematical community, and I suggest consulting Jean Paul Van Bendegem for a directly contrasting perspective. He is right to sense that something is amiss, but he has the argument precisely backwards.
There is nothing particularly profound about the technical problems in mathematics, but a great deal of popular confusion stems from amateur accounts. Example: people making obsessive, crackpot arguments about Cantor, Gödel, and Turing diagonalization. The limitative results come in several proof families:
Self-referential-sentence proofs: Gödel, Rosser, Kleene, Post, Church, Turing, Smullyan, Jech, Woodin, etc.
Epsilon-naught induction proofs: Kripke, Paris-Harrington, Goodstein, Hydra, etc.
Kolmogorov complexity proofs: Chaitin, Boolos, etc.
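To make the epsilon-naught family concrete, here is a hedged sketch of my own of the Goodstein sequence: write n in hereditary base-b notation, replace every b by b+1, subtract 1, and repeat with b+1. The sequence provably reaches 0, but PA cannot prove this; induction up to epsilon-naught can.

```python
# Goodstein sequence sketch (my illustration, function names mine).
# hereditary(n, b) evaluates n after rewriting it in hereditary base-b
# notation and replacing every occurrence of b (including in exponents)
# with b + 1.

def hereditary(n, b):
    result, power = 0, 0
    while n > 0:
        d = n % b
        if d:
            # exponents are themselves in hereditary base-b, so recurse
            result += d * (b + 1) ** hereditary(power, b)
        n //= b
        power += 1
    return result

def goodstein(n, steps):
    # first `steps` terms of the Goodstein sequence starting at n, base 2
    b, seq = 2, [n]
    for _ in range(steps):
        if n == 0:
            break
        n = hereditary(n, b) - 1
        b += 1
        seq.append(n)
    return seq

print(goodstein(3, 10))  # [3, 3, 3, 2, 1, 0] -- reaches 0 quickly for n = 3
```

Starting at 4 instead of 3, the sequence already climbs past 10^121 before turning around, which is why PA loses track of it.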
It's really quite simple:
Proof: Given a program THEOREMS that enumerates theorems (it could be doing deductions in Peano Arithmetic, for example), write the computer program SPITE to do this:
1. SPITE prints its own code into a variable R.
2. SPITE runs THEOREMS and scans the output for the theorem "R does not halt".
3. If SPITE finds this theorem, it halts.
So if THEOREMS ever proves "R does not halt", SPITE halts and the theorem is false; a sound THEOREMS can therefore never settle whether SPITE halts.
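The construction can be run as a toy program (hedged: a genuine SPITE obtains R via a quine, per Kleene's recursion theorem; here SPITE is simply handed its own name, and THEOREMS is a hypothetical stand-in stream rather than a real PA prover):

```python
# Toy model of the SPITE construction.  THEOREMS is a hypothetical stand-in
# for a theorem enumerator; the string it yields is assumed for illustration.

def theorems():
    yield "2 + 2 = 4"
    yield "SPITE does not halt"   # suppose the prover eventually claims this

def spite():
    R = "SPITE"                   # step 1: "print own code into R" (quine elided)
    for t in theorems():          # step 2: scan the theorem stream
        if t == R + " does not halt":
            return "halted"       # step 3: halt, falsifying the theorem
    return "never found the claim"

# If THEOREMS ever proves "SPITE does not halt", SPITE halts and the theorem
# is false; so a sound prover can never settle SPITE's halting behavior.
print(spite())  # prints "halted"
```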
"Gödel's theorem is a limitation on understanding the eventual behavior of a computer program, in the limit of infinite running time." (Ron Maimon)
Just as you cannot list the infinity of reals, you cannot complete the list of all computer programs; the initial assumptions are the problem. The Gödel sentence is basically a gibberish freak with no provable relation to meaningful mathematical problems. The real takeaway: "No compelling evidence has yet been presented that G1 affects, or future refinements of it will affect, mainstream mathematics." (doi:10.1007/s11787-014-0107-3) If you think undecidability is profound, I can just as comically ask whether you have stopped beating your wife.
Cantor makes a more obvious appearance in the non-standard models, which require infinite non-natural numbers beyond the naturals. Incompleteness can be done away with in a system whose (infinitely many) axioms are not recursively enumerable, but that very escape hatch reveals what the theorem is really about. Poincaré knew it was trivial all along, and he also knew set theory was just backpedaling. There are genuine, concrete coloring problems in the plane that give different answers depending on continuum assumptions in set theory. Set theory ends up limited by the analog of an inescapable chaotic initial value problem.
The limitative results of Gödel and Turing are very poorly understood. The ability to answer Goldbach-like questions does not necessarily depend on infinite output, and programs can also examine the symbolic structure and content of other programs. Gödel's result has nothing to do with meaningful mathematical problems, only with nonsensical ones that merely look like well-formed formulas. Self-referencing formulas and impredicative sets are not mathematics proper. I can't stress enough how badly this is misunderstood! A computer program may very well be able to settle every meaningful mathematical statement; nothing in incompleteness rules this out.
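As one concrete instance of a program examining another program's symbolic structure (my example, not from the original): for restricted syntactic shapes, halting questions are settled by finite inspection of the program text, with no need to run anything.

```python
import ast

LOOPER = "while True:\n    x = 1\n"    # hypothetical non-halting program
STOPPER = "while True:\n    break\n"   # hypothetical halting program

def trivially_loops_forever(source):
    # Decide non-halting for one restricted shape: a top-level `while True`
    # containing no break or return.  For this syntactic class the halting
    # question is answered by inspecting the parse tree, not by execution.
    tree = ast.parse(source)
    return any(
        isinstance(node, ast.While)
        and isinstance(node.test, ast.Constant) and node.test.value is True
        and not any(isinstance(n, (ast.Break, ast.Return))
                    for n in ast.walk(node))
        for node in tree.body
    )

print(trivially_loops_forever(LOOPER), trivially_loops_forever(STOPPER))  # True False
```

Undecidability only says no single such checker covers all programs; it says nothing against ever-larger decidable classes.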
As for consistency: in an inconsistent system, logical explosion proves anything. If I ask a system whether it is consistent, it can answer yes by explosion, so the answer is worthless either way. Hilbert was simply confused to ask such a question. Gödel is that trivial and simple.
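The explosion point can be checked mechanically (a brute-force truth-table sketch of ex falso quodlibet, my illustration): (p AND NOT p) -> q holds under every assignment, so from a contradiction any q follows, including the statement "this system is consistent".

```python
from itertools import product

# Ex falso quodlibet by brute-force truth table: (p and not p) -> q is true
# under every assignment of p and q, so a contradictory system "proves"
# everything -- including its own consistency claim.

def implies(a, b):
    return (not a) or b

explosion = all(implies(p and not p, q)
                for p, q in product([False, True], repeat=2))
print(explosion)  # True
```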
And then we have people so confused that they think incompleteness has some bearing on the continuum hypothesis. This is an embarrassing conflation that fails to understand the structural generation of the hierarchy of freak statements. I agree with Feferman on this question.
The point of all the above is that it was the introduction of completed infinity that got people so confused that they are now even turning to Priest's logic of "true contradiction" to find a way out. Why not throw out the nonsense rather than change the rules of logic? Continuous (infinite) assumptions are baseless and break down when pushed; throw them out and the problems go away. Nature abhors nonsense.