Dear André,
There is a lot to unpack in your essay. I think your main question, however, is a simple one: when is it reasonable to believe something? This seems a straightforward question, but, as so often happens, many subtleties confound any straightforward answer.
You start out by noting that, unlike mathematics, physical theory always involves an element of vagueness: initial conditions are contingent facts about the world that can only be imperfectly known, making all of our predictions---and hence, our beliefs---essentially probabilistic.
Furthermore, the very means we use to interrogate the world to check our predictions turn out to be, themselves, dependent on yet further assumptions, theories, and contingencies---a point sometimes framed as the 'theory-ladenness of observation'. We do not simply observe the world; our instruments and methods of observation embody nontrivial assumptions that usually go unexamined.
On top of that, what we come to believe ends up defining who we are---making any attack on our beliefs, whether by argument or by unwelcome fact, an attack on our very self-image, against which we must defend ourselves. Hence, we close ourselves off against whatever does not already fit our beliefs.
Human reasoning is not a truth-finding engine, but rather a means of social cohesion---I was happy to see 'The Enigma of Reason' in your references, which, I think, defends a very sensible thesis regarding human cognition.
You also note the uncomputability of methods of inference. I wonder if you're familiar with Solomonoff's work on universal priors---essentially, it amounts to formalizing Occam's razor as preferring the simplest program, in the sense of algorithmic (Kolmogorov) complexity, capable of reproducing a given set of data. It is possible to formulate a theory of inductive inference ('Solomonoff induction') on this basis---which naturally ends up being uncomputable, due to the uncomputability of Kolmogorov complexity.
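To make the idea a bit more tangible: while the Kolmogorov complexity itself is uncomputable, one can always compute an upper bound on it, for instance via an off-the-shelf compressor. The following is only a rough sketch of that idea in Python (the choice of zlib as the stand-in compressor is mine, not anything canonical)---it shows that a highly regular string admits a far shorter description than a random one, which is the intuition Solomonoff's prior formalizes:

```python
import os
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Length of the zlib-compressed data: a crude, computable
    upper bound on Kolmogorov complexity (the true quantity is
    uncomputable, so any real compressor can only bound it)."""
    return len(zlib.compress(data, 9))

# A highly regular string: a very short program ("print 'ab' 500 times")
# suffices to reproduce it, so its complexity is low.
regular = b"ab" * 500

# Random bytes are, with overwhelming probability, incompressible:
# no description much shorter than the data itself exists.
random_ish = os.urandom(1000)

print(complexity_upper_bound(regular))     # far below 1000
print(complexity_upper_bound(random_ish))  # close to (or above) 1000
```

Under Solomonoff's prior, the regular string would receive vastly more weight than the random one, since it is generated by much shorter programs---and the gap between the two compressed sizes above is a computable shadow of that fact.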
Marcus Hutter has proposed an artificial intelligence framework based on this, the AIXI agent, and was able to show that it asymptotically solves every unknown problem as fast (in the sense of time complexity) as a special-purpose problem solver. There is much interesting work on finding computable approximations to AIXI.
I agree with your remarks on the interpretation of quantum mechanics---there is simply too vast a space of possibilities, and too little constraint from data. However, I believe that going in the other direction---trying to reconstruct quantum mechanics from suitable fundamental assumptions, rather than trying to reconstruct an underlying ontology starting from the quantum formalism---may prove more fruitful. If you find that interesting, perhaps you might want to have a look at my essay.
Cheers
Jochen