As I am much involved in the foundations of mathematics, I noticed some inaccuracies in your essay on this topic:
"Bertrand Russell asked what would happen if you have a set of sets that does not include itself"
A consequence of the axiom of regularity of ZF is that no set is ever a member of itself. And since in this theory all objects are sets, every set is a set of sets that does not contain itself, and none of the sets it contains contain themselves either. Instead, the reasoning of Russell's paradox starts by considering THE set of ALL sets that do not contain themselves. (I prefer the word "contain" to "include", as the latter may be confused with the inclusion relation, which is not involved in Russell's paradox.)
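To make the self-membership issue concrete, here is a rough computational analogy in Python (only an analogy of my own, not ZF set theory): if we model a "set" as a membership predicate, the Russell predicate cannot be assigned any truth value about itself.

```python
# A loose computational analogy of Russell's paradox (not ZF itself):
# model a "set" as a membership predicate, i.e. a function taking a
# predicate and returning a boolean.  The Russell "set" R contains
# exactly the predicates that do not contain themselves.

def russell(x):
    return not x(x)   # x is in R  iff  x is not in x

# Asking whether R contains itself forces R(R) == not R(R); in Python
# the contradiction shows up as endless recursion.
try:
    russell(russell)
except RecursionError:
    print("contradiction: R(R) cannot be assigned a truth value")
```

Of course Python merely loops forever where logic derives a contradiction, but the self-application pattern is the same.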
Where did you take this from: "if [a list] does list itself it must list that it lists itself, which means it must list that it lists that it lists,...and so forth"? I have never seen any reasoning like this, nor can I find any logic according to which a list of lists would also have to "list that one of its lists lists something". Wondering where you got it, I see you refer to a writing by Russell from 1903. I am not going to check whether that reference actually contains this fuss or not, but in any case it is a very old reference, and thus not one that any specialist in the foundations of mathematics would refer to nowadays when trying to explain what the paradoxes of set theory really are, unless of course they otherwise knew that a given idea turns out to be correct and worth quoting, instead of simply trusting what was written at that time. The only thing I can see related to the structure of your sentence is the axiom of regularity, which forbids such loops of constructions; however it is only very remotely related, and we would need to completely rewrite and reinterpret your sentence in very different ways in order to actually make such a link, and I do not feel like developing this now.
"Turing demonstrated that no Turing machine can emulate all other possible Turing machines to determine if it halts. To do this it must emulate itself emulating all possible machines, which gets one into the same conundrum that Russell found"
Sorry, this is definitely not the way the reasoning goes. There is no problem in making an algorithm that emulates all other algorithms, including itself emulating all algorithms including itself, and so forth. The problem is not with emulation, but with finitely proving a claim of impossibility for some algorithm (or equivalently an emulation of it) to ever halt. In other words, there is no problem in determining that an algorithm halts, in the case where it does halt: we just need to run it "long enough". What we cannot do is find a general method that is sure, sooner or later, to establish that some other algorithm will NOT halt, in case it will not halt. No matter how long a simulation we run, if the algorithm actually never halts we can continue emulating it indefinitely without seeing it halt, yet we remain unable to know whether the reason we did not see it halt is that it will actually never halt, or that we did not emulate it long enough to see it halting. The claim "this algorithm will never halt" cannot be proven by running the said algorithm for any amount of time; it would require a formal proof of that claim, which is something very different from the act of emulating the algorithm. Even if the said algorithm will indeed never halt, the search for proofs that it never halts is a very different algorithm (which depends on the precise chosen axiomatic system of set theory), and it might never halt either.
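To illustrate this asymmetry, here is a small Python sketch of my own (a toy model with generators standing in for programs, not a real universal machine): observing a halt is conclusive, while a timeout proves nothing.

```python
# A sketch of semi-decidability: a "program" is a Python generator,
# which halts when it is exhausted.  Running it for n steps can
# confirm halting, but a timeout never proves non-halting; it only
# means "not yet".

def halts_within(make_prog, steps):
    prog = make_prog()
    for _ in range(steps):
        try:
            next(prog)
        except StopIteration:
            return True      # observed halt: a definite answer
    return None              # timeout: "unknown", NOT "never halts"

def halts_after_100():       # halts after 100 steps
    for i in range(100):
        yield i

def loops_forever():         # never halts
    while True:
        yield

print(halts_within(halts_after_100, 1000))  # True
print(halts_within(loops_forever, 1000))    # None (inconclusive)
```

No choice of the step bound turns that `None` into a proof of non-halting, which is exactly the point.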
"Kurt Goedel's proof that no mathematical system can ever prove all possible statements as theorems about itself"
This sentence is confusing and grammatically ill-formed, and I cannot see any known result that looks like this. One thing he proved instead, for example, is that no algorithm can ever prove all true arithmetical statements, unless it is inconsistent (that is, it also wrongly proves false statements). He also proved that among the true statements such an algorithm cannot prove is the statement that this same algorithm will never prove any contradictory statement (in case that statement is true).
I understand that the foundations of mathematics may not be your area of specialization, so you may be confused about what the results exactly say and how they may be proven. I also understand that, when something is subtle and complicated, it may be hard to explain in a short essay. However, I consider that if you choose to sketch the explanation of something, you should take care to do it right; and if you can't, then you should not even try.
"The diagonal elements in the list when increased or decreased by one can be formed into a string of numbers that does not exist in the list"
Hmm, which list are you talking about, and to prove what? While I do recall a result whose proof involves something like this, you should take care to specify which result is obtained by which kind of argument involving which list of what, instead of casting out a randomly shuffled mix of possible results and possible reasoning steps from the proofs of some of them.
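The result I suspect you are garbling is Cantor's diagonal argument. Here is a finite Python sketch of the diagonal step (for illustration only; the real argument concerns infinite lists of infinite digit sequences):

```python
# A finite sketch of Cantor's diagonal argument: given a list of digit
# strings, changing the n-th digit of the n-th string produces a string
# that differs from every entry of the list at some position.

def diagonal_escape(rows):
    # alter each diagonal digit (here: add 1 modulo 10)
    return "".join(str((int(rows[i][i]) + 1) % 10) for i in range(len(rows)))

rows = ["0123", "1111", "9876", "4444"]
d = diagonal_escape(rows)
print(d)          # "1285": differs from row i at digit i
print(d in rows)  # False
```

Since the constructed string disagrees with the i-th row at position i, it cannot equal any row, which is what rules out a complete enumeration in the infinite case.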
"the existence of unprovable propositions, and further that these statements effectively declare their unprovability"
It would not be so interesting to find an unprovable proposition unless this proposition also happened to be true. Also, it is not the case that every unprovable statement effectively declares its own unprovability; only a specifically constructed statement does.
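The way that statement is specifically constructed is by a self-reference trick (the diagonal lemma). A programming analogue of my own is a quine, a program that outputs its own source code; Goedel's sentence is built similarly, by applying a formula to its own code.

```python
# A quine: the program below prints exactly its own source.  The trick
# is the same diagonalization used in Goedel's construction: a template
# (src) is applied to a description of itself (its repr).

src = 'src = %r\nout = src %% src\nprint(out)'
out = src % src
print(out)
```

Running the printed text again reproduces the same text, just as Goedel's sentence talks about its own code without any infinite regress.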
"The second Goedel theorem is that these statements must be true, because their falsehood would contradict the statement declaring its own unprovability."
If you could prove that a statement is true because its falsehood contradicts something we know, then the statement would be provable, in contradiction with what you just mentioned. So there is an argument for why it is true anyway, but it cannot be a strict proof of truth; it is something more subtle. The problem is that when an argument is subtle, it needs a clear enough explanation to give a proper idea of what it is about. The very point of paradoxes is that they are about subtle ideas, requiring careful distinctions between concepts which feel similar but are in fact different, until we prove that they are indeed not equivalent in some cases.
"the cardinality of the continuum, thought to be larger than countable infinity, is not decidable, but where one can construct models independent of the axioms of set theory"
You meant: that its value (known to be larger than countable infinity, but coming after how many other infinite cardinals?) is not decidable but independent of the axioms of set theory, as we can show by constructing diverse models where it takes different values.
"We have an intuitive sense of numbers and the inductive reasoning for why if there exists the integer N then the integer N + 1 must exist. Goedel tells us that something goes wrong with this; there is something in basic arithmetic that is not computable".
I have never heard of Goedel claiming there would be anything wrong with the idea that if an integer N exists then the integer N+1 exists too. I have never even heard of what it might formally mean for the integer N+1 not to exist.
"We might then ask the question: do all the numbers between these two large numbers exist?"
It depends what you mean by "exist". Mathematics has its own concept of existence, by which all these numbers indeed exist, but it is independent of any concept of physical existence.
"This means if they exist in some meaning according to computation there must be a machine that performs any calculation"
Here again, it depends what you mean by "exist". We can mathematically consider the mathematical existence of "machines" more powerful than this universe, with the only defect that we can only know the results of some specific cases of their calculations, namely those that can be deduced by a much shorter method.
"the Kolmogorov entropy" : it seems you mean the Kolmogorov complexity, which is indeed a concept of entropy, to not be confused with what is called the name of Kolmogorov entropy but that is quite different.
There is indeed a theorem by Chaitin about Kolmogorov complexity, which concerns finding the smallest possible program producing a given output (i.e., an optimal data compression), and is inspired by the Berry paradox. However, even though I know this theorem and its proof, I could not relate it to the bits of sentences you sketched, which seemed to me totally incoherent.
"We know that between 10^10^10^10 and 10^10^10^10^10 there are numbers that have enormous complexity, but we cannot know what is the smallest of these numbers that has no such description"
Any very big number can always be described as having an enormous complexity; the question is whether it also admits a simpler description, with complexity smaller than some amount. If n is a reasonable number (such as n = 500) and K is a large number that is "quite complex" in the sense that it is not known to have any relatively low complexity (say, < n+100), then we cannot know which is the smallest number larger than K that has a description with complexity < n.
However, your example completely fails: among all numbers between 10^10^10^10 and 10^10^10^10^10, we do know which is the smallest one with a low complexity, and it is 10^10^10^10 itself (since we can define it in this simple manner!)
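To see just how short that description is, compare the length of a defining expression with the size of the number it denotes (a rough Python illustration of my own; the length of an expression only bounds the Kolmogorov complexity up to a constant):

```python
# The lower endpoint 10^10^10^10 has a tiny description: the expression
# below denotes it in 14 characters, so its complexity is bounded by a
# small constant, regardless of how many digits the number itself has
# (far too many to ever write down).

description = "10**10**10**10"
print(len(description))        # 14 characters describe the number

# A smaller tower that we CAN actually evaluate, to see the pattern:
googol = eval("10**10**2")     # 10^100, from a 9-character description
print(len(str(googol)))        # 101 digits
```

The gap between description length and digit count only widens as the tower grows, which is why the endpoint itself refutes the example.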
In the ordinary double-slit experiment, the paths do not wind around the slits: any contribution of paths that wind around is usually neglected in descriptions of this experiment (these are small corrections from quantum field theory, and do not affect the basic paradoxical aspect of the experiment).
I can admit a theoretical concept of a super-Turing machine which "solves the halting problem" of ordinary Turing machines. However, I think you gave a wrong example here: "the problem of whether a light switch that is turned on in the first second, then off in the next half second, then on again in the next quarter second, then . . . , and whether the light switch is on or off at the end." The obvious answer here is: neither. Generally, for any algorithm that may turn a switch on or off over time depending on some computation, a super-Turing machine may tell whether there is a time after which it will stay on, or a time after which it will stay off, or neither, i.e. whether it will keep alternating endlessly (for any time there exists a later time with a different state). But if you explicitly assume endless alternation at the start, then you no longer need any super-Turing machine to discover that you will get endless alternation.
Unfortunately, the abundance of inaccuracies I found in what I could decipher of your essay, on topics I know, does not leave me very optimistic about what I could not decipher, on topics I am not familiar with (especially HoTT).
A simple web search seems to indicate that there is no such thing as "Polish set theory".
We have multiple theories for the foundations of mathematics, with possible variants of set theory, but I would not take this to mean that "we have no particular foundations for mathematics". There is a picture of hierarchy and interdependence between versions of set theory, and there are clear reasons for this. The main reason is that there is not one unique mathematical universe, but an endless hierarchy of bigger and bigger ones, fitting different descriptions.
Chaitin's work insisted on pessimistic aspects of the foundations of mathematics. This does not mean that everything in mathematics is baseless and happens by chance, "by no logical reason". Actually, the incompleteness theorems themselves are examples of the remarkable success of mathematics in handling its own foundations, because they are mathematically proven results!
You incoherently conclude "It is very difficult to understand how this could be scientifically demonstrated, yet maybe regularities in physics described by mathematics exist for no reason at all. Mathematics and physics have this curious relationship to each other for purely stochastic or accidental reasons; there ultimately is no reason for this"
You mysteriously remove the "maybe" along the way, replacing it with an "ultimately"... seemingly for no reason at all. What allows you to positively claim this "ultimate" absence of reason, as if you had a proof of it, while at the same time saying it is very difficult to "understand" how it could be proven? Either such a proof exists but you visibly do not know it (otherwise you would understand it), or you simply strongly believe that this absence of reason is a fact (but why?).
But the worst form of incompleteness, in my opinion, is the incomplete understanding of what the foundations of mathematics look like and what the diverse incompleteness theorems actually say and why, which may be due to a lack of clarity in the way these things are usually presented. I invite you to visit my site settheory.net, where I took care to explain as clearly as I could the main concepts and paradoxes at the foundations of mathematics. Maybe you will find there that the real picture of the foundations of maths is more coherent than you now think.
Finally, I invite you to read my own essay, where I discussed how quantum physics avoids the incompleteness of the infinity that the continuity of physical space naively seems to contain.