Akinbo,
Thank you for taking the time to read my essay. It might be helpful to review the differences between frequentist and Bayesian inference. One key difference is that frequentists assume there is some true but fixed value for each unknown parameter, whereas in Bayesian inference those parameters are treated as beliefs that are updated as evidence arrives. In this sense, one must accept that questions of whether something is true or false always hinge on historical observation, not on some underlying fundamental reality. While I steal the idea of a confidence interval from the frequentists, which isn't strictly Bayesian in any sense, the point is that the notion of a true mean is illusory at best.
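To make the contrast concrete, here is a minimal sketch of my own (not from the essay) of the Bayesian view: a coin's bias is never pinned to a fixed "true" value; a Beta prior is simply updated with each hypothetical batch of observations, and the posterior mean is our current estimate, nothing more.

```python
# Conjugate Beta-Bernoulli updating: the parameter p is a belief that
# moves with the data, not a fixed hidden constant we ever read off.

def update(alpha, beta, heads, tails):
    """Update a Beta(alpha, beta) prior with observed heads and tails."""
    return alpha + heads, beta + tails

alpha, beta = 1, 1  # flat prior: complete ignorance about p
for heads, tails in [(3, 1), (2, 4), (5, 5)]:  # hypothetical batches
    alpha, beta = update(alpha, beta, heads, tails)
    print("posterior mean:", alpha / (alpha + beta))
```

Each print shows the estimate drifting as knowledge accumulates; there is no step at which a "true mean" is revealed, only a sequence of updated beliefs.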
The idea of averages is best understood by analogy to the statement of what an "average person" is. If one were to take all the statistics one can devise to measure human traits and use them to define the "average person," one would be very hard pressed to find any person who actually met this statistical description. So the answer to whether a person is average must almost always be no, unless one is willing to accept error in the estimate. To avoid contradiction with the frequentists, the "true value" in Bayesian statistics would be the value that is approached as one gains more knowledge. This sort of recursive process is the type we see almost everywhere in nature.
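A quick simulation (my own illustration, with made-up numbers) shows why: give each of 100,000 simulated people ten independent standard-normal "traits" and ask how many sit within 0.25 standard deviations of the mean on every trait at once. Per trait that band covers roughly 20% of people, so all ten together cover about 0.2 to the tenth power, on the order of one person in ten million.

```python
# No single individual is likely to match the multivariate "average person",
# even though each trait's average is perfectly well defined.
import random

random.seed(0)
N_PEOPLE, N_TRAITS, TOL = 100_000, 10, 0.25

average_people = 0
for _ in range(N_PEOPLE):
    traits = [random.gauss(0, 1) for _ in range(N_TRAITS)]
    if all(abs(t) < TOL for t in traits):
        average_people += 1

print(f"{average_people} of {N_PEOPLE} matched the average on all traits")
```

The count is essentially always zero: the "average person" is a statistical construct that almost no one instantiates, which is the sense in which I say the true mean is illusory.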
As for Peter Jackson's essay, I think the notion of the excluded middle is helpful in understanding fundamental issues. One can follow the following chain of links on Wikipedia (which I will leave to the reader to do):
Law of Excluded Middle > Autoepistemic Logic > Uncertain Inference > Probabilistic Logic Network > Markov Logic Networks
In essence, the law of the excluded middle has been abandoned in the face of modern thinking about networks. This fact should not be viewed as trivial. In any case, to quote the article on Autoepistemic Logic:
"In uncertain inference, the known/unknown duality of truth values is replaced by a degree of certainty of a fact or deduction; certainty may vary from 0 (completely uncertain/unknown) to 1 (certain/known). In probabilistic logic networks, truth values are also given a probabilistic interpretation (i.e. truth values may be uncertain, and, even if almost certain, they may still be "probably" true (or false).)"
In this sense, one can understand that there is an "average value" for truth that indicates something is mostly true or mostly false.
In any case, the "It" is not fundamental; it is developed from the shared knowledge of all observers, or the networked knowledge of all observers. What is fundamental is the uncertainty or imprecision associated with whatever "It" is. If "It" follows from "Bit," it cannot be definite, since the "privileged" frame of reference of the "average" bit is not tied to any single observer.
Thanks again for the comment.