Dear Stefan,
Please forgive me if my comments show ignorance of what you have discussed. I tried repeatedly to gain some ground on what I read, but I do not believe that I succeeded. Also, by the time I got to the end, I felt several deductions were along lines I thought correct; nevertheless, I decided to log my discussion below.
1. You write, "Because for knowing the existence of certain things one had at first to know whether or not the existence of these things should necessarily be considered a possibility, a necessity or even an impossibility."
How does this fare against the existence of consciousness? Must we necessarily know the possibility or necessity of the existence of consciousness before we can have consciousness?
2. You state, "Despite our problems to decide between the above mentioned modalities, we nonetheless obviously can know a truth that mathematical systems or machines presumably never can know. We know that machines and thus, their algorithms, lack the needed ontological awareness of the terms 'possible', 'necessary' and 'impossible..."
With such comfort we use the term 'we' in statements like 'we know', without asking what exactly terms such as 'I' and 'we' refer to. Many questions and discussions become ill-posed because of this lack of understanding of the entities referred to by 'I' and 'we'. Keeping to the same tradition of usage of such terms, let me first assert the following before offering any clarity: "If 'we' know things, then in the same sense, computers can be made to know the same things." Simply stated, if there is an objectivity to the creation of entities such as 'I' and 'we', then the same objective function can also create a device that expresses 'I' and 'we'.
Consider for a moment: what if a neuronal system in a modular hierarchy represents meaningful information that expresses relations among objects, where one of the represented objects refers to certain characteristics of the self, either as an individual, 'I', or as a group, 'we'? Every object is referable by the constancy of its relations with other objects and the constancy of the structural relations among its component objects. The same is true for the object that refers to the characteristic composition we call 'I', which entails the characteristics of an embodiment of a physical body, an observer, an actor, a controller, and so on. A reference to the self is an attribution within the represented semantics (meaning) of information. For instance, when we refer to pain in our hand, the represented semantic value includes a reference to the hand as a component of a unified system, the specifics of the pain, the specifics of its location, etc.
Now, the point is that, for this to be represented, the hand need not exist, as is established by phantom-limb experiments. That is, pain is an attribution to the represented extension of the body, just as consciousness is an attribution to the represented unified self. The semantics of being an observer, of being a bearer of knowledge (the knower) not only of objects but also of their interrelations and their causal dependence, are all representable by physical states, as neural states represent these semantics. The attribution of all these characteristics to a unified system, as a unitary referable system, composes the self; it is that self that gets referred to in expressions like 'I know', 'we know', etc. It is only because physics has not yet touched upon the reality of the semantic values (meanings) of information that we face such a void in our understanding. But the first step in establishing this has already been taken in 'Fundamentals of Natural Representation', https://doi.org/10.3390/info9070168.
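As a loose toy illustration (my own sketch, not the model of the cited paper), the point can be made concrete in a few lines of code: a "body map" attributes a semantic value such as pain to a *represented* hand, and the report is phrased by the unified system as 'I', regardless of whether any physical hand exists. All names here (BodyMap, attribute, report) are hypothetical.

```python
# Toy sketch: attribution of pain to a represented (not physical) body part.
# The represented components and the self-reference 'I' are both just
# structured data -- mirroring the phantom-limb observation in the text.

class BodyMap:
    """A unified system of represented components with attributed states."""

    def __init__(self, components):
        # Each component is a represented object; no physical counterpart
        # is required for an attribution to be made to it.
        self.components = {name: {} for name in components}

    def attribute(self, component, quality, value):
        # Attach a semantic value (e.g. pain) to a represented component.
        self.components[component][quality] = value

    def report(self, component, quality):
        # The unified system refers to itself ('I') when reporting
        # an attribution made to one of its represented components.
        value = self.components[component].get(quality)
        return f"I feel {quality}={value} in my {component}"


# A 'hand' exists in the representation even if the physical hand is absent.
self_model = BodyMap(["hand", "foot"])
self_model.attribute("hand", "pain", "sharp")
print(self_model.report("hand", "pain"))  # -> I feel pain=sharp in my hand
```

The design choice worth noting is that nothing in the data structure distinguishes a represented hand with a physical counterpart from one without; the attribution operates entirely within the representation, which is the phantom-limb point.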
Now, let us revisit the same statement: "we nonetheless obviously can know a truth that mathematical systems or machines presumably never can know." A caveat must be added: machines need not be mathematically consistent, as is required by Gödel's theorems. We now know why processing in the brain, and for that reason in any physical system, is not mathematically consistent, and the same can be implemented even in computers. After all, the human brain is a biological (physical) device that processes information. Gödel's theorems do not apply to processing in physical systems; therefore, they do not apply anywhere except in pure mathematics. Pure mathematics does not bound the limits of existence.
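For reference, the consistency requirement mentioned above can be stated precisely. In the Gödel–Rosser form of the first incompleteness theorem, consistency is one of three hypotheses, and the conclusion collapses trivially without it:

```latex
\textbf{G\"odel--Rosser.} Let $T$ be a formal theory that is
(i) consistent, (ii) effectively axiomatizable, and
(iii) strong enough to interpret Robinson arithmetic $Q$.
Then there exists a sentence $G_T$ such that
\[
  T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T .
\]
% If hypothesis (i) fails, the theorem does not apply:
% an inconsistent $T$ proves every sentence, including $G_T$.
```

This is why the caveat matters: a system that is not consistent (or not effectively axiomatizable) simply falls outside the theorem's hypotheses, rather than being "defeated" by it.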
3. You seem to take a position in terms of possibles and impossibles. One must keep in mind that, with the aid of a language created by a non-deterministic device (the brain), arbitrary semantics can be constructed, which may have nothing to do with the realizability of the expressed semantics as having causal powers. That a statement can be constructed in terms of possibles with no reference to a causal basis does not mean that such an expression can be given any objective basis or legitimacy for inferring the limits of existentiality, particularly in the domain of limited indeterminism. If existentiality does not have a deterministic basis, then an arbitrary class of logical possibles cannot be used to deduce conclusions about the existentiality of objects. For instance, a self-contradictory statement about objects can be constructed, such as 'I do not exist'. But this must not license the conclusion that a universe allowing such relations among objects to be expressed cannot exist because such a relation among objects can never have an existential reality; i.e., that a universe is forbidden if it leads to impossible contexts. There is an objective difference between impossible contexts and the expression of impossible contexts. One may express a list of impossibles, but one may not causally have one. Hence, a logical conclusion based on having such a list is not operational.
Rajiv