Dear Rajiv,

Thanks for reading and considering my essay! I will try to answer what you brought up:

"How does it fair against the existence of consciousness? Must we necessarily know the possibility or necessity of existence of consciousness, before we can have consciousness? "

Perhaps you misunderstood what I wrote about. I wrote about things of which we do not yet know whether they exist or not. Consciousness does not fall under that category, since we know that it does exist, independently of knowing whether consciousness is "necessary" or merely possible. I wrote, for example, about the idea of a multiverse, about which we do not yet know whether it exists only as an idea of scientists or whether it physically exists "out there". The same goes for a "Platonic" realm of mathematics.

"You state, "Despite our problems to decide between the above mentioned modalities, we nonetheless obviously can know a truth that mathematical systems or machines presumably never can know. We know that machines and thus, their algorithms, lack the needed ontological awareness of the terms 'possible', 'necessary' and 'impossible..." "

O.K., maybe you drew a false inference from that, if I interpret your line of reasoning correctly. So to clarify: machines may become conscious, but in that case my assumption is that the "algorithmic" activity for that to happen will be so complex that no scientist can trace this activity - in other words, none of them will be able to point to those parts of that algorithmic activity that initialize a state of conscious awareness within these machines. Consciousness will then be a non-formalizable feature of such a machine. If you like, you can then regard a brain as such a machine.

"You seem to take a position in terms of possibles and impossibles "

Well, not quite, because I think that nobody can really neatly separate the possible from the impossible - how should one ever be able to do that, and on what logical basis? So I do not argue for the possibility that we could somehow obtain complete lists of the impossibles or the possibles. But as you may suspect, one impossibility that I think can be inferred is exactly that such lists aren't obtainable - because how can little brains like ours gain the needed information for such lists without being on the same footing as, let's say, God?

I hope that answers your questions.

Best wishes,

Stefan

Dear Stefan,

Thanks for responding.

From your agreement, "machines may become conscious", one may infer that you accept the objectivity of the physical function that gives rise to consciousness. And if a process is objective - repeatable, the same for all, applying the same way in all applicable contexts - then its description must be constructible. The process of emergence is algorithmically expressible. But then you also add, "but in that case my assumption is that the "algorithmic" activity for that to happen will be so complex that ... no one will be able to point to those parts of the algorithmic activity that initialize a state of conscious awareness within these machines". By this addition, you seem to be taking away the same objectivity that you granted a moment ago. Somehow, you tend to maintain that consciousness is fundamentally inexplicable, and that an entity is either conscious or non-conscious as a given, in a corporeal sense. My effort in the following paragraphs is to show that the 'association of consciousness with an entity' is a representation of semantics (meaning) that attributes consciousness to a body as a person.

Consider for a moment that the neuronal system, in a modular hierarchy, represents meaningful information that expresses relations among objects, where one of the represented objects refers to the self in the same way as another represented object refers to 'a book', and yet another object refers to the 'act of reading'. The three combined in conjunction refer to "I am reading a book". Every object is referable by the constancy of its relations with other objects and the constancy of the structural relations among its component objects. Try constructing a description by any other relations, and you will see that these are the only two ways to construct a description of any object. Hence, the mechanism of representation of semantic value (meaning) is knowable for all objects, including the self.
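
To make this concrete, here is a toy sketch (purely illustrative; the names and structure are my own assumptions, not taken from any cited work) of describing objects only through their relations to other objects and the structural relations among their components:

```python
# Toy illustration: an object is described only by (a) its relations to other
# objects and (b) the structural relations among its component objects.

from dataclasses import dataclass, field

@dataclass
class Obj:
    name: str
    components: list = field(default_factory=list)  # structural relations among components
    relations: list = field(default_factory=list)   # relations to other objects

    def relate(self, relation, other):
        self.relations.append((relation, other))

self_ = Obj("self", components=[Obj("hand"), Obj("eye")])
book = Obj("book", components=[Obj("pages"), Obj("cover")])
reading = Obj("act of reading")

# "I am reading a book": the three objects combined in conjunction.
self_.relate("performs", reading)
reading.relate("directed at", book)

def describe(obj):
    parts = ", ".join(c.name for c in obj.components)
    rels = ", ".join(f"{r} {o.name}" for r, o in obj.relations)
    return f"{obj.name}: components [{parts}] relations [{rels}]"

print(describe(self_))    # self: components [hand, eye] relations [performs act of reading]
print(describe(reading))  # act of reading: components [] relations [directed at book]
```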

For instance, when we refer to pain in our hand, the represented semantic value includes a reference to the hand as a component of the represented unified system, the specifics of the pain, the specifics of the location, etc. Now, the point is that, for this to be represented, the hand need not exist, as is established by phantom limb experiments and dream events. That is, the sense of pain is a semantic attribution to the represented extension of the body, just as consciousness is an attribution to the represented unified self. The attribution of all these characteristics to a unified system, as a unitary referable system, composes the self with the semantics of knower, actor, and controller; it is that represented self that gets referred to in expressions like 'I know', 'we know', etc. I have referred to a publication in which the physics, or mechanics, of representing and processing the semantic values (meanings) of information has been worked out.

If this makes good sense to you, then reconsider the statement "We know X" and determine how it is constructed. So, if we know the mechanism of building structured and abstract semantics via organized physical interaction, then we can lay down the algorithm, and we also know what processing constructs the self with the associated properties of consciousness.

Rajiv

Dear Rajiv,

Yes, I maintain that consciousness is fundamentally inexplicable, although I do see that many contents of intelligent consciousness are logically ordered - logical rules, cause and effect, and so on. I also see that the brain is somehow related to the contents of consciousness (phantom limbs and other brain experiments!). But I also see that what I wrote in my essay in 2017 about near-death experiences is another piece of evidence that there is some part of consciousness that is independent of a physical instantiation.

When I wrote that "algorithmic" activity may be too complex in conscious machines to be traced, I used the wrong wording to express what I really meant, namely that no algorithmic activity alone - in my opinion - can cause dead matter to become conscious. I do understand that representation and meaning are central to conscious operations in the world. But "meaningful information", I think, is an effect of consciousness, not a cause of consciousness. In a world without consciousness at all, in my opinion there is no such thing as "meaningful information", although many physical processes are governed - more or less - by cause-and-effect patterns that can be captured mathematically. But without consciousness, even mathematics is meaningless; we only think otherwise because we are conscious and can trace some of these cause-and-effect patterns.

You wrote that

"The process of emergence is algorithmically expressible "

I am not sure about that. The term "emergence" in this context surely implies that less complex things can be organized (by mathematics) such that more complex phenomena may arise. This surely is true for certain phenomena, but I wouldn't generalize it into a claim of universality. I sincerely doubt that "emergence" is a kind of pure coding and data processing task, where that task at some point becomes conscious of the "contents" it is processing.

"Every object is referable by the constancy of its relation with other objects, and constancy of structural relation among component objects. "

I think that is true if one excludes indeterminacy (for example at the quantum level). But once again, I cannot see what the term "referable" refers to if there is no consciousness there to start with in the first place. Moreover, the question of to what extent the term "object" points to the ontological existence of what it refers to, or merely to the limits of what can be represented in general (namely only "objects" whose behaviours are definitely determined by "something"), seems to me not yet definitively answered.

I really do not think that human beings can answer all the questions we are concerned with at FQXi in an objective manner. To accomplish that, one would have to have the lists of all impossibilities and all possibilities. I cannot see how human brains could obtain such lists. Moreover, if there exist phenomena in the universe (multiverse or whatever) that aren't completely representable, we may never know that they aren't completely representable - and consequently we will represent them erroneously and falsely (incompletely!) by thinking that our representation of them is all there is to them. That would be a serious problem to overcome, since if we don't know that we fundamentally can't know certain things ("knowing" in terms of representing them algorithmically), then we have a huge blind spot in our epistemological and ontological understanding of existence. As I wrote in my current essay, it is logically not meaningful to want to reduce everything (object etc.) to another thing (object etc.). Viewing this as the other direction of "emergence", there is a limit of reducibility for everything - you can't endlessly ask "why". So it all boils down to stopping at a certain point and accepting some fundamental axioms about the nature of everything. But this acceptance is then a matter of faith in these axioms and not a correct "representation" of how things objectively are.

I hope this post clarifies the questions that remained concerning what I wrote earlier.

Best wishes,

Stefan

Dear Stefan Weckbach!

In our opinion, 10 points is not enough to rate your work properly. We have given it 10.

Truly yours,

Pavel Poluian and Dmitry Lichargin,

Siberian Federal University.

    Dear Stefan,

    Thank you again for responding.

    > But I also see that what I wrote in my essay in 2017 about near-death experiences is another piece of evidence that there is some part of consciousness that is independent of a physical instantiation.

    So, you do remember me.

    > no algorithmic activity can cause dead matter to become conscious.

    I suppose a computer can be thought of as dead matter. But isn't all the matter in biology dead matter too? Of course, you will say it differs by virtue of the possession of consciousness, not life.

    > In a world without consciousness at all, in my opinion there is no such thing as "meaningful information".

    Physicists have never bothered to deal with the semantics (meaning) of information, and that has created a huge void in our understanding. An observable state S of a physical system P causally correlates with information about the precursor states of the interacting systems that cause S of P. This causal correlation serves as a primitive of semantics, which can be used to construct all expressible semantics. May I refer you again to 'Fundamentals of Natural Representation', https://doi.org/10.3390/info9070168, which may clarify certain points. This work has not gained much traction yet, though.
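
    To illustrate the point (a minimal toy sketch of my own, with made-up states; it is not code from the cited paper), an observed state can be read as carrying information about the precursor states that could have caused it:

```python
# Toy sketch: an observed state S of system P causally correlates with the
# precursor states of interacting systems that produce S. That correlation
# is treated here as the primitive of semantics (the "meaning" S carries).

from collections import defaultdict

# Hypothetical causal model: precursor state of an interacting system -> observed state of P.
causal_model = {
    "photon: low energy":  "P: ground state",
    "photon: high energy": "P: excited state",
}

# Invert the model: from an observed state of P, recover the set of precursor
# states it causally correlates with.
carried_information = defaultdict(set)
for precursor, observed in causal_model.items():
    carried_information[observed].add(precursor)

print(carried_information["P: excited state"])  # {'photon: high energy'}
```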

    > "Every object is referable by the constancy of its relation with other objects, and constancy of structural relation among component objects. "

    The word 'referable' here may be substituted with 'describable'.

    > As I wrote in my current essay, it is logically not meaningful to want to reduce everything (object etc.) to another thing (object etc.).

    I consider the term 'emergence' to be in opposition to 'reduction'. So I agree that semantic descriptions of all objects (relations, processes) cannot always be reduced, but the mechanism of emergence can be understood.

    Now that you have identified in your responses several fundamental differences that constitute the basis of our arguments, a resolution without evidence to support the fundamentals cannot settle the remaining differences.

    Best wishes to you too.

    Rajiv

    Dear Pavel Poluian and Dmitry Lichargin,

    I am happy that my essay had an impact on you, and I feel deeply honored by your lines and your vote - so I want to thank you for both.

    Best wishes to you both from Germany,

    Stefan Weckbach

    Dear Rajiv,

    Thank you also for your correspondence!

    Best wishes,

    Stefan


    Here is the paper at viXra: 'Explaining Results of Stern Gerlach Apparatus Experiments with Gyroscopic Motion', Georgina Woodward.

    Dear Georgina,

    Thanks for your notification. I will certainly study the paper in the next few days and, if you don't mind, then give you some feedback.

    Kind regards

    Stefan
