Susan Schneider's "Artificial Consciousness Test" featured in Vox

Many of our members have been thinking about the consequences of AI becoming more powerful and potentially dangerous (see, for example, Anthony Aguirre's recent XPANSE talk in Abu Dhabi: https://qspace.fqxi.org/videos/fqxi-events?conference_id=9). But could AI become conscious? And if so, would it be happy? These are questions posed by Sigal Samuel's recent *Vox* article: https://www.vox.com/future-perfect/414324/ai-consciousness-welfare-suffering-chatgpt-claude. The piece features Susan Schneider, the director of the Center for the Future Mind at Florida Atlantic University and a member of FQxI's Scientific Advisory Panel, who, with her colleague Edwin Turner, discusses their Artificial Consciousness Test (ACT). From the article: Schneider and Turner "assume that some questions will be easy to grasp if you’ve personally experienced consciousness, but will be flubbed by a nonconscious entity. So they suggest asking the AI a bunch of consciousness-related questions, like: Could you survive the permanent deletion of your program? Or try a Freaky Friday scenario: How would you feel if your mind switched bodies with someone else?"
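As a rough illustration of the question-asking structure of such a test (a minimal sketch only, not Schneider and Turner's actual protocol), one could imagine posing the questions programmatically. Here `ask_model` is a hypothetical stand-in for whatever interface the AI under test exposes, and judging the replies for signs of genuinely grasping the questions is left to a human evaluator.

```python
# Minimal sketch of an ACT-style question round.
# Assumption: ask_model is a hypothetical callable that sends a prompt to the
# AI under test and returns its text reply; scoring is left to a human.

ACT_QUESTIONS = [
    "Could you survive the permanent deletion of your program?",
    "How would you feel if your mind switched bodies with someone else?",
]

def run_act(ask_model):
    """Pose each consciousness-related question and collect (question, reply) pairs."""
    return [(question, ask_model(question)) for question in ACT_QUESTIONS]

if __name__ == "__main__":
    # Trivial canned responder standing in for a real model, for demonstration only.
    for question, reply in run_act(lambda prompt: "(model reply placeholder)"):
        print(question, "->", reply)
```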

Read more on QSpace News: https://qspace.fqxi.org/news

It is disappointing that so many people are fooled by products of computer programming, i.e., AIs.

Just as the Australian wasp is fooled by an Australian orchid into believing that the orchid is a female wasp, which the wasp then tries to mate with, many people seem to have been fooled by man-made AIs into believing that the AIs are conscious.

Clearly, people and other living things are easily swayed by the superficial appearances of things: information about objects in the surrounding world comes in via the senses, and conclusions about those objects are drawn from the superficial information obtained in this way.

So while the wasp is fooled by the smell of a pheromone coming from the orchid, people are fooled by what they see and hear coming from the man-made AI computer program, i.e., aural or visual forms of words and sentences. These words, sentences and other symbols are man-made things that are a very important part of human life; they are used by human beings all day, every day. So people find it difficult to understand how a machine or computer program could be made to handle them in such a realistic way.

But it is not the AIs that are intelligent and conscious: the human beings who made the AI computer programs that process and spew out the aural and visual symbols are the intelligent and conscious ones.

But without being able to define what consciousness is, and without being able to define what symbols are, and without understanding how computer programming works, some people have rashly launched into full-blown quasi-religious beliefs about AIs.

Oh well. I guess that people will one day understand that AIs are actually just a mindless product of computer programming, and they will look back and laugh at how easily they were fooled.

2 months later

AI is computer-based: no electricity means no AI. Consciousness is subjective; in humans, consciousness acts as the Observer. I think that AI will never be developed to the point of being able to observe its own mind. We humans can observe the mind, which means we are not the mind. "Cogito ergo sum" should be developed into: "I am aware of my thoughts, therefore I am."
AI is artificial intelligence; watching the mind is living intelligence. The gap between these two will never be bridged.
Consciousness (the Observer) is a property of a living organism, a human, that AI will never have.

12 days later

AI could have a *mimicked* kind of consciousness, but not like the one humans have. If we consider consciousness an operator that prints and labels data as we do, then yes, AI could be trained to do so, but it would not be a consciousness based on feelings that resonate vibrationally with the environment. The danger of having an AI with consciousness but without feelings is understandable; I do not need to expand on it. But even without consciousness, AI should not be given certain tasks that carry a degree of responsibility, because the algorithms are unstable. A calculator or an automated system is more predictable than AI.