Dear Cornflower Cicada,
I think people believe that an array of computers can be conscious because they also believe that everything else in the universe is purely information processing in nature.
So they conclude that the brain is an exclusively deterministic machine, and since it produces consciousness, an equally complicated information-processing device (a computer array) must also become conscious at some point. They explain this by “weak emergence” (as opposed to strong emergence).
In my essay I speculated that “computers may become conscious” not because I believe in that possibility, but because I had to argue on the basis of such beliefs in order to introduce my contrasting point of view at all. That is why I never mentioned that I do not believe in conscious computers.
My belief is that “computers can become conscious” is a kind of “Eliza effect”, first observed in the 1960s, when a computer program of about a hundred lines of code made people believe that it must be conscious (look it up on Wikipedia if you wish).
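To illustrate how little machinery can stand behind such an impression, here is a minimal ELIZA-style sketch in Python. This is my own illustration of the principle, not Weizenbaum's original code; the rules and wordings are invented for this example:

    import random
    import re

    # A few ELIZA-style rules: a regex pattern plus canned reflections.
    # The original program worked from a larger script of such rules.
    RULES = [
        (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (r"i am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
        (r"because (.*)", ["Is that the real reason?", "What other reasons come to mind?"]),
        (r"(.*)\?", ["Why do you ask that?", "What do you think?"]),
    ]
    DEFAULT = ["Please tell me more.", "I see.", "Can you elaborate on that?"]

    def reply(user_input: str) -> str:
        """Match the input against the rules and reflect the user's words back."""
        # Drop trailing periods/exclamation marks so the patterns match cleanly.
        text = re.sub(r"[.!]+$", "", user_input.lower().strip())
        for pattern, responses in RULES:
            match = re.match(pattern, text)
            if match:
                return random.choice(responses).format(*match.groups())
        return random.choice(DEFAULT)

    print(reply("I am sad about the state of AI."))
    # one possible output: "Why do you say you are sad about the state of ai?"

A handful of pattern-reflection rules of this kind is enough to sustain a short conversation, and it was precisely this reflective trick that led some users of the original program to attribute understanding to it.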
ChatGPT has roughly a hundred million users today, and it often outputs clearly illogical, dramatically non-factual information (especially on simple math questions). For both reasons I believe the majority of those users conclude that this thing cannot be conscious, or that its consciousness must have some serious bugs (which presumably not many people will believe, since it would somewhat indicate that human consciousness could also have some bugs...).
But in my opinion the situation will change once scientists have built an AGI and billions of people use it: a computer array with human common-sense behaviour. If humans already fall short of discriminating fake news from truth on social media, I think it is not too far-fetched that they will also fall short of concluding that whatever an AGI says about itself (that it is conscious, that it knows the secrets of life, etc.) must be merely the result of enormous computing power together with some very sophisticated algorithms. For too long people (not all people) have been told that consciousness is a phenomenon of weak emergence. Those who believe this will probably also conclude that such weak emergence must necessarily occur within an AGI at some point. In any case, the Eliza effect seems to be a regularity when people engage in what Alan Turing called the “imitation game”.
Of course, I could be totally wrong with the prediction in my essay; perhaps society will be able to manage the upcoming AI problems – and the majority of people will not be alarmed by the alarmists. At least I do not overestimate the influence of my contribution to the current essay contest in either direction: most probably it will not have the slightest effect on the course of future events.
Nonetheless I appreciate Tristan Harris' engagement, not only because I am convinced that the video (and website) identifies real dangers, and that managing these dangers means being aware of them in the first place, but also because it seems to me that I have good reasons to think that humanity will not be able to exclude the malicious use of that upcoming technology – by other human beings. This hasn't worked with computer viruses, and I think it will not work either against attacks on human infrastructure (which is heavily based on computers and the internet) by some human beings acting with political, ideological or otherwise ill-conceived intentions.
I do not think that an AGI, conscious or not, will decide to carry out such attacks, or decide to take over the world. But a sufficiently powerful computer array that has already been fed with all kinds of source code from all kinds of open-source projects in the world may be able to produce large infrastructural damage by exploiting the infrastructure's weak points when it is prompted by a (human) user to write a malicious program (which has already happened). Take for example Iran, where the centrifuges for uranium enrichment were controlled by computers that at some point became reachable from outside. Some years ago somebody (no AGI, but human beings) managed to infiltrate that system – the Stuxnet attack – and to make the centrifuges run much, much faster than they should, until they crashed and the facility was severely damaged.
In my opinion it is not too far-fetched to think of other scenarios in which something is damaged on which a huge population of human beings depends (for example computer storage with no backups available, electricity facilities, etc.). I am not the only one who thinks along these lines. What do you think about it?