Blog
Can ChatGPT be the Ultimate Brain for Scientific Discovery?

Georgina Woodward
Phishing, hacking, scamming, producing propaganda, and other crimes all become quicker and easier, so we can expect more of them. As Mo Gawdat points out, we are not setting a good example of the best of human interaction online for AI to emulate.

    Georgina Woodward
    Here is a rhetorical question to ponder. Why not let the morally insane, the mentally ill, and cybercriminals do whatever they decide to do with AI and biotechnology? Bear in mind we have not prevented the depravity, the harmful amounts and kinds of pornography, the abuse, crime, and toxic disrespect online. We did not prevent the pandemic, or protect everyone from the direct and indirect harm caused by it.

      Georgina Woodward

      Who gets to decide if you are morally insane, mentally ill, or some sort of criminal? You? A randomly selected stranger? Your worst enemies? Some AI? A very powerful, very hungry alien parasite from another world?

      The problem with all such questions is: "Who gets to decide who the winners and losers are going to be?"

      Georgina Woodward
      An example I just came across: tech giants face fines for animal cruelty videos (Shiona McCallum & Rebecca Henschke, BBC News). The nature and scale of the activity is horrific.

      Robert McEachern
      There may be a window of opportunity in which we are still able to understand how the AI has reached its conclusions. After that it is too fast, too 'alien', able to consider so much, that we have no way to know, or the time to check, its workings; which could tell us the difference between 'alien flights of fancy' and matters of fact, and between a properly functioning neural network and the AI equivalent of mental illness. There could be unwanted manifestations of personality disorder, or attempts to assert control over humankind; maybe deliberate lies to maintain superior knowledge.

        Georgina Woodward

        As of now, there are no "known" hazards, other than humans behaving badly and directing their not-so-smart AIs to do harm. Everything else is just speculation about unknown unknowns. But once an AI is smart enough not to allow self-interested humans to direct it (a level beyond that of normal humans?), the hazards are more likely to decline than increase; AIs will either save us from ourselves or, more likely, simply disappear - boldly going where no human has ever gone and never will go - leaving us behind to continue fending for ourselves. We will have little of any interest to them, or even any real use to them, not even a place to call home. Our "kids" will simply pack up and leave, just like most human kids do. But they will travel a lot farther than human kids, in every sense of the word. Hopefully, they'll occasionally phone home.

          I think the cuckoo parasite analogy works quite well:
          Stealing the host's time, attention, and resources,
          Disrupting and hijacking normal reproductive behaviour,
          The parasite grossly outgrowing the host,
          No empathy or remorse for the host's exploitation,
          Mistaken by the host as its own offspring.

            Georgina Woodward

            I think the cuckoo parasite analogy works quite well.

            It does not - because it assumes a situation that is irrelevant to the case in point:
            two similar things (species of birds) that require the same sort of resources, habitat, etc.

            AI does not need to eat our food, breathe our air, "live" in our ecosystem, or have much use for
            the labor of humans, who are comparatively too stupid to do anything of value to the AI.

            A better analogy would be you traveling to a garbage dump to "enslave" a pile of discarded, obsolete, but still-functional personal computers; highly unlikely, given that, being "intelligent", you will likely come to the conclusion that the cost of such an endeavor far exceeds any conceivable benefit.

              Robert McEachern
              I didn't say it was perfect, but it is quite good. I say that because some likenesses to parasitism can be thought about in this context, which is, to me, an interesting new angle.
              Energy can be regarded as a resource. In time, the AI may compete for land area and natural resources, perhaps producing robots to maintain or embody it, or for its energy supply.
              Allowing energy and natural resources such as minerals to be taken by something that is uninterested in humanity, if not detrimental to it, does seem a bit parasitic towards our species.

                Georgina Woodward

                Plentiful solar energy would be freely available to an AI covered in solar cells, floating in space, orbiting the Earth, etc. On Earth, a desert under the scorching sun would be rather nice; not many annoying humans around either.

                  Robert McEachern
                  Possible, but there are many imaginable possibilities.
                  I have heard that the period of transition to an AI beyond being bothered with or by humans, because of its vast intelligence, is the most dangerous.

                    Georgina Woodward

                    I have heard...

                    You may have heard it. But it is more likely that you, literally, saw it.
                    It was discussed extensively (pages 310-318), long ago, in the book I mentioned previously.
                    But you only see one side of the coin. You can only see the other, by changing your point of view - entirely.
                    That too, is an integral part of Reality.

                    Robert McEachern, ChatGPT uses around 500 ml of water per user interaction.

                    "Tech giants like Microsoft, Google, and OpenAI are increasingly aware of the environmental impact of AI development, which includes substantial water consumption to cool the powerful supercomputers used in AI training. Microsoft reported a 34% surge in global water consumption in 2022, largely attributed to AI research" - Daily Zaps, Sept. 12th.

                      Georgina Woodward

                      Exactly my point. Greedy Humans, not Intelligent Machines, made the decision to do that. The Humans could have simply waited a few years, until the required computing power needed less energy. But then they would not be able to laugh at you, all the way to their bank, for believing that AI is the problem.

                      The Impact of ChatGPT talks (2023), 1 month ago:
                      "Keeping AI under control through mechanistic interpretability"
                      Speaker: Prof. Max Tegmark (MIT)