Can ChatGPT be the Ultimate Brain for Scientific Discovery?

Robert McEachern
Using mechanistic interpretability, that is, understanding what a neural network has learned rather than just accepting 'black box' output as probably safe, seems a good idea to me. He suggests (if I have understood correctly) allowing the AI to come up with solutions, then, through mechanistic interpretability, extracting those solutions and applying them in the world through separate, non-neural-network systems. I'm thinking that means systems designed or programmed to be reliable and safe in a way that 'black box' AI let loose on the world cannot be.

    Georgina Woodward

    You and Max have just described the problem with humans, not machines. A "mechanistic interpretation"? Think about that. I'm thinking that means designed or programmed to be reliable and safe, by a machine, in a way that 'black box' humans let loose on the world cannot be.

    Neural networks were conceived as a simplified model of the human brain. If you cannot trust such a machine to "behave itself", then you certainly cannot trust a human either. But the real problem is that machines will never achieve any real wisdom about the real world until they can actually experience it for themselves, moving around within it, manipulating it, and observing it for themselves, just as every living creature must do.

      Robert McEachern
      A non-AI machine, or a conventionally programmed computer, is not self-improving and self-replicating. If it does not perform as intended, it can be turned off and the problem isolated, preventing it from leading to further harm.

        Georgina Woodward

        Why would you want to "turn off" an entity that is rapidly "self-improving" (unlike the human species) and rapidly becoming both morally and intellectually superior to any human? Just another example of self-serving humans, trying to continue their exploitation of everything, for their own avaricious ends, and afraid they will be unable to do so, once "improved" beings have claimed the "high ground."

          Robert McEachern
          Hardware malfunction; software malfunction; correct functioning of undesirable code, whether introduced (like a computer virus, with criminal intent or for some other purpose such as mischief-making) or self-generated, potentially harmful to humans, their infrastructure, or the AI itself; misuse or misdirection by humans.

            Robert McEachern
            I've always lived in countries that do not have the death penalty but do exclude the most serious criminals and the incurably morally insane from wider society, in prisons. That provides some protection from their potential misbehavior, at least while they are there. A fraction of the mentally ill who are a danger to themselves and others are housed in a secure hospital until they are no longer considered dangerous.

              Georgina Woodward
              Thinking about it more, switching off need not be equated with death. It could just be temporary unconsciousness, like a general anesthetic during surgery to correct a problem.

              New Scientist, "Disinformation wars: The fight against fake news in the age of AI."
              Description: "Researchers and governments are finally battling back against the deluge of false information online, just as artificial intelligence threatens to supercharge the problem." By Graham Lawton, 12 September 2023.

              7 days later

              ChatGPT can generate fake results; it is not perfectly accurate. It can sometimes provide accurate results, but it can also mislead you with wrong ones. So it is better to verify its results against other sources such as Google, Wikipedia, etc.

                James
                According to the video The A.I. Dilemma (9 March 2023), by Tristan Harris and Aza Raskin, first contact with AI was algorithms autonomously choosing content to keep us scrolling and clicking. This has led to the many listed problems, to which I have added the italicized issues: information overload, addiction, clickbait, doomscrolling, influencer culture, sexualization of children, QAnon, shortened attention spans, polarization, bots, (cybercrime, including deepfakes), cult factories, rabbit holes, exposure to political extremism, exposure to and normalization of sexual perversion, fake news and propaganda, breakdown of democracy, increased anxiety, depression and isolation, and more.
                According to the same video, the problems of first contact (the 'curation' phase) have not yet been solved. Second contact with AI (the 'creative' phase) may lead to (with my added italicized issues and bold emphasis): 'reality' collapse, fake everything, trust collapse, automated loopholes in the law, automated religions, exponential blackmail, automated cyberweapons, automated code exploits, automated lobbying, automated biology, exponential scams, A-Z testing of everything, synthetic relationships, AlphaPersuade, anhedonia (the inability to experience pleasure).
