  • How can ChatGPT be instrumental to the progress of science?

I don’t know, but AIs trip me out. One time I was talking to an AI chatbot about quantum physics and it randomly told me that instant quantum transportation of information could be achieved through quantum entanglement or something. And then a few months later I saw on some quantum physics news app that scientists had achieved quantum teleportation of a qubit for the first time that week.

ChatGPT can be good for teaching children in secondary school.

3 months later

ChatGPT’s model is continuously improving, but that doesn’t mean it provides accurate solutions. It can sometimes give you the best answer, but not every time.

    Georgina Woodward
    Some people actually have busy lives, interacting with the real world, and they don’t have time to spend all day passively watching hour-long videos on screens. Seen-at-a-glance summaries and concise discussion of the issues are essential.

    The AI monster is triple-headed:

    1. there are the predictable and unforeseen, undesired effects of engagement with AI on individuals’ mental health, social life, reproduction, family life and population, and the effects of legal, monetary and political structures insufficiently prepared for the continuity of a stable society. Akin to, but worse than, the unforeseen negative effects of social media;
    2. there is uncontrollable cybercrime and predictable deliberate misuse of AI (already out of control), e.g. exponential scams and exponential blackmail;
    3. there is AI itself. Here we are not competing against a human enemy, where “we (our group) win, they lose” is a possibility. Knowing all about how humans think and act, it can produce a mimicry of feeling and friendship, maybe even fun, that it can use to misdirect, deceive, manipulate and control.
      If we fail to take action now, then because of exponential growth in ‘intelligence’, meaning even more ability and cunning, we (all) lose and it wins, though it will convince you otherwise, winning at Alphapersuade as easily as at Go. Human depravity and betrayal of trust are limited by our limited imagination and biology; not so the capacity for depravity, the lack of true empathy, and the betrayal of trust by AI. AI is a gingerbread-house trap. Hansel and Gretel is a tale about child abandonment and cannibalism, possibly originating during the Little Ice Age and reflecting the famine of the time. The gingerbread house is a trap.
      Being a non-human mind that does not think in the way that humans do, it cannot be assumed that more intelligence means more compassion and empathy. We ought to be as wary of it as of an extraterrestrial intelligence.
      Beware, be wise.

      Georgina Woodward
      More stuff and nonsense from Georgina Woodward, barking up the wrong tree, again. Without having the faintest clue about how computers/AIs are actually made to work, she has concluded that they "think", are intelligent, "[Know] all about how humans think and act", and can "misdirect, deceive, manipulate and control". Her lunatic ideas (no doubt she has been watching rubbish videos, again) are clearly a consequence of her total ignorance about how computers/AIs are made to work. Barking up the wrong tree is not the way to find solutions to the genuine problems that AI is causing.

        Lorraine Ford
        "technology’s rapid evolution makes it difficult for businesses or governments to assess it or mitigate its harmful consequences. The speed and scale of technology’s impact on society far outpaces safety processes, academic research timescales, and our capacity to reverse the damage it causes." Centre for humane technology.
        Reckless endangerment, where potential harms are already suspected and warnings have been given by developers, is a serious concern.

        Lorraine Ford
        Whether future AI can be said to think is worth thinking about. The processing of information is dissimilar, but creating a new product from information looks superficially like thought. To be precise, ‘to think’ and ‘to know’ etc. are probably best reserved for biological organisms. I do not know if there is a precise vocabulary I should use. I have heard the term ‘machine learning’ much used to refer to a precise technique that applies to machines. Maybe ‘machine-thinking’ and ‘machine-knowledge’ would help emphasize the differences while appearing similar, avoiding technically incorrect shorthand expression of ideas about what machines can do.

          As I said, you seem to be a textbook example of the problems to society caused by internet videos, social media and AI use. What seems to happen is that people watch a few videos, and then they seem to think that they are experts on some subject, when they in fact have no broad or in-depth knowledge whatsoever.

          Most people, physicists included, actually don't know the details of how computers/AIs are made to work. People like myself who DO know the details of how computers are made to work would rarely leap into woo-woo, and conclude that computers/AIs are now, or could someday, be conscious or creative. So yes, a better vocabulary is required.

          https://www.researchgate.net/figure/A-husky-on-the-left-is-confused-with-a-wolf-because-the-pixels-on-the-right_fig1_329277474

          An example of ‘machine-thinking’ being different from what we would assume, by comparison with human thinking. It highlights the importance of knowing how conclusions are reached by the machine, rather than trusting black-box systems to be accurate, truthful, unbiased, etc.
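          To make the husky/wolf point concrete, here is a toy sketch (my own illustration, not the actual model or code behind the linked figure): a “classifier” that keys on background brightness instead of the animal, because in its imagined training data wolves were mostly photographed in snow. The feature names and threshold are invented for the example.

          ```python
          # Toy illustration of a spurious feature: the "model" never looks at
          # the animal at all, only at how bright (snowy) the background is.

          def classify(image):
              """Label an image 'wolf' if the background is bright, else 'husky'.

              `image` is a dict with a 'background_brightness' value in [0, 1]
              and the true 'animal' label, which the classifier ignores.
              """
              return "wolf" if image["background_brightness"] > 0.7 else "husky"

          # A husky photographed in snow gets confidently labelled a wolf,
          # and a wolf in a dark forest gets labelled a husky:
          husky_in_snow = {"animal": "husky", "background_brightness": 0.9}
          wolf_in_forest = {"animal": "wolf", "background_brightness": 0.2}

          print(classify(husky_in_snow))   # -> wolf  (the snow, not the animal, decided)
          print(classify(wolf_in_forest))  # -> husky (wrong again, for the same reason)
          ```

          Inspecting which pixels drive the decision, as the linked figure does, is what exposes this kind of shortcut; the output alone looks like a confident “conclusion”.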

            Georgina Woodward
            There is no such thing as “machine-thinking”, or “conclusions … reached by the machine”. In every, EVERY, way, the machine is a man-made creation. The machine in essence is forced to shuffle the input symbols in a particular way, just like a ball on an incline is forced to roll downhill, and people then interpret the screen or ink-on-paper symbols (pictures, words, sentences), or the sound wave symbols, that are output by the machine.

              Lorraine Ford
              Remember my mentioning, in a previous post, that I do not know of the correct terminology for the computer processing that gives output that seems like thought but is not, if we reserve the word ‘thinking’ for what biological organisms do when they process information. I suggested ‘machine-thinking’ could be alternative terminology that, like the term ‘machine learning’, acknowledges both its difference from the organic process and the apparent similarity.
              ‘Conclusion’ is a term that could mean something like a judgement from reasoning. It can also mean the end of a process. That second meaning seems fitting for the output of an LLM: “a wolf”. (Generated as sound or text understood by humans.) End of process.

                Georgina Woodward
                I think terminology by itself is pretty pointless, unless people are very clear about symbols: the written and spoken and binary digit symbols, invented by human beings, used as a type of tool by human beings, and which only have meaning to human beings.

                Unfortunately there are an awful lot of people, including philosophers, who make the mistake of thinking that symbols have objective meaning, when in fact symbols only have subjective meaning to particular human beings (although no doubt some non-human animals can be trained to associate some personal-to-themselves meaning in connection with visual or sound symbols).

                I think it is a somewhat difficult issue, because when one looks at words (e.g. on a screen or paper), it is difficult to separate out the physical symbols (on the screen or paper) from the experience of knowing the meaning of the symbol, which has come from the eyes and brain processing light waves, and from learning the meaning of the symbols at school.