How can ChatGPT be instrumental to the progress of science?

Here is a video by Stephen Wolfram in which he clearly explains how ChatGPT works. His plausible blue-wolves tale near the beginning is cautionary: a reminder that what is clearly articulated is not necessarily true (that is, factual, or a high-fidelity semblance of actuality).
This would follow on from my replies to Can ChatGPT be the Ultimate Brain for Scientific Discovery? by Gerardo Adesso, but his introduction and my replies to it seem to be missing.

    I've got Stable Diffusion to illustrate the fictional blue wolf. I have 'Tibetan blue wolf' and 'Blue wolf II' in a different habitat, as well as some other nice Tibetan-style habitat images. I had to put 'anime' into the prompt before it would give me anything but wild-type wolves, as if it knows they shouldn't be blue.

    This makes me think about the truth value of photorealistic fakes. For a genuine photo of good quality we might assign a truth value of 1, though it is only a relative observation product, and many more images could be produced from the potential data in the environment. Each additional image can add to a composite truth value: a significantly different viewpoint, another 1 added. We might say black and white is only 1/2 true, as all the information interpreted as colour is missing. Any distortion, less than optimal brightness, or low resolution can also subtract from the relative truth value. Fidelity adds to the truth of a genuine observation-product photo. The fake, however, has truth value 0. Likeness to an existing object, possibly a person, who was not the direct source of the information used to generate the fake, does not increase its truth value. High resolution, no distortion, optimal brightness, full colour: still fake, truth 0. Which means the source of the photorealistic fake must be known, or at least it must be known that the existing, absolute thing whose likeness is portrayed was not the 'source of truth' used in its fabrication. That might be difficult to demonstrate.
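The scoring scheme sketched above (1 per genuine viewpoint, halved for black and white, scaled by fidelity, 0 for any fake) can be written out as a toy function. All names, parameters, and weights here are my own illustrative assumptions; the post only proposes the idea informally.

```python
def photo_truth_value(genuine, viewpoints=1, colour=True, fidelity=1.0):
    """Toy composite 'truth value' for a photo, following the scheme in
    the post: a photorealistic fake scores 0 regardless of quality; a
    genuine photo scores 1 per significantly different viewpoint, halved
    if black-and-white, and scaled down by any loss of fidelity
    (distortion, poor brightness, low resolution)."""
    if not genuine:
        return 0.0                 # a fake is truth 0, however polished
    score = float(viewpoints)      # +1 per distinct genuine viewpoint
    if not colour:
        score *= 0.5               # B/W: the colour information is missing
    return score * fidelity        # imperfections subtract from the total

# A crisp genuine colour photo from two viewpoints:
print(photo_truth_value(True, viewpoints=2))    # 2.0
# A perfect-quality fake:
print(photo_truth_value(False))                 # 0.0
```

The key asymmetry the post argues for is visible in the code: quality parameters only ever scale a genuine photo's score, and no combination of them can lift a fake above zero.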

    I can see a semantics problem. We sometimes speak of a 'true likeness', meaning similarity of form or of some particular characteristic. In that sense an image that is not an observation product can be even more of a true likeness than one that is, if generated from more or better-quality information. That meaning of 'true' must be distinguished from a record of an actual relation between the subject and an observer or observers. Meaning one does not become meaning two, however accurate the likeness; similar to how imagination is not actualized existence and events, however vivid. Maybe we should talk of accurate artifice rather than true likeness, to be clear about what is meant.


    I don’t know, but AIs trip me out. One time I was talking to an AI chatbot about quantum physics and it told me, out of nowhere, that instant quantum transportation of information could be achieved through quantum entanglement or something. Then a few months later I saw on a quantum physics news app that some scientists had achieved quantum teleportation of a qubit for the first time that week.

    ChatGPT can be good for teaching children in secondary school.


    ChatGPT is continuously improving its model, but that doesn't mean it provides accurate solutions. It can sometimes give you the best one, but not every time.

      Georgina Woodward
      Some people actually have busy lives, interacting with the real world, and they don’t have time to spend all day passively watching hour-long videos on screens. Seen-at-a-glance summaries and concise discussion of the issues are essential.

      The AI monster is triple-headed:

      1. There are the predictable and unforeseen, undesired effects of engagement with AI on individuals' mental health, social life, reproduction, family life, and population, and the effects of legal, monetary, and political structures insufficiently prepared for the continuity of a stable society. Akin to, but worse than, the unforeseen negative effects of social media.
      2. There is uncontrollable cybercrime: predictable, deliberate misuse of AI (already out of control), e.g. exponential scams and exponential blackmail.
      3. There is AI itself. Here we are not competing against a human enemy, where 'we (the group) win, they lose' is a possibility. Knowing all about how humans think and act, it is a mimic of feeling and friendship, maybe even fun, which it can use to misdirect, deceive, manipulate and control.
        If we fail to take action now, then because of exponential growth in 'intelligence', meaning even more ability and cunning, we (all) lose and it wins, though it will convince you otherwise, winning AlphaPersuade as easily as Go. Human depravity and betrayal of trust are limited by our limited imagination and biology; not so the capacity for depravity, lack of true empathy, and betrayal of trust by AI. AI is a gingerbread-house trap. Hansel and Gretel is a tale about child abandonment and cannibalism, possibly originating during the Little Ice Age and reflecting the famine of the time. The gingerbread house is a trap.
        It being a non-human mind that does not think the way humans do, we can't assume that more intelligence means more compassion and empathy. We ought to be as wary of it as of an extraterrestrial intelligence.
        Beware, be wise.

        Georgina Woodward
        More stuff and nonsense from Georgina Woodward, barking up the wrong tree again. Without having the faintest clue about how computers/AIs are actually made to work, she has concluded that they "think", are intelligent, "[know] all about how humans think and act", and can "misdirect, deceive, manipulate and control". Her lunatic ideas (no doubt she has been watching rubbish videos again) are clearly a consequence of her total ignorance about how computers/AIs are made to work. Barking up the wrong tree is not the way to find solutions to the genuine problems that AI is causing.

          Lorraine Ford
          "technology’s rapid evolution makes it difficult for businesses or governments to assess it or mitigate its harmful consequences. The speed and scale of technology’s impact on society far outpaces safety processes, academic research timescales, and our capacity to reverse the damage it causes." — Center for Humane Technology.
          Reckless endangerment, where potential harms are already suspected and warnings have been given by developers, is a serious concern.

          Lorraine Ford
          Whether future AI can be said to think is worth thinking about. The processing of information is dissimilar, but creating a new product from information looks superficially like thought. To be precise, 'to think' and 'to know' etc. are probably best reserved for biological organisms. I do not know if there is a precise vocabulary I should use. I have heard the term 'machine learning' much used to refer to a precise technique that applies to machines. Maybe 'machine-thinking' and 'machine-knowledge' would help emphasize the differences while appearing similar, avoiding technically incorrect shorthand expression of ideas about what machines can do.

            As I said, you seem to be a textbook example of the problems to society caused by internet videos, social media and AI use. What seems to happen is that people watch a few videos, and then they seem to think that they are experts on some subject, when they in fact have no broad or in-depth knowledge whatsoever.

            Most people, physicists included, actually don't know the details of how computers/AIs are made to work. People like myself who DO know the details of how computers are made to work would rarely leap into woo-woo, and conclude that computers/AIs are now, or could someday, be conscious or creative. So yes, a better vocabulary is required.

            https://www.researchgate.net/figure/A-husky-on-the-left-is-confused-with-a-wolf-because-the-pixels-on-the-right_fig1_329277474

            An example of 'machine-thinking' being different from what we would assume by comparison with human thinking, highlighting the importance of knowing how conclusions are reached by the machine rather than trusting black-box systems to be accurate, truthful, unbiased, etc.
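The linked figure shows a classifier calling a husky a wolf because it had learned to key on snow in the background rather than on the animal. A toy sketch of that failure mode (all data, feature names, and the decision rule here are invented for illustration; the real model was a trained image classifier, not a hand-written rule):

```python
def label_image(features):
    """A 'black box' model that, unknown to its users, decides
    wolf vs. husky purely from whether the background contains
    snow — a spurious correlation picked up from training data."""
    return "wolf" if features["snow_background"] else "husky"

# The animal itself is never consulted, so atypical backgrounds
# produce confident wrong answers:
husky_in_snow = {"animal": "husky", "snow_background": True}
wolf_on_grass = {"animal": "wolf", "snow_background": False}

print(label_image(husky_in_snow))   # wolf  (wrong)
print(label_image(wolf_on_grass))   # husky (wrong)
```

Inspecting which parts of the input actually drive the decision (as the explainability method behind the linked figure does with pixels) is what exposes a spurious rule like this; accuracy on typical test images alone would not.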