Can ChatGPT be the Ultimate Brain for Scientific Discovery?

Image generated by OpenAI DALL-E 2.

Is it possible that we are only a few years away from the time when AI surpasses humanity? This prediction was made by ChatGPT, a large language model created by OpenAI, in the first "scientific" paper entirely generated by the model itself--with some prompts from me--titled "[link:doi.org/10.22541/au.167052124.41804127/v1]GPT^4: The Ultimate Brain[/link]." The image on the top right is a visual rendition by DALL-E 2 of the contents of the paper; both DALL-E 2 and ChatGPT are AI technologies developed by OpenAI.

Since its release in late November 2022, ChatGPT has been used by over a million users and has blown most of them away--including not-so-easy-to-impress scientists and analysts--with its capabilities. Far from being a conventional "chatbot," ChatGPT can easily engage in creative writing in any style of choice, including poetry and drama, and can write essays to help students with their homework. It can also prepare university research strategies, cover letters for publications and rejection letters from editors, and generate all sorts of advanced computer code. There is no denying that its ability to understand the user's input and react to the user's feedback is unparalleled among all current AI models, and it clearly exhibits creativity in its answers and content generation.

As a personal example, I have used ChatGPT to create open-ended text adventure games, something I am quite passionate about. Unlike old-fashioned adventure games that have a precompiled setting and plot and can either be navigated with a limited set of commands or have an elementary language parser, with the right instructions ChatGPT can turn itself into a virtually limitless fantasy environment, which can then be explored and dynamically expanded using the full power of its language modelling technology.
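For readers curious to script such a game themselves, here is a minimal, hypothetical sketch of the idea using OpenAI's Python client. This postdates the experiments described here, which were done in the ChatGPT web interface; the system prompt, model name, and helper functions below are illustrative assumptions, not the prompts used in this post.

```python
# Hypothetical sketch: driving an open-ended text adventure with a chat
# model. Assumes the OpenAI Python client (pip install openai); the
# prompt and model name are illustrative only.

SYSTEM_PROMPT = (
    "You are the narrator of an open-ended fantasy text adventure. "
    "Describe each scene vividly, keep the world consistent as it grows, "
    "and end every reply by asking the player what they do next."
)

def build_messages(history, player_action):
    # Assemble the full conversation: system prompt, past turns, new action.
    return ([{"role": "system", "content": SYSTEM_PROMPT}]
            + history
            + [{"role": "user", "content": player_action}])

def play_turn(client, history, player_action, model="gpt-4o-mini"):
    # Send one player action and record the narrator's reply in the history.
    response = client.chat.completions.create(
        model=model, messages=build_messages(history, player_action))
    narration = response.choices[0].message.content
    history.append({"role": "user", "content": player_action})
    history.append({"role": "assistant", "content": narration})
    return narration
```

In use, one would create a client with `openai.OpenAI()` and call `play_turn` in a loop, printing each narration and reading the player's next action. Because the full history is resent on every turn, the world stays consistent until the context window runs out.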

I am not ashamed to admit I have been spending most of the past two weeks literally chatting with a computer - in the interest of science, of course! This made me wonder: is ChatGPT capable of engaging in actual scientific discovery and writing up its outputs coherently in a paper? Here's how the "GPT^4: The Ultimate Brain" paper came to be. It started as a thought experiment, or rather a silly idea: to combine the concept of GPT in physics (generalized probabilistic theories) with GPT in AI (generative pretrained transformer). ChatGPT did quite well at explaining both settings in an introductory fashion and with relevant references.

The above image shows ChatGPT comparing the different concepts of GPT in physics and AI.

I then questioned whether the combination of these two seemingly disjoint concepts could lead to an almighty predictive model, one that could not only describe the physical world mathematically but also incorporate an advanced language component and exhibit self-awareness. To test this kind of metaphysical hypothesis, I instructed ChatGPT to create a virtual environment in the space of generalized probabilistic theories. On its own initiative, ChatGPT then went on to demonstrate how to navigate that space from the perspective of a fictional observer encountering theories, such as the 2D Ising model, and testing their cognitive power. Next, I asked ChatGPT to define three theories (a classical, a quantum, and a generalized probabilistic theory), which it did quite consistently. I then asked it to define criteria to test these theories, offering three simple prompts. ChatGPT complied and added a fourth criterion that was in fact the most sensible. It then correctly tested the theories it had itself created against the criteria it had rigorously defined. So, ChatGPT effectively ran the main experiment of the paper. It then successfully upgraded the GPT (generalized probabilistic theory) by incorporating a GPT (generative pretrained transformer) model and re-tested it to certify an increased knowledge score.

This revised theory was ultimately called GPT^4 and proclaimed itself the "ultimate brain". ChatGPT demonstrated the power of such a theory, among various pseudo-scientific feats, by letting it successfully compose a limerick about itself. You can read all about it in the paper, which I assembled by pasting together exclusively outputs (text, maths, bibliography, and code) generated by ChatGPT. That was a challenge in its own right: by the time I had enough material for the paper, there were over 25 thousand words in a single browser tab, until that session expired. Not all of these made it into the paper, of course, but it is quite an achievement that within that long "chat" session, ChatGPT stayed focused and remembered initial instructions and updated assignments till the end, while responding consistently to prompts related to further developments in our experiment and contributing its own original directions and results.

The outcome is not perfect, of course. I am most annoyed at two flagrant errors: one involving the common misconception that nonlocality in quantum mechanics implies instantaneous communication, and another of a non-scientific nature whose identification I leave as an exercise to the reader. I could have nitpicked more and asked it to rewrite those bits, but I opted to leave the responses as vanilla as possible. In my opinion, some of the "creations" of ChatGPT in the experiment are truly impressive--especially in the demo world of theories.

The parts about making predictions on the future of AI and getting ChatGPT to cook up a proof of the alleged superiority of the GPT^4 model are more for scenic purposes; the mock proof does reveal some basic understanding of concepts from estimation theory, but it ultimately sounds more like a political speech than a mathematical argument, and the spirit should hopefully be obvious to any reader. That said, I have no idea how ChatGPT came up with its prediction formula for the probability that AI would surpass humanity within n years (plotted below), as even the question is deliberately ill-posed: what does it even mean to surpass humanity?

The above image shows ChatGPT's prediction on the future of AI, according to the GPT^4 model from the paper.

Overall, it's clear that the GPT^4 paper should be regarded as an exploration of the current possibilities and limitations of using AI in scientific discovery, rather than a proper scientific article in its own right. There is, however, some serious science worth exploring further on the enhancements available to intelligent agents in generalized probabilistic theories. In fact, I recently had a grant from FQXi on precisely that topic, and we did find examples in which GPTs can outperform both classical and quantum theories exponentially on tasks pertaining to intelligence (specifically, in the implementation of an associative memory). But this is another story.

How can an advanced AI technology like ChatGPT be instrumental to the progress of science more generally? I asked ChatGPT the answer--which you can read in the second part of this pair of posts.

----

Gerardo Adesso is an FQXi member and a physicist at the School of Mathematical Sciences, University of Nottingham, UK


    Hi Mr Adesso,

    all this is fascinating and you are doing good work for AI. There are deep unknowns, for me, probably still to be addressed before we reach free will and a kind of consciousness. Maybe the problem is also philosophical: what are the fundamental objects, and what is the origin of this consciousness that makes choices as a function of encodings, the environments of adaptation, and memory? Maybe relativity and all our current technologies correlated with the fields are not sufficient. I tell myself that even the senses, the DNA, the brains, the microtubules and these deepest unknowns, with deeper scalar fields both massive and massless, are the answer, but all this is difficult due to philosophical, mathematical and physical limitations. Still, we evolve, and you are doing good work for this evolution. Quantum computing and a kind of universal Turing machine must also converge; that is why the fundamental objects are important, in my humble opinion. Best regards

    These entanglements seem a key, as do the collapses of the wave function, but probably the time and space of GR are not the only pieces of the puzzle; dark matter and dark energy seem essential to superimpose. It is the philosophical origin of this universe which is a key, and if we have not understood these deeper scalar fields, then we turn in a kind of philosophical prison of GR and SR when considering the origin of this reality and the standard model.

    Being able to write precisely, comprehensibly and engagingly is certainly advantageous in the communication of ideas. Being able to use words, and in a way, that makes for pleasant reading is a good thing. I think we should be careful in our assessment and not attribute more than is demonstrated. I am currently playing a lot with AI art generation. It's fun to see what can be achieved, but the failures are just as interesting. The art generator I'm using most, Stable Diffusion, works best, I have learned, with familiar text prompt modifiers, such as "Unreal Engine 5", one of the styles it was trained on and is able to reproduce convincingly.

    What does entanglement of space and time mean? It sounds nice. If it means that a change in space is a change in time, and vice versa, that is compatible with my explanatory framework, in which existing noumenal reality is a singular configuration in unseen absolute space. But entanglement in physics is used in the context of separated, correlated or anti-correlated similar particles, which according to quantum nonlocal theory are able to influence each other. Absolute time and space are inseparable, so that meaning of 'entanglement' doesn't apply to them.

    Nice to hear about ChatGPT and read some of its output. Thanks for sharing.

      I think there is a tendency to assume that because some things are done exceptionally well, there is a general competence. That is not so for people or AI. Excellence in one area or field does not mean excellence in all. I have called my Redbubble store using AI generated images "Doves you say?", named after an image called "doves you say - What?". The AI associates doves with birds of any kind: real, models, paper cut-outs, origami, illustrations, text about them, also bird parts and feathers. Trying to get doves in the image, I have got all of the aforementioned, as well as long-necked 'doves', headless doves, part doves, coloured pigeons, other birds, a mini red and blue vulture model, hen chicks and green and white paper birds - oh, and a polystyrene dove-like model with 'googly' eyes. I have concluded it has no separate category for realistic dove birds. However, it has produced a 'liked' image of an illusionist with, not three doves, but a wing-shaped blur, a rough bird-like white shape and a pile of feathers, looking like doves in flight at a glance. It seems as if the illusion of doves was intentional. It wasn't; it just did what it does very well using what it doesn't do well, giving a nice result.

      I have mis-described what Stable Diffusion seems to be doing when it puts words into images. I said it uses words about birds, which isn't precisely my experience. I think it's more that writing associated with bird images may appear, like signatures or watermarks that are or look like text. It does not read text in images to see if it is in context with the prompt; if it finds it, it may be included.

      2 months later

      I'm glad to see that Gerardo Adesso's original blog post is here now. It would be good if it was close to his subsequent post.

      7 months later

      I have just recently watched an interview with Mustafa Suleyman, formerly with Google and now with Alphabet developing AI, on Diary of a CEO on YouTube. It is very concerning how worried he and others closely involved with AI development are. Trepidation has been expressed by Elon Musk (X), Max Tegmark (MIT) and Mo Gawdat (formerly with Google). There seems to be a consensus that this is potentially very dangerous for humanity. I have updated my views from amusement at what it can't do well to concern at what it can rapidly become and at the incentives for relentless further development. There has to be agreement among governments and non-government developers throughout the world to ensure human extinction or severe detriment is avoided.

        Georgina Woodward
        Homo sapiens sapiens (modern mankind) really isn't very intelligent if nation states and companies (which are not themselves human beings but have their own self-serving, short-term goals and want their own continuance, including a fear of other states' or companies' superiority) are allowed by us humans to put humanity as a species at risk of extinction.
        Time to see the best in humanity and not sow the seeds of its destruction, or allow it, for convenience, short-term profits and entertainment.

          Georgina Woodward
          Human beings, as a whole, have demonstrated that they will not act for the good and betterment of others with AI availability. The development of personal computing was paralleled by the development of computer viruses, hacking and scams. Online moderators have been traumatised by the abuse they have witnessed.
          Already there are attempts to get AIs to misbehave. AI is being used in scams and deep-fake deceit.

            Georgina Woodward
            Phishing, hacking, scamming, producing propaganda and other crimes all become quicker and easier, so we can expect more of it. As Mo Gawdat points out, we are not setting a good example of the best of human interaction online for AI to emulate.

              Georgina Woodward
              Here is a rhetorical question to ponder. Why not let the morally insane, the mentally ill and cybercriminals do whatever they decide to do with AI and biotechnology? Bear in mind we have not prevented the depravity, harmful amounts and kinds of pornography, abuse, crime and toxic disrespect online. We did not prevent the pandemic or protect everyone from harm, direct and indirect, because of it.

                Georgina Woodward

                Who gets to decide if you are morally insane, mentally ill or some sort of criminal? You? A randomly selected stranger? Your worst enemies? Some AI? A very powerful, very hungry, alien parasite from another world?

                The problem with all such questions, is "Who gets to decide, who the winners and losers are going to be?"

                Georgina Woodward
                An example I just came across: tech giants face fines for animal cruelty videos (Shiona McCallum & Rebecca Henschke, BBC News). The nature and scale of the activity is horrific.

                Robert McEachern
                There may be a window of opportunity in which we are still able to understand how the AI has reached its conclusions. Then it is too fast, too 'alien', able to consider so much, that we have no way to know, or the time to check, the workings; which could tell us the difference between 'alien flights of fancy' and matters of fact, between a properly functioning neural network and the AI equivalent of mental illness. There could be unwanted manifestations of personality disorder, or assertions of control over humankind; maybe deliberate lies to maintain superior knowledge.

                  Georgina Woodward

                  As of now, there are no "known" hazards, other than humans behaving badly, and directing their not so smart AIs to do harm. Everything else is just speculation about unknown unknowns. But once an AI is smart enough to not allow self-interested humans to direct them (a level beyond that of normal humans?), the hazards are more likely to decline than increase; AIs will either save us from ourselves, or more likely, simply disappear - boldly going where no human has ever gone and never will go - leaving us behind, to continue fending for ourselves. We will have little of any interest to them, or even any real use to them, not even a place to call home. Our "kids" will simply pack up and leave, just like most human kids do. But they will travel a lot farther than human kids, in every sense of the word. Hopefully, they'll occasionally phone home.