Is it possible that we are only a few years away from the time when AI surpasses humanity? This prediction was made by ChatGPT, a large language model created by OpenAI, in the first "scientific" paper entirely generated by the model itself--with some prompts from me--titled "[link:doi.org/10.22541/au.167052124.41804127/v1]GPT4: The Ultimate Brain[/link]." The image on the top right is a visual rendition by DALL-E 2 of the contents of the paper; both DALL-E 2 and ChatGPT are AI technologies developed by OpenAI.
Since its release in late November 2022, ChatGPT has been used by over a million users and has blown most of them away--including not-so-easy-to-impress scientists and analysts--with its capabilities. Far from being a conventional "chatbot," ChatGPT can easily engage in creative writing in any style of choice, including poetry and drama, and can write essays to help students with their homework. It can also prepare university research strategies, cover letters for publications and rejection letters from editors, and generate all sorts of advanced computer code. There is no denying that its ability to understand the user's input and react to the user's feedback is unparalleled among all current AI models, and it clearly exhibits creativity in its answers and content generation.
As a personal example, I have used ChatGPT to create open-ended text adventure games, something I am quite passionate about. Unlike old-fashioned adventure games that have a precompiled setting and plot and can either be navigated with a limited set of commands or have an elementary language parser, with the right instructions ChatGPT can turn itself into a virtually limitless fantasy environment, which can then be explored and dynamically expanded using the full power of its language modelling technology.
I am not ashamed to admit I have spent most of the past two weeks literally chatting with a computer--in the interest of science, of course! This made me wonder: is ChatGPT capable of engaging in actual scientific discovery and writing up its outputs coherently in a paper? Here's how the "GPT4: The Ultimate Brain" paper came to be. It started as a thought experiment, or rather a silly idea: combine the concept of GPT in physics (generalized probabilistic theories) with GPT in AI (generative pretrained transformer). ChatGPT did quite well at explaining both settings in an introductory fashion and with relevant references.
I then asked whether the combination of these two seemingly disjoint concepts could lead to an almighty predictive model that could not only describe the physical world mathematically but also incorporate an advanced language component and exhibit self-awareness. To test this kind of metaphysical hypothesis, I instructed ChatGPT to create a virtual environment in the space of generalized probabilistic theories. On its own initiative, ChatGPT then went on to demonstrate how to navigate that space from the perspective of a fictional observer encountering theories, such as the 2D Ising model, and testing their cognitive power. Next, I asked ChatGPT to define three theories (a classical, a quantum, and a generalized probabilistic theory), which it did quite consistently. I then asked it to define criteria to test these theories, offering three simple prompts. ChatGPT complied and added a fourth criterion that was in fact the most sensible. It then correctly tested the theories it had itself created against the criteria it had rigorously defined. So, ChatGPT effectively ran the main experiment of the paper. It then successfully upgraded the GPT (generalized probabilistic theory) by incorporating a GPT (generative pretrained transformer) model, and re-tested it to certify an increased knowledge score.
This revised theory was ultimately called GPT4 and proclaimed itself the "ultimate brain". ChatGPT demonstrated the power of such a theory, among various pseudo-scientific feats, by letting it successfully compose a limerick about itself. You can read all about it in the paper, which I assembled exclusively by pasting together outputs (text, maths, bibliography, and code) generated by ChatGPT. That was a challenge in its own right: by the time I had enough material for the paper, there were over 25 thousand words in a single browser tab, until that session expired. Not all of these made it into the paper, of course, but it is quite an achievement that, within that long "chat" session, ChatGPT stayed focused and remembered the initial instructions and updated assignments till the end, while responding consistently to prompts about further developments in our experiment and contributing its own original directions and results.
The outcome is not perfect of course. I am most annoyed at two flagrant errors, one with the common misconception that nonlocality in quantum mechanics implies instantaneous communication, and another of a non-scientific nature whose identification I leave as an exercise to the reader. I could have nitpicked more and asked to rewrite those bits, but I opted to leave the responses as vanilla as possible. In my opinion, some of the "creations" of ChatGPT in the experiment are truly impressive--especially in the demo world of theories.
The parts about making predictions on the future of AI and getting ChatGPT to cook up a proof of the alleged superiority of the GPT4 model are more for scenic purposes; the mock proof does reveal some basic understanding of concepts from estimation theory, but it ultimately sounds more like a political speech than a mathematical argument, and its spirit should hopefully be obvious to any reader. That said, I have no idea how ChatGPT came up with its prediction formula for the probability that AI would surpass humanity within n years (plotted below), as even the question is deliberately ill-posed: what does it even mean to surpass humanity?
Overall, it's clear that the GPT4 paper should be regarded as an exploration of the current possibilities and limitations of AI in the field of scientific discovery, rather than as a proper scientific article in its own right. There is, however, some serious science worth exploring further on the enhancements available to intelligent agents in generalized probabilistic theories. In fact, I recently received a grant from FQXi on precisely that topic, and we did find examples in which GPTs can outperform both classical and quantum theories exponentially on tasks pertaining to intelligence (specifically, in the implementation of an associative memory). But that is another story.
How can an advanced AI technology like ChatGPT be instrumental to the progress of science more generally? Find out in the second part of this pair of posts.
----
Gerardo Adesso is an FQXi member and a physicist at the School of Mathematical Sciences, University of Nottingham, UK