
Brainstorm: Will GPT-4 usher us to the edge of AGI?


By Johan Steyn, 1 March 2023


In the world of artificial intelligence (AI), we’ve experienced a number of cold spells. These are characterised by a lack of breakthroughs and a decrease in the number of AI-related projects and startups.


The first AI winter occurred in the 1970s, after a period of high optimism and funding for AI research in the 1960s. This was followed by a second AI winter in the 1980s, and a third one in the late 1990s and early 2000s. Thankfully, progress in AI research has continued and the field has seen many advances in recent years. We’ve witnessed a resurgence in interest and funding, driven in part by breakthroughs in machine learning and deep learning, as well as practical applications of AI in areas such as self-driving cars and natural language processing.


Despite these leaps, there has been much speculation over the last decade that yet another AI winter is looming. Many pundits continually warned us that ‘winter is coming’.


One cold morning, not too long ago, the AI druids, dancing around their machine-intelligent shrine, witnessed a particularly significant solstice. Glimmering in the east was the dawn of a new era. It was clear to the high priests of technology that winter was forever over. The name of their sun god is ChatGPT.


A large language model is a type of machine learning model that has been trained on a vast amount of text, such as that found in books, articles, and online sources. These models predict the next word or sentence from the text that precedes it. Training entails adjusting the model's parameters, which are stored in matrices and updated repeatedly so that the model's predictions move closer to the training data.
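To make the idea concrete, here is a minimal sketch in Python, using only NumPy; the toy corpus and variable names are illustrative inventions, not anything from OpenAI. The model's entire 'knowledge' is a single matrix of parameters, nudged step by step so that its predicted next word moves closer to what actually follows in the training text. Real models such as GPT-3 apply the same principle with billions of parameters and far more sophisticated architectures.

```python
# Toy next-word predictor: all "knowledge" lives in one parameter matrix W,
# which is updated repeatedly to bring predictions closer to the data.
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))   # parameters: one row of logits per word

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Training: for each observed word pair, push the predicted distribution
# for the next word towards the word that actually followed.
lr = 0.5
for _ in range(200):
    for prev, nxt in zip(corpus, corpus[1:]):
        p = softmax(W[idx[prev]])        # predicted next-word probabilities
        grad = p.copy()
        grad[idx[nxt]] -= 1.0            # gradient of the cross-entropy loss
        W[idx[prev]] -= lr * grad        # nudge the parameters

# Prediction: the words the model now expects to follow "the".
probs = softmax(W[idx["the"]])
top = sorted(zip(probs, vocab), reverse=True)[:4]
print([(w, round(float(p), 2)) for p, w in top])
```

Running this prints the four words that actually follow "the" in the toy corpus, each with roughly equal probability, which is exactly what the training text supports.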


OpenAI's GPT-3 astounded us with its ability to write like a human, producing a sonnet in John Donne's style one minute, and a logical explanation of anything the next. ChatGPT, the newest product, builds on the most recent version, GPT-3.5.


While GPT-2 had 1.5 billion parameters, GPT-3 has 175 billion: more than 100 times as many, and roughly ten times more than any other language model at the time of its release. ChatGPT’s massive scale and the way it looks for data patterns have moved AI significantly towards the way the human brain works. And the closer we get to that, the closer we are to the next step in the evolutionary journey of machine intelligence: artificial general intelligence (AGI).


Unsurprisingly, the druids dancing around their tech temples are now anticipating the second coming of their AI messiah. OpenAI’s next language model, GPT-4, is possibly the most anticipated AI model in history. Rumoured to incorporate one trillion parameters and to deliver even more accurate responses, it is widely predicted to be multimodal, accepting text, audio, image, and possibly video inputs.


GPT-4’s benefits will encompass significant advances in natural language processing and the ability to automate processes intelligently, perhaps even fully autonomously. It will certainly enable us to gain previously unthinkable insights from data, and it will deliver a ‘one giant leap for mankind’ improvement in human-machine interaction.


At the time of writing (1 March), OpenAI has yet to announce a release date, although many predict that we will see it by March 2023. I think we are finally standing on the edge of AGI, and wonder how it will change humanity forever.


Rated as one of the top 50 global voices on AI by Swiss Cognitive, Prof. Johan Steyn is a member of the faculty of Woxsen University, a research fellow with Stellenbosch University and the founder of AIforBusiness.net.


