BusinessDay: AI’s next chapter is the future of large language models

The primary objective for developers is to enhance the reasoning, planning and memory capabilities of these models.


By Johan Steyn, 29 May 2024


In late 2022, OpenAI released ChatGPT, a ground-breaking achievement in generative artificial intelligence (AI) that sparked a wave of advancements and innovations.


New models from OpenAI, Meta, Google, Microsoft and others are examples of large language models (LLMs) that use deep learning to generate content, perform analysis, summarise information, and make predictions. These algorithms, often trained on vast data sets, have fundamentally transformed the landscape of content generation and automation.


The market for generative AI is projected to expand from an estimated $900bn in 2023 to a staggering $1.3-trillion by 2032. Beyond text processing, these models possess multimodal capabilities, enabling them to analyse and generate diverse types of content, including images, audio, code, and even interactive media. As technology continues to evolve, we can expect further enhancements to these multimodal capabilities, making them even more versatile and powerful.


Parameters are the fundamental components of LLMs, acting as the variables and configurations adjusted during training to enable the model to process and generate content. As LLMs incorporate more parameters, their complexity and capability to tackle intricate tasks increase. For instance, in 2019 the largest LLM had 0.09-billion parameters. This number grew to 17.2-billion in 2020 and then skyrocketed to 540-billion in 2022, marking a staggering 574,368% increase.
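The cited percentage can be checked with simple arithmetic. A minimal sketch: with the rounded 0.09-billion starting figure the increase works out to roughly 599,900%, so the quoted 574,368% presumably reflects an unrounded 2019 value closer to 0.094-billion parameters (an assumption, used below for illustration).

```python
# Rough check of the parameter-growth figure cited above.
# Values are in billions of parameters; 0.094 is an assumed
# unrounded 2019 value consistent with the quoted percentage.
params_2019 = 0.094
params_2022 = 540.0

increase_pct = (params_2022 - params_2019) / params_2019 * 100
print(f"Increase: {increase_pct:,.0f}%")  # Increase: 574,368%
```

Percentage increase is (new − old) / old × 100, so tiny changes in the rounded starting value shift the headline figure by tens of thousands of points.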


A primary objective for developers is to enhance the reasoning, planning and memory capabilities of these models. LLMs must possess these abilities to complete complex tasks and understand the consequences of their decisions. Additional personalisation capabilities will enable responses and interactions to be tailored to the particular needs of each user.

   

Reasoning and planning capabilities are essential for the future of AI. Forthcoming models are expected to excel in these areas, enabling AI to manage progressively intricate tasks that require multiple steps. The objective of this development is to produce AI systems with “artificial general intelligence” (AGI), a capability that can, to some extent, emulate human intelligence.


AI developers place strong emphasis on addressing ethical concerns. OpenAI’s anticipated GPT-5, for example, is undergoing exhaustive testing to ensure its security and reduce biases. A problem with Google’s Gemini platform was the overcorrection of specific biases, which resulted in unintended consequences. This illustrates how attempts to mitigate biases may at times have adverse effects.


Due to the nascent stage of AI, robust regulatory frameworks are necessary to guide its development and ensure adherence to ethical principles. These frameworks play a crucial role in preventing potential misuse and addressing issues such as bias, privacy and accountability. They also help establish standards for transparency and fairness, ensuring that AI technologies benefit society as a whole while minimising risks.


An example of an early set of guidelines for AI research is the World Economic Forum’s Presidio AI Framework. This framework emphasises the importance of integrating safety, ethics and innovation into the core of AI development. Safety measures are designed to protect users and mitigate risks associated with AI applications. 


As AI becomes increasingly integrated into our daily lives, its effects will spread to an expanding array of devices and platforms, from smartphones and home assistants to wearable technology and smart appliances. This widespread integration will allow AI to seamlessly support a variety of tasks, enhancing convenience and efficiency in everyday activities.

 

