A periodic table for AI might change everything

A new framework promises to bring order to multimodal AI by showing what models keep, what they discard, and why it matters.




Most people experience AI through its outputs: a chatbot reply, a generated image, or a document summary. But underneath those outputs sits something far more important: what the system is trained to optimise for. A recent SciTechDaily piece describes research from Emory University proposing a “periodic table” for artificial intelligence, aiming to organise many successful multimodal AI methods into a coherent framework.


If this idea holds up, it matters for a very practical reason: it could help organisations move from AI experimentation-by-instinct to AI design-by-principle. In South Africa, where compute, bandwidth, and skills capacity are real constraints, that shift could be the difference between responsible scale and expensive failure.


CONTEXT AND BACKGROUND

Multimodal AI combines different kinds of data, such as text, images, audio, and video, in a single system. It is also where many organisations are heading next, because real business problems rarely arrive in a single neat format. The challenge is that once you combine modalities, the design space explodes. There are many ways to train a system, many ways to balance signals, and many ways to get something that looks impressive in a demo but behaves unpredictably in the real world.


The Emory team argues that many methods that appear different can be understood through a single organising idea: compress each data source just enough to keep the pieces that truly predict the outcome you care about, while discarding noise and redundancy. They describe this through an information-theory lens, using a multivariate information bottleneck approach, and present it as a way to derive task-specific loss functions in a principled manner.
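
To make the compression idea concrete, here is a minimal sketch of one common instantiation of it, the variational information bottleneck, written in PyTorch. It illustrates the general principle rather than the Emory team's exact formulation, and the class name, layer sizes, and beta value are all illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IBClassifier(nn.Module):
    """Toy encoder-classifier trained with an information bottleneck objective."""
    def __init__(self, in_dim=32, z_dim=8, n_classes=3):
        super().__init__()
        self.stats = nn.Linear(in_dim, 2 * z_dim)  # mean and log-variance of q(z|x)
        self.head = nn.Linear(z_dim, n_classes)

    def forward(self, x):
        mu, logvar = self.stats(x).chunk(2, dim=-1)
        # Sample the compressed code z via the reparameterisation trick.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.head(z), mu, logvar

def ib_loss(logits, y, mu, logvar, beta=1e-3):
    # "Keep what predicts the outcome": standard prediction loss.
    predict = F.cross_entropy(logits, y)
    # "Discard the rest": KL divergence from a standard normal prior,
    # which charges the model for every bit of input it carries into z.
    compress = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    return predict + beta * compress
```

The single knob, beta, is exactly the trade-off the researchers are mapping: raise it and the model discards more aggressively; lower it and it hangs on to noise and redundancy along with the signal.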


That “periodic table” metaphor is doing a lot of work. It suggests classification: different training objectives fall into different “cells” depending on what information they preserve or discard, which in turn shapes behaviour and performance.


INSIGHT AND ANALYSIS

This matters because the loss function is the incentive system of the model. If you reward a system for the wrong thing, it will learn the wrong habits, often in ways that are hard to spot until the system is deployed. In business, those failures are rarely dramatic at first. They show up as subtle bias, brittle edge cases, inconsistent reasoning, or “helpful” outputs that quietly drift into confident nonsense.
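
A hypothetical toy example shows how quietly this goes wrong. Suppose a fraud screen is rewarded purely on overall accuracy, on data where fraud is rare; the numbers below are simulated purely for illustration:

```python
import numpy as np

# Hypothetical fraud-screening data: roughly 1% of transactions are fraud.
rng = np.random.default_rng(0)
y_true = (rng.random(10_000) < 0.01).astype(int)

# A model rewarded only on accuracy learns the cheapest habit:
# predict "legitimate" for everything.
y_pred = np.zeros_like(y_true)

accuracy = (y_pred == y_true).mean()        # ~99%: the rewarded metric looks excellent
fraud_caught = y_pred[y_true == 1].mean()   # 0%: the outcome that matters never happens

print(f"accuracy: {accuracy:.1%}, fraud caught: {fraud_caught:.0%}")
```

The incentive is satisfied at roughly 99% accuracy while the behaviour the business actually cares about never occurs, which is exactly the kind of failure that stays invisible until deployment.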


A framework that helps teams reason about objectives is also a framework that helps teams govern outcomes. If you can explain, in plain language, what your model is being trained to keep and what it is trained to ignore, you are already closer to explaining why it behaves as it does. That is essential for trust, auditability, and responsible procurement. It is also a reminder that “model choice” is often less important than clarity of purpose. Too many AI projects start with a tool and look for a problem, rather than defining the problem, the constraints, and the acceptable risks first.


There is a second angle that matters deeply for South Africa and much of Africa: efficiency. Better guidance on what information to encode should, in principle, reduce wasted computation and data hunger. When electricity costs, cloud bills, and connectivity constraints are part of daily reality, “smarter with less” is not a technical preference. It is a strategic necessity.


IMPLICATIONS

For business leaders, the immediate takeaway is not to become an information theorist. It is to ask better questions of your teams and vendors. What is the system optimising for? What trade-offs are being made between compression and accuracy? What data is being used, and what is being deliberately excluded?


If those answers are vague, your risk is not only regulatory. Your risk is operational: a system you cannot explain is a system you cannot reliably control.

For practitioners, this is a call to move away from cargo-cult AI building. A shared map of objectives can reduce trial-and-error, improve reproducibility, and make it easier to justify choices to governance and security teams. For education, it is another warning light: AI literacy cannot stop at prompting. We need more people who understand why models behave the way they do, and how design decisions translate into real-world consequences.


CLOSING TAKEAWAY

AI has become powerful faster than it has become understandable. That gap is where public distrust grows and where organisations waste money chasing outputs without controlling incentives. A “periodic table” for AI will not solve every problem, but it signals a healthier direction: less mystique, more engineering discipline. If South Africa wants AI that genuinely improves productivity, education, and public services without importing unnecessary risk, we should pay attention to frameworks like this, and build local capability to apply them responsibly.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net
