To mitigate risks we must incentivise safety and human alignment in the use of generative AI platforms.
By Johan Steyn, 29 March 2023
It is becoming increasingly clear that we are heading towards a future in which the capabilities of the artificial intelligence (AI) systems we build and deploy at scale will make the collective intelligence of the human race largely irrelevant. We are standing on the verge of a societal transformation. Even though its precise timeline is hard to pin down, many believe it will happen within our lifetimes.
I am reminded of Aldous Huxley’s dystopian novel, Brave New World. Undoubtedly the English writer and philosopher’s most influential work, it depicts a future society in which human beings are conditioned to conform to a rigid social hierarchy. The novel explores a range of themes, including individuality, freedom, and the role of technology in society. It suggests that human beings are inherently flawed and that attempts to create a utopian society through technology are ultimately doomed to fail.
The possibilities of AI are both exciting and terrifying. On the one hand, there is virtually no limit to the scope of opportunities for innovation and advancement. Individuals can be given the ability to create, to flourish, and to break free from the widespread poverty and suffering in the world today. Some critics argue, however, that the technology cannot replicate human intelligence and that attempts to create superintelligent AI are misguided.
On the other hand, AI has the potential to be used, intentionally or unintentionally, to wipe out human civilisation. A dystopian future is possible, one in which the human spirit is stifled under totalitarian control.
The proliferation of generative AI platforms, such as OpenAI’s ChatGPT and GPT-4, has spurred a renewed and widespread debate about the impact of AI technology on society. While much of the focus has been on the technical aspects of AI development and deployment, it is important that our discussions extend beyond technical matters and encompass broader societal considerations.
One critical area of concern when it comes to AI is power dynamics. As AI systems become increasingly sophisticated and powerful, the individuals and organisations that control them will have unprecedented levels of power and influence. This has important implications for questions of governance, accountability, and democracy.
There is a need to establish checks and balances to ensure that AI is being developed and deployed safely and responsibly. To mitigate these risks, it is imperative that we incentivise safety and human alignment in the use of AI power. It is critical that we build in transparency, accountability, and human-centred design principles.
As we move forward into a future increasingly shaped by AI, it is crucial that we engage in these discussions with a broad range of stakeholders, including policymakers, industry leaders, academics, and civil society organisations. By doing so, we can build a more just and equitable future in which AI serves the interests of humanity and promotes the flourishing of all.
Huxley’s work raises many of the same questions and concerns that are central to contemporary debates about the role of technology in society. The novel offers a cautionary tale about the dangers of relying too heavily on technology to shape human behaviour, and it suggests that a balance must be struck between technology’s benefits and its risks. Without that balance, the brave new world of AI may indeed be a terrifying one.