
BusinessDay: Business leaders need to stay abreast of AI regulation


By Johan Steyn, 5 April 2023


With artificial intelligence (AI) technology, business executives can drive innovation, improve efficiency and create new value. Companies can harness AI’s potential while mitigating risks and ensuring ethical and responsible use, provided they continue to educate themselves, promote open dialogue and adhere to responsible AI principles.

If you want to be a successful business leader in today’s continuously changing environment, you must learn to embrace new ideas and adapt swiftly. AI’s emergence as a viable tool capable of disrupting multiple markets is revolutionary. In response to the numerous ethical and safety concerns raised by AI’s rapid breakthroughs, there have been calls for more restrictions and a more cautious deployment strategy.


Recently, AI researchers and ethicists released an open letter urging developers to slow down the creation of generative AI platforms. They raised concerns that the widespread use of generative pre-trained transformer (GPT) platforms could result in unforeseen outcomes such as the spread of misleading information, the promotion of undesirable applications and the entrenchment of bias in the AI’s outputs. As AI grows, these concerns illustrate the need to strike a balance between technological progress and ethical considerations.


The UN Educational, Scientific and Cultural Organization (Unesco) has encouraged all governments to immediately adopt a global ethical framework for AI. The objective is to ensure that AI is developed and used in accordance with the principles of human rights, democracy and the rule of law. Business leaders must be aware of and adhere to these principles when implementing AI technologies in their organisations.


AI’s potential risks should not be disregarded. Even in the eyes of Yoshua Bengio, a pioneer in the field, the concept of AI wiping out humans is not inconceivable. “It is not only AI that needs to be responsible. It is, above all, the humans deploying it.”


Before they fully explore this new frontier, corporate leaders must carefully weigh the benefits and drawbacks and engage in open dialogue about the ethical and societal repercussions of this technology.


These concerns have spurred the British government to advocate for the development of an AI policy. Similarly, the EU has been working on AI regulation that is expected to set a global precedent. The proposed legislation demands specific levels of transparency, periodic risk assessments and penalties for those that do not comply. Companies must monitor the evolution of these regulatory frameworks and be prepared to modify their AI activities as necessary to ensure compliance.


With the emergence of increasingly advanced AI systems such as the recently released GPT-4, it may become possible to create truly intelligent machines. Artificial general intelligence (AGI) refers to technology capable of performing every intellectual task a person can. Though AGI is for now only an anticipated next step in AI’s evolution, it would have far-reaching implications for business and society.


We should encourage a culture that supports ethical development and deployment by initiating a workplace discourse on AI ethics. Business leaders should collaborate with those who have a stake in the success of their AI projects, including employees, clients and regulatory authorities. Staying abreast of the latest AI developments, legislative changes and ethical issues is necessary for maintaining compliance and avoiding potential risks.


Should we pause the development of new AI technologies? It would be wise, but humans are hard-wired to create new technologies and I do not foresee that any slowdown will be possible.


• Steyn is on the faculty at Woxsen University, a research fellow at Stellenbosch University and founder of AIforBusiness.net
