BusinessDay: AI requires balancing ethics and innovation

Implementing frameworks and guidelines for responsible AI development and use is of utmost importance.

By Johan Steyn, 19 July 2023

As AI continues to advance and permeate various aspects of our lives, it is crucial to ensure that its development and utilisation are guided by ethical principles, fairness, and consideration for societal values. Responsible AI practices not only protect individuals from potential harm and discrimination but also foster trust, accountability, and transparency.

By adopting ethical AI principles, establishing review boards, prioritising human-centred design, and addressing competing ethical concerns, organisations can navigate the complexities of AI implementation while upholding the welfare of users, stakeholders, and society as a whole.

One crucial aspect in that regard is the establishment and adherence to a set of ethical AI principles. These principles should align with societal values and ethical standards, providing a foundation for AI development, deployment, and decision-making processes.

Ethical AI principles can encompass various aspects, such as fairness, transparency, accountability, and respect for individual autonomy. To ensure adherence to ethical guidelines, organisations can formulate internal or external ethical review boards that are responsible for assessing AI initiatives and ensuring that they align with ethical principles.

They provide guidance, oversight, and scrutiny throughout the development process, helping to identify and address potential concerns. Ethical review boards can include experts from diverse disciplines, including ethicists, technologists, and representatives from affected communities, to ensure a comprehensive evaluation of AI systems.

The tension between autonomy and control is another ethical consideration. While it is essential to ensure transparency and human oversight over AI systems, allowing them to operate effectively and autonomously is equally important.

Organisations should establish mechanisms for human intervention, monitoring, and control while ensuring that AI systems have the necessary autonomy to make decisions and take actions within defined boundaries.

Responsible AI development doesn’t imply stifling innovation. On the contrary, executives should encourage innovation while recognising ethical boundaries. It is crucial to push the boundaries of AI technology and explore its potential, but not at the expense of ethical considerations and societal values. Rather, the focus should be on fostering a culture that encourages responsible innovation and provides clear guidelines on ethical limits and the importance of considering the effect of AI systems on society.

Conducting ethical risk assessments is essential to identify potential dilemmas and risks associated with AI systems. Organisations should assess how AI decisions may affect different stakeholders and consider potential harms, biases, and unintended consequences. By systematically evaluating and addressing ethical risks, leaders can make informed decisions and implement measures to minimise the negative effects of AI systems.

AI technologies are not yet nearly as powerful as many people think. They are, however, advancing rapidly, and their power to improve the world or to destroy it will become more evident in the next decade. We don’t have the luxury of time to play around with this technology while asleep at the wheel. Responsible AI practices are not only ethically sound but also contribute to building trust in AI technologies and their applications, enabling a positive and sustainable future.

Business leaders will continually face the challenge of balancing the power of this technology to advance their enterprises against the potential harm it may cause.
