AI engineering teams are rarely representative in terms of gender or ethnicity, so the systems they build will inevitably favour people similar to their creators.
By Johan Steyn 5 July 2022
Artificial intelligence (AI) technology has already infiltrated our everyday lives. Most of the applications on our smartphones are powered by AI platforms, and the majority of business and government leaders are exploring ways to use it to better understand and act on the large amount of data being harvested from clients and citizens.
The new breed of smart technologies is a human creation and, as such, exhibits the characteristics we build into it. Of great concern for the future use of these technologies are the biases we inevitably program into them.
AI and digital initiatives in business are usually driven by a small number of engineers in the IT department. As these teams are rarely representative in terms of gender or ethnicity, the data interpretation will inevitably favour people similar to their creators.
According to research by the AI Now Institute, women make up just 10% of AI research staff at Google and 15% at Facebook. The same research found that less than 5% of all employees at Facebook, Google and Microsoft are black.
Experts concur that a lack of diversity in academic institutions will be one of the obstacles facing the future growth of AI. In the US, the percentage of women majoring in computer science plummeted from 37% in 1984 to just 18% in 2015. Even if this pipeline problem is resolved, researchers from the University of California, Berkeley argue it will not alter the basic power imbalances that exist in the workplace.
To create an objective product, it is essential to restore balance and equality within the teams that build it. Because these technologies will serve as the foundation for future innovation, it is vital to address the question of diversity as soon as possible.
Diversity in technology matters because it makes the industry more inclusive, innovative and competitive. It also leads to a better understanding of different cultures and communities, and creates a workforce able to identify problems that a homogeneous team may never have considered.
The potential for AI to be biased is a major concern in the industry. It is important that we understand what biases exist in AI systems and how they can be avoided. Bias can occur when the data used to train an AI system does not represent the population that the system will eventually serve.
This can lead to results that are not accurate or fair, and may even perpetuate stereotypes. For example, when a training set is composed primarily of images of white men, it may result in facial recognition software being less accurate with women and people of colour. If an algorithm learns from past hiring practices, it might perpetuate bias by rewarding candidates who have similar backgrounds to those who have been successful in the past.
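One practical first step is simply to measure how well a training set mirrors the population a system will serve. The sketch below is a minimal, hypothetical illustration of that idea: the group names, dataset and population shares are invented for the example, not drawn from any real system.

```python
from collections import Counter

def representation_report(train_labels, population_shares):
    """Compare each group's share of the training set against its
    share of the population the system will serve, flagging groups
    that are under-represented in the training data."""
    counts = Counter(train_labels)
    total = len(train_labels)
    report = {}
    for group, pop_share in population_shares.items():
        train_share = counts.get(group, 0) / total
        report[group] = {
            "train_share": round(train_share, 2),
            "population_share": pop_share,
            "under_represented": train_share < pop_share,
        }
    return report

# Hypothetical training set skewed heavily toward one group
train_labels = ["group_a"] * 80 + ["group_b"] * 15 + ["group_c"] * 5
population = {"group_a": 0.5, "group_b": 0.3, "group_c": 0.2}

print(representation_report(train_labels, population))
```

Here group_a supplies 80% of the training examples despite being 50% of the population, while group_b and group_c are both flagged as under-represented. Audits like this do not remove bias on their own, but they make the imbalance visible before a model is trained on it.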
Many AI leaders have begun to grapple with how far human prejudices can penetrate AI systems, and with the potentially catastrophic implications if they do. Risk mitigation is critical now, as many firms consider implementing AI systems in their operations.
It is imperative that the teams working on this technology draw on various disciplines, such as legal and human resources. The teams creating AI systems should be gender-diverse, ethnically diverse and, preferably, drawn from different parts of the world.