
BusinessDay: ChatGPT raises serious privacy and security concerns

Free AI platforms can revolutionise your business, but you pay by surrendering the data you supply.


By Johan Steyn, 5 July 2023


I recently delivered an address on ChatGPT at an event held by one of our major banks. ChatGPT is certainly the hottest ticket in town and the topic on everyone’s lips.


The bank’s internal audit team hosted the event and was looking for guidance on using artificial intelligence (AI) in its work. In the weeks after my talk, I was invited to mentor team members as they prepared to present their ideas to the leadership team as part of an innovation competition.


Given the context of my earlier presentation, they were excited to learn how to “ChatGPT everything”. It reminded me of the numerous times in my consulting work that I have encountered clients looking to “AI everything”.


This bank’s internal audit team comprises wonderfully enthusiastic and intelligent people, and it was a great deal of fun to work with them. However, I had to encourage them to take a breath and think about AI correctly and responsibly. I stressed that they work in a highly regulated industry and that privacy laws should be top of mind as they embrace the potential of AI.


I asked them if they knew what happened to the data they use as prompts with generative AI platforms such as ChatGPT. “When you copy sensitive data from a spreadsheet you can certainly gain valuable insights. But where does that data go, and how is it stored?” The frowns on their faces made it clear the topic had not been considered until then.


This is exactly the challenge for all businesses as they jump on the AI bandwagon. These platforms can revolutionise your business, and even though they are mostly free to use, you have to ask what price you are really paying. The answer is that you pay by freely surrendering the data you supply. This is the reason business leaders should grapple with the enduring balancing act between innovation and regulation.


In March OpenAI had to take ChatGPT offline for a few hours. A statement on its website said the outage was necessitated by “a bug in an open-source library which allowed some users to see titles from another active user’s chat history. It’s also possible that the first message of a newly created conversation was visible in someone else’s chat history if both users were active around the same time.”


OpenAI admitted that the result was “unintentional visibility of payment-related information” for premium ChatGPT users. During this window, active users’ personal details may have been visible to other users.


Recently, OpenAI was hit by a class-action lawsuit spearheaded by a California-based law firm. The firm accuses OpenAI of significant breaches of copyright and privacy laws. According to the suit, Microsoft has likewise incorporated into its AI tools a wealth of personal data from millions of individuals.


Given the potential for data breaches, many firms are considering internal-use-only, ring-fenced large language models. However, building such a model is a rigorous endeavour that demands hefty computational resources, the right tools and proficiency in machine learning.


Business leaders should instead consider one of the many reputable, cloud-based platforms that provide the needed security alongside the unbeatable benefits these models promise.



