Hackers can conduct social engineering attacks and gain access to sensitive information.
By Johan Steyn, 1 March 2023
Published by BusinessDay: https://www.businesslive.co.za/bd/opinion/columnists/2023-03-01-johan-steyn-the-dark-side-generative-ai-in-security-breaches/
Generative AI (GAI) platforms are artificial intelligence (AI) systems that use machine learning algorithms to produce creative output such as images, text, and music. These models learn from patterns in data and generate previously unseen results. GAI has many possible uses, from creating art and designing products to inventing new business models.
OpenAI is one of the most prominent companies in GAI, and its work in natural language processing is well known. Its ChatGPT, built on the generative pretrained transformer (GPT) language model, can generate coherent text, answer questions, and even write articles.
The rapid advance of AI has undesirable side effects that must be dealt with. The use of AI for security breaches, especially via GAI, is one of the gravest AI-related dangers.
Hacking is the unauthorised access or use of computer systems, networks, or software. Social engineering and phishing are among the methods hackers employ to gain access to sensitive data or disrupt computer systems. Through impersonation or deception, social engineers manipulate individuals into divulging sensitive information such as login credentials. Phishing attacks use fraudulent emails or websites to the same end.
GAI has become an indispensable tool for firms seeking to enhance customer service and increase consumer engagement. The same technology is attractive to hackers who wish to use it for malicious ends: with GAI, they can conduct social engineering attacks and gain access to sensitive information.
Spear-phishing is a type of attack that targets specific people or businesses. Using information gathered about the victim, hackers create bogus emails that appear authentic. GAI can generate convincing spear-phishing emails by imitating the writing style of the targeted individual or company. The email may contain a link that, when clicked, directs the recipient to a bogus login page asking them to enter their credentials.
GAI can also be used to impersonate people or organisations to gain access to sensitive data. For instance, a hacker may create a model that mimics a company’s senior executive and use it to request login credentials from an employee. Those credentials could then be used to gain unauthorised access to the company’s computer systems.
Deepfakes are AI-generated videos or images that appear authentic but have been fabricated or manipulated. Hackers can use GAI to create convincing deepfakes, which can be used to impersonate individuals or manufacture false evidence. For instance, a deepfake may make it appear as though a CEO said something they never did.
GAI can also be used to craft scams tailored to the interests and preferences of the target. A hacker may, for example, mine information from social networks to create a fraudulent message that appears to come from a friend or relative.
Organisations should take steps to mitigate the risks of generative AI-based attacks. Training employees to recognise and resist phishing attempts is one of the most effective defences: they should be kept up to date on the latest phishing techniques and taught how to spot fraudulent emails and links.
As GAI’s capabilities for legitimate business use expand, so, sadly, does hackers’ capacity to attack organisations. We should remain ever vigilant, use the best that security technology offers, and never underestimate the human element of security breaches.
• Steyn is on the faculty at Woxsen University, a research fellow at Stellenbosch University and founder of AIforBusiness.net