
AI permissions are the new shadow IT

Employees are connecting tools faster than governance can track, creating invisible exposure across organisations.




A familiar problem is returning in a new disguise. For years, organisations worried about shadow IT: employees signing up for tools without approval, storing company data in random apps, and creating risk outside formal oversight. AI has brought it back, but with far higher stakes. Today’s tools don’t just store files. They can read your email, scan your documents, summarise your meetings, and, increasingly, take actions on your behalf. And the gateway is often a single, casual click: “Connect your account.” Most users don’t fully understand what they’ve authorised, and most organisations can’t easily see the full permission footprint spreading across their systems.


CONTEXT AND BACKGROUND

Software became easier to adopt long before governance became easier to enforce. Cloud tools reduced friction, and people naturally optimised for speed. If a new app saved time, teams used it. The IT department often found out only later, usually after a problem had already surfaced.


AI accelerates this because it slots into daily work so naturally. It promises real benefits: less admin, faster drafting, better search, and more productive workflows. But many of these benefits depend on integrations. The AI tool is only as useful as the data and systems it can access: email, calendars, cloud storage, knowledge bases, CRMs, and sometimes even finance tools.


That is where the risk multiplies. Traditional shadow IT created scattered data. AI-driven shadow IT creates scattered access, and access is more dangerous than storage. A tool that can see everything can leak everything. A tool that can act can make mistakes at scale.


INSIGHT AND ANALYSIS

The heart of the issue is permission sprawl. People grant AI tools broad access because they want the full experience, and because the onboarding prompts make it feel normal. They click “allow” without reading what is being requested. They may not understand the difference between read access and write access, between a single folder and the whole drive, or between one mailbox and the entire organisation’s data.
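
To make that distinction concrete, consider what it looks like at the level of OAuth scopes, the permission strings an app requests when you click “Connect your account.” The snippet below uses Google’s published Drive scopes purely as an illustration; the variable names are just labels:

# Illustrative contrast between narrow and broad OAuth scope requests.
# The consent screen a user clicks through is driven by lists like these.

# Narrow: only files the user explicitly creates or opens with this app.
NARROW_SCOPES = ["https://www.googleapis.com/auth/drive.file"]

# Broad: read and write access to every file in the user's Drive.
BROAD_SCOPES = ["https://www.googleapis.com/auth/drive"]

# Read-only, but still total visibility across the drive.
READONLY_SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]

One word in a scope string can be the difference between a single working folder and an organisation’s entire document history.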


This is made worse by the “agentic” trend. The more AI moves from assisting to acting, the more permissions matter. An AI that drafts an email is low risk if it doesn’t send it. An AI that can send, schedule, file, delete, or trigger workflows changes the game. Suddenly, you are not dealing with a productivity tool. You are dealing with a delegated operator.
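
One way to picture that shift is as a capability gate around the model. The sketch below is hypothetical, not any vendor’s API: the tool names and the allowlist are invented for illustration, but the principle it shows is the one that matters, namely that drafting can be allowed by default while state-changing actions require an explicit, reviewable grant:

# Hypothetical capability gate for an AI agent. Drafting is permitted;
# anything that changes state must be deliberately added to the allowlist.
ALLOWED_ACTIONS = {"draft_email", "summarise_document"}

def invoke_tool(action: str, payload: dict) -> str:
    # Refuse any action that has not been explicitly granted.
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Agent is not authorised to: {action}")
    return f"Executed {action}"

invoke_tool("draft_email", {"to": "colleague@example.com"})  # succeeds
invoke_tool("send_email", {"to": "all-staff@example.com"})   # raises PermissionError

The difference between those two calls is the difference between a productivity tool and a delegated operator.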


There is also a governance blind spot. Many organisations do not have a clear inventory of which AI tools are connected to which systems, what scopes they have been granted, and who approved them. Security teams may monitor obvious threats, but permission sprawl is often quiet. It looks like normal login activity. It doesn’t trigger alarms until something goes wrong.


And something will go wrong. Not always through malice. Often through misunderstanding. A user connects a personal account. A tool retains data longer than expected. A shared meeting transcript includes sensitive information. An agent triggers an action it wasn’t meant to. These are not exotic scenarios. They are ordinary outcomes of casual authorisation in a complex environment.


IMPLICATIONS

For leaders, the first step is to stop treating AI usage as a general “innovation” topic and start treating it as an access and identity topic. Make it clear that connecting AI tools to corporate systems is not an individual choice. It is an organisational risk decision.


For security and IT teams, prioritise visibility. You need an inventory of AI tools in use and a clear map of integrations and permission scopes. Establish a default policy of least privilege: start with minimal access, expand only when justified, and review regularly. Make approval pathways simple, so people do not bypass them out of frustration.
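
What does that visibility look like in practice? In a Google Workspace environment, for instance, an administrator can enumerate the third-party OAuth grants each user has approved. The sketch below uses Google’s Admin SDK Directory API and is a minimal starting point rather than a complete audit; the service account file, the admin address, and the “read-only” heuristic are assumptions for illustration:

# Minimal sketch: list third-party OAuth grants across a Google Workspace
# domain and flag scopes that go beyond read-only access.
# Assumes a service account with domain-wide delegation and the
# admin.directory.user.security and admin.directory.user.readonly scopes.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = [
    "https://www.googleapis.com/auth/admin.directory.user.security",
    "https://www.googleapis.com/auth/admin.directory.user.readonly",
]
creds = service_account.Credentials.from_service_account_file(
    "admin-sa.json", scopes=SCOPES, subject="admin@example.com")
directory = build("admin", "directory_v1", credentials=creds)

users = directory.users().list(customer="my_customer").execute().get("users", [])
for user in users:
    email = user["primaryEmail"]
    tokens = directory.tokens().list(userKey=email).execute().get("items", [])
    for token in tokens:
        # Flag grants whose scopes allow more than reading.
        risky = [s for s in token.get("scopes", []) if "readonly" not in s]
        if risky:
            print(f'{email}: "{token.get("displayText")}" -> {risky}')

Even a rough report like this answers the first governance question: who has connected what, with which scopes.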


For vendors, design matters. Permission requests should be transparent, granular, and easy to understand. “All or nothing” permissions are not just a security problem. They are a trust problem.


For employees, the shift is one of mindset. A connected AI tool is not a toy. It is closer to giving a third party a set of keys. If you don’t know which doors those keys open, you should not hand them over.


CLOSING TAKEAWAY

Shadow IT was always about speed outrunning control. AI makes that gap wider because the value of AI depends on access, and access can quickly become overreach. If organisations don’t get ahead of AI permissions, the next wave of incidents won’t start with sophisticated hacking. It will start with well-meaning people clicking “allow” and unintentionally expanding the organisation’s risk surface. The operational AI era demands a new discipline: treat permissions as seriously as data, and treat connected AI as part of your security perimeter, not a harmless add-on.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net
