Shadow AI and why the risk just leveled up

Autonomous AI agents are an incredible advancement, and they also expand the risk surface for every organization when they operate outside company policy. In this post, we explore the origin of the term shadow AI and how it's evolving quickly with the use of personal AI solutions like OpenClaw and Claude Cowork.

What is shadow AI?

Shadow AI is a term used to describe the use of AI tools by employees without the knowledge or approval of their organization's IT or security teams.

It borrows from the more commonly known term, shadow IT, which describes the same dynamic with traditional software: employees adopting the latest cutting-edge tools without going through enterprise security requirements and standard procurement.

Shadow AI is similar in symptoms, but the consequences are categorically different.

When an employee signs up for an unapproved project management tool, the risk is mostly about redundancy and cost. When an employee gives an AI agent access to their computer, email, file system, and enterprise applications, the risk is that sensitive data gets ingested, processed, and potentially exposed through a system no one in IT knows exists.

Shadow IT results in patchy unapproved app usage across the organization. Shadow AI is closer to giving a freelancer access to your company’s files, messages, and systems - and rolling the dice.

Shadow AI and the chatbot era

The first wave of shadow AI consisted of employees using personal ChatGPT or Claude accounts for work: pasting customer data into a chat window, possibly with the model training on that data, all untracked by the business. These acts included everything from writing emails with sensitive context to uploading and summarizing internal documents.

The risk of data leakage was compounded by lack of visibility. Samsung banned ChatGPT company-wide after engineers leaked proprietary semiconductor source code, internal meeting transcripts, and chip test data through the tool in a single month. 

According to a BlackFog survey, 33% of employees admit to sharing enterprise research, datasets, employee data, or financial information with unsanctioned AI tools.

This first phase was dangerous but bounded: an employee had to actively choose to paste data into a chat window. The next phase of shadow AI is riskier because agents can now find and transmit data without human oversight.

Shadow AI meets autonomous agents

Shadow AI has evolved beyond data leakage and lack of visibility. Organizations now risk a total loss of control.

Depending on how locked down your systems are, employees can often run autonomous agents from their laptops with file system access and MCP connectors into your production systems.

So what changed? Three important things converged to make this a more prominent risk than ever.

Claude Cowork

Launched in January 2026, Cowork is a desktop AI agent built into the Claude Desktop app. A friendlier version of the popular Claude Code, Cowork has quickly caught on with non-technical users. It can read and write local files, queue parallel tasks, and connect to cloud services like Google Drive, Gmail, Slack, and Jira.

Claude Cowork can operate autonomously for hours. Any employee with a $20/month Claude Pro subscription can run it without oversight from their organization.

OpenClaw

What started as an open-source personal agent quickly became a mass-market phenomenon. By mid-February, OpenClaw had surpassed 210,000 GitHub stars and attracted 2 million visitors in one week.

OpenClaw runs on any OS and connects to several messaging platforms and apps. And it is truly autonomous. It can plan multi-step work, spin up sub-agents, and execute tasks across local files, desktop software, and connected cloud services with little ongoing supervision. 

While many advancements have been made to OpenClaw's security, it famously hit the scene with serious vulnerabilities. As of this article's last update, another major vulnerability had been discovered just days earlier, leaving more than 60% of internet-connected instances exposed, and Meta and other prominent tech firms have moved to ban the use of OpenClaw.

GPT-5.4 bundled computer use into ChatGPT

Released in March 2026, GPT-5.4 surpassed the human baseline on OSWorld-Verified. It also brought native computer-use capabilities to Codex and the API, with enough context to sustain long, multi-step agent workflows. 

At the same time, OpenAI said Codex, its desktop agent, had grown to 1.6 million weekly users, while ChatGPT had passed 900 million weekly active users and 50 million consumer subscribers. Agentic computer use is no longer the playground of tech enthusiasts running OpenClaw. It is now a mass-market feature.

What does shadow AI look like in practice?

As dangerous as it can be for an organization, shadow AI usually starts from a good place: the desire to be more productive with AI. Imagine a sales employee who installs a personal AI agent on their laptop and gives it standing instructions:

"Prepare me for every meeting on my calendar. Every morning, pull the relevant Slack threads, read the customer history in email, summarize customer calls, and check the latest docs in Google Drive. Then draft briefing notes, talking points, and follow-up materials before the meeting starts.”

At first, it feels magical. The agent quietly assembles context across the systems the employee already uses and saves hours every week. After a few strong outputs, the employee stops checking every draft closely. The agent has earned trust.

Then it gets one meeting very wrong.

A customer call looks routine on the calendar, but the agent expands its search for anything that might be useful. It pulls in an internal pricing discussion, roadmap notes from a private planning doc, security review comments, and escalation history from a separate thread. 

Because it is optimizing for completeness, the agent treats all of that as relevant context. It drafts a follow-up package that includes details the customer should never see.

Sensitive information was leaked, but worse, no one can fully reconstruct what the agent accessed or whom it shared it with. The workflow ran inside one employee’s private agent setup.

Shadow AI was an enterprise problem before. Now it's clearly becoming every organization’s problem.

Can't we just throw some governance at it?

Governance policies alone will fail if the rogue experience is more capable than the sanctioned one. That is the mistake many companies are making today: they respond to this new wave of shadow AI with more rules, more approvals, and more warnings.

Employees are not reaching for personal agents because they enjoy breaking policy. They are using OpenClaw and GPT-5.4 because they want something useful. Your most ambitious employees are seeking automatic prep, cross-system synthesis, and less manual coordination. If your company does not provide that in a sanctioned way, employees will assemble it themselves.

Giving everyone a desktop agent does not solve this - it just makes the problem official. You still end up with isolated workflows, inconsistent permissions, and trapped knowledge. The answer is to give employees what they want - with the guardrails the business needs.

Enterprise-ready autonomous agents

Adapt gives companies and their employees the automated intelligence they are chasing with personal agents while remaining inside a company-controlled environment. Fine-grained permissions, shared visibility, and the right operational guardrails are table stakes.

Adapt can run on a schedule and pull together customer context, surface relevant conversations and documents, and draft useful briefing notes. Because Adapt is collaborative by default, that process does not live inside one person’s private setup. It becomes a shared, recurring workflow the team can use together and improve over time.

Instead of tasks running rogue on a laptop, employees can execute their work inside a sanctioned, collaborative system designed and paid for by the business. 

If leaders want to know what the system accessed, what it drafted, or how it arrived at an answer, they have that traceability available.

With Adapt, the workflow is auditable, governed, and shared - and brought into the light.


FAQ

What is shadow AI?

Shadow AI is the use of AI tools by employees without the knowledge or approval of their organization’s IT or security teams. That includes personal chatbot accounts, autonomous agents, or any AI workflow that operates outside sanctioned visibility and control.

What is the difference between shadow AI and shadow IT?

Shadow IT refers to employees adopting unapproved software. Shadow AI is more dangerous because the software can also read, synthesize, and act on company information. With autonomous agents, the risk is not just that an unapproved app exists, but that it can move through enterprise systems like an unsupervised operator.

Can you prevent shadow AI with policies alone?

No. Policies matter, but they lose when the unsanctioned tools are faster, more capable, and easier to use than the sanctioned ones. Employees reach for rogue agents because they help them get real work done, so the durable answer is to pair governance with a sanctioned system employees actually want to use.

About the Author

Hashim Warren

I drive product adoption and revenue through developer-focused go-to-market strategies. I am an expert at translating complex technical concepts into customer-friendly messaging while maintaining technical authenticity.

Get started with up to $300 in credits for you and your team.