The Shadow AI reality: Inside Cato’s survey results
AI tools have proved their worth in the workplace. They help us write, research, code, plan, and automate. They’re making employees faster and more productive, and helping businesses move and innovate at a pace that wasn’t possible before.
But AI’s rise wasn’t orchestrated by IT. It didn’t always arrive through formal adoption plans or procurement cycles. It turned up in shared links to popular GenAI and other tools, self-sanctioned and adopted by users in minutes.
What started as “summarize this research paper” turned into contract writing, code creation, SQL queries, and customer communications, putting enterprises at massive risk of exposure, and extending the attack surface from the inside out. AI tools have inadvertently created a growing shadow AI problem, with sensitive data flowing into models that operate outside enterprise control.
Cato survey results show that shadow AI governance lags as AI adoption soars | Read the PR

What our survey says
Cato recently surveyed over 600 IT leaders across the globe to get their take on shadow AI. Responses reveal a pattern: unauthorized tools are common, risk is growing, and governance is playing catch up.
Most users have adopted AI because it speeds up their work, improves their outcomes, and helps them get more done. Nearly 71% of respondents in our survey say the primary driver for employees using unapproved tools is increased productivity and efficiency. When a tool saves time and adds value, people will use it, sanctioned or not.
Many IT teams have low or no visibility into AI usage. They can't see which tools employees are using, how often, what they're sharing, or what those tools are doing with their data. Sixty-nine percent of organizations still don't have formal monitoring in place. Instead, they lean on occasional audits, reactive response, or don't track usage at all.
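To make the visibility gap concrete, here is a minimal, hypothetical sketch of what even basic monitoring could look like: scanning web-proxy logs for requests to known GenAI domains. The domain list and the tab-separated log format are illustrative assumptions, not a description of any Cato feature or any real organization's logs.

```python
# Hypothetical sketch: flag proxy-log requests to known GenAI domains.
# The domain list and the "user<TAB>url" log format are assumptions for
# illustration only; real proxy logs and allow/deny lists vary widely.
from urllib.parse import urlparse

GENAI_DOMAINS = {"chatgpt.com", "chat.openai.com", "gemini.google.com", "claude.ai"}

def flag_genai_requests(log_lines):
    """Return (user, domain) pairs for requests that hit a known GenAI domain."""
    hits = []
    for line in log_lines:
        user, _, url = line.partition("\t")     # split the simplified log entry
        domain = urlparse(url).netloc.lower()   # extract the host from the URL
        if domain in GENAI_DOMAINS:
            hits.append((user, domain))
    return hits

logs = [
    "alice\thttps://chatgpt.com/c/123",
    "bob\thttps://intranet.example.com/wiki",
    "carol\thttps://claude.ai/chat/abc",
]
print(flag_genai_requests(logs))
```

Even a toy filter like this surfaces who is touching which tools; the harder problems the survey points to, such as what data is being shared and what the tools do with it, require inspection and controls well beyond domain matching.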
And that lack of monitoring and visibility has predictable results. Nearly 61% of survey respondents say they’ve already found unauthorized AI tools in their environments—tools being fed corporate data with no guardrails or governance around how they’re used.
Concern is widespread. 53% of those surveyed say they're highly or extremely concerned about the risks these unapproved AI tools introduce. Just over 3% say they're not concerned at all.
Shadow AI is far riskier than Shadow IT
Shadow AI builds on the old risks of shadow IT but introduces more complex challenges. With unapproved SaaS tools, data might leak, but it could still be tracked. With AI tools, particularly GenAI, data doesn't just sit somewhere. It gets processed, learned from, and can reappear in outputs it shouldn't be in. Once information has been shared with these tools, it's impossible to trace where it goes. That creates a compliance nightmare for enterprises.
Most IT teams aren’t equipped for this
Even though Shadow AI is a part of day-to-day work, most organizations don’t feel fully prepared to manage the risks that come with it. Only 13% of respondents say their organization is highly effective in responding to those risks. That stat is a long way from where it needs to be when AI is touching nearly every part of the business.
And when it comes to threats, readiness drops even further. Just 9% say they’re highly effective at defending against AI-generated attacks like deepfakes, prompt injection, phishing, or model tampering. Threat actors are using AI tools in just about every technique and tactic they deploy.
Even well-resourced defender teams struggle to keep up with the magnitude and scale of AI tool use and abuse by adversaries. And when defenders lack visibility inside their own environment, it becomes even harder to recognize when AI is being used against them.
But is shadow AI the real problem?
Shadow AI happened because most legacy security architectures weren’t designed to handle it, or the distributed environments it lives in. Fragmented solutions can’t see AI tools in use, who’s using them, and what they’re using them for. When visibility, context, and control are siloed and dependent on human response, AI stays hidden and unpredictable.
The architecture built with AI, for AI
Enterprises don’t want to block AI. Its benefits far outweigh its risks when tools can be seen, controlled, and governed. With the right security architecture, AI becomes an advantage instead of a liability.
Cato SASE Cloud converges security, networking, and access in a single cloud-native and AI-native platform. It’s the foundation that gives full visibility into the AI tools employees are using, and the ability to keep that usage safe. Controls prevent sensitive data from leaving the business, limit misuse, and catch risky activity in real time. With Cato, enterprises can see, secure, and govern AI adoption and use at scale.
Want to weigh in on our latest survey?
We’re gathering insights on the cyber threats that IT and security pros think will shape next year. Have any predictions for what’s coming? Share your perspective in our 2026 Cyber Threats Survey, and we’ll send you Cato swag for helping out.
