The Shadow AI reality: Inside Cato's survey results
AI tools have proved their worth in the workplace. They help us write, research, code, plan, and automate. They're making employees faster and more productive, and helping businesses move and innovate at a pace that wasn't possible before.
But AI's rise wasn't orchestrated by IT, and it rarely arrived through formal adoption plans or procurement cycles, creating immediate AI security challenges for organizations trying to maintain visibility and control. It turned up in shared links to popular GenAI and other tools, self-sanctioned and adopted by users in minutes.
What started as "summarize this research paper" turned into contract writing, code creation, SQL queries, and customer communications, putting enterprises at massive risk of exposure and extending the attack surface from the inside out. AI tools have inadvertently created a growing shadow AI problem, with sensitive data flowing into models that operate outside enterprise control.
Cato survey results show that shadow AI governance lags as AI adoption soars | Read the PR
What our survey says
Cato recently surveyed over 600 IT leaders across the globe to get their take on shadow AI. Responses reveal a pattern: unauthorized tools are common, risk is growing, and governance is playing catch up.
Most users have adopted AI because it speeds up work, improves outcomes, and helps them get more done. Nearly 71% of respondents in our survey say the primary driver for employees using unapproved tools is increased productivity and efficiency. When a tool saves time and adds value, people will use it, sanctioned or not.
Many IT teams have low or no visibility into AI usage. They can't see which tools employees are using, how often, what they're sharing, or what those tools are doing with their data. 69% of organizations still don't have formal monitoring in place. Instead, they lean on occasional audits, reactive response, or don't track usage at all.
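A minimal sketch of what basic monitoring could look like, assuming a simple CSV-style proxy log and a hand-picked list of GenAI domains. Both the log format and the domain list are illustrative assumptions, not a specific vendor's schema; real deployments rely on richer telemetry and a maintained SaaS/AI application catalog.

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical list of well-known GenAI domains (illustrative only).
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def flag_genai_usage(log_csv: str) -> Counter:
    """Count requests per user to known GenAI domains.

    Assumes a simple CSV log with 'user' and 'host' columns --
    an illustrative format, not any particular proxy's schema.
    """
    counts = Counter()
    for row in csv.DictReader(StringIO(log_csv)):
        if row["host"] in GENAI_DOMAINS:
            counts[row["user"]] += 1
    return counts

sample_log = """user,host
alice,chat.openai.com
bob,example.com
alice,claude.ai
"""
print(flag_genai_usage(sample_log))  # alice appears twice
```

Even a crude report like this surfaces who is using which tools and how often, which is the first step toward governance rather than guesswork.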
And that lack of monitoring and visibility has predictable results. Nearly 61% of survey respondents say they've already found unauthorized AI tools in their environments: tools being fed corporate data with no guardrails or governance around how they're used.
Concern is widespread. 53% of those surveyed say they're highly or extremely concerned about the risks these unapproved AI tools introduce. Just over 3% say they're not concerned at all.
Shadow AI is far riskier than Shadow IT
Shadow AI builds on the old risks of Shadow IT but introduces more complex challenges. With unapproved SaaS tools, data might leak, but it could be tracked. With AI tools, particularly GenAI, data doesn't just sit somewhere. It gets processed, learned from, and can reappear in outputs it shouldn't be in. Once information has been shared with these tools, it's impossible to trace where it goes. That creates a compliance nightmare for enterprises.
Most IT teams arenβt equipped for this
Even though Shadow AI is a part of day-to-day work, most organizations don't feel fully prepared to manage the risks that come with it. Only 13% of respondents say their organization is highly effective in responding to those risks. That stat is a long way from where it needs to be when AI is touching nearly every part of the business.
And when it comes to threats, readiness drops even further. Just 9% say they're highly effective at defending against AI-generated attacks like deepfakes, prompt injection, phishing, or model tampering. Threat actors are using AI tools in just about every technique and tactic they deploy.
Even well-resourced defender teams struggle to keep up with the magnitude and scale of AI tool use and abuse by adversaries. And when defenders lack visibility inside their own environment, it becomes even harder to recognize when AI is being used against them.
But is shadow AI the real problem?
Shadow AI happened because most legacy security architectures weren't designed to handle it, or the distributed environments it lives in. Fragmented solutions can't see AI tools in use, who's using them, and what they're using them for. When visibility, context, and control are siloed and dependent on human response, AI stays hidden and unpredictable.
The architecture built with AI, for AI
Enterprises don't want to block AI. Its benefits far outweigh its risks when tools can be seen, controlled, and governed. With the right security architecture, AI becomes an advantage instead of a liability.
Cato SASE Cloud converges security, networking, and access in a single cloud-native and AI-native platform. It's the foundation that gives full visibility into the AI tools employees are using, and the ability to keep that usage safe. Controls prevent sensitive data from leaving the business, limit misuse, and catch risky activity in real time. With Cato, enterprises can see, secure, and govern AI adoption and use at scale.
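As an illustration of the kind of control involved (not Cato's implementation), a naive DLP-style check might redact sensitive patterns from a prompt before it leaves the network. The patterns below are simple assumptions for the sketch; production DLP engines use far richer detectors.

```python
import re

# Illustrative patterns only; real DLP uses far more robust detectors.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive matches before a prompt leaves the network,
    returning the cleaned text and the names of the patterns hit."""
    findings = []
    for name, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, findings

clean, hits = redact_prompt("Customer SSN is 123-45-6789, please summarize.")
print(clean)  # the SSN is replaced with [REDACTED:ssn]
```

The point is that inspection happens inline, on the data actually headed to the AI tool, rather than after the fact in an audit.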
Want to weigh in on our latest survey?
We're gathering insights on the cyber threats that IT and security pros think will shape next year. Have any predictions for what's coming? Share your perspective in our 2026 Cyber Threats Survey, and we'll send you Cato swag for helping out.
