Cato CTRL™ Threat Research: Inside Shadow AI – Real-World Generative AI Application Usage Trends in SASE

Executive Summary
The rapid adoption of generative AI (GenAI) in the enterprise is introducing a new category of unmanaged risk known as shadow AI. Organizations frequently lack insight into which employees are using GenAI tools and how they are being accessed, resulting in visibility limitations, policy enforcement challenges, and increased risk of data exposure. Security teams face potential data leaks and compliance violations, while IT teams struggle to integrate GenAI usage into existing governance models.
Industry frameworks such as the NIST AI Risk Management Framework (AI RMF) and MITRE ATLAS emphasize the need for real-time visibility, access control, and threat modeling to address these challenges.
Cato CTRL analyzed real-world GenAI application usage trends from the Cato SASE Cloud Platform in March 2025. Key stats include:
- 76% of GenAI-related network traffic flows come from Cursor AI, Microsoft Copilot, and OpenAI’s ChatGPT.
- 69% of GenAI application data usage comes from ChatGPT and OpenAI services.
The insights we’ve uncovered below offer a data-driven view of how GenAI tools are being used within enterprises. We also demonstrate how security and IT teams can manage that usage effectively with the new GenAI security controls for Cato CASB (Cloud Access Security Broker), which were announced on April 15 during SASEfy 2025.
Technical Overview
Shadow AI: Usage Patterns and Security Implications
Recent incidents highlight the growing risks of unmonitored GenAI and shadow AI usage. These include data exposures from careless AI tool use, manipulation of chatbots leading to financial losses, market impacts from inaccurate AI information, and vulnerabilities in AI systems that allow prompt injection attacks. Additionally, security breaches have occurred when employees unknowingly installed malicious AI plugins. These examples underscore the critical need for better oversight and control of GenAI technologies.
Key Trends and Insights
In March 2025, the Cato SASE Cloud Platform recorded a vast amount of GenAI application usage data—showing how quickly tools like ChatGPT, Copilot, and others are becoming part of everyday business workflows. This rapid adoption also signals the rise of shadow AI: unmanaged, unauthorized use beyond traditional security oversight.
The figures below provide insight into GenAI application activity, highlighting trends in network traffic flows, data volume usage, and user adoption.
An important insight is that 76% of GenAI-related network traffic flows come from ChatGPT, Copilot, and Cursor AI (Figure 1), while 69% of GenAI application data usage comes from ChatGPT and OpenAI services (Figure 2). This concentration reflects a growing reliance on a few dominant, easily accessible GenAI platforms, often adopted without IT oversight. These trends underscore the need for organizations to prioritize visibility, user-level monitoring, and targeted policy enforcement to effectively manage shadow AI risks and protect sensitive data.
The data in Figure 1 shows that Cursor AI generates the highest share of network traffic flows (39.5%), indicating frequent or embedded use, likely by development teams. Meanwhile, the data in Figure 2 shows that OpenAI services account for the largest share of data volume (37.3% of MB transferred), meaning large amounts of data are flowing to these services and raising the risk that sensitive information is exposed. These patterns highlight the need for greater visibility into GenAI usage, with targeted monitoring of high-traffic tools like Cursor AI, DLP enforcement for data-heavy platforms like OpenAI, and clear policies to manage shadow AI risks across the organization.
Figure 1. GenAI application usage by network traffic flows
Figure 2. GenAI application usage by data volume (MB)
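For readers who want to produce a similar breakdown from their own logs, the sketch below shows one way to aggregate per-application flow counts (the Figure 1 view) and data volumes (the Figure 2 view). The record format, field names, and sample values are illustrative assumptions only; they are not the schema Cato uses, and Cato’s own analysis is performed over anonymized traffic across its platform.

```python
from collections import defaultdict

# Hypothetical flow records; field names and values are illustrative, not Cato's schema.
flows = [
    {"app": "Cursor AI", "bytes": 1_200_000},
    {"app": "ChatGPT", "bytes": 3_400_000},
    {"app": "OpenAI API", "bytes": 5_100_000},
    {"app": "Microsoft Copilot", "bytes": 900_000},
]

flow_counts = defaultdict(int)   # flows per app (Figure 1 style)
data_volume = defaultdict(int)   # bytes per app (Figure 2 style)

for f in flows:
    flow_counts[f["app"]] += 1
    data_volume[f["app"]] += f["bytes"]

total_flows = sum(flow_counts.values())
total_bytes = sum(data_volume.values())

for app in flow_counts:
    print(
        f"{app}: {flow_counts[app] / total_flows:.1%} of flows, "
        f"{data_volume[app] / total_bytes:.1%} of data volume"
    )
```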
Figure 3 shows another important insight: 51% of GenAI activity originates from users interacting with Copilot, ChatGPT, and OpenAI services. This reflects a clear trend of user-driven adoption of popular GenAI platforms. It underscores the need for user-level visibility and control, as individual usage often occurs without IT oversight, increasing the risk of data exposure and compliance violations.
Figure 3. GenAI application usage by user accounts
Product Overview
Designing a Security Model for GenAI in the Enterprise
To safely adopt GenAI while protecting critical assets, organizations need more than static policies—they require real-time visibility, granular control, and governance across the network. GenAI is not just a productivity tool; it introduces a new security domain that demands a modern approach. As shadow AI continues to grow across enterprise environments, many organizations face limited oversight of its use.
During SASEfy 2025 on April 15, Cato Networks announced its latest AI innovation. Cato CASB, a native feature in the Cato SASE Cloud Platform, is now enhanced with new capabilities for generative AI (GenAI) applications, including a shadow AI dashboard and a policy engine. With the shadow AI dashboard, enterprises can detect, analyze, and gain insights into the use of GenAI applications. With the policy engine, enterprises can take control of user activities in GenAI applications. Combined, these capabilities enable security and IT teams to balance innovation with risk management.
Cato CASB’s GenAI Security Controls: Capabilities and Dashboard Examples
Cato enables secure and manageable GenAI usage across the enterprise through a focused set of capabilities:
- User-Level Visibility: Identify who is using GenAI tools and through which channels (browser, API, or embedded apps).
- GenAI App Discovery and Classification: Automatically detect and categorize both known and emerging GenAI tools, with built-in risk scoring.
- Policy Enforcement and Tenant Controls: Define rules to allow, block, or monitor usage and restrict access to approved corporate accounts (a simplified sketch of this kind of rule logic follows this list).
- AI-Aware DLP: Inspect data shared with GenAI apps using ML-based classifiers for Cato DLP (Data Loss Prevention) to prevent exposure of sensitive information.
- Application-Level Controls: Monitor user actions such as uploads, downloads, logins, and conversations within GenAI tools.
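To make the policy enforcement and tenant control capability concrete, here is a minimal, hypothetical sketch of how allow/block/monitor decisions and corporate-tenant restrictions could be expressed and evaluated. It does not reflect Cato CASB’s actual policy engine or rule schema; the fields, tenant identifiers, and app names are assumptions for illustration only.

```python
# Hypothetical GenAI access policy; fields and values are illustrative only
# and do not reflect Cato CASB's actual rule schema.
POLICIES = [
    {"app": "ChatGPT", "tenant": "corp-tenant-id", "action": "allow"},   # approved corporate account
    {"app": "ChatGPT", "tenant": "*",              "action": "block"},   # personal accounts blocked
    {"app": "Cursor AI", "tenant": "*",            "action": "monitor"}, # log but do not block
]

DEFAULT_ACTION = "monitor"  # unknown GenAI apps are logged for review

def evaluate(app: str, tenant: str) -> str:
    """Return allow / block / monitor for a GenAI access attempt (first match wins)."""
    for rule in POLICIES:
        if rule["app"] == app and rule["tenant"] in ("*", tenant):
            return rule["action"]
    return DEFAULT_ACTION

print(evaluate("ChatGPT", "corp-tenant-id"))  # allow
print(evaluate("ChatGPT", "personal"))        # block
print(evaluate("Claude", "personal"))         # monitor (default)
```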
To ensure ongoing visibility, Cato maintains an up-to-date GenAI application catalog (Figure 4) through:
- Network Analytics: New GenAI tools are identified by analyzing anonymized traffic across Cato’s global network.
- Threat Intelligence: Third-party feeds are integrated to validate and enhance detection accuracy.
- Trend Monitoring: Industry developments are tracked to stay ahead of rapidly evolving GenAI platforms.
Figure 4. GenAI app catalog
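As a rough illustration of what an entry in a catalog like this might capture, the sketch below models an application record with a category, risk score, detection domains, and the signals used to maintain it. The structure and example values are assumptions for illustration; they are not Cato’s actual catalog schema.

```python
from dataclasses import dataclass, field

# Hypothetical catalog entry; fields are illustrative assumptions, not Cato's schema.
@dataclass
class GenAIAppEntry:
    name: str
    category: str                                  # e.g., chatbot, code assistant, image generation
    risk_score: int                                # e.g., 1 (low) to 10 (high)
    domains: list = field(default_factory=list)    # domains used to detect the app in traffic
    sources: list = field(default_factory=list)    # how the entry is discovered and kept current

entry = GenAIAppEntry(
    name="ExampleGenAI",                           # hypothetical application
    category="chatbot",
    risk_score=7,
    domains=["api.example-genai.com"],
    sources=["network analytics", "threat intelligence", "trend monitoring"],
)
print(entry)
```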
The next dashboard (Figure 5) provides deep visibility into GenAI usage across the network, categorizing apps (top widget) and showing actions like block, monitor, allow (mid-left widget), risk levels (mid-right widget), top users (bottom-left widget), and activity trends (bottom-right widget). It helps security and IT teams quickly detect risky behavior, enforce policies, and maintain compliance.
Figure 5. Shadow AI dashboard – apps per category overview
The next dashboard (Figure 6) highlights violations by data profile type, such as personally identifiable information (PII), financial data, and credit card information (top-left widget). It also identifies the top users sharing sensitive data (mid-right widget) and lets teams drill down into specific policy violation events (bottom-left widget), providing actionable insights that help security and IT teams detect risk patterns, enforce DLP policies, and proactively safeguard sensitive information.
Figure 6. Shadow AI dashboard – data protection
This dashboard (Figure 7) highlights how GenAI-related data protection rules—such as blocking file uploads to ChatGPT—can be configured from the App & Data Inline policy engine, with the ability to drill down into specific events. This enables security and IT teams to enforce policies and respond to risks more effectively.
Figure 7. Configuration of GenAI app and data protection rules
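As a conceptual illustration of the kind of rule shown in Figure 7, the sketch below models a data protection rule that blocks file uploads to ChatGPT when a sensitive data profile is matched. This is a simplified, assumption-based example: the rule structure is invented for illustration, and the regex stands in for real data-profile classification, whereas Cato DLP uses ML-based classifiers and the App & Data Inline policy engine has its own rule schema.

```python
import re

# Hypothetical data protection rule; structure is illustrative, not Cato's schema.
RULE = {
    "app": "ChatGPT",
    "activity": "file_upload",
    "data_profiles": ["credit_card"],
    "decision": "block",
}

# Simplistic placeholder for data-profile classification (Cato DLP uses ML-based classifiers).
CREDIT_CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def inspect_upload(app: str, activity: str, content: str) -> str:
    """Return block/allow for an upload, based on the hypothetical rule above."""
    if app == RULE["app"] and activity == RULE["activity"]:
        if "credit_card" in RULE["data_profiles"] and CREDIT_CARD.search(content):
            return RULE["decision"]
    return "allow"

print(inspect_upload("ChatGPT", "file_upload", "card: 4111 1111 1111 1111"))  # block
print(inspect_upload("ChatGPT", "file_upload", "quarterly roadmap draft"))    # allow
```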
Conclusion
We shared key insights from GenAI activity observed across the Cato SASE Cloud Platform, where tools like ChatGPT, Copilot, and Cursor AI account for the majority of network traffic flows and ChatGPT and OpenAI services account for the majority of data usage. These patterns highlight a growing reliance on a few dominant GenAI platforms—often used without IT oversight—raising concerns around visibility, data exposure, and policy enforcement.
With the new GenAI security controls from Cato CASB, Cato is enabling security and IT teams to balance innovation with risk management.
Resources
- Learn more about the new GenAI security controls in Cato CASB in the press release and product blog.
- Learn more about Cato’s approach to AI safety and Cato’s AI/ML capabilities.
- The GenAI security controls for Cato CASB are the latest AI innovation from Cato and were announced during SASEfy 2025, Cato’s global virtual event. This year’s event focused on SASE and AI. If you missed SASEfy 2025, register to watch the recording on-demand.