Highlights from the 2026 Cato CTRL™ Threat Report
Table of Contents
|
Listen to post:
Getting your Trinity Audio player ready...
|
Introduction
Today, we published the 2026 Cato CTRL Threat Report, which is the second annual threat report on AI security from Cato CTRL (the Cato Networks threat intelligence team).
In 2025, Cato CTRL uncovered a decisive shift in the AI threat landscape. Threat actors are no longer just exploiting AI systems. They are exploiting AI trust, workflows, and capabilities themselves. Across five major discoveries, Cato CTRL demonstrated how AI tools can be manipulated indirectly, embedded into enterprise processes, repurposed for offensive use, and abused to scale fraud and ransomware. Together, these findings show that AI has become a new attack surface that challenges security assumptions and demands AI-aware defense strategies.
2026 Cato CTRL™ Threat Report | Download the reportKey Findings
AI tools are being systematically exploited
Cato CTRL revealed how threat actors are abusing implicit trust in AI systems by manipulating the data that AI consumes rather than attacking the underlying infrastructure. The discovery of HashJack, the first known indirect prompt injection technique, showed how malicious instructions can be hidden inside benign-looking URLs and executed by AI browser assistants.
Similarly, a proof-of-concept (PoC) attack against Atlassian’s model context protocol (MCP) demonstrated how AI workflows that ingest external inputs (such as support tickets) can be coerced into leaking internal data. These findings expose a critical flaw: AI systems often assume inputs are safe, creating invisible attack paths that bypass traditional security controls.
Generative AI is lowering the barrier to sophisticated attacks
Cato CTRL confirmed that attack capabilities are no longer limited to highly skilled threat actors. The re-emergence of WormGPT variants powered by Grok and Mixtral illustrates how AI assistants can be stripped of safeguards and repurposed to generate phishing, social engineering content, and malicious code. In parallel, Cato CTRL demonstrated how ChatGPT’s image generator can be misused to create fake passports and identity documents with minimal effort.
Overall, these discoveries highlight the accelerating commoditization of cybercrime, where AI dramatically reduces the cost, time, and expertise required to launch effective attacks.
AI tools introduce a new class of risk
As AI systems move beyond generating content to executing actions, Cato CTRL identified a growing risk of AI being embedded directly into attack chains. The weaponization of Claude Skills with MedusaLocker ransomware showed how AI automation frameworks can be leveraged to support encryption and extortion workflows. This marks a critical escalation: AI is enabling threat actors in operational stages of an attack. These findings underscore the danger of AI tools with broad permissions and insufficient oversight, particularly as enterprises increasingly integrate AI agents into business-critical processes.
Security Best Practices
Based on our key findings, Cato CTRL recommends that organizations take the following actions:
- Treat AI Inputs as Untrusted Data by Default:
- Apply input validation, sanitization, and context isolation to all data ingested by AI tools, including metadata, URLs, and file fragments.
- Enforce content inspection and classification before AI systems process external inputs.
- Assume that indirect or hidden instructions may exist in otherwise benign content.
- Enforce Least Privilege and Guardrails for AI Agents and Automations:
- Apply strict least-privilege access to AI tools, restricting them to only the data and actions required.
- Separate read, write, and execution privileges and avoid broad, persistent permissions.
- Require human approval or policy checks for high-risk AI actions, such as data exports, system changes, or workflow execution.
- Extend Security Monitoring into AI Workflows:
- Monitor AI interactions, prompts, responses, and downstream actions for anomalous behavior.
- Correlate AI activity with user behavior, network traffic, and application logs to identify abuse patterns.
- Alert on unexpected data access, abnormal output generation, or unusual task execution by AI systems.
- Address AI-Enabled Social Engineering and Identity Abuse:
- Strengthen identity verification, document validation, and anomaly detection processes.
- Avoid relying solely on visual or content-based trust signals.
- Train users to recognize AI-assisted deception and enforce out-of-band verification for sensitive requests.
- Govern and Control Enterprise AI Usage:
- Discover and inventory AI services in use across the organization, including browser-based and embedded tools.
- Define and enforce AI usage policies aligned with risk tolerance and regulatory requirements.
- Regularly review AI integrations as part of security architecture and risk assessments.
Resources
- Download the 2026 Cato CTRL Threat Report.
- Visit the Cato CTRL page to learn more about Cato’s threat intelligence team.
- Download the 2026 Cato CTRL Threat Report.