Cato AI Labs – AI Security Research

Cato AI Labs is a team of world-class AI security researchers established to secure the AI revolution. The lab was born out of frustration with theoretical AI security research and exists to offer practical, real-world solutions.

What are we working on?

Help customers solve the deepest AI security problems

Securing the Agentic World

Agents are the most powerful, but also the riskiest, abstraction built on top of LLMs. Cato AI Labs is building mental models, researching attack vectors, and building guardrails for agentic applications.

Zero-Day AI vulnerability research

Cato AI Labs continuously researches AI applications and platforms to uncover previously unknown vulnerabilities. The goal is to find new classes of vulnerabilities and show the community where the risks lie for AI applications.

AI threat intelligence

Cato AI Labs proactively maps the AI threat landscape, establishing rigorous controls to detect and mitigate attacks and hidden backdoors in AI models and platforms, ensuring a secure and trustworthy AI ecosystem.

Universal jailbreaks and new guardrail architectures

We use our vulnerability research background to break current AI security protections, and our deep low-level AI knowledge to create new kinds of mitigation architectures.

Creating the AI Security community

We are building a community that redefines industry standards by educating organizations and vendors on emerging AI threats, core security principles, and effective strategies to protect against evolving AI risks.

Latest Research

Model Scanners – Bypassing Methods and a Novel Detection Approach

CurXecute – RCE in Cursor via MCP Auto-start

EchoLeak – Microsoft M365 Vulnerability