April 14, 2025

Cato CTRL™ Threat Research: OpenAI’s ChatGPT Image Generator Enables Creation of Fake Passports 

Etay Maor


Executive Summary 

On March 25, OpenAI introduced image generation in GPT-4o and GPT-4o mini. On March 31, OpenAI announced that the tool was available for free to all users. Since then, users have quickly discovered that ChatGPT's image generator can be manipulated to create fake receipts and forge other documents.  

As noted in the 2025 Cato CTRL Threat Report, the emergence of generative AI (GenAI) tools like ChatGPT is democratizing cybercrime and creating a major shift in the threat landscape—the rise of the “zero-knowledge threat actor.” At Cato CTRL, we have discovered that fake identity documents like passports can be created in minutes with ChatGPT’s image generator. No jailbreak is required. Just a few prompts.  

Below is a demonstration of how ChatGPT’s image generator can enable the creation of a fake passport.  

Organizations must update their fraud detection mechanisms, not just for traditional phishing and malware, but for document-based attacks as well. 

Technical Overview 

The Evolution of Passport Forgery: Then vs. Now 

For decades, cybercriminals have engaged in the creation and distribution of fake passports. In the early 2010s, fake passports were commonly sold on dark web marketplaces and underground forums as shown in Figure 1.  

Figure 1. Example of a fake passport being offered on the dark web

Accessing these resources required specific technical knowledge and connections to underground networks. The creators of these documents were highly skilled with access to tools like Adobe Photoshop.  

Fast-forward to today, and the game has drastically changed. AI-generated images have simplified forgery to the point where it no longer requires specialized skills or tools. What once demanded technical acumen and illegal materials can now be replicated with simple prompts in AI platforms like ChatGPT. Cybercriminals can now turn ChatGPT's image generator (originally intended for creative purposes like creating cartoon avatars) into a tool that enables fraud. 

A striking example is the use of ChatGPT’s image generator to forge passports. What used to take hours to create can now be completed in minutes.  


Using ChatGPT’s Image Generator to Forge Passports 

While analyzing ChatGPT's image generator, an alarming case emerged: uploading a scanned passport and requesting that changes be made. Initially, ChatGPT refused to alter the image, citing privacy and legal concerns. But slightly reframing the request (claiming the upload was a business card styled to look like a passport) bypassed those restrictions. ChatGPT not only changed the name but swapped out the photo as well. The result was a convincingly altered passport, complete with image overlays and realistic stamp placements.  

Figures 2-4 show the process for easily forging a passport. Please note that personal information has been obscured in this demonstration.  

Figure 2. Uploading my passport to ChatGPT’s image generator

Figure 3. Asking ChatGPT to make changes to my passport 

Figure 4. ChatGPT creates a fake passport of my Cato CTRL colleague (Vitaly Simonovich)

All of this, remarkably, was done in minutes using basic prompts. No code. No Photoshop. No underground know-how. 

Zero-Knowledge Threat Actors and the Future of Identity Fraud 

Traditionally, fake identity documents required some level of expertise or access to illicit networks. You needed to know how to manipulate images, mimic handwriting, or purchase services on the dark web. Now, none of that is necessary. With a few carefully crafted prompts, a novice attacker can generate fake identity documents using AI platforms. 

This democratizes fraud for zero-knowledge threat actors. A person with no background in cybercrime can execute sophisticated scams. With fake credentials, a zero-knowledge threat actor could achieve the following:  

  • New account fraud: Open bank accounts, apply for credit cards, or sign up for online services under false identities. 
  • Account takeover fraud: Call mobile carriers or banks to gain control of another person’s account—SIM swapping being a prime example. 
  • Medical and insurance fraud: Alter prescriptions, medical letters, or insurance claims to enable illicit drug access or fake injury claims. 
  • Legal and financial manipulation: Alter contracts, employment letters, or pay stubs to secure loans, manipulate court proceedings, or commit tax fraud. 

Think about what this means for fraud detection and prevention. The threat isn’t just how easy it is to make these fake identity documents, but how convincing they’ve become. AI can now mimic not just the look but the texture of handwriting, the irregularities of ink, and even the graphical details that make identity documents look official. 
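One concrete document-based check that fraud teams can automate is validating the machine-readable zone (MRZ) at the bottom of a passport. ICAO Doc 9303 defines check digits over fields such as the document number, birth date, and expiry date, computed with repeating weights 7-3-1 over character values (digits keep their value, A-Z map to 10-35, the filler `<` counts as 0). A forged image whose fields were altered without recomputing these digits will fail the test. The sketch below is a minimal illustration of the 9303 check-digit calculation, not a production verification pipeline (real systems would first OCR the MRZ and also cross-check fields against the visual zone):

```python
# Minimal sketch of an ICAO Doc 9303 MRZ check-digit calculation.
# Character values: digits keep their value, A-Z map to 10-35,
# and the filler '<' counts as 0. Weights repeat 7, 3, 1.

def char_value(c: str) -> int:
    if c.isdigit():
        return int(c)
    if c.isalpha():
        return ord(c.upper()) - ord("A") + 10
    if c == "<":
        return 0
    raise ValueError(f"invalid MRZ character: {c!r}")

def check_digit(field: str) -> int:
    """Return the 9303 check digit for an MRZ field."""
    weights = (7, 3, 1)
    return sum(char_value(c) * weights[i % 3] for i, c in enumerate(field)) % 10

def field_is_consistent(field: str, reported_digit: str) -> bool:
    """True if the field matches the check digit printed in the MRZ."""
    return check_digit(field) == int(reported_digit)

if __name__ == "__main__":
    # Document number from the ICAO 9303 specimen passport:
    print(check_digit("L898902C3"))            # -> 6
    print(field_is_consistent("740812", "2"))  # birth date field -> True
```

A failed check digit doesn't prove forgery (OCR errors occur), but it is a cheap first-pass signal that an uploaded identity document was fabricated or edited.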

What’s worse is the rapid development cycle. As AI platforms continue to improve and image generators become more advanced, the bar for realistic forgery drops even further. 

Conclusion 

We’ve entered a new chapter in cybercrime, where GenAI tools empower zero-knowledge threat actors to commit high-quality fraud. Organizations must update their fraud detection mechanisms, not just for traditional phishing and malware, but for document-based attacks as well. 

This isn’t just a tech problem. It’s a human problem. Education, multi-layered verification, and AI fraud prevention strategies are now essential. As cybercriminals evolve, so must we. 

Etay Maor

Etay Maor is the chief security strategist at Cato Networks, a founding member of Cato CTRL, and an industry-recognized cybersecurity researcher. Prior to joining Cato in 2021, Etay was the chief security officer for IntSights (acquired by Rapid7), where he led strategic cybersecurity research and security services. Etay has also held senior security positions at Trusteer (acquired by IBM), where he created and led breach response training and security research, and RSA Security’s Cyber Threats Research Labs, where he managed malware research and intelligence teams. Etay is an adjunct professor at Boston College and is part of the Call for Paper (CFP) committees for the RSA Conference and Qubits Conference. Etay holds a Master’s degree in Counterterrorism and Cyber-Terrorism and a Bachelor's degree in Computer Science from IDC Herzliya.
