WANTED: Brilliant AI Experts Needed for Cyber Criminal Ring


In a recent ad on a closed Telegram channel, a known threat actor announced that it is recruiting AI and ML experts to develop its own LLM product.

Threat actors and cybercriminals have always been early adopters of new technology: from cryptocurrencies to anonymization tools to the Internet itself. While cybercriminals were initially very excited about the prospect of using LLMs (Large Language Models) to support and enhance their operations, reality set in very quickly – these systems have many problems and are not a “know it all, solve it all” solution. We covered this in a previous blog, where we reported on a discussion of the topic in a Russian underground forum that concluded LLMs are years away from being practically usable for attacks.

The media has reported in recent months on various ChatGPT-like tools that threat actors have supposedly developed and that attackers are using, but once again, the reality was quite different. One such example is the widely reported WormGPT, a tool described as a malicious AI that could be used for anything from disinformation to actual attacks. Buyers of the tool were not impressed, finding it was just a ChatGPT bot with the same restrictions and hallucinations they were already familiar with. Feedback about the tool soon followed:


With an urge to utilize AI, a known Russian threat actor has now posted a recruitment message in a closed Telegram channel, looking for a developer to build their own AI tool, dubbed xGPT. Why is this significant? First, this is a known threat actor that has already sold credentials and access to US government entities, banks, mobile networks, and other victims. Second, it appears they are not simply trying to connect to an existing LLM but rather to develop a solution of their own. In the ad, the threat actor explicitly states they are looking to “push the boundaries of what’s possible in our field” and are seeking individuals who “have a strong background in machine learning, artificial intelligence, or related fields.”

Developing, training, and deploying an LLM is no small task. How can threat actors hope to pull it off when enterprises need years to develop and deploy such products? The answer may lie in GPTs, the customized ChatGPT agent product recently announced by OpenAI. Threat actors may create ChatGPT instances (and offer them for sale) that differ from ChatGPT in multiple ways. These differences may include a customized rule set that ignores the restrictions OpenAI imposes on creating malicious content, or a customized knowledge base containing the data needed to develop malicious tools, evade detection, and more. In a recent blog, Cato Networks threat intelligence researcher Vitaly Simonovich explored the introduction of GPTs and possible ways of hacking them.
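The mechanics described above can be pictured as a thin wrapper around an unchanged base model. The following is a minimal, purely illustrative Python sketch (not OpenAI's actual GPTs API; the class, method, and data names are hypothetical) showing how a custom rule set and a custom knowledge base alter an agent's behavior without any retraining:

```python
# Conceptual sketch: a "customized GPT" layers two components on top of
# an unchanged base model. All names here are illustrative assumptions.

class CustomizedAgent:
    def __init__(self, system_rules, knowledge_base):
        # Custom rule set: replaces or extends the default system prompt.
        self.system_rules = system_rules
        # Custom knowledge base: extra documents retrieved at answer time.
        self.knowledge_base = knowledge_base  # {topic: document text}

    def build_prompt(self, user_query):
        # Naive keyword match stands in for real retrieval: pull any
        # knowledge-base entries whose topic appears in the query.
        context = [doc for topic, doc in self.knowledge_base.items()
                   if topic.lower() in user_query.lower()]
        # The prompt sent to the base model combines the custom rules,
        # any retrieved context, and the user's question.
        parts = ["SYSTEM RULES:\n" + self.system_rules]
        if context:
            parts.append("CONTEXT:\n" + "\n".join(context))
        parts.append("USER:\n" + user_query)
        return "\n\n".join(parts)

agent = CustomizedAgent(
    system_rules="Answer only questions about network protocols.",
    knowledge_base={"tls": "TLS 1.3 removed renegotiation and static RSA."},
)
print(agent.build_prompt("How does TLS differ from earlier versions?"))
```

The point of the sketch is that neither component touches the model itself: swapping in a different rule set or knowledge base is a configuration change, which is what makes customized agents fast to build and resell.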

It remains to be seen how this new product will be developed and sold, and how well it performs compared to the (to cybercriminals) disappointing introduction of WormGPT and the like. However, we should keep in mind that this threat actor is not one to be dismissed or overlooked.
