Threat Insight

Chinese Cyber Espionage Group Leverages ChatGPT

Security researchers have identified a Chinese cyber espionage group that is leveraging ChatGPT at several steps of its malware campaigns.


Their modus operandi is similar to that of other espionage groups. Their campaigns begin with a specially crafted spear phishing email. The emails impersonate professional email conversations.

Often the threat actor will try to build rapport with the intended victim and start a seemingly innocent email conversation before sending an email that contains a trojanized document or a link to a spear phishing site. The objective is to install a remote access tool (RAT) that has been dubbed “GOVERSHELL” by the researchers.

Using AI allows these hackers to be highly prolific, both in crafting spear phishing emails and in adapting their malware, but it has also led to numerous minor errors, some of which actually make it easier to spot the group’s activities.

The malware is distributed via a zip archive that includes a benign executable that, when activated, loads a malicious DLL. This technique, so-called DLL sideloading, is a very common tactic, especially among Chinese hackers.
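DLL sideloading works because Windows searches the application’s own directory before the system directories when an executable loads a library by bare name, so a same-named malicious DLL dropped next to the benign executable wins. A minimal sketch of that search-order logic (the resolver function and file names below are purely illustrative, not taken from the actual malware):

```python
import os
import tempfile

def resolve_library(name, search_dirs):
    """Return the first existing path for `name`, mimicking the default
    Windows DLL search order, which checks the application's own
    directory before the system directories."""
    for directory in search_dirs:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate
    return None

app_dir = tempfile.mkdtemp()     # where the zip archive was extracted
system_dir = tempfile.mkdtemp()  # stand-in for the Windows system directory

# The legitimate DLL lives in the system directory...
open(os.path.join(system_dir, "helper.dll"), "w").close()
# ...but the archive also dropped a same-named malicious DLL next to the exe.
open(os.path.join(app_dir, "helper.dll"), "w").close()

# The application directory is searched first, so the planted copy is loaded.
loaded = resolve_library("helper.dll", [app_dir, system_dir])
print(loaded == os.path.join(app_dir, "helper.dll"))  # True
```

The benign, often signed, executable never changes; only the DLL placed beside it is malicious, which is why the technique evades naive allow-listing based on the main binary alone.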

Researchers have also found instances where the zip archive held useless junk files, even pornographic images, that served no purpose and were likely just “AI slop” included by the LLM on its own. Other examples of LLM mistakes in this campaign include spear phishing emails with senders that mimic organizations that don’t exist, complete with obviously fake address and phone number information.

When this campaign was discovered, OpenAI disabled the threat actor’s ChatGPT accounts, but Truesec assesses that it will likely be relatively easy for the threat actor to register new accounts, either with ChatGPT or with some other AI service.

Assessment

The adoption of LLMs and AI by threat actors was inevitable. The use of LLMs offers the same advantages and disadvantages to cybercriminals and spies as it does to legitimate IT developers. Truesec has previously identified a cybercriminal threat actor that also uses AI to create fake websites that lure victims into downloading malicious apps, such as PDF editors.

It’s important to note that there are few signs that using LLMs improves the quality of cyber attacks. Arguably, the use of LLM-generated content actually decreased the quality of the campaign by the Chinese threat actor noted above. What AI does is allow threat actors to vastly increase the quantity of their attacks.

Using LLMs may introduce easily spotted errors in phishing emails, but they still allow threat actors to craft phishing emails in different languages much faster and cheaper than if they had to employ humans with language skills to construct them. LLM-produced code can contain errors and hallucinations that may make detection easier, but it also allows threat actors to constantly change and update their malware much faster to evade static detection. Threat actors are adopting AI coding for the same reasons the industry does: not because it’s better but because it’s cheaper.

Ultimately, cyber defense is also a numbers game. Even less sophisticated attacks will succeed sometimes. Spotting AI-generated content in phishing emails is usually not too difficult if you’re aware of the danger and know what to look for, but most people can be caught off guard. It’s also more difficult if you’re unfamiliar with the area: many of us can spot a fake LLM-generated address from our own country if we look for it, but it would be much harder to spot errors in a fake address from a country we don’t know well. Always be on extra guard if you receive an email from a new organization or from a geographical area you are not familiar with.

It’s also important to remember that more and more professional content now comes in the form of AI-generated text. Developing a sense for spotting AI-generated text is a useful skill for many reasons.

References

[1] https://www.volexity.com/blog/2025/10/08/apt-meets-gpt-targeted-operations-with-untamed-llms/
[2] https://openai.com/global-affairs/disrupting-malicious-uses-of-ai-october-2025/
