Europol warns against potential criminal uses of ChatGPT and the like
The EU law enforcement agency published a flash report on Monday (27 March) warning that ChatGPT and other generative AI systems can be employed for online fraud and other cybercrimes.
Since it was launched at the end of November, ChatGPT has become one of the fastest-growing internet services, surpassing 100 million users within the first two months. Thanks to its unprecedented ability to generate human-like text based on prompts, the model has gone viral.
Large language models such as OpenAI’s ChatGPT, which can be put to a wide variety of uses, can benefit businesses and individual users. However, Europe’s police agency underlined that they also pose a law enforcement challenge, as malicious actors can exploit them.
“Criminals are typically quick to exploit new technologies and were fast seen coming up with concrete criminal exploitations, providing first practical examples mere weeks after the public release of ChatGPT,” reads the report.
The publication is the result of a series of workshops organised by Europol’s Innovation Lab to discuss potential criminal uses of ChatGPT, the most prominent example of a large language model, and how these models could be employed to support investigative work.
System’s weaknesses
The EU agency points out that ChatGPT’s moderation rules can be circumvented through so-called prompt engineering, the practice of crafting the input given to an AI model precisely to obtain a specific output.
As ChatGPT is a relatively recent technology, loopholes are continuously being found despite the constant deployment of patches. These might take the form of asking the AI to provide the prompt, asking it to pretend to be a fictional character, or asking it to provide the reply in code.
Other circumventions replace trigger words or change the context later in the interaction. The EU body stressed that the most potent workarounds, which jailbreak the model out of any constraint, constantly evolve and grow more complex.
Criminal applications
The experts identified an array of illegal use cases for ChatGPT that also persist in OpenAI’s most advanced model, GPT-4, whose potential to produce harmful responses was in some cases even more advanced.
As ChatGPT can generate ready-to-use information, Europol warns that the emerging technology can speed up the research process of a malicious actor with no prior knowledge of a potential crime area, such as breaking into a home, terrorism, cybercrime or child sexual abuse.
“While all of the information ChatGPT provides is freely available on the internet, the possibility to use the model to provide specific steps by asking contextual questions means it is significantly easier for malicious actors to better understand and subsequently carry out various types of crime,” the report says.
Phishing, the practice of sending fake emails to get users to click on a malicious link, is a key area of application. In the past, these scams were easily detectable due to grammar and language mistakes, whereas AI-generated text makes such impersonations highly realistic.
Similarly, online fraud can be given an increased appearance of legitimacy by using ChatGPT to create fake social media engagement that might help a fraudulent offer pass as genuine. In other words, thanks to these models, “these types of phishing and online fraud can be created faster, much more authentically, and at significantly increased scale”.
In addition, the AI’s capacity to impersonate the style and speech of specific people can lead to several abuse cases in the areas of propaganda, hate speech and disinformation.
Besides text, ChatGPT can also produce code in different programming languages, expanding the capacity of malicious actors with little or no knowledge of IT development to transform natural language into malware.
Shortly after ChatGPT’s public release, the security company Check Point Research demonstrated how the AI model could be used to create a full infection flow, from spear-phishing emails to running a reverse shell that accepts commands in English.
“Critically, the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing. If prompts are broken down into individual steps, it is trivial to bypass these safety measures,” the report added.
Outlook
ChatGPT is considered a General Purpose AI, an AI model that can be adapted to carry out various tasks.
As the European Parliament finalises its position on the AI Act, MEPs have been discussing the introduction of strict requirements for this type of foundation model, such as risk management, robustness and quality control.
However, Europol expects the challenge posed by these systems to only grow as they become more widely available and sophisticated, for instance with the generation of highly convincing deepfakes.
Another risk is that these large language models might become available on the dark web without any safeguards and be trained with particularly harmful data. The type of data that will feed these systems and how they could be policed are major question marks for the future.
