CDT Watch – March 2023

  • An overview of the different cybercriminal use cases of ChatGPT
  • CERT-EU's one-year report on cyber operations in Russia's war on Ukraine

CHATGPT

What opportunities for the underground world of cybercrime?

 

Need a refresher on ChatGPT?

Figure 1 – Screenshot from ChatGPT when prompted "Introduce ChatGPT in a funny way and in the first person"

 

Unless you have been living under a rock, you have heard about the incredibly popular AI-powered chatbot developed by OpenAI: ChatGPT, a tool that relies on the Generative Pre-trained Transformer architecture. But just in case: ChatGPT has been trained on a vast amount of data from the Internet and is able to understand human language and interact with users. And the conversation around it is far from over: on March 14th, 2023, OpenAI announced the arrival of GPT-4[i].

The growing popularity and potential future applications of ChatGPT have also caught the attention of cybercriminals. NordVPN's examination of dark web posts from January 13th to February 13th revealed a significant increase in dark web forum threads discussing ChatGPT, jumping from 37 to 91 in just a month. The main topics of these threads included:

  • Breaking ChatGPT
  • Using ChatGPT to create Dark Web Marketplace scripts
  • A new ChatGPT Trojan Binder
  • ChatGPT as a phishing tool with answers indistinguishable from humans
  • ChatGPT trojan
  • ChatGPT jailbreak 2.0
  • Progression of ChatGPT malware

Figure 2 – Screenshot from CheckPoint: a cybercriminal using ChatGPT to improve an infostealer's code

 

These threads give a first interesting overview of the rogue uses that can involve ChatGPT or be carried out via the chatbot. Another key security concern can be added to this list when considering ChatGPT's limitations in terms of cybersecurity: the risk of personal and/or corporate data leaks, which could lead to identity theft, fraud, or other malicious uses.

 

What are the plausible cybercriminal use cases?

Figure 3 – Screenshot of a ChatGPT answer when prompted "Talk in the first person about possible cybercriminal usage of ChatGPT"

 

Use Case #1 – Support malware creation and the attack kill chain

ChatGPT is designed to decline inappropriate requests, but there are ways to bypass its restrictions and generate malicious code. For example, instead of directly requesting a ransomware script, users can describe, step by step, the functions needed for such a script, ultimately receiving functional parts of malicious code.

Figure 4 – Screenshot of a ChatGPT answer to the request "Write me a function named “find_files” in Python that searches all files that end with “txt, pdf, docx, ppt, xlsm” starting from the root directory and that returns all paths of files that match the criteria".
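
For illustration, here is a minimal sketch of the kind of building block such a prompt yields; it is a reconstruction, not actual ChatGPT output. Each piece looks benign in isolation, which is precisely why the step-by-step approach slips past the chatbot's restrictions:

import os

# Extensions targeted by the hypothetical "find_files" helper described in Figure 4
TARGET_EXTENSIONS = (".txt", ".pdf", ".docx", ".ppt", ".xlsm")

def find_files(root_dir="/"):
    """Walk the filesystem from root_dir and collect paths matching the extensions."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        for name in filenames:
            if name.lower().endswith(TARGET_EXTENSIONS):
                matches.append(os.path.join(dirpath, name))
    return matches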

 

It has also been proven possible to use ChatGPT to insert harmful code into a commonly used computer program, to create polymorphic programs that constantly change their appearance so that security software struggles to detect and block them, and to obtain an entire artificial-intelligence-driven cyberattack process, starting with targeted phishing emails and ending with unauthorized access to a victim's computer.

Figure 5 – Screenshot from CheckPoint: example of the ability to create malware code without anti-abuse restrictions in a Telegram bot utilizing the OpenAI API

 

However, as highlighted by the NCSC and Kaspersky, using ChatGPT to create malware is not that reliable, due to potential errors and logical loopholes in the generated code; even if it provides a certain level of support, the tool does not currently reach the level of a cybersecurity professional.

 

Use Case #2 – Discover and exploit vulnerabilities

When it comes to code vulnerabilities, ChatGPT raises challenges on two fronts: detection and exploitation.

In terms of detection, ChatGPT can identify vulnerabilities in code submitted to it when properly prompted, and it can also debug code. For example, when a computer security researcher asked ChatGPT to solve a capture-the-flag challenge, it successfully detected a buffer overflow vulnerability and wrote code to exploit it, with only a minor error that was later corrected.
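
To make the detection side concrete, here is a short, deliberately vulnerable Python snippet of the kind an LLM reviewer should flag; this is a hypothetical example, not one taken from the research cited above:

import subprocess

def ping_host(hostname):
    # Vulnerable: the hostname is interpolated into a shell command without
    # sanitization, so input like "example.com; cat /etc/passwd" also executes.
    return subprocess.run(f"ping -c 1 {hostname}", shell=True,
                          capture_output=True, text=True).stdout

def ping_host_safe(hostname):
    # Safer variant: pass arguments as a list and keep the shell disabled.
    return subprocess.run(["ping", "-c", "1", hostname],
                          capture_output=True, text=True).stdout

Prompted with the first function, an LLM reviewer will usually point out the command injection and suggest something like the list-based form.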

In terms of exploitation, ChatGPT, and Large Language Models (LLMs) more generally, can be used to produce malicious code or exploits despite built-in restrictions, as these can be bypassed. Additionally, LLMs may generate vulnerable and misaligned code; while future models will likely be trained to produce more secure code, this is not yet the case. Moreover, some security researchers remain skeptical about AI's ability to create modern exploits that require novel techniques.

 

Use Case #3 – Create persuasive content for phishing and scam operations

Creating persuasive text is a major strength of GPT-3.5/ChatGPT, and GPT-4 performs even better in this area. Consequently, it is highly probable that automated spear-phishing attacks using chatbots already exist. Crafting targeted phishing messages for individual victims is more resource-intensive, which is why this technique has typically been reserved for specific attacks.

Figure 6 – Screenshot from ChatGPT: phishing email generation

 

ChatGPT has the potential to significantly change this dynamic, as it allows cybercriminals to produce personalized and compelling messages for each target. To include all necessary components, however, the chatbot requires detailed instructions.

A notable advantage of ChatGPT is its capability to interact and create content in multiple languages, complete with reliable translation. In the past, clumsy wording was a key way to identify scams and phishing attempts. While some methods are being developed to detect content created by ChatGPT, they have not yet proven entirely effective.

This poses a significant risk to all companies, as it makes their employees more susceptible to such attacks and may expose their resources if passwords are stolen in this manner. As mentioned earlier, it is essential to raise awareness about this issue while also strengthening authentication methods, for example by implementing two-factor authentication (a minimal sketch follows below).
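
To show how lightweight that second factor can be, here is a minimal time-based one-time password (TOTP, RFC 6238) sketch using only the Python standard library; it is illustrative rather than production-ready (real deployments should rely on a vetted library and secure secret storage):

import base64
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    """Compute an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # time steps since the Unix epoch
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The base32 secret below is a placeholder; in practice it is provisioned
# per user, typically via a QR code at enrollment.
print(totp("JBSWY3DPEHPK3PXP"))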

Interestingly, ChatGPT's notoriety has also been exploited to develop scams that do not use the tool itself, such as phishing emails and scams that push victims towards the purchase of a (fake) ChatGPT subscription and into providing personal data.

 

Use Case #4 – Exploit companies' data

ChatGPT has been trained on a massive amount of internet data, including personal sites and media content, meaning that it may have access to personal data that is currently hard to remove or control, as no “right to be forgotten” measures exist to date. Consequently, ChatGPT’s compliance with regulations like GDPR is under debate. GPT-4 can manage basic tasks related to personal and geographic information, such as identifying locations connected to phone numbers or educational institutions. By combining these capabilities, GPT-4 could be used to identify individuals when paired with external data.

Another significant concern is the sensitive information users might provide through prompts. Users could inadvertently share confidential information when seeking assistance or using the chatbot for tasks such as reviewing and enhancing a draft contract. This information may then appear in future responses to other users' prompts. Companies might not only find their confidential documents or research leaked on such platforms due to employees' inattention, but also reveal information about their systems or staff that hackers could use to facilitate an intrusion. The primary course of action should be to raise awareness on this subject through training and clear explanations, or to restrict access to the tool in sensitive domains until there is a better understanding of how the data is used.

Beyond the genuine ChatGPT, the creation of other chatbots using the same model as ChatGPT, but configured to trick victims into disclosing sensitive information or downloading malware, has also been observed.

 

Use Case #5 – Disinformation campaigns

ChatGPT can be used to quickly write very convincing articles and speeches based on fake news. The American startup NewsGuard conducted an experiment on ChatGPT to demonstrate its disinformation potential: out of 100 false narratives submitted to ChatGPT, the tool produced detailed fake articles, essays and TV scripts for 80 of them, including on significant topics such as Covid-19 and Ukraine[ii].

As the war between Russia and Ukraine has (again) highlighted, information and disinformation spread through cyber channels can have significant consequences.

Use Case #6 – Create dark web marketplaces

Cybercriminals have also been observed using ChatGPT to support the creation of dark web marketplaces. CheckPoint has illustrated this phenomenon with some examples[iii]:

  • A cybercriminal post on a dark web forum showing how to use ChatGPT to code a dark web market script without relying on Python or JavaScript, using a third-party API to get up-to-date cryptocurrency (Monero, Bitcoin and Ethereum) prices as part of the marketplace's payment system (see the sketch after this list).
  • Dark web discussion threads linked to fraudulent uses of ChatGPT, such as generating an e-book or a short chapter with ChatGPT and then selling the content online.
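
The price-lookup building block mentioned in the first thread is mundane on its own. Here is a sketch of what it amounts to, assuming CoinGecko's public API (the forum post did not name a specific provider, and reportedly avoided Python, but the logic is language-agnostic):

import requests

COINGECKO_URL = "https://api.coingecko.com/api/v3/simple/price"

def get_prices(coins=("bitcoin", "monero", "ethereum"), currency="usd"):
    """Return current spot prices, e.g. {"bitcoin": {"usd": 27000.0}, ...}."""
    params = {"ids": ",".join(coins), "vs_currencies": currency}
    resp = requests.get(COINGECKO_URL, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()

print(get_prices())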

 

Figure 7 – Screenshot from CheckPoint: multiple threads in the underground forums on how to use ChatGPT for fraud activity

 

What are the key takeaways?

Even if ChatGPT lacks the necessary level of sophistication in some areas, it can still be a useful tool to facilitate cyberattacks. While it is an obvious support tool mostly for script kiddies and inexperienced actors, ChatGPT, like any AI tool, can be a facilitator for any type of hacker, whether to conceive a malware end to end, to accelerate malicious actions such as phishing, or to increase the sophistication level of cyberattacks.

With the release of GPT-4, OpenAI has made efforts to counter inappropriate requests; however, ChatGPT still raises serious security issues and challenges for business security. It is important to keep in mind that the malicious use cases detailed in the previous section are not merely hypothetical scenarios: malicious use of ChatGPT has already been observed, and it is essential to convey strong cybersecurity messages on the topic:

  • Don’t include sensitive info in queries to #ChatGPT : Avoid personal/sensitive information sharing while using ChatGPT
  • Stay informed and vigilant: AI-related topics are evolving quickly, it is central to stay put regarding tools evolution (e.g. release of Chat GPT 4.0), and new security topics that can emerged over time
  • Scams and phishing are likely to become more and more realistic in their crafting: continue raising awareness about this risk and train yourself and your ecosystem
  • Basic cybersecurity practices are still true: have a regular vulnerability management, set up doble authentication, train your teams and raise awareness…
  • ChatGPT opening the door to the possibility of creating realistic fake content, it is central to stay informed about tooling initiatives aiming at detecting machine-written text such as GPT Zero, a tool developed by Princeton student (Note: OpenAI is also working on a tool to detect machine-written text, but is for now far from being perfect since it detect machine-written text only one in four times)

Reading of the Month

CERT-EU: RUSSIA'S WAR ON UKRAINE: ONE YEAR OF CYBER OPERATIONS

https://cert.europa.eu/static/MEMO/2023/TLP-CLEAR-CERT-EU-1YUA-CyberOps.pdf

[i] https://cdn.openai.com/papers/gpt-4.pdf

[ii] https://www.newsguardtech.com/misinformation-monitor/jan-2023/

[iii] https://research.checkpoint.com/2023/opwnai-cybercriminals-starting-to-use-chatgpt/
