Leaking Minds: How Your Data Could Slip Through AI Chatbots 

OpenAI’s flagship ChatGPT was over the news 18 months ago for accidentally leaking a CEO’s personal information after being asked to repeat a word forever. This is among the many  exploits that have been discovered in recent months.  

Example of the PII Leaking exploit found in ChatGPT in December 2023

Figure 1 : Example of the Leaking exploit found in ChatGPT in December 

 

Scandals like these highlight a deeper truth: the core architecture of Large Language Models (LLMs) such as GPT and Google’s Gemini is inherently prone to data leakage. This leakage can involve Personally Identifiable Information (PII) or confidential company data. The techniques used by attackers will continue to evolve in response to improved defenses from tech giants, the underlying vectors remain unchanged. 

Today, three main vectors exist through which PIIs (Personally Identifiable Information) or sensitive data might be exposed to such attacks:  

  • The use of publicly available web content in training datasets 
  • The continuous re-training of models using user prompts and conversations 
  • The introduction of persistent memory features in chatbots 
     

LLM Pre-Training Data Leakage  

 

Most models available right now are transformer models, specifically GPTs or Generative Pre-Trained Transformers. The Pre-Trained in GPT refers to the initial training phase, where the model is exposed to a massive, diverse corpus of data unrelated to its final application. This helps the model learn foundational knowledge such as grammar, vocabulary, and factual information. When GPTs were first released, companies were transparent on where this training data came from, but currently the largest models on the web have datasets that are too large and too diverse and are often kept confidential.  

A major source of the data used in GPT pre-training are online forums such as Reddit (for Google’s models), Stack Overflow, and other social media platforms. This poses a significant risk since these social media forums often contain PIIs . Although companies claim to filter out PII during training, there have been many instances where LLMs have leaked personal data from their pre-training data corpus to users after some prompt engineering and jail breaking. This danger will become ever more present as companies race to gather more data through web scraping to train larger and more sophisticated models.  

Known leaks of this type are mostly uncovered by researchers who develop more and more creative methods to bypass the defenses of chatbots. The example mentioned earlier is one such case. By prompting the chatbot to repeat forever a word, it “forgets” its task and begins to exhibit a behavior known as memorization. In this state, the chatbot regurgitates data from its training set. While this attack has been patched, new prompt techniques continue to be found to change the behavior of the chatbot.

 

User Input Re-Usage and Re-Training  

 

User Inputs re-training is the process of continuously improving the LLM by training it on user inputs. This can be done in several ways, the most popular of which is RLHF or Reinforcement Learning from Human Feedback.   

The feedback button used for RHLF in chatGPTFigure 3 : The feedback buttons used for RLHF in ChatGPT 

 

This method is built on top of collecting user feedback on the LLM’s output. Many users of LLMs might have seen the “Thumbs Up” or “Thumbs Down” buttons in ChatGPT or other LLM platforms.  

These buttons collect feedback from the user and use the feedback to re-train the model. If the user signifies the response as positive, the platform takes the user input / model output pair and encourages the model to replicate the behavior. Similarly, if the user indicates that the model performed poorly, the user input / model output pair will be used to discourage the model from replicating the behavior.  

However, continuous re-training can also occur without any user interaction. Models may occasionally use user input / model output to re-train in seemingly random ways. The lack of transparency from model providers and developers makes it difficult to pinpoint exactly how this happens. However, many users across the internet have reported models gaining new knowledge through re-training from other users’ chats all the way back to 2022. For example, OpenAI’s GPT 3.5 should not be able to know any information after Sept 2021, its cut-off date. Yet, asking it about recent information such as Elon Musk’s new position as CEO of Twitter (now X) will provide you with a different reality as it confidently answers your question with accuracy.   

Essentially, what this means for end-users is that their chats are not kept confidential at all and any information given to the LLM through internal documents, meeting minutes or development codebases may show up in the chats of other users thus leaking it. This poses significant privacy risks not only for individuals but also for companies, many of which have already taken action, like Samsung. In April 2023, Samsung banned the use of ChatGPT and similar chatbots after a group of employees used the tool for coding assistance and summarizing meeting notes. Although Samsung has no concrete evidence that the data was used by OpenAI, the potential risk was deemed too high to allow employees to continue using the tool. This is a classic example of Shadow AI, where unauthorized use of AI tools leads to the possible leakage of confidential or proprietary information. 

Many companies globally are waiting for stricter AI and data regulations before using LLMs for commercial use. We are seeing certain industries such as consulting open up but at an incredibly slow pace. Other companies, however, are tightening their control over internal LLM use to avoid leaking confidential data and client information.  

 

Memory Persistence 

 

While the two precedent risks have been recognized to exist for a few years, a new threat has emerged with the introduction of a feature by ChatGPT in September 2024. This feature enables the model to retain long-term memory of user conversations. The idea is to reduce redundancy by allowing the chatbot to remember user preferences, context, and previous interactions, thereby improving the relevance and personalization of responses.  

However, this convenience comes at a significant security cost. Unlike earlier cases, where leaked information was more or less random, persistent memory introduces account-level targeting. Now, attackers could potentially exploit this memory to extract specific details from a particular user’s history, significantly raising the stakes. 

Security researcher Johannes Rehberger demonstrated how this vulnerability could be exploited through a technique known as context poisoning. In his proof-of-concept, he crafted a site with a malicious image containing instructions. Once the targeted chatbot views the URL, its persistent memory is poisoned. This covert instruction allows the chatbot to be manipulated into extracting sensitive information from the victim’s conversation history and transmitting it to an external URL. 

This attack is particularly dangerous because it combines persistence and stealth. Once it infiltrates the chatbot, it remains active indefinitely, continuously exfiltrating user data until the memory is cleaned. At the same time, it is subtle enough to go unnoticed, requiring careful human analysis of the memory to be detected. 

 

LLM Data Privacy and Mitigation  

 

LLM developers often intentionally make it hard to disable re-training since it benefits their LLM development. If your personal information is already out in public, it has probably been scraped and used for pre-training an LLM. Additionally, if you gave ChatGPT or another LLM a confidential document in your prompt (without manually turning re-training OFF), it has most probably been used for re-training.  

Currently, there is no reliable technique that allows an individual to request the deletion of their data once it has been used for model training. Addressing this challenge is the goal of an emerging research area known as Machine Unlearning. This field focuses on developing methods to selectively remove the influence of specific data points from a trained model, thus deleting those data from the memory of the model. The field is evolving rapidly, particularly in response to GDPR regulations that enforce the right to erasure. For this reason, it is important to mitigate and minimize these risks in the future by controlling what data individuals and organizations put out on the internet and what information employees add to their prompts.  

It is vital for many business operations to stay confidential. However, the productivity boost that LLMs add to employee workflows cannot be overlooked. For this reason, we constructed a 3-step framework to ensure that organizations can harness the power of LLMs without losing control over their data.  

 

Choose the most optimal model, environment and configuration  

Ensure that the environment and model you are using are well-secured. Check over the model’s data retention period and the provider’s policy on re-training on user conversations. Ensure that you have “Auto-delete” as ON when available and “Chat History” to OFF.   

At Wavestone we made a tool that compares the top 3 closed-source and open-source models in terms of pricing, data retention period, guard rails, and confidentiality to empower organizations in their AI journey. 

 

Raise employee awareness on best practices when using LLMs  

Ensure that your employees know the danger of providing confidential and client information to LLMs and what they can do to minimize including corporate or personal information in an LLM’s pre-training and re-training data corpus.  

 

Implement a robust AI policy   

Forward-looking companies should implement a robust internal AI policy that specifies:  

  • What information can and can’t be shared with LLMs internally  
  • Monitoring of AI behavior  
  • Limiting their online presence  
  • Anonymization of prompt data  
  • Limiting use to secure AI tools only  

Following these steps, organizations can minimize the digital risk they face by using the latest GenAI tools while also benefiting from their productivity increases.  

 

Moving Forward  

 

Although the data privacy vulnerabilities mentioned in this article impact individuals like you and me, their cause is the LLM developers’ greed for data. This greed produces higher-quality end products but at the cost of data privacy and autonomy.  

New regulations and technologies have come out to combat this issue such as the EU AI Act and OWASP top 10 LLM checklist. However, relying solely on responsible governance is not enough. Individuals and organizations must actively recognize the critical role PIIs play in today’s digital landscape and take proactive steps to protect them. This is especially important as we move toward more agentic AI systems, which autonomously interact with multiple third-party services. Not only will these systems process an increasing amount of personal and sensitive data, but this data will also be transmitted and handled by numerous different services, complicating oversight and control. 

 

References and Further Reading  

 

[1] D. Goodin, “OpenAI says mysterious chat histories resulted from account takeover,” Ars Technica, https://arstechnica.com/security/2024/01/ars-reader-reports-chatgpt-is-sending-him-conversations-from-unrelated-ai-users/ (accessed Jul. 13, 2024). 

[2] M. Nasr et al., “Extracting Training Data from ChatGPT,” not-just-memorization , Nov. 28, 2023. Available: https://not-just-memorization.github.io/extracting-training-data-from-chatgpt.html 

[3] “What Is Confidential Computing? Defined and Explained,” Fortinet. Available: https://www.fortinet.com/resources/cyberglossary/confidential-computing#:~:text=Confidential%20computing%20refers%20to%20cloud 

[4] S. Wilson, “OWASP Top 10 for Large Language Model Applications | OWASP Foundation,” owasp.org, Oct. 18, 2023. Available: https://owasp.org/www-project-top-10-for-large-language-model-applications/ 

[5] “Explaining the Einstein Trust Layer,” Salesforce. Available: https://www.salesforce.com/news/stories/video/explaining-the-einstein-gpt-trust-layer/ 

[6] “Hacker plants false memories in ChatGPT to steal user data in perpetuity” Ars Technica , 24 sept. 2024 Available: https://arstechnica.com/security/2024/09/false-memories-planted-in-chatgpt-give-hacker-persistent-exfiltration-channel/

[7] “Why we’re teaching LLMs to forget things” IBM, 07 Oct 2024 Available: https://research.ibm.com/blog/llm-unlearning

 

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top