Meta begins training its AI bots on your European data this week – you can object!
How to prevent your public data and metadata from being used, and why it matters

This week, Meta announced that it is beginning to feed all public data and metadata from its multiple platforms in Europe (including Facebook) into the training of its Artificial Intelligence (AI) models. This follows OpenAI and others in mass-collecting personal information worldwide.
But you can object and opt out. Taking this choice seriously matters because once your data is fed into AI systems, it is effectively impossible to remove.
Here’s what you need to know about it, and what to do this week.
What does this mean, and for whom?
Meta has already been training its AI models with people’s data available on its platforms worldwide. So far, this has excluded Europe due to the EU’s more robust data protections through the General Data Protection Regulation (GDPR).
With the launch of Meta AI in Europe in March 2025, one of the few regions with strong data protection laws, including limits on the use of people’s metadata, is now within the reach of AI data collection.
How to opt out and object to this use of data for AI?
Fortunately, Meta provides the option to opt out of this AI program. Notices will be sent as an in-app notification and by email. As of writing, Nym’s editors have yet to receive notifications on our Europe-based Meta apps. Keep in mind that to opt out, you will need to complete a separate objection form.
You can read Meta’s public announcement here.
Important: Meta has not declared a deadline, and failing to object in time may result in the irretrievable integration of your personal data into Meta’s AI systems – see below for how.
What kind of data will be used?
Meta’s AI training project includes “public data” shared by adults on apps such as Facebook, Instagram, and Meta AI. This includes:
- Comments
- Posts
- Questions and queries through Meta AI
Keep in mind that vast amounts of people’s metadata about their interactions, likes, associations, preferences, and behaviors will likely also be used. Metadata is collected globally by data brokers and Big Tech companies, including Google and OpenAI, to develop individual profiles for marketing, commercial, and surveillance purposes.
What kind of data will not be used by Meta?
The content of private messages, Meta claims, will not be used in the training of its AI models. This is because, under the GDPR in Europe, encrypted content is legally protected.
Some communication platforms on Meta services do use end-to-end (e2e) encryption. This makes the content of your communications on platforms like WhatsApp and Facebook Messenger accessible only to you and your correspondent. No third party, not even the app developers themselves, can view the content of your messages. Unfortunately, e2e encryption doesn’t protect us from advanced surveillance techniques.
As Nym has long advocated, end-to-end encryption is limited insofar as it does not protect the metadata of communications. Information about who you communicate with, when, how often, and from where remains visible to, and can be collected by, developers and anyone capable of surveilling the network.
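To make this concrete, here is a minimal, purely illustrative Python sketch of a message envelope. The names and the toy XOR cipher are our own assumptions standing in for a real e2e protocol: even though the content is unreadable without the key, the metadata (who, to whom, when) travels in the clear.

```python
import os

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR cipher, standing in for real end-to-end encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Shared secret known only to the two endpoints.
key = os.urandom(32)

envelope = {
    # Metadata: readable by the server and any network observer.
    "sender": "alice",
    "recipient": "bob",
    "timestamp": "2025-04-14T09:30:00Z",
    # Content: opaque without the key.
    "ciphertext": xor_encrypt(b"meet me at noon", key),
}

# A network observer cannot read the content...
assert envelope["ciphertext"] != b"meet me at noon"
# ...but the metadata is plaintext: who talked to whom, and when.
print({k: v for k, v in envelope.items() if k != "ciphertext"})

# Only a key holder can recover the message itself.
assert xor_encrypt(envelope["ciphertext"], key) == b"meet me at noon"
```

This is why mixnets like Nym's focus on hiding the metadata itself, not just the content.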

Listen to Nym’s Chief Scientist, Claudia Diaz, explain the importance of metadata.
The unseen consequence
What may not be clear to many people in Europe who automatically accept this proposal is that doing so would undermine a right they possess under the GDPR, namely, the “right to be forgotten.” This endows people with the right to request that their personal information be removed from the public web. Once personal data is integrated into AI systems, extracting it effectively becomes impossible.
“It’s good that Meta is providing an opt-out which they aren’t necessarily offering elsewhere in the world,” Claudia Diaz noted. “Unfortunately, as we know, most people will simply go along with the default. In this case, once the data has been fed to the models there won't be any way to pull it back if people change their mind down the line.”
As Chiara Castro notes in her reporting:
“It's crucial to understand that once fed into an LLM database, you will be completely losing control over your data, as these systems make it very hard (if not impossible) to exercise the GDPR’s right to be forgotten.”