Germany backs down as Meta moves forward with AI training on personal data
Posted on May 27, 2025

Meta has officially started training its AI models using data from Instagram and Facebook users in Europe, a decision that has sparked controversy among privacy experts and GDPR specialists across the continent. In response, Germany’s Hamburg-based data protection authority (DPA) initiated—but then swiftly abandoned—a legal urgency procedure aimed at halting the tech giant’s plans within national borders.
In a statement to Euractiv, the DPA explained its decision, saying: “Given the forthcoming EU-wide evaluation of Meta's practices, an isolated urgency procedure for Germany is not a suitable path.” The regulator emphasized that a German-only ban would have little practical effect and could undermine broader regulatory cohesion across the EU.
A fragmented EU response
This retreat comes after a Cologne regional court ruled last week in favor of Meta, allowing the company to continue mining user data for AI development. Also weighing on the DPA’s decision was the absence of objections from Ireland’s Data Protection Commission, Meta’s lead supervisory authority under the GDPR, a silence that signaled any meaningful opposition would have to come at the pan-European level.
The implications are significant. In the absence of a coordinated EU enforcement strategy, tech giants like Meta can maneuver within regulatory gaps. “This is just the very beginning of the story,” said Dr. Ilia Kolochenko, CEO at ImmuniWeb and a Fellow at the European Law Institute (ELI). “Whilst it is quite unlikely that EU authorities will impose a flat ban on AI training with Personally Identifiable Information (PII) of European residents, some important restrictions may come into play.”
The GDPR dilemma
At the heart of the controversy is how Meta collects and processes personal data for AI purposes. Currently, users are offered only an opt-out mechanism, which many experts argue falls short of the GDPR’s standard of valid consent: freely given, specific, informed, and unambiguous.
“First, the current opt-out method used by Meta is likely to be replaced by an opt-in method, explicitly asking users whether they wish their data to be used for AI training,” Kolochenko explained. “Second, children may be excluded from AI training programs altogether or strict parental consent will be required.”
Beyond consent, technical challenges loom large—especially when it comes to correcting or deleting data embedded in AI models. “Once erroneous PII is ingested by a Large Language Model (LLM), it is extremely difficult to remove or correct it,” he added. “This is probably where Meta will have the biggest challenge in complying with GDPR.”
A wider legal minefield
While GDPR takes center stage, other European laws will also come into play. Meta could face increased obligations related to hate speech, illegal content, and intellectual property. “Meta will likely have to deal with sophisticated pre-screening of users' posts that contain illicit materials, calls to violence or discrimination, as well as content that infringes copyright or trademarks,” Kolochenko noted.
Meta’s move to leverage user data for AI training is a watershed moment, not just for data privacy but for the evolving role of Big Tech in Europe’s digital ecosystem. For regulators, the case underscores the urgent need for coordinated oversight, while for businesses, it raises the stakes on ethical AI and compliance strategies.
As Meta charges ahead, attention turns to EU regulators. Whether they can mount a unified response, or once again succumb to fragmented enforcement, may determine how far AI development can go when fueled by personal data.
Sophie Vanheeghe Stevenard
© REUTERS/Gonzalo Fuentes