The Irish Data Protection Commission (DPC) and the UK's Information Commissioner's Office (ICO) have both pushed back against Meta's plan to use public posts on Facebook and Instagram to train large language models (LLMs).
The DPC, acting on behalf of several European data protection authorities, asked Meta to delay the training; the ICO asked Meta to pause and reconsider its plans.
Meta said it was "disappointed" by the requests, calling them a setback for European innovation, though we would have thought the setback was mainly to Meta's own AI plans.
Both regulators welcomed Meta's pause and intend to keep working with the company and other AI developers to ensure users' rights are protected.
The dispute began last month, when Meta announced it would train LLMs on user data. The privacy rights group noyb ("none of your business") filed complaints with authorities across the EU, piling pressure on Ireland's DPC.
According to noyb, Meta's plans breached "at least ten" GDPR provisions, including the requirement that users opt in to the use of their data rather than having to opt out.
In practice, users could barely opt out at all: the only option was a well-hidden form on Instagram, and Meta reserved the right to reject any objection.
Meta argued that its "legitimate interest" in the data outweighed users' data protection rights; after all, it stood to make piles of cash from AI. The company routinely invokes legitimate interest to defend its data practices, and European regulators routinely disagree; indeed, it is hard to see how anyone other than a large multinational could agree.
On its website, Meta maintains that relying on "legitimate interest" is the most appropriate way to process public data for AI training while respecting people's rights.
The implication is that Meta doubts an opt-in, permission-based approach would yield enough data for AI training.
noyb welcomed the regulators' pushback, but its chair, Max Schrems, says the group will be keeping a close eye on what happens next:
"So far, there has been no official change to the Meta privacy policy that would make this commitment legally binding. The cases we have filed are ongoing and will require an official decision."