Meta on Friday said it’s delaying its efforts to train the company’s large language models (LLMs) using public content shared by adult users on Facebook and Instagram in the European Union following a request from the Irish Data Protection Commission (DPC).
The company expressed disappointment at having to put its AI plans on pause, stating it had taken into account feedback from regulators and data protection authorities in the region.
At issue is Meta’s plan to use personal data to train its artificial intelligence (AI) models without seeking users’ explicit consent, instead relying on the legal basis of ‘Legitimate Interests’ for processing first and third-party data in the region.
The changes were expected to come into effect on June 26, ahead of which the company said users could opt out of having their data used by submitting a request "if they wish." Meta already uses user-generated content to train its AI in other markets, such as the U.S.
“This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe,” Stefano Fratta, global engagement director of Meta privacy policy, said.
“We remain highly confident that our approach complies with European laws and regulations. AI training is not unique to our services, and we’re more transparent than many of our industry counterparts.”
The company also said it cannot bring Meta AI to Europe without the ability to train its models on locally collected information that captures the region's diverse languages, geography, and cultural references, noting that users would otherwise receive a "second-rate experience."
Besides working with the DPC to bring the AI tool to Europe, Meta noted the delay will help it address requests it received from the U.K. regulator, the Information Commissioner's Office (ICO), prior to commencing the training.
“In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset,” Stephen Almond, executive director of regulatory risk at the ICO, said.
“We will continue to monitor major developers of generative AI, including Meta, to review the safeguards they have put in place and ensure the information rights of U.K. users are protected.”
The development comes as Austrian non-profit noyb (none of your business) filed complaints in 11 European countries alleging that Meta violates the General Data Protection Regulation (GDPR) by collecting users' data to develop unspecified AI technologies and share it with any third party.
“Meta is basically saying that it can use ‘any data from any source for any purpose and make it available to anyone in the world,’ as long as it’s done via ‘AI technology,'” noyb’s founder Max Schrems said. “This is clearly the opposite of GDPR compliance.”
“Meta doesn’t say what it will use the data for, so it could either be a simple chatbot, extremely aggressive personalized advertising or even a killer drone. Meta also says that user data can be made available to any ‘third-party’ – which means anyone in the world.”
Noyb also criticized Meta for making disingenuous claims and framing the delay as "collective punishment," pointing out that the GDPR permits personal data to be processed as long as users give their informed opt-in consent.
"Meta could therefore roll out AI technology in Europe, if it would just bother to ask people to agree, but it seems Meta is doing everything to avoid ever asking for opt-in consent for any processing," it said.