Elon Musk’s X targeted with nine privacy complaints after grabbing EU users’ data for training Grok



X, the social media platform owned by Elon Musk, has been targeted with a series of privacy complaints after it helped itself to the data of users in the European Union for training AI models without asking people’s consent.

Late last month an eagle-eyed social media user spotted a setting indicating that X had quietly begun processing the post data of regional users to train its Grok AI chatbot. The revelation led to an expression of “surprise” from the Irish Data Protection Commission (DPC), the watchdog that leads on oversight of X’s compliance with the bloc’s General Data Protection Regulation (GDPR).

The GDPR, which can sanction confirmed infringements with fines of up to 4% of global annual turnover, requires all uses of personal data to have a valid legal basis. The nine complaints against X, which have been filed with data protection authorities in Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Poland and Spain, accuse it of failing this step by processing Europeans’ posts to train AI without obtaining their consent.

Commenting in a statement, Max Schrems, chairman of privacy rights nonprofit noyb which is supporting the complaints, said: “We have seen countless instances of inefficient and partial enforcement by the DPC in the past years. We want to ensure that Twitter fully complies with EU law, which — at a bare minimum — requires to ask users for consent in this case.”

The DPC has already taken some action over X’s processing for AI model training, instigating legal action in the Irish High Court to seek an injunction forcing it to stop using the data. But noyb contends that the DPC’s actions thus far are insufficient, pointing out that there’s no way for X users to get the company to delete “already ingested data.” In response, noyb has filed GDPR complaints in Ireland and eight other countries.

The complaints argue that X lacks a valid legal basis for using the data of some 60 million people in the EU to train AI models. The platform appears to be relying on the legal basis known as “legitimate interest” for the AI-related processing, but privacy experts say this kind of use requires people’s consent.

“Companies that interact directly with users simply need to show them a yes/no prompt before using their data. They do this regularly for lots of other things, so it would definitely be possible for AI training as well,” suggested Schrems.

In June, Meta paused a similar plan to process user data for training AIs after noyb backed some GDPR complaints and regulators stepped in.

But X’s approach of quietly helping itself to user data for AI training without even notifying people appears to have allowed it to fly under the radar for several weeks.

According to the DPC, X was processing Europeans’ data for AI model training between May 7 and August 1.

Users of X did gain the ability to opt out of the processing via a setting added to the web version of the platform — seemingly in late July. But there was no way to block the processing prior to that. And of course it’s tricky to opt out of your data being used for AI training if you don’t even know it’s happening in the first place.

This is important because the GDPR is explicitly intended to protect Europeans from unexpected uses of their information which could have ramifications for their rights and freedoms.

In arguing against X’s choice of legal basis, noyb points to a judgment handed down by Europe’s top court last summer in a case concerning a competition complaint over Meta’s use of people’s data for ad targeting. There, the judges ruled that a legitimate interest legal basis was not valid for that use case and that user consent should be obtained.

Noyb also points out that providers of generative AI systems typically claim they’re unable to comply with other core GDPR requirements, such as the right to be forgotten or the right to obtain a copy of your personal data. Such concerns feature in other outstanding GDPR complaints against OpenAI’s ChatGPT.


