
LinkedIn Halts AI Data Processing in UK Amid Privacy Concerns Raised by ICO



Sep 21, 2024 | Ravie Lakshmanan | Privacy / Artificial Intelligence

The U.K. Information Commissioner’s Office (ICO) has confirmed that professional social networking platform LinkedIn has suspended processing users’ data in the country to train its artificial intelligence (AI) models.

“We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its U.K. users,” Stephen Almond, executive director of regulatory risk, said.

“We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO.”

Almond also said the ICO intends to keep a close eye on companies that offer generative AI capabilities, including Microsoft and LinkedIn, to ensure they have adequate safeguards in place and take steps to protect the information rights of U.K. users.


The development comes after the Microsoft-owned company admitted to training its own AI on users’ data without seeking their explicit consent as part of an updated privacy policy that went into effect on September 18, 2024, 404 Media reported.

“At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide the setting to members in those regions until further notice,” LinkedIn said.

The company also noted in a separate FAQ that it seeks to “minimize personal data in the data sets used to train the models, including by using privacy enhancing technologies to redact or remove personal data from the training dataset.”

Users who reside outside Europe can opt out of the practice by heading to the “Data privacy” section in account settings and turning off the “Data for Generative AI Improvement” setting.

“Opting out means that LinkedIn and its affiliates won’t use your personal data or content on LinkedIn to train models going forward, but does not affect training that has already taken place,” LinkedIn noted.

LinkedIn’s decision to quietly opt all users in to AI model training comes only days after Meta acknowledged that it has scraped non-private user data for similar purposes going as far back as 2007. The social media company has since resumed training on U.K. users’ data.

Last August, Zoom abandoned its plans to use customer content for AI model training after changes to the app’s terms of service raised concerns over how that data could be used.

The latest development underscores the growing scrutiny of AI, specifically around how individuals’ data and content can be used to train large language models.


It also comes as the U.S. Federal Trade Commission (FTC) published a report finding that large social media and video streaming platforms have engaged in vast surveillance of users, with lax privacy controls and inadequate safeguards for kids and teens.

The users’ personal information is then often combined with data gleaned from artificial intelligence, tracking pixels, and third-party data brokers to create more complete consumer profiles, which are monetized by selling them to other willing buyers.

“The companies collected and could indefinitely retain troves of data, including information from data brokers, and about both users and non-users of their platforms,” the FTC said, adding that their data collection, minimization, and retention practices were “woefully inadequate.”

“Many companies engaged in broad data sharing that raises serious concerns regarding the adequacy of the companies’ data handling controls and oversight. Some companies did not delete all user data in response to user deletion requests.”
