
Meta’s Oversight Board probes explicit AI-generated images posted on Instagram and Facebook



The Oversight Board, Meta’s semi-independent policy council, is turning its attention to how the company’s social platforms handle explicit, AI-generated images. On Tuesday, it announced investigations into two separate cases concerning how Instagram in India and Facebook in the U.S. handled AI-generated images of public figures after Meta’s systems fell short in detecting and responding to the explicit content.

In both cases, the sites have now taken down the media. The board is not naming the individuals targeted by the AI images “to avoid gender-based harassment,” according to an email Meta sent to TechCrunch.

The board takes up cases concerning Meta’s moderation decisions; users must first appeal a moderation decision to Meta before approaching the Oversight Board. The board is due to publish its full findings and conclusions at a later date.

The cases

Describing the first case, the board said that a user reported an AI-generated nude of a public figure from India on Instagram as pornography. The image was posted by an account that exclusively posts images of Indian women created by AI, and the majority of users who react to these images are based in India.

Meta failed to take down the image after the first report, and the ticket for the report was closed automatically after 48 hours when the company didn’t review it further. When the original complainant appealed the decision, the report was again closed automatically without any oversight from Meta. In other words, after two reports, the explicit AI-generated image remained on Instagram.

The user then appealed to the board. Only at that point did the company act, removing the image for breaching its community standards on bullying and harassment.

The second case relates to Facebook, where a user posted an explicit, AI-generated image resembling a U.S. public figure in a Group focused on AI creations. In this case, the social network took the image down because it had been posted by another user earlier, and Meta had added it to a Media Matching Service Bank under the category “derogatory sexualized photoshop or drawings.”
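Meta has said little publicly about how these banks work internally, but the general technique, hash-based media matching, is well established: when moderators remove an image, the platform stores a fingerprint of it and automatically takes down near-identical uploads that match. Below is a minimal sketch of that idea in Python using the open-source imagehash library; the perceptual-hash approach and the distance threshold are illustrative assumptions, not Meta’s actual, proprietary implementation.

```python
# A sketch of hash-based media matching, the general technique behind systems
# like Meta's Media Matching Service Banks. The imagehash-based approach and
# the distance threshold below are illustrative assumptions, not Meta's
# actual implementation.
from PIL import Image  # pip install Pillow
import imagehash       # pip install ImageHash

# "Bank" of perceptual hashes of previously removed images.
bank: list[imagehash.ImageHash] = []

def add_to_bank(path: str) -> None:
    """Fingerprint a removed image and store it for future matching."""
    bank.append(imagehash.phash(Image.open(path)))

def matches_bank(path: str, max_distance: int = 6) -> bool:
    """Return True if a new upload is a near-duplicate of a banked image.

    Perceptual hashes survive re-encoding and resizing; the Hamming-distance
    threshold trades recall against false positives.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - banked <= max_distance for banked in bank)

# Hypothetical usage: bank an image once moderators remove it, then screen
# new uploads against the bank.
# add_to_bank("removed_image.jpg")
# print(matches_bank("new_upload.jpg"))
```

One consequence of this design is visible in the two cases: matching only helps once a copy of the content has already been reported, reviewed, and banked, which is why the Facebook upload was caught automatically while the first Instagram report slipped through.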

When TechCrunch asked why the board selected a case in which the company successfully took down an explicit AI-generated image, the board said it selects cases “that are emblematic of broader issues across Meta’s platforms.” It added that such cases help the advisory board look at the global effectiveness of Meta’s policies and processes across various topics.

“We know that Meta is quicker and more effective at moderating content in some markets and languages than others. By taking one case from the US and one from India, we want to look at whether Meta is protecting all women globally in a fair way,” Oversight Board Co-Chair Helle Thorning-Schmidt said in a statement.

“The Board believes it’s important to explore whether Meta’s policies and enforcement practices are effective at addressing this problem.”

The problem of deepfake porn and online gender-based violence

Some, though not all, generative AI tools have in recent years expanded to allow users to generate porn. As TechCrunch has previously reported, groups like Unstable Diffusion are trying to monetize AI porn, with murky ethical lines and bias in their data.

In regions like India, deepfakes have become an issue of concern. Last year, a BBC report noted that the number of deepfaked videos of Indian actresses has soared in recent times. Data suggests that women are more commonly the subjects of deepfaked videos.

Earlier this year, India’s Deputy IT Minister Rajeev Chandrasekhar expressed dissatisfaction with tech companies’ approach to countering deepfakes.

“If a platform thinks that they can get away without taking down deepfake videos, or merely maintain a casual approach to it, we have the power to protect our citizens by blocking such platforms,” Chandrasekhar said in a press conference at that time.

While India has mulled bringing specific deepfake-related rules into law, nothing is set in stone yet.

While the country has provisions for reporting online gender-based violence under its laws, experts note that the process can be tedious and that there is often little support. In a study published last year, the Indian advocacy group IT for Change noted that courts in India need robust processes to address online gender-based violence and should not trivialize these cases.

Aparajita Bharti, co-founder of The Quantum Hub, an India-based public policy consulting firm, said that there should be limits on AI models to stop them from creating explicit content that causes harm.

“Generative AI’s main risk is that the volume of such content would increase because it is easy to generate such content and with a high degree of sophistication. Therefore, we need to first prevent the creation of such content by training AI models to limit output in case the intention to harm someone is already clear. We should also introduce default labeling for easy detection as well,” Bharti told TechCrunch over an email.

Devika Malik, a platform policy expert who previously worked in Meta’s South Asia policy team, said that while social networks have policies against non-consensual intimate imagery, enforcement is largely reliant on user reporting.

“This places an unfair onus on the affected user to prove their identity and the lack of consent (as is the case with Meta’s policy). This can get more error-prone when it comes to synthetic media, and to say, the time taken to capture and verify these external signals enables the content to gain harmful traction,” Malik said.

There are currently only a few laws globally that address the production and distribution of porn generated using AI tools. A handful of U.S. states have laws against deepfakes, and the U.K. introduced a law this week to criminalize the creation of sexually explicit AI-generated imagery.

Meta’s response and the next steps

In response to the Oversight Board’s cases, Meta said it took down both pieces of content. However, the social media company didn’t address the fact that it failed to remove the content on Instagram after initial user reports, nor did it say how long the content was up on the platform.

Meta said that it uses a mix of artificial intelligence and human review to detect sexually suggestive content. The social media giant said that it doesn’t recommend this kind of content in places like Instagram Explore or Reels recommendations.

The Oversight Board has sought public comments, with a deadline of April 30, on the harms posed by deepfake porn, contextual information about the proliferation of such content in regions like the U.S. and India, and possible pitfalls of Meta’s approach to detecting AI-generated explicit imagery.

The board will investigate the cases and public comments and post its decision on its site in a few weeks.

These cases indicate that large platforms are still grappling with older moderation processes at a time when AI-powered tools have enabled users to create and distribute different types of content quickly and easily. Companies like Meta are experimenting with tools that use AI for content generation, alongside some efforts to detect such imagery. In April, the company announced that it would apply “Made with AI” badges to deepfakes if it could detect the content using “industry standard AI image indicators” or user disclosures.
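Meta hasn’t specified which indicators its detection relies on, but one widely cited “industry standard” signal is the IPTC digital source type that some generative AI tools embed in an image’s XMP metadata. Below is a minimal sketch of checking for that marker; the regex-based XMP extraction and the file path are illustrative assumptions, not Meta’s pipeline, and the check trivially fails when metadata is stripped, for example by a screenshot or re-encode.

```python
# A minimal sketch of checking one widely cited "industry standard" AI image
# indicator: the IPTC digital source type ("trainedAlgorithmicMedia") that
# some generative AI tools embed in an image's XMP metadata. Illustrative
# only; Meta hasn't detailed which indicators its detection uses.
import re

# IPTC's controlled-vocabulary term for media created by generative AI.
AI_SOURCE_TYPE = b"trainedAlgorithmicMedia"

def has_ai_indicator(path: str) -> bool:
    """Return True if the file's embedded XMP packet declares an
    AI-generated digital source type."""
    with open(path, "rb") as f:
        data = f.read()
    # XMP metadata is an XML packet embedded in the image file.
    match = re.search(rb"<x:xmpmeta.*?</x:xmpmeta>", data, re.DOTALL)
    return bool(match) and AI_SOURCE_TYPE in match.group(0)

# Hypothetical usage:
# print(has_ai_indicator("upload.jpg"))
```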

Platform policy expert Malik said that labeling is often ineffective because systems for detecting AI-generated imagery are still not reliable.

“Labelling has been shown to have limited impact when it comes to limiting the distribution of harmful content. If we think back to the case of AI-generated images of Taylor Swift, millions of users were directed to those images through X’s own trending topic ‘Taylor Swift AI’. So, people and the platform knew that the content was not authentic, and it was still algorithmically amplified,” Malik noted.

Perpetrators, meanwhile, are constantly finding ways to evade these detection systems and post problematic content on social platforms.


