Oversight Board wants Meta to refine its policies around AI-generated explicit images

Following investigations into how Meta handles AI-generated explicit images, the company’s semi-independent observer body, the Oversight Board, is urging it to refine its policies around such images. The Board wants Meta to change the terminology it uses from “derogatory” to “non-consensual,” and to move its policies on such images from the “Bullying and Harassment” section to the “Sexual Exploitation Community Standards” section.

Right now, Meta’s policies around AI-generated explicit images branch out from a “derogatory sexualized photoshop” rule in its Bullying and Harassment section. The Board also urged Meta to replace the word “photoshop” with a generalized term for manipulated media.

Additionally, Meta currently prohibits non-consensual imagery only if it is “non-commercial or produced in a private setting.” The Board suggested that this clause shouldn’t be a requirement for removing or banning images that are AI-generated or manipulated without consent.

These recommendations come in the wake of two high-profile cases where explicit, AI-generated images of public figures posted on Instagram and Facebook landed Meta in hot water.

One of these cases involved an AI-generated nude image of an Indian public figure that was posted on Instagram. Several users reported the image, but Meta did not take it down and instead closed the ticket within 48 hours with no further review. Users appealed that decision, but the ticket was closed again. The company acted only after the Oversight Board took up the case; it then removed the content and banned the account.

The other AI-generated image resembled a public figure from the U.S. and was posted on Facebook. Meta already had the image in its Media Matching Service (MMS) repository (a bank of images that violate its terms of service, which it uses to detect re-uploads of those images) due to media reports, and it quickly removed the picture when another user uploaded it to Facebook.
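The article doesn’t detail how MMS matches images, but the standard technique behind this kind of bank is perceptual hashing: known-violating images are hashed, and new uploads are flagged when their hashes fall within a small Hamming distance of a banked one. Below is a minimal, illustrative sketch of that idea in Python. It is not Meta’s actual implementation; the function names, the distance threshold, and the use of the open-source `imagehash` library are all assumptions for illustration.

```python
# Illustrative sketch of a media-matching bank using perceptual hashes.
# NOT Meta's MMS implementation; names and threshold are hypothetical.

from PIL import Image  # pip install Pillow
import imagehash       # pip install ImageHash

# Hypothetical bank of perceptual hashes for images already ruled violating.
violating_hashes: set[imagehash.ImageHash] = set()

def add_to_bank(path: str) -> None:
    """Hash a confirmed-violating image and store the hash in the bank."""
    violating_hashes.add(imagehash.phash(Image.open(path)))

def matches_bank(path: str, max_distance: int = 8) -> bool:
    """Return True if an upload is a near-duplicate of any banked image.

    Perceptual hashes of resized, re-encoded, or lightly edited copies of
    the same picture differ only by a small Hamming distance, which is what
    the subtraction operator on ImageHash objects computes.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - banked <= max_distance for banked in violating_hashes)
```

Because perceptual hashes shift only slightly under resizing or re-encoding, a bank like this catches re-uploads of a known image automatically, which is consistent with how quickly the picture of the U.S. public figure was removed once it was already in the repository.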

Notably, Meta added the image of the Indian public figure to the MMS bank only after the Oversight Board nudged it to. The company apparently told the Board that the repository didn’t have the image before then because there had been no media reports around the issue.

“This is worrying because many victims of deepfake intimate images are not in the public eye and are either forced to accept the spread of their non-consensual depictions or report every instance,” the Board said in its note.

Breakthrough Trust, an Indian organization that campaigns to reduce online gender-based violence, noted that these issues and Meta’s policies have cultural implications. In comments submitted to the Oversight Board, Breakthrough said non-consensual imagery is often trivialized as an identity theft issue rather than gender-based violence.

“Victims often face secondary victimization while reporting such cases in police stations/courts (“why did you put your picture out etc.” even when it’s not their pictures such as deepfakes). Once on the internet, the picture goes beyond the source platform very fast, and merely taking it down on the source platform is not enough because it quickly spreads to other platforms,” Barsha Chakraborty, the head of media at the organization, wrote to the Oversight Board.

On a call, Chakraborty told TechCrunch that users often don’t know their reports have been automatically marked as “resolved” within 48 hours, and said Meta shouldn’t apply the same timeline to every case. She also suggested that the company work on building more user awareness around such issues.

Devika Malik, a platform policy expert who previously worked in Meta’s South Asia policy team, told TechCrunch earlier this year that platforms largely rely on user reporting for taking down non-consensual imagery, which might not be a reliable approach when tackling AI-generated media.

“This places an unfair onus on the affected user to prove their identity and the lack of consent (as is the case with Meta’s policy). This can get more error-prone when it comes to synthetic media, and, needless to say, the time taken to capture and verify these external signals enables the content to gain harmful traction,” Malik said.

Aparajita Bharti, founding partner at the Delhi-based think tank The Quantum Hub (TQH), said that Meta should allow users to provide more context when reporting content, as they might not be aware of the different categories of rule violations under Meta’s policy.

“We hope that Meta goes over and above the final ruling [of the Oversight Board] to enable flexible and user-focused channels to report content of this nature,” she said.

“We acknowledge that users cannot be expected to have a perfect understanding of the nuanced difference between different heads of reporting, and advocated for systems that prevent real issues from falling through the cracks on account of technicalities of Meta content moderation policies.”


