Meta's Oversight Board probes explicit AI-generated images posted on Instagram and Facebook | TechCrunch


Meta's Oversight Board, the company's semi-independent policy council, is turning its attention to how Meta's social platforms are handling explicit, AI-generated images. On Tuesday, it announced investigations into two separate cases — one involving Instagram in India and the other Facebook in the US — over how the platforms handled AI-generated explicit images of public figures after Meta's systems failed to detect and respond to the content.

In both cases, the platforms have since taken down the media. The board is not naming the individuals targeted by the AI images "to avoid gender-based harassment," according to an email Meta sent to TechCrunch.

The board takes up cases concerning Meta's moderation decisions. Users must first appeal a moderation decision to Meta before approaching the Oversight Board. The board will publish its full findings and conclusions in the future.


Describing the first case, the board said a user reported an AI-generated nude image of a public figure from India on Instagram as pornography. The image was posted by an account that exclusively shares AI-generated images of Indian women, and the majority of users who reacted to the image appeared to be based in India.

Meta failed to take down the image after the first report, and the report ticket was closed automatically after 48 hours because the company did not review the report further. When the original reporter appealed the decision, the ticket was again closed automatically without any review from Meta. In other words, after two reports, the explicit AI-generated image remained on Instagram.

The user eventually appealed to the board. Only at that point did the company act on the objectionable content, removing the image for violating its community standards on bullying and harassment.

The other case concerns Facebook, where a user posted an explicit, AI-generated image resembling a US public figure in a group focused on AI creations. In this instance, the social network took down the image because it had previously been posted by another user, and Meta had added it to a media-matching service bank under the category "derogatory sexualized photoshop or drawings."

When TechCrunch asked why the board chose a case where the company successfully took down an explicit AI-generated image, the board said it selects cases "that are emblematic of broader issues across Meta's platforms." It added that these cases help the board look at the global effectiveness of Meta's policies and processes across a range of topics.

"We know that Meta is quicker and more effective at moderating content in some markets and languages than others. By taking one case from the US and one from India, we want to look at whether Meta is protecting all women globally in a fair way," Oversight Board co-chair Helle Thorning-Schmidt said in a statement.

“The board believes it is important to know whether Meta's policies and enforcement practices are effective in addressing this issue.”

Deepfake porn and the problem of online gender-based violence

Some, though not all, generative AI tools have in recent years expanded to allow users to generate porn. As TechCrunch previously reported, groups like Unstable Diffusion are trying to monetize AI porn with murky ethical lines and bias in data.

Deepfakes have also become a concern in regions such as India. Last year, a BBC report noted that the number of deepfake videos of Indian actresses has surged recently. Data suggests that women are more commonly the targets of deepfake videos.

Earlier this year, Deputy IT Minister Rajeev Chandrasekhar expressed dissatisfaction with tech companies' approach to countering deepfakes.

"If a platform thinks that they can get away without taking down deepfake videos, or merely maintain a casual approach to it, we have the power to protect our citizens by blocking such platforms," Chandrasekhar said at a press conference at the time.

Although India has considered enacting deepfake-specific laws, nothing has been put in place so far.

While the country has provisions for reporting online gender-based violence under the law, experts note that the process can be tedious and there is often little support. In a study published last year, the Indian advocacy group IT for Change noted that courts in India need robust processes to address online gender-based violence and should not trivialize these cases.

Aparajita Bharti, co-founder of The Quantum Hub, an India-based public policy consulting firm, said there should be limits on AI models to stop them from creating explicitly harmful content.

"The main risk of generative AI is that the volume of such content will increase, because it is easy to generate such content and with a high degree of sophistication. Therefore, we need to first prevent the creation of such content by training AI models to limit output where the intent to harm someone is already clear. We should also introduce default labeling for easy detection," Bharti told TechCrunch over an email.

There are currently only a few laws globally that address the production and distribution of porn generated using AI tools. A handful of US states have laws against deepfakes, and the UK introduced a law this week that criminalizes the creation of sexually explicit AI-generated imagery.

Meta's response and next steps

In response to the Oversight Board's concerns, Meta said it removed both pieces of content. However, the social media company did not address the fact that it failed to remove the content on Instagram after users' initial reports, or say how long the content remained on the platform.

Meta said it uses a mix of artificial intelligence and human review to detect sexually suggestive content. The social media giant said it does not recommend this kind of content in places like Instagram Explore or Reels recommendations.

The Oversight Board has invited public comments, with a deadline of April 30, on the harms of deepfake porn, contextual information about the proliferation of such content in regions like the US and India, and the possible pitfalls of Meta's approach to detecting AI-generated explicit imagery.

The board will review the cases and public comments and post its decision on its site in the coming weeks.

These cases indicate that large platforms are still grappling with older moderation processes while AI-powered tools have enabled users to create and distribute different kinds of content quickly and easily. Companies like Meta are experimenting with tools that use AI for content generation, with some efforts to detect such imagery. In April, the company announced that it would apply "Made with AI" badges to deepfakes if it could detect the content using "industry standard AI image indicators" or user disclosures.

However, bad actors are constantly finding ways to evade these detection systems and post problematic content on social platforms.
