Meta is preparing tools to identify images generated by artificial intelligence (AI). Nick Clegg, the company's president of global affairs, announced the news in a post published this Tuesday (6).
All images generated by Meta's own AI already carry invisible watermarks and metadata indicating that they were created by generative models. Content produced by competing services, however, does not include these markers.
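To make the idea concrete, the sketch below shows one simple way to check an image file for embedded provenance markers. It is a minimal illustration, not Meta's actual mechanism: the marker strings (the IPTC "DigitalSourceType" value for generative media and a C2PA manifest hint) and the file name are assumptions drawn from published provenance standards.

```python
# Minimal sketch: scan an image file's raw bytes for common AI-provenance
# markers embedded in its metadata. The marker strings below are assumptions
# based on public provenance standards (IPTC, C2PA), not Meta's confirmed
# implementation; stripped or re-encoded files may show no markers at all.

from pathlib import Path

# Hypothetical marker strings to look for inside XMP/IPTC/C2PA metadata.
PROVENANCE_MARKERS = [
    b"trainedalgorithmicmedia",   # IPTC DigitalSourceType value for generative AI
    b"digitalsourcetype",         # generic IPTC field name
    b"c2pa",                      # Content Provenance and Authenticity manifest hint
]


def find_ai_markers(path: str) -> list[str]:
    """Return any known provenance marker strings found in the file's bytes."""
    data = Path(path).read_bytes().lower()
    return [m.decode() for m in PROVENANCE_MARKERS if m in data]


if __name__ == "__main__":
    hits = find_ai_markers("example.jpg")  # hypothetical file name
    if hits:
        print("Possible AI-generated image; markers found:", hits)
    else:
        print("No known provenance markers found (metadata may have been stripped).")
```

A byte-level search like this is deliberately crude; real detectors parse the metadata segments properly and can also look for invisible watermarks embedded in the pixels themselves, which survive metadata stripping.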
According to the executive, Meta is developing tools and markers to identify AI-generated images created with other companies' generators, such as those from Google, OpenAI, Microsoft and Adobe.
“As the difference between human and synthetic content becomes blurred, people want to know where the line is,” said Clegg. “People often encounter AI-generated content for the first time, and our users have told us they appreciate the transparency around this technology,” he highlighted.
For that reason, Clegg argues it is important to flag when an image was produced by a generative model.
Greater transparency helps Meta curb the spread of harmful AI-generated content, though the company acknowledges the limits of detection. “People and organizations that want to actively deceive others with AI-generated content will look for ways to bypass detection mechanisms,” Clegg said.
On Meta's social networks — Instagram, Facebook and Threads — all posts with AI-generated images will be identified with a specific tag.
According to Clegg, the capability is still under development and the identifiers will be rolled out over the next few months. The executive stressed that moving quickly is important to preserve the integrity of the elections taking place this year in several countries.
For now, however, Meta's tool will be limited to images: the identifiers will not yet be able to detect AI-generated audio or video content.