Images generated by the DALL-E 3 artificial intelligence will carry an invisible watermark, OpenAI announced this Tuesday (6). Identifiers will be added to the images’ metadata.
The measure complies with the standards of the Coalition for Content Provenance and Authenticity (C2PA), an entity made up of companies dedicated to developing systems that provide context and history for digital media.
Images generated by DALL-E 3 will have identifiers in the metadata and a badge in the upper left corner. Source: OpenAI/Reproduction
According to the company, C2PA marks will appear in images generated with ChatGPT on the web and in applications that use the DALL-E 3 API. The mobile version will receive the new feature on February 12.
In addition to identifiers in the metadata, all images will have a “CR” symbol in the top left corner.
According to OpenAI, the addition of these marks has no negative effect on latency or on the quality of the image generated by the AI. The company notes, however, that the feature will increase file sizes for some requests.
Method is not infallible
Adding tags to metadata is not a foolproof solution for stopping the spread of fake images, however. OpenAI itself points out that the data can be “easily removed either accidentally or intentionally.”
When an image is shared on social networks, for example, platforms often alter or strip its metadata. Taking a screenshot of the image discards the metadata entirely.
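Metadata like this lives in the image file’s container structures, not in the pixels themselves, which is why a screenshot or a re-encode silently drops it. As a simplified sketch of the idea (C2PA manifests actually live in format-specific containers such as JUMBF boxes or XMP, not in plain PNG text chunks), here is how key/value metadata stored in a PNG’s tEXt chunks could be read:

```python
import struct

def png_chunks(data: bytes):
    """Yield (chunk_type, chunk_data) pairs from a PNG byte stream."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos = 8
    while pos < len(data):
        length, = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8].decode("ascii")
        cdata = data[pos + 8:pos + 8 + length]
        yield ctype, cdata
        # advance past length (4) + type (4) + data + CRC (4)
        pos += 12 + length

def text_metadata(data: bytes) -> dict:
    """Collect tEXt key/value entries -- the kind of container-level
    metadata that is lost when an image is screenshotted or re-encoded."""
    meta = {}
    for ctype, cdata in png_chunks(data):
        if ctype == "tEXt":
            key, _, value = cdata.partition(b"\x00")
            meta[key.decode("latin-1")] = value.decode("latin-1")
    return meta
```

A screenshot tool re-renders only the pixels, so none of these chunks survive; that is the fragility OpenAI is acknowledging.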
In any case, OpenAI’s addition is an important step toward transparency in AI-generated images. This is a pressing demand on companies in the sector, since several countries will hold elections this year.