Last summer’s launch of the Pixel 9 was accompanied by a fair bit of controversy over the power of its AI image-editing tools: they can make a photo say almost anything. A road becomes a stream, a train carriage is suddenly flooded, suspicious paraphernalia and drugs are added to an innocent photo of a loved one.
You don’t need a degree in photo editing to create such visuals: just ask Gemini, Google’s AI assistant. Images like these blur the line between real and fake, and in the wrong hands they can do real harm. That’s why it is essential for software vendors and device manufacturers to agree on disclosing the use of generative AI in photos.
Samsung has added traceability metadata to images edited with the Galaxy S25’s AI tools, and Google has just announced the rollout of SynthID for photos manipulated with the Magic Editor feature in Google Photos. This should make it easier to identify altered images.
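To give a concrete idea of what this kind of traceability metadata can look like, here is a rough sketch of how an AI-edit marker might be checked in an image file. The announcements do not detail the exact tags Samsung or Google write, so the values below are assumptions based on the IPTC “Digital Source Type” vocabulary, and the file name is a placeholder.

```python
# Sketch: look for IPTC "Digital Source Type" hints in an image's XMP metadata.
# The tag values are assumptions (standard IPTC terms), not confirmed fields.
from PIL import Image

AI_SOURCE_HINTS = (
    "trainedAlgorithmicMedia",               # IPTC term: fully AI-generated
    "compositeWithTrainedAlgorithmicMedia",  # IPTC term: AI-edited composite
)

def looks_ai_edited(path: str) -> bool:
    """Return True if the image's raw XMP packet mentions an AI source type."""
    with Image.open(path) as img:
        xmp = img.info.get("xmp", b"")  # raw XMP bytes, if the file carries any
    text = xmp.decode("utf-8", errors="ignore") if isinstance(xmp, bytes) else str(xmp)
    return any(hint in text for hint in AI_SOURCE_HINTS)

print(looks_ai_edited("galaxy_s25_edit.jpg"))  # placeholder file name
```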
SynthID was unveiled a year and a half ago. Developed by Google DeepMind, it embeds an imperceptible watermark directly into images, videos, audio files and even text. Images generated entirely by the Imagen model already carry it, and since last October, image descriptions in Google Photos have indicated whether a picture was altered by AI. The watermark goes one step further.
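The image-side watermark is not exposed as a public API, but DeepMind has open-sourced the text variant of SynthID and integrated it into Hugging Face Transformers. The sketch below shows how that integration is used; the model name and key values are illustrative placeholders.

```python
# Sketch: SynthID text watermarking via the Hugging Face Transformers integration.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")

# The watermark is parameterized by a private list of integer keys and an n-gram
# length; the same configuration is needed later to detect the watermark.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],  # placeholder keys
    ngram_len=5,
)

inputs = tokenizer(
    ["Write a short caption for a photo of a mountain lake."],
    return_tensors="pt",
)
output = model.generate(
    **inputs,
    watermarking_config=watermarking_config,  # subtly biases token sampling
    do_sample=True,
    max_new_tokens=50,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```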
SynthID-watermarked images can be identified via the “About this image” menu. Google notes, however, that images with only a very slight modification (a flower whose color has been changed, for example) may not carry the watermark.
Source: Google