Watermark project tries to halt deepfakery

IT STANDS to reason that a freelance photographer's primary worry about AI will be copyright infringement. For consumers of media photos, however, the most pressing concern is being tricked by so-called "deepfakes" - convincing images of places and events that don't exist, and of recognisable people in situations that never happened.

To meet this challenge, Google's DeepMind group has released a software tool called SynthID that embeds robust digital watermarks into AI-generated images. Unlike conventional photo watermarking, which is either plainly visible or easily destroyed when an image is reproduced or edited, SynthID watermarks are supposed to remain invisible to the human eye. They are claimed to be impervious to cropping, scaling and even aggressive JPEG compression. Even if AI-generated material is Photoshopped into other images, SynthID should still be able to detect its own watermarks.
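For illustration only, here is a minimal Python sketch of the general idea behind imperceptible watermarking: a faint, keyed pseudorandom pattern is added to an image and later detected by correlation. This is not SynthID's actual algorithm, which DeepMind has not published in detail; the key, amplitude and threshold values are purely hypothetical, and a toy scheme like this would not survive the cropping and compression SynthID claims to withstand.

```python
# Toy spread-spectrum watermark: embed a keyed +/-1 pattern at low amplitude,
# then detect it by correlating the image against the same keyed pattern.
# Illustrative only - not DeepMind's SynthID method.
import numpy as np

KEY = 12345        # secret key shared by embedder and detector (assumed)
AMPLITUDE = 2.0    # per-pixel perturbation, small enough to be invisible

def _pattern(shape, key):
    """Keyed pseudorandom +/-1 pattern the same size as the image."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(image, key=KEY):
    """Add a faint keyed pattern to a greyscale uint8 image (H x W)."""
    marked = image.astype(np.float64) + AMPLITUDE * _pattern(image.shape, key)
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect(image, key=KEY, threshold=1.0):
    """High correlation with the keyed pattern suggests the mark is present."""
    residual = image.astype(np.float64) - image.mean()
    score = float((residual * _pattern(image.shape, key)).mean())
    return score > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    photo = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
    print(detect(embed(photo)))   # True: watermark found
    print(detect(photo))          # False: no watermark
```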

DeepMind sees it as a corporate tool: it is initially being rolled out to major Google Cloud customers for testing with the Vertex AI platform and the Imagen image generator.

At the lightweight end of the scale, SynthID could be used, for example, to ensure an ad agency does not accidentally disseminate imagery created by AI during brainstorming sessions instead of the approved final artwork. However, with the US about to enter a presidential election year, it should not be too difficult to imagine scenarios in which SynthID's ability to spot an AI deepfake could be a significant safeguard for democracy.

Naturally, Google would like the technology to be adopted as a "standard" - did anyone say "monopoly"? - although such hopes fly in the face of the unregulated and proprietary methods preferred by the key players in the IT industry. Just as others pile onto the generative AI bandwagon, we can probably expect a glut of AI detection "standards", all incompatible with each other, in the short term at least.

Also, one wonders whether DeepMind might equally have come up with an AI watermarking system for original photos, so that these could be swiftly detected when misappropriated and regurgitated by an AI image generator.

But then, where's the money in that for Google?