Google's AI division, DeepMind, has introduced SynthID, a tool that watermarks AI-generated images so they can be identified. The tool, which embeds a digital watermark directly into an image's pixels, was developed in collaboration with Google Cloud, according to a blog post by DeepMind.
The watermark can be used to identify images created by AI, even though it is invisible to the human eye. The technology is currently available in beta only to a select group of Vertex AI customers using Imagen, Google's text-to-image model.
Traditional watermarks are applied on top of an image, typically as semi-transparent text or a logo. Because such marks can be cropped or edited out, they have proven unreliable for identifying AI-generated images.
Because the watermark is invisible to the human eye, it does not detract from the image itself. SynthID uses two machine learning models, both trained on a wide variety of images: one aligns the watermark with the image's visual content so it remains imperceptible, and the other scans images to identify content that was likely generated by AI.
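DeepMind has not published implementation details, but the embed-and-detect split described above can be illustrated with a minimal, hypothetical sketch. The model names, architectures, and perturbation scale below are assumptions for illustration only and are not the SynthID implementation.

```python
# Hypothetical sketch of a pixel-level watermarking pipeline: one network
# embeds an imperceptible perturbation, another detects it. Not SynthID.
import torch
import torch.nn as nn


class WatermarkEmbedder(nn.Module):
    """Adds a small, visually negligible perturbation to the image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image):
        # Bound the perturbation so the watermark stays below visibility.
        return (image + 0.01 * self.net(image)).clamp(0.0, 1.0)


class WatermarkDetector(nn.Module):
    """Predicts the probability that an image carries the watermark."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1),
        )

    def forward(self, image):
        return torch.sigmoid(self.net(image))


if __name__ == "__main__":
    image = torch.rand(1, 3, 64, 64)      # stand-in for a generated image
    marked = WatermarkEmbedder()(image)   # watermark embedded in the pixels
    score = WatermarkDetector()(marked)   # confidence that it is watermarked
    print(float(score))
```

In practice, the two networks in such a scheme would be trained jointly so that the embedded signal survives common edits while the detector learns to recognise it; the toy networks here are untrained and only show the data flow.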
In the blog post, Google said that the ability to identify AI-generated content is critical for informing people when they are interacting with generated media and for helping prevent the spread of misinformation. SynthID was developed by Google DeepMind and refined in partnership with Google Research.
The blog says, “Google Cloud is the first cloud provider to offer a tool for creating AI-generated images responsibly and identifying them with confidence.” Although SynthID is not foolproof against extreme image manipulations, it offers a promising technical approach for helping individuals and organizations use AI-generated content responsibly.