SoArtificial: Sharing Artificial Intelligence (AI) news with the world

Bringing you the latest AI news and guides

Google Launches SynthID, an AI Image Watermark That’s Invisible to the Naked Eye


Google Is Concerned About AI-Generated Image Misuse, So It's Released a New Tool to Fight It

This image was created with Stable Diffusion.

Google recently unveiled SynthID, a tool designed to combat deepfakes and the misuse of AI-generated images. Developed by Google DeepMind, SynthID embeds a digital watermark directly into the pixels of an image, making it invisible to the human eye but detectable by software for identification.

How SynthID Works

SynthID was created specifically for watermarking and identifying AI-generated images. It uses a pair of deep learning models, one for watermarking and one for identification, to embed watermarks directly into images produced by Imagen, Google's text-to-image generator. The watermark persists even when images are altered, filtered, or recolored, and it is designed to be invisible to humans yet easily recognizable by detection tools.

Admittedly, it won't protect users from images made with other generative tools such as Stable Diffusion and Midjourney, but it's a start. Perhaps those companies will follow Google's lead and apply hidden watermarks of their own.

SynthID is engineered to withstand common image transformations, including cropping, resizing, rotation, and compression, without any adverse impact on image quality or the user experience. The tool offers a means to ascertain an image's source and authenticity, reporting both the likelihood that an image was AI-generated and the confidence level of that detection.
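SynthID's actual watermarking models are proprietary, but the basic idea of a pixel-level mark that is invisible to the eye yet machine-detectable can be illustrated with a much simpler (and far less robust) classic technique: least-significant-bit embedding. The sketch below is purely illustrative, with made-up function names, and is emphatically not how SynthID works; unlike SynthID, an LSB watermark does not survive compression, resizing, or filtering.

```python
import numpy as np

def embed_lsb_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit pattern in the least significant bit of every pixel value.

    Each value changes by at most 1/255, so the mark is invisible to the eye.
    (Toy illustration only -- NOT SynthID's algorithm.)
    """
    flat = image.flatten()
    pattern = np.resize(bits, flat.shape)       # tile the signature over the image
    watermarked = (flat & 0xFE) | pattern       # overwrite each LSB with a signature bit
    return watermarked.reshape(image.shape)

def detect_lsb_watermark(image: np.ndarray, bits: np.ndarray) -> float:
    """Return the fraction of pixels whose LSB matches the expected pattern.

    ~1.0 means the watermark is present; ~0.5 is chance level (no watermark).
    """
    flat = image.flatten()
    pattern = np.resize(bits, flat.shape)
    return float(np.mean((flat & 1) == pattern))

# Example: watermark a random 64x64 RGB image with a 32-bit signature.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
signature = rng.integers(0, 2, size=32, dtype=np.uint8)

marked = embed_lsb_watermark(image, signature)
print(detect_lsb_watermark(marked, signature))  # exactly 1.0: watermark detected
print(detect_lsb_watermark(image, signature))   # near 0.5: chance level
```

The match score plays the same role as SynthID's confidence level: rather than a hard yes/no, the detector reports how strongly the observed pixels agree with the expected watermark.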

Why SynthID Matters

Google’s development of SynthID is part of its commitment to deploying responsible AI and addressing the challenges and risks posed by generative AI technologies. Generative AI offers creative potential but can also be used to spread false or misleading information, making the ability to identify AI-generated content crucial. This knowledge empowers users and aids in combating the propagation of misinformation.

While Microsoft, Adobe, and other companies have also worked on watermarking and detecting AI-generated images, Google asserts that SynthID is the first tool that can watermark and identify AI-generated images in a way that is imperceptible to humans but discernible by machines.

Google is investing in defenses against deepfakes.

What’s Next for SynthID

SynthID is currently accessible to a limited number of Vertex AI customers using Imagen. Google plans to further refine the tool through real-world data and feedback and expand its availability and capabilities over time. Google hopes that SynthID will stimulate discussions about the future of AI and synthetic media, along with the ethical and social considerations associated with these technologies.

SynthID, while not foolproof against extreme image manipulations, represents a promising technical approach for creating and identifying AI-generated images responsibly. Additionally, it may evolve to encompass other AI modalities beyond imagery, including audio, video, and text.

