Google's SynthID Watermark Takes On Deepfakes and Digital Misinformation

Google has unveiled a groundbreaking solution to combat the rising issue of misleading content and deepfakes. Their new tool, called SynthID, utilizes invisible watermarks to protect AI-generated images and strengthen online visual integrity. This collaboration between Google Research, Google Cloud, and Google DeepMind is a significant step towards addressing the challenge of digital misinformation.

SynthID works by embedding indelible signatures within the pixels of AI-generated images, serving as an invisible shield to authenticate their origin. While imperceptible to the human eye, these digital watermarks can be detected by algorithms. The introduction of SynthID not only reinforces the fight against deepfakes but also safeguards copyright-protected images.

This tool operates within Google’s Vertex AI platform for building AI applications and models, and currently supports only Imagen, Google’s text-to-image model. When scanning an incoming image for the SynthID watermark, it reports one of three confidence tiers: detected, possibly detected, or undetected. Because the watermark is embedded in the pixels themselves rather than in metadata, SynthID can identify AI-generated images even after they have been modified or compressed.
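Google has not published how the detector maps its internal output to the three tiers. As a purely illustrative sketch, the tiered verdict can be thought of as thresholding a detection score; the score range, threshold values, and all names below are assumptions, not SynthID's actual API.

```python
from enum import Enum


class Verdict(Enum):
    """The three confidence tiers SynthID reports (labels from the article)."""
    DETECTED = "detected"
    POSSIBLY_DETECTED = "possibly detected"
    UNDETECTED = "undetected"


def classify(score: float, high: float = 0.9, low: float = 0.4) -> Verdict:
    """Map a hypothetical watermark-detection score in [0, 1] to a tier.

    The thresholds are invented for illustration; a real system would
    calibrate them against false-positive and false-negative rates.
    """
    if score >= high:
        return Verdict.DETECTED
    if score >= low:
        return Verdict.POSSIBLY_DETECTED
    return Verdict.UNDETECTED


print(classify(0.95).value)  # → detected
print(classify(0.60).value)  # → possibly detected
print(classify(0.10).value)  # → undetected
```

The middle tier is the interesting design choice: rather than forcing a binary yes/no, an ambiguous score is surfaced as "possibly detected," which matches the cautious stance described below.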

The effectiveness of SynthID lies in the synergy between two AI models—one dedicated to watermarking and the other to identification. Google trained these models together using a diverse set of images, enabling SynthID to pierce through modifications such as filters, color alterations, and compression, maintaining its ability to detect AI-generated images.
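SynthID's paired embedder and detector are learned models and are not public. A much older technique, classical spread-spectrum watermarking, illustrates the same embed/detect pairing and why a pixel-level watermark can survive light edits: the embedder adds a faint secret pattern, and the detector correlates the image against that pattern. Everything below is a conceptual stand-in, not Google's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Secret pattern shared by embedder and detector (a stand-in for the
# learned watermarking model; SynthID's real approach is not public).
H, W = 64, 64
pattern = rng.standard_normal((H, W))
pattern /= np.linalg.norm(pattern)  # unit norm for a clean correlation score


def embed(image: np.ndarray, strength: float = 4.0) -> np.ndarray:
    """Add a faint copy of the secret pattern to the pixel values."""
    return image + strength * pattern


def detect(image: np.ndarray) -> float:
    """Correlation score: noticeably higher when the pattern is present."""
    return float((image * pattern).sum())


original = rng.uniform(0, 255, (H, W))
marked = embed(original)

# Simulate a lossy edit (additive noise standing in for compression
# artifacts or a filter applied after generation).
edited = marked + rng.standard_normal((H, W)) * 0.5

# The embedded pattern raises the correlation score by roughly the
# embedding strength, which a light edit does not wash out.
print(detect(edited) - detect(original) > 0)
```

Training the embedder and detector jointly, as Google describes, goes further: the edits (filters, color shifts, compression) are applied during training, so the models learn a pattern that survives exactly those transformations.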

SynthID does not offer absolute certainty: rather than issuing a definitive verdict, it reports a confidence level, distinguishing images that might carry the watermark from those that very likely do. This cautious design trades some decisiveness for fewer false positives.

Google’s SynthID is not the only watermarking solution on the market. Companies like Imatag and Steg.AI have developed their own techniques resilient to image cropping, resizing, and edits. Microsoft has also committed to cryptographic watermarking, while Shutterstock, Midjourney, and OpenAI’s DALL-E 2 have implemented their own approaches to marking AI-generated content.

With the rise of generative AI and its potential for misinformation, the introduction of SynthID is a significant advancement towards maintaining transparency and authenticity in the digital world. This powerful tool, enabled by AI technology, empowers users to discern between genuine content and AI-generated creations, effectively countering the spread of misinformation.
