Gemini Detects SynthID in Videos & Images
Quickly verify short videos and images for Google’s invisible SynthID watermark, with timestamped results that show exactly where AI content appears.
19 Dec 2025 (Updated 28 Dec 2025) - Written by Lorenzo Pellegrini
Google’s Gemini App Now Detects Invisible SynthID Watermarks in Short Videos and Images
Google has added a verification feature to its Gemini app that scans uploaded short videos and images for imperceptible SynthID watermarks embedded by Google’s own generative models (such as Imagen and Veo). The app reports which segments contain AI-generated audio or visuals and where in the file the watermark is detected.
What the new Gemini verification does
Gemini lets users upload a video or image and ask whether it was created or edited using Google AI; the app then scans both the visual and audio tracks for the imperceptible SynthID watermark and returns a response that can specify timestamps or segments where SynthID was found.
The same mechanism works for images: Gemini’s existing image verification checks for SynthID signals embedded during generation and reports detection results to the user.
How SynthID works (overview)
SynthID is an imperceptible, machine-readable watermarking system developed by Google DeepMind and Google to embed provenance signals into media produced by Google’s generative models; it is intended to help identify content created or edited by those models while remaining invisible to human viewers.
When content is produced by Google’s AI tools, SynthID is embedded at generation time, which allows Gemini’s verification surface to read the watermark and indicate whether—and where—Google’s models were used.
Scope and limitations
- Detection limited to Google-generated content: Gemini’s verifier looks specifically for SynthID and therefore can only reliably identify content created or edited with Google’s own generative systems; it does not detect AI-generated media from other vendors or general signs of synthetic manipulation.
- Probabilistic signal, not definitive proof: Google frames SynthID detection as a useful provenance signal rather than an infallible judgment, because watermarks can theoretically be removed or degraded by heavy editing or recompression.
- File constraints: Consumer-facing uploads are subject to practical limits on size and length (reports cite a maximum of around 100 MB and roughly 90 seconds for video), so in its present form Gemini’s tool is tailored to short-form media.
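The reported upload limits can be expressed as a simple pre-check. This is an illustrative sketch only: the ~100 MB and ~90 second figures come from third-party reports, not an official API contract, and the function name is hypothetical.

```python
# Illustrative pre-upload check based on reported consumer limits
# (~100 MB and ~90 seconds for video). These are reported figures,
# not an official specification.

MAX_UPLOAD_BYTES = 100 * 1024 * 1024  # ~100 MB (reported limit)
MAX_VIDEO_SECONDS = 90                # ~90 s (reported limit)

def fits_upload_limits(size_bytes: int, duration_seconds: float) -> bool:
    """Return True if a clip stays within the reported upload limits."""
    return size_bytes <= MAX_UPLOAD_BYTES and duration_seconds <= MAX_VIDEO_SECONDS

# Example: a 40 MB, 60-second clip fits; a 3-minute clip does not.
print(fits_upload_limits(40 * 1024 * 1024, 60))   # True
print(fits_upload_limits(40 * 1024 * 1024, 180))  # False
```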
How Gemini presents results to users
Instead of a simple yes/no, Gemini provides context-aware responses: it scans audio and visual tracks independently, indicates whether SynthID was detected in either or both, and can highlight specific timestamps (for videos) where the watermark appears.
This approach aims to make verification actionable—for example, telling a user that SynthID was detected in the audio from 10–20 seconds while no SynthID was found in the visuals—rather than offering a blanket verdict.
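The shape of such a context-aware result can be sketched as a small data structure. This is purely illustrative: the Gemini app surfaces these findings as natural-language replies, not through a public API, and all names below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    track: str      # "audio" or "visual" (hypothetical labels)
    start_s: float  # segment start, in seconds
    end_s: float    # segment end, in seconds

@dataclass
class SynthIDReport:
    """Hypothetical model of a per-track, timestamped verification result."""
    detections: list[Detection] = field(default_factory=list)

    def summary(self) -> str:
        if not self.detections:
            return "No SynthID watermark detected."
        return "; ".join(
            f"SynthID detected in {d.track} from {d.start_s:.0f}-{d.end_s:.0f} s"
            for d in self.detections
        )

# Mirrors the example above: watermark found in the audio from 10-20 s,
# with no detection in the visual track.
report = SynthIDReport([Detection("audio", 10, 20)])
print(report.summary())  # SynthID detected in audio from 10-20 s
```

Modeling detections per track and per segment, rather than as a single boolean, is what lets a verdict like "AI audio from 10-20 s, visuals clean" be expressed at all.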
Why video verification matters now
As generative video quality improves, visual inspection alone becomes unreliable; embedding and detecting provenance signals like SynthID provides a machine-readable way to trace whether certain clips or segments originated from a specific set of models, helping journalists, platforms, and consumers evaluate media trustworthiness.
Google’s strategy couples generation and verification within the same ecosystem—models embed SynthID at creation and Gemini serves as a verification surface—creating a closed-loop system for traceability of content produced by Google’s tools.
Practical uses and considerations for creators and consumers
- Creators using Google’s tools: Embedding SynthID by default can signal provenance and support transparency for audiences and platforms.
- Journalists and researchers: Timestamped detection helps locate which segments of a clip were AI-generated, aiding verification workflows.
- Limitations to keep in mind: Detection does not imply intent or malicious use, and absence of SynthID does not prove a clip is human-shot because non-Google AI tools and deliberate watermark removal remain outside this detection scope.
What experts and Google say
Google describes the feature as an expansion of its content transparency efforts: users can upload a video and ask Gemini whether it was generated with Google AI, and the app will scan for SynthID across audio and visuals, replying with contextual findings and timestamps where the watermark is detected.
Independent reporting notes that video support is a significant step up, since detecting watermarks in moving content is more complex than in single images, and that Gemini’s verifier deliberately limits its claims to detecting SynthID rather than attempting to label all AI-generated media.
Conclusion
Google’s addition of SynthID scanning to the Gemini app extends its provenance tooling from images to short videos, enabling users to check whether Google’s generative models produced or edited media and to see exactly which segments contain the machine-readable watermark; however, the capability is restricted to Google-generated content and should be treated as a probabilistic signal rather than absolute proof.
As AI-generated media proliferates, tools that embed and detect provenance—paired with clear disclosure and platform policies—will play an increasing role in media verification and public trust.
