When Zoom CEO Eric Yuan opened a quarterly earnings call using his AI avatar, a small badge appeared in the corner of the screen: “CREATED WITH ZOOM AI COMPANION.” The intention was clear—signal transparency, reassure viewers, and imply that users can reliably distinguish real humans from AI-generated clips.
But there’s an obvious problem:
Anyone can recreate that badge in under 30 seconds.
I tried it. It’s trivial. And if I can do it, attackers can replicate it flawlessly. With Zoom preparing to launch photorealistic AI avatars in early 2025—digital replicas of employees reading scripted messages—the watermark becomes not a safety control, but a false comfort. And false comfort is more dangerous than no security at all.
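To see how low the bar is, here is a minimal sketch, assuming only a captured video frame and the Pillow imaging library, of overlaying a look-alike badge on arbitrary footage. The badge text, styling, and file names below are my own approximations, not Zoom's actual assets:

```python
# Minimal sketch: overlay a look-alike "AI Companion" badge on any frame.
# Assumes Pillow is installed; styling and file names are illustrative only.
from PIL import Image, ImageDraw, ImageFont

def stamp_lookalike_badge(frame_path: str, out_path: str) -> None:
    frame = Image.open(frame_path).convert("RGBA")
    overlay = Image.new("RGBA", frame.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    text = "CREATED WITH ZOOM AI COMPANION"
    font = ImageFont.load_default()  # any sans-serif font would do

    # Semi-transparent rounded pill in the lower-left corner of the frame.
    x, y = 24, frame.height - 48
    left, top, right, bottom = draw.textbbox((x, y), text, font=font)
    draw.rounded_rectangle(
        (left - 12, top - 8, right + 12, bottom + 8),
        radius=10,
        fill=(0, 0, 0, 160),
    )
    draw.text((x, y), text, font=font, fill=(255, 255, 255, 255))

    Image.alpha_composite(frame, overlay).convert("RGB").save(out_path)

# Hypothetical paths: any deepfake frame in, a "badged" frame out.
stamp_lookalike_badge("deepfake_frame.png", "deepfake_with_badge.png")
```

Burning the same overlay into a full video is no harder; ffmpeg's drawtext filter will stamp every frame in a single command.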
Security Theatre Disguised as Protection
Zoom’s avatars in 2025 aren’t autonomous—they’re scripted clips. The watermark is meant to signal synthetic content, but it’s only pixels. Even if Zoom adds cryptographic verification behind the scenes, most users won’t check it. They will trust the badge, not the technology.
This creates three escalating risks:
1. False Confidence
Users begin to interpret the badge as proof of authenticity rather than a caution sign.
2. Legitimised Deception
Attackers can add identical badges to deepfakes, making them appear official.
3. Lower Vigilance
Users stop questioning content: “It had the Zoom badge, so I trusted it.”
Human Behaviour Makes It Even More Dangerous
Deepfake-enabled fraud already exploits authority structures. In the $25.6 million Arup incident, employees obeyed fake executives on a video call despite doubts. Now imagine those deepfakes carrying a familiar Zoom watermark.
In most organisations:
- Questioning executives feels risky
- Hierarchy suppresses scepticism
- Remote work normalises odd communication patterns
Attackers don’t need to beat a security system—they just need to look legitimate.
Normalisation Turns Fraud Into Noise
As AI avatars become a standard part of workflows, fake messages will blend seamlessly into daily operations. Suspicion drops, signals blur, and fraud becomes harder to detect.
This risk extends beyond Zoom. HeyGen already enables real-time avatars. Microsoft and Google will follow. The avatar ecosystem is expanding faster than corporate security culture can adapt.
What Real Security Should Demand
Effective protection would require:
- Cryptographic signing of every avatar clip (see the sketch below)
- Biometric enrollment verification
- Tamper-proof provenance metadata
- Revocation controls for compromised avatars
None of these protections are standard today—and even if they existed, employees would still rely on the visible badge attackers can copy.
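For contrast with the copyable badge, here is a minimal sketch of what per-clip cryptographic signing could look like, using Ed25519 from the Python cryptography package. The function names and metadata layout are illustrative assumptions, not Zoom's or any vendor's actual API; production provenance standards such as C2PA are considerably more involved:

```python
# Minimal sketch of per-clip signing and verification with Ed25519.
# Function names and the metadata layout are illustrative assumptions.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_avatar_clip(clip: bytes, key: Ed25519PrivateKey) -> dict:
    """Bind a signature to the clip's content hash at render time."""
    digest = hashlib.sha256(clip).hexdigest()
    return {"sha256": digest, "sig": key.sign(digest.encode()).hex()}

def verify_avatar_clip(clip: bytes, meta: dict, pub: Ed25519PublicKey) -> bool:
    """Accept only if the hash matches and the signature checks out."""
    digest = hashlib.sha256(clip).hexdigest()
    if digest != meta["sha256"]:
        return False  # clip was altered after signing
    try:
        pub.verify(bytes.fromhex(meta["sig"]), digest.encode())
        return True
    except InvalidSignature:
        return False  # signature forged, or signed by a different key

key = Ed25519PrivateKey.generate()
clip = b"...rendered avatar video bytes..."
meta = sign_avatar_clip(clip, key)
assert verify_avatar_clip(clip, meta, key.public_key())             # genuine
assert not verify_avatar_clip(clip + b"x", meta, key.public_key())  # tampered
```

The difference matters: a copied badge survives screen capture untouched, while a signature over the clip's hash fails the moment a single byte changes.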
What Organisations Must Do Now
- Treat watermarks with zero trust. They are labels, not authentication.
- Treat all video instructions as potentially synthetic. Verify through secondary channels.
- Retire video as evidence. In the AI era, video is content, not proof.
Security theatre doesn’t just fail to protect—it actively increases risk by creating misplaced trust. As AI avatars become mainstream, organisations must update verification norms, not rely on a pixel-based illusion of safety.