What happens when fake videos lead to real consequences?
Monday mornings tend to be a rush: new day, new week, new crisis. This morning brings news of the Bachchans suing YouTube over controversial deepfake videos, a case that has sparked outrage among fans and an uproar over its ethical implications. The Bachchans are seeking ₹4 crore in damages. The couple alleges that the platform hosted sexually explicit AI-generated deepfake videos featuring Aishwarya Rai, some depicting her in fabricated scenarios with actor Salman Khan. These videos, which garnered millions of views, were reportedly used to train AI models without the couple’s consent. The Bachchans argue that such content not only violates their privacy but also misuses their likeness for commercial purposes without authorization.
This legal action has prompted YouTube to remove hundreds of AI-generated Bollywood videos, including those featuring the Bachchans, highlighting the platform’s responsibility in regulating user-generated content.
The case has ignited a broader debate about the ethical implications of AI technology and the need for stringent regulations to protect individuals from digital exploitation.
The Rise of Deepfakes
In the simplest terms, deepfakes are AI-generated videos or audio in which someone’s likeness is digitally replaced, creating content that looks and sounds real but isn’t. What began as an amusing hobby on Reddit in 2017, with “face swap” apps pasting faces into GIFs and memes, has grown into a sophisticated technology capable of producing hyper-realistic celebrity videos and politically charged clips.
The journey of deepfakes from novelty to notoriety is both fascinating and unsettling. In 2018, director Jordan Peele lent his voice and expertise to a Barack Obama deepfake PSA, demonstrating how convincingly AI could mimic a person while raising awareness about the technology’s dangers. By 2019, artists were using it for sharper commentary: a controversial video of a synthetic Mark Zuckerberg boasting about control over users’ data showed how deepfakes could serve satire and critique, but it also stirred debates about misinformation.
Then came 2021, when deepfake Tom Cruise videos took TikTok by storm. While these clips were mostly harmless entertainment, the hyper-realistic execution left viewers questioning reality and digital trust. What was once confined to hobbyist circles had entered mainstream consciousness, proving that a few lines of code could blur the line between truth and fabrication.
The Invasion of Illusions
Deepfakes have evolved from digital curiosities into potent weapons of emotional and reputational harm. These AI-generated videos and images, which can convincingly superimpose individuals into fabricated scenarios, have led to real-world consequences for celebrities and ordinary people alike.
In India, actress Rashmika Mandanna became a victim of this technology when a deepfake video featuring her likeness went viral. The video, which was later confirmed to be AI-generated, sparked widespread outrage and led to calls for stricter regulations on digital content. Mandanna expressed her distress over the incident, highlighting the profound impact such violations can have on an individual’s mental well-being and public image.
Internationally, the situation mirrors these challenges. In early 2024, sexually explicit AI-generated images of pop star Taylor Swift circulated widely on social media platforms, prompting outrage among fans and calls for legislative action. U.S. lawmakers condemned the incident and pushed for federal legislation, such as the Preventing Deepfakes of Intimate Images Act, to combat nonconsensual deepfake pornography.
The psychological toll on victims is profound. Experts have noted that being subjected to such digital violations can lead to anxiety, depression, and a pervasive sense of powerlessness. The anonymity afforded by digital platforms often exacerbates the trauma, as perpetrators can disseminate harmful content without facing immediate repercussions.
Moreover, the proliferation of deepfake pornography disproportionately affects women. Studies indicate that the overwhelming majority of non-consensual deepfake content targets women, contributing to their objectification and dehumanization in society.
Ethical Grey Zones
As deepfake technology surges ahead, legal frameworks worldwide are struggling to keep pace. In India, the Information Technology Act (2000) and sections of the Indian Penal Code (IPC) addressing defamation, obscenity, and sexual harassment were designed long before AI-generated synthetic content became a reality. While these laws can technically be invoked in certain cases, they often fall short in addressing the speed, anonymity, and scale at which deepfakes spread online.
To tackle this, the Digital India Act, currently under consideration, proposes specific clauses for labeling synthetic media, creating accountability for platforms, and instituting penalties for nonconsensual or misleading AI-generated content. Advocates argue that mandatory labeling and provenance tracking are crucial to distinguish reality from fabrication, though the bill is still in draft form.
Globally, approaches vary but share a common goal of curbing misuse. The UK’s Online Safety Act criminalizes the sharing of deepfake pornography, while the EU’s AI Act mandates clear labeling of AI-generated content to maintain transparency. In the United States, California has pioneered laws banning political deepfakes in the run-up to elections to prevent misinformation from influencing democratic processes.
The Responsibility of Platforms
In India, YouTube has only recently implemented mechanisms to allow creators and public figures to request deepfake takedowns, a move long overdue given the viral spread of AI-generated videos.
To counter the surge of synthetic media, companies are experimenting with AI watermarking and authenticity tagging, technologies designed to track content origin and label manipulated media. Google’s SynthID represents a leading effort, embedding imperceptible signals in AI-generated images to aid detection.
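To get a feel for what “embedding imperceptible signals” means, here is a deliberately simplified sketch. SynthID’s actual method is proprietary and far more robust; this toy example merely hides a short bit pattern in the least significant bits of pixel values, a classic illustration of an invisible watermark. The pixel values and the 8-bit tag are hypothetical.

```python
# Toy invisible watermark (NOT SynthID, whose method is proprietary):
# hide a short bit pattern in the least significant bits (LSBs) of
# pixel values. Changing only the LSB shifts each pixel by at most 1,
# which is imperceptible to the eye.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit tag

def embed(pixels):
    """Return a copy with the tag written into the first pixels' LSBs."""
    out = list(pixels)
    for i, bit in enumerate(WATERMARK):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, set it to the tag bit
    return out

def detect(pixels):
    """True if the tag is present in the leading pixels' LSBs."""
    return [p & 1 for p in pixels[:len(WATERMARK)]] == WATERMARK

image = [200, 201, 199, 180, 150, 90, 60, 30, 120]  # toy grayscale pixels
tagged = embed(image)
print(detect(tagged))  # True
print(detect(image))   # False for these pixels: their LSBs don't match the tag
```

Real provenance systems go much further: production watermarks are spread across the whole image and designed to survive compression, resizing, and cropping, which a simple LSB scheme would not.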
Between Fear and Freedom
AI-generated content straddles a fine line between innovation and misuse. Artists and filmmakers increasingly use ethical deepfakes to recreate deceased actors, localize films across languages, or digitally restore historical footage.
The question arises: when does creativity cross into manipulation? Deepfakes challenge the assumption that visuals are inherently truthful, forcing viewers to confront the unsettling reality that “seeing is no longer believing.” This tension impacts not only celebrities but ordinary citizens, as synthetic content increasingly targets private individuals in non-consensual ways.
The Road Ahead
As deepfakes proliferate, the path forward demands a careful balance between regulation, public awareness, and ethical AI design. Governments, legal bodies, and industry stakeholders must collaborate to implement frameworks that protect privacy and reputation while preserving freedom of expression.
Media literacy campaigns are equally essential, teaching audiences to recognize manipulated content and question the authenticity of what they consume. Schools, digital platforms, and civil society organizations have a shared responsibility to equip citizens with these critical skills.
Tech platforms and content creators must also embrace accountability measures—including proactive monitoring, transparent labeling of AI-generated content, and rapid takedown mechanisms for harmful videos. As the Bachchan lawsuit demonstrates, unchecked virality can inflict real-world damage.
Ultimately, the challenge posed by deepfakes goes beyond technology. It is a struggle for the integrity of information itself.
By: Sushrut Tewari, a writer covering trends, innovation, and brand storytelling in India and beyond.