Abstract
The rise of generative artificial intelligence has transformed the digital ecosystem from a space of communication into a space of synthetic replication. Deepfake technology now enables the creation of hyper-realistic videos, audio recordings, and images capable of imitating an individual’s face, voice, expressions, and behavioural mannerisms with startling precision. While such technologies possess legitimate applications in cinema, accessibility, education, and digital creativity, their misuse has exposed profound legal vulnerabilities concerning personality rights, privacy, dignity, and informational autonomy.
This article critically examines the growing conflict between deepfake technology and personality rights in India through the lens of the Bombay High Court’s decision in Suniel V. Shetty v. John Doe Ashok Kumar (2025). The judgment marks a significant judicial intervention in recognising that AI-generated impersonation is not merely a commercial infringement but also a constitutional violation affecting dignity under Article 21 of the Constitution of India. The article analyses the legal foundations of personality rights in India, the inadequacy of existing statutory protections, intermediary obligations under digital governance frameworks, and the urgent necessity for a comprehensive legal response.
It argues that deepfakes fundamentally destabilise the relationship between identity and consent and therefore require a transition from traditional reputation-based remedies to a broader doctrine of digital identity protection. Ultimately, the article proposes recognition of a distinct “Right to Digital Integrity” capable of safeguarding individuals against unauthorised synthetic replication in the age of artificial intelligence.
Introduction: When Reality Becomes Synthetic
In late 2023, a video featuring actress Rashmika Mandanna circulated across Indian social media platforms. At first glance, the video appeared entirely authentic. The facial movements, expressions, body language, and overall appearance looked real. Yet the video was fabricated. Artificial intelligence had superimposed her face onto another person's body with extraordinary precision. The video spread rapidly before most viewers even realised it was fake.
The incident triggered public outrage, political concern, and renewed conversations about regulating artificial intelligence. However, the deeper issue was not simply misinformation. The real crisis was far more unsettling: for the first time, technology had made it possible to separate a person’s identity from their consent.
Deepfake technology represents a dramatic shift in how identity functions in the digital age. Unlike traditional photo editing or manipulated media, deepfakes do not merely alter existing content. They create entirely synthetic performances by reproducing the face, voice, and behavioural patterns of real individuals. A person can now be made to appear speaking words they never said, endorsing products they never approved, or participating in acts they never committed.
This technological evolution creates one of the most urgent legal questions of the modern digital era: who owns a person’s identity once it can be algorithmically reproduced?
Indian law currently lacks a dedicated statutory framework dealing specifically with deepfakes or synthetic identity misuse. Courts have therefore been compelled to adapt constitutional principles, intellectual property doctrines, and privacy jurisprudence to address harms arising from artificial intelligence. In this context, the Bombay High Court’s judgment in Suniel V. Shetty v. John Doe Ashok Kumar emerges as a landmark development in Indian cyber jurisprudence.
The judgment is significant because it moves beyond traditional celebrity endorsement disputes and recognises deepfakes as a direct assault on dignity, privacy, and personality rights protected under Article 21 of the Constitution.
Understanding Deepfake Technology and the Nature of Harm
Deepfakes are AI-generated synthetic media created using advanced machine learning systems such as Generative Adversarial Networks (GANs) and diffusion models. These systems are trained on large volumes of images, audio recordings, and videos to learn facial structures, speech patterns, body movements, and emotional expressions.
The resulting output is often almost indistinguishable from reality.
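The adversarial training idea behind GANs can be made concrete with a deliberately toy sketch. The one-dimensional NumPy example below is a simplification for illustration only: real deepfake generators are deep convolutional or diffusion networks trained on images and audio, not the scalar linear models used here. The core dynamic, however, is the same: a discriminator learns to tell real data from synthetic data, while a generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    # Clip to avoid overflow warnings when the discriminator is very confident.
    return 1.0 / (1.0 + np.exp(-np.clip(u, -60, 60)))

# "Real" data: a 1-D distribution standing in for genuine media features.
real = rng.normal(4.0, 1.0, size=256)

w, b = 1.0, 0.0   # generator: x_fake = w*z + b
a, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(a*x + c)
lr = 0.05

for _ in range(500):
    z = rng.normal(size=256)
    fake = w * z + b

    # Discriminator step: minimise -log D(real) - log(1 - D(fake)).
    s_real = sigmoid(a * real + c)
    s_fake = sigmoid(a * fake + c)
    a -= lr * (np.mean(-(1 - s_real) * real) + np.mean(s_fake * fake))
    c -= lr * (np.mean(-(1 - s_real)) + np.mean(s_fake))

    # Generator step: minimise -log D(fake), i.e. learn to fool D.
    s_fake = sigmoid(a * fake + c)
    dfake = -(1 - s_fake) * a          # d(loss)/d(x_fake)
    w -= lr * np.mean(dfake * z)
    b -= lr * np.mean(dfake)

samples = w * rng.normal(size=1000) + b
# The generator's output mean drifts from 0 toward the real mean (~4).
```

Even in this stripped-down form, the generator's samples migrate toward the real distribution without ever being told what "real" looks like directly; it learns only from the discriminator's reactions. Scaled up to faces and voices, this is what makes the output "almost indistinguishable from reality".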
Deepfakes generally appear in three forms:
Face Swapping
This involves replacing one individual’s face with another in a video or image. Although commonly used in entertainment and parody, face swapping has become deeply associated with non-consensual intimate content and misinformation.
Voice Cloning
AI systems can now reproduce a person’s voice using minimal audio samples. Such cloned voices have increasingly been used in fraud, extortion, impersonation, and fake endorsements.
Real-Time Synthetic Manipulation
The most sophisticated form of deepfake technology allows real-time digital puppeteering, where one individual controls another person’s synthetic identity live on screen.
The legal problem with deepfakes is not merely that they are deceptive. The real issue is that they disconnect identity from consent.
A person’s face, voice, gestures, and likeness become digital raw material capable of exploitation without authorisation. This creates overlapping forms of legal harm:
- Commercial exploitation through fake endorsements;
- Reputational injury through fabricated speech or conduct;
- Privacy violations through non-consensual replication;
- Emotional and psychological harm;
- Identity theft in its most advanced technological form.
The law has historically recognised theft of property, reputation, and confidential information. Deepfakes force the law to confront a new reality: theft of personhood itself.
Personality Rights in Indian Jurisprudence
Unlike jurisdictions such as the United States, where many states have codified a right of publicity, India does not possess a dedicated statute. Personality rights in India have instead evolved through judicial interpretation.
These rights broadly include protection over:
- Name;
- Image;
- Voice;
- Signature;
- Likeness;
- Persona;
- Distinctive attributes and mannerisms.
Indian courts have increasingly recognised that a person’s identity possesses both commercial and dignitary value.
Constitutional Foundation under Article 21
The constitutional basis for personality rights originates from Article 21 of the Constitution of India, which guarantees the right to life and personal liberty.
In Maneka Gandhi v. Union of India, the Supreme Court expanded the meaning of “life” to include the right to live with dignity. This interpretation transformed Article 21 into a reservoir of substantive rights.
Subsequently, in Justice K.S. Puttaswamy v. Union of India, the Supreme Court unanimously recognised privacy as a fundamental right intrinsic to dignity and autonomy.
The judgment acknowledged concepts such as:
- Informational privacy;
- Decisional autonomy;
- Control over personal information;
- Protection of identity.
This constitutional framework becomes critically relevant in deepfake cases because facial features, voice patterns, and behavioural characteristics constitute deeply personal and identifiable data.
The unauthorised synthetic replication of such characteristics directly undermines autonomy over identity.
Judicial Recognition of Personality Rights
Indian courts have gradually recognised personality rights through intellectual property and passing-off jurisprudence.
In ICC Development (International) Ltd. v. Arvee Enterprises, the Delhi High Court observed that publicity rights vest in an individual’s personality and encompass name, voice, likeness, image, and other distinctive characteristics.
Similarly, in D.M. Entertainment Pvt. Ltd. v. Baby Gift House, the Delhi High Court protected singer Daler Mehndi’s persona against unauthorised commercial exploitation.
In Titan Industries Ltd. v. Ramkumar Jewellers, the Court held that celebrity identity possesses commercial value capable of legal protection under passing-off principles.
These judgments collectively establish that personality rights in India are enforceable despite the absence of a dedicated statute.
However, traditional personality rights jurisprudence evolved in an era before artificial intelligence. Courts primarily dealt with advertisements, endorsements, or unauthorised commercial use of celebrity images. Deepfakes represent something fundamentally more dangerous.
They create synthetic performances.
The victim is no longer merely associated with a product or publication. The victim appears to personally speak, act, endorse, or behave in ways they never actually did.
The Bombay High Court’s Intervention in Suniel Shetty v. John Doe
The Bombay High Court’s judgment in Suniel V. Shetty v. John Doe Ashok Kumar marks one of the most important judicial responses to AI-generated identity misuse in India.
Actor Suniel Shetty approached the Court seeking protection of his personality rights, privacy rights, and right to live with dignity under Article 21 of the Constitution. The suit alleged that various unidentified parties had created and circulated deepfake content, AI-generated images, impersonation material, and misleading promotional content using his persona and that of his family members.
The Court framed several crucial legal questions:
- Whether celebrity personality rights are enforceable against AI-generated misuse;
- Whether deepfakes violate privacy and dignity under Article 21;
- Whether synthetic impersonation constitutes passing off;
- Whether intermediaries can be directed to remove infringing content;
- Whether ex-parte relief may be granted against anonymous defendants.
The Court answered these questions decisively in favour of the plaintiff.
Recognition of Deepfakes as Constitutional Harm
The Court made an extremely significant observation while describing the misuse of technology:
“The same can best be described as a lethal combination of a depraved mind and the misuse of technology resultantly causing harm to the Plaintiff’s personality rights.”
This language is notable because the Court did not treat the dispute merely as a commercial disagreement. Instead, it recognised the moral and constitutional dimensions of AI-generated identity misuse.
The Court further observed:
“The unauthorized creation/uploading of deepfake images of the Plaintiff on social media platforms constitutes a grave infringement not only of his personality rights but also of his right to live with dignity.”
This marks an important jurisprudential shift.
Indian courts are increasingly recognising that deepfake harms are not confined to economic loss or defamation. They implicate dignity itself.
The Court also held that the unauthorised use of AI-generated images involving the plaintiff’s family members constituted a blatant invasion of privacy and fundamental rights.
Passing Off and Synthetic Endorsements
The Court accepted that AI-generated impersonation and fake promotional content may constitute passing off.
Passing off traditionally requires:
- Goodwill;
- Misrepresentation;
- Resulting damage.
Deepfakes satisfy all three.
Celebrities possess substantial commercial goodwill attached to their identity. AI-generated endorsements falsely imply association or approval. Such misuse damages both the celebrity’s reputation and consumer trust.
The Court therefore recognised that synthetic media may amount to commercial misrepresentation even without direct human impersonation.
Ex-Parte Relief and Dynamic Injunctions
Recognising the speed at which deepfake content spreads online, the Court granted ex-parte interim relief against both identified and anonymous defendants.
This aspect is particularly important.
Digital harms often become irreversible within hours. Traditional litigation timelines are ill-suited for viral synthetic media. By the time notice is issued and hearings conclude, the damage may already be global.
The Court therefore adopted a pragmatic approach and directed intermediary platforms such as Meta and X Corp to remove infringing content and comply with future takedown requests involving similar material.
This demonstrates judicial recognition that platform cooperation is essential in combating deepfake harms.
The Inadequacy of Existing Indian Law
Although Indian courts have shown considerable creativity in protecting personality rights, existing statutory protections remain fragmented and reactive.
Information Technology Act, 2000
Certain provisions of the Information Technology Act may indirectly apply to deepfakes.
Section 66E
Criminalises the violation of privacy through the non-consensual capture, publication, or transmission of images of a person's private area.
Sections 67 and 67A
Penalise the publication or transmission of obscene material and sexually explicit material, respectively, in electronic form.
These provisions are useful in cases involving intimate deepfake pornography. However, they do not comprehensively address non-sexual synthetic impersonation, fake endorsements, political disinformation, or identity replication.
Bharatiya Nyaya Sanhita (BNS)
The Bharatiya Nyaya Sanhita, 2023, which replaced the Indian Penal Code, retains defamation provisions covering "visible representations," language broad enough to reach deepfake imagery.
While this may assist in addressing reputational injury caused by deepfakes, criminal defamation remains a limited remedy because it addresses outcomes rather than the underlying act of synthetic replication.
Digital Personal Data Protection Act, 2023
The DPDPA offers potentially significant protections because facial features and voice patterns are identifiable personal data whose processing ordinarily requires consent.
Training AI systems on such data without consent arguably violates informational privacy.
However, the Act does not explicitly regulate synthetic identity creation or provide specialised remedies for deepfake misuse.
The Core Structural Problem
The deeper issue is that Indian law still lacks a codified right over digital identity itself.
A person may own physical property, copyrighted work, trademarks, and confidential information. Yet the law remains uncertain regarding ownership over one’s algorithmically reproducible persona.
This gap becomes especially dangerous for ordinary individuals who lack the resources, fame, or legal access available to celebrities.
The wealthy may secure injunctions.
Most victims cannot.
Deepfakes and the Crisis of Consent
Perhaps the most disturbing aspect of deepfake technology is the collapse of consent.
Traditionally, identity misuse involved unauthorised publication or misrepresentation. Deepfakes go further.
They manufacture a digital self.
The victim appears to willingly participate in fabricated conduct. This creates profound psychological and social harm because visual and auditory evidence possesses extraordinary persuasive power.
The law has always relied heavily upon visual authenticity.
Deepfakes destabilise that foundation.
Victims frequently experience:
- Anxiety and emotional trauma;
- Loss of professional credibility;
- Social ostracisation;
- Psychological distress;
- Permanent fear of resurfacing content.
Women disproportionately suffer such abuse, particularly through non-consensual intimate deepfake material.
This is not merely an issue of privacy.
It is an issue of bodily autonomy translated into digital form.
The legal system must therefore recognise that identity is not simply information.
Identity is personhood.
Comparative Global Responses
Several jurisdictions have already begun developing specialised deepfake regulations.
United States
Many American states recognise statutory rights of publicity.
California has enacted legislation restricting deceptive political deepfakes and non-consensual synthetic pornography.
Recent proposals such as the NO AI FRAUD Act seek to establish enforceable rights over voice and likeness.
European Union
The EU Artificial Intelligence Act imposes transparency obligations on synthetic media: deployers of deepfake systems must disclose that content has been artificially generated or manipulated, and AI-generated content must generally be labelled in a machine-readable, detectable form.
China
China has adopted stringent “deep synthesis” regulations requiring watermarking, identity verification, and mandatory disclosure of AI-generated content.
While criticised for excessive state control, these regulations demonstrate a proactive approach to synthetic media governance.
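Watermarking requirements of the kind China mandates can be pictured with one classical technique: least-significant-bit (LSB) embedding, sketched below in Python. This is a conceptual illustration only; LSB marks are trivially destroyed by re-encoding, and production schemes rely on robust, often cryptographically signed watermarks.

```python
import numpy as np

def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide watermark bits in the least-significant bit of the first pixels."""
    flat = pixels.flatten()                       # flatten() copies, original untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the watermark back out of the least-significant bits."""
    return pixels.flatten()[:n_bits] & 1

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)    # toy greyscale frame
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)    # "synthetic" tag bits

stamped = embed_lsb(image, mark)
recovered = extract_lsb(stamped, mark.size)
# Each pixel value changes by at most 1, so the mark is visually imperceptible.
```

The regulatory point survives the simplification: a mandatory, machine-readable mark lets downstream platforms and investigators distinguish synthetic media without relying on human perception, which deepfakes are designed to defeat.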
Compared to these jurisdictions, India remains dependent primarily on constitutional interpretation and intermediary liability.
This is unsustainable in the long term.
Toward a Right to Digital Integrity
The deepfake crisis cannot be solved solely through takedown orders or scattered litigation.
India requires a comprehensive legal framework centred on digital identity protection.
The most meaningful reform would be recognition of a distinct “Right to Digital Integrity.”
Such a right should include:
- Control over one’s digital likeness;
- Consent-based use of biometric identity;
- Protection against synthetic replication;
- Civil and criminal remedies for misuse;
- Compensation mechanisms;
- Fast-track injunctive relief.
This right should not remain limited to celebrities.
Every individual possesses dignity.
Every individual possesses identity.
The law cannot reserve meaningful protection only for those wealthy enough to litigate.
Reforming Intermediary Liability
Intermediaries must also bear greater responsibility.
Platforms hosting synthetic media should be required to:
- Label AI-generated content;
- Implement deepfake detection systems;
- Respond rapidly to complaints;
- Preserve origin data for investigation;
- Publish transparency reports.
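The first two obligations above, labelling and verifiable provenance, can be sketched with a minimal record that binds an "AI-generated" declaration to the exact bytes of a file. The schema and field names below are hypothetical; real deployments use signed standards such as C2PA content credentials rather than a bare JSON record.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_synthetic(content: bytes, generator: str) -> str:
    """Build a provenance record binding an 'AI-generated' label to these exact bytes."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),   # ties the label to the file
        "ai_generated": True,
        "generator": generator,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

def verify_label(content: bytes, record_json: str) -> bool:
    """A takedown reviewer can check the label still matches the uploaded bytes."""
    record = json.loads(record_json)
    return (record.get("ai_generated") is True
            and record.get("sha256") == hashlib.sha256(content).hexdigest())

clip = b"\x00\x01raw synthetic video bytes"
record = label_synthetic(clip, generator="hypothetical-model-v1")
ok = verify_label(clip, record)
tampered = verify_label(clip + b"edit", record)   # any edit breaks the binding
```

Because the hash covers the full content, a record like this cannot be silently reattached to different material, which is precisely what makes provenance useful both for rapid complaint handling and for preserving origin data for investigation.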
At the same time, safeguards must protect legitimate satire, parody, journalism, and artistic expression.
The challenge is not banning artificial intelligence.
The challenge is preventing the weaponisation of identity.
Conclusion
Deepfake technology represents one of the greatest constitutional and legal challenges of the digital age.
The danger of deepfakes lies not merely in deception but in their ability to sever identity from consent. They transform a person’s face, voice, and personality into manipulable digital assets capable of exploitation without authorisation.
The Bombay High Court’s judgment in Suniel V. Shetty v. John Doe Ashok Kumar is therefore far more than a celebrity protection case.
It represents an important constitutional recognition that synthetic identity misuse is an assault on dignity itself.
By acknowledging that AI-generated impersonation violates personality rights, privacy, and the right to live with dignity under Article 21, the Court has taken a decisive step toward adapting Indian law to the realities of artificial intelligence.
Yet judicial innovation alone cannot resolve the structural deficiencies of the current legal framework.
India still lacks:
- A codified right of publicity;
- Comprehensive deepfake legislation;
- Effective international enforcement mechanisms;
- Uniform platform accountability;
- Accessible remedies for ordinary victims.
As artificial intelligence becomes increasingly sophisticated, the law must evolve with equal urgency.
The future of digital freedom depends not merely on regulating technology but on preserving the integrity of human identity within technological systems.
A democratic constitutional order cannot permit individuals to lose ownership over their own face, voice, and existence.
In the age of synthetic media, dignity itself must become digitally enforceable.