INTRODUCTION
A deep fake is a form of digitally manipulated media, typically an image, video, or audio recording created using artificial intelligence (AI), that makes something appear real when it is not. The term combines "deep learning," a form of AI that learns patterns from large amounts of data, with "fake," meaning fabricated or altered content.
Deep fake technology works by analyzing real photos, videos, or voice samples of a person and then generating new content that closely imitates their face, expressions, voice, or movements. As a result, a person can be shown saying or doing things they never actually did.
WHEN DANGEROUS USE CAUSES REAL HARM
Artificial intelligence tools such as Grok, Gemini, ChatGPT and other generative models are designed to assist users with information, creativity, and problem-solving. However, like any powerful technology, they can cause serious harm when misused. One of the most troubling forms of abuse involves attempts to use AI to digitally “undress” women or children, or to create sexualized images without consent. This misuse highlights the urgent need for ethical safeguards, accountability, and public awareness.
Some users have attempted to manipulate AI systems to generate altered images or descriptions that remove clothing or sexualize real people. Even when such content is fabricated, the harm is real. These acts fall under non-consensual sexual exploitation, a violation of personal dignity and privacy.
For women and girls, this type of abuse reinforces harassment, objectification, and gender-based violence. Victims may experience emotional distress, reputational damage, fear, and long-term psychological harm despite never consenting to or participating in the content.
1. In a striking recent example from India, a Telugu actress filed a formal complaint with the Hyderabad police after a campaign of online harassment escalated into the circulation of AI-generated vulgar content involving her image. According to reports, the harassment started after she responded publicly to comments about women’s clothing, and soon she was targeted with sexually abusive messages and fabricated content. The complaint included more than 30 links showing offensive material, and authorities have registered a case against dozens of individuals, including social media influencers and account operators, under relevant cybercrime laws. Platforms are working with police to identify those responsible and remove the objectionable material[1].
AI tools, when misused, can become powerful instruments of oppression, especially against women and children. Instead of encouraging open dialogue and free expression, such misuse creates an environment of fear and intimidation. When someone, particularly a woman or a child, voices an opinion in public that others disagree with, they may be targeted not with reasoned debate, but with fabricated AI-generated content designed to shame, silence, or discredit them. Even when such content is proven to be false, the emotional distress, social stigma, and lasting harm often remain. The burden unfairly shifts to the victim to defend themselves against something they never did.
Women and children are already more vulnerable to online harassment, leading many to withdraw from public spaces, social media, or civic discussions altogether. In this way, AI misuse does not merely harm individuals; it harms an entire gender and curbs freedom of speech.
2. A case was filed in Lucknow, where a man was arrested after using AI-generated nude images sourced from public profiles to blackmail young women. He reportedly fabricated the images using generative tools and forced victims to pay to avoid the distribution of supposedly compromising pictures. The accused confessed to blackmailing a girl from Vikas Nagar in May 2024 by creating nude images and threatening to circulate them unless she paid him. "Another victim, a student, was coerced into meeting him at a hotel to avoid dissemination of compromising photos," STF officials stated. Police investigations were triggered by a complaint filed at the Ghazipur (Indiranagar) police station by a woman whose daughter faced severe psychological distress due to the cyber-blackmail[2].
3. Another case was filed in Delhi, where a 21-year-old man was arrested for allegedly blackmailing a female college student by creating AI-generated sexually explicit images from her profile photos on a social media app and threatening to share them unless she paid him money. He had befriended her using a fake female account and manipulated her pictures with AI to make them explicit. When he attempted to extort payment, the victim reported the matter to the cyber police, who traced the chats and payments and arrested him[3].
These cases highlight how AI tools, even those not designed for sexual content, can be twisted by users into producing or amplifying exploitative material that violates dignity, harms reputations, and causes lasting trauma.
DEEP FAKE AND PERSONALITY RIGHTS:
Cases where personality rights are granted:
Recently, Indian law has strongly reinforced personality rights as a crucial shield against AI and deep fake misuse, linking them to the fundamental right to dignity and privacy under Article 21 of the Constitution. Courts have actively granted injunctions to celebrities and public figures to stop the unauthorized exploitation of their identity.
Anil Kapoor v. Simply Life India & Ors. (2023)[4]: The Delhi High Court issued a pioneering omnibus ex parte injunction that explicitly addressed AI deep fakes. The ruling prohibited the unauthorized use of the actor's name, image, voice, and likeness for commercial gain. This case set an important precedent for protecting a performer's entire persona in the digital age.
Sunil Shetty v. John Doe Ashok Kumar (2025)[5]: The Bombay High Court described the misuse of technology to create deep fakes as a "lethal combination of a depraved mind and the misuse of technology". The court granted an urgent ex parte interim injunction, ordering social media platforms Meta and X Corp to take down deep fake images and videos, some of which were obscene, of the actor and his family. The court held that such content infringed not only commercial rights but also the right to live with dignity.
Aishwarya Rai Bachchan v. Aishwaryaworld.com & Ors. (2025)[6]: The Delhi High Court granted significant relief against websites and platforms circulating AI-generated, morphed intimate content, fake endorsements, and merchandise. The court directed e-commerce platforms and the defendants to remove infringing URLs within 72 hours and to provide subscriber information to help identify the infringers.
Asha Bhosle v. Mayk Inc. & Ors. (2025)[7]: The Bombay High Court established India's first judicial precedent specifically against AI voice cloning. It ruled that voice is an integral part of a person's identity and cannot be replicated without consent, granting an expansive injunction that covered voice models and synthetic voices across all media.
Karan Johar v. India Pride Advisory (P) Ltd. (2025)[8]: The Delhi High Court restrained entities from misusing the filmmaker's persona using AI, deep fakes, and GIFs for commercial purposes. However, the court also noted that content falling under fair use exceptions like parody or satire might be permissible.
Deep fakes are also being used in political campaigns. During the 2024 Lok Sabha election, a deep fake video appeared to show Ranveer Singh criticizing a political party, and another appeared to show Aamir Khan supporting a political party. Such videos can distort political campaigns and influence elections.
In recent times, several courts across India have delivered strong judgments against the misuse of deep fake technology. The judiciary has consistently recognized that the creation and circulation of fabricated digital content can lead to serious societal consequences if left unchecked. Indian courts have observed that deep fakes not only harm individual victims but also threaten privacy, dignity, and trust in digital spaces.
In the cases discussed above, courts have ruled decisively in favor of the victims, acknowledging the personal and reputational damage caused by non-consensual AI-generated content. Importantly, these judgments have reinforced the protection of personality rights, affirming that an individual’s identity, likeness, and dignity cannot be exploited or manipulated without consent, even through artificial intelligence.
By taking a firm stance against deep fake misuse, Indian courts have sent a clear message that technological advancement cannot come at the cost of human rights. These rulings reflect a growing judicial awareness of the real-world impact of digital abuse and underline the responsibility of individuals and platforms to ensure ethical and lawful use of emerging technologies.
Cases where personality rights are not granted:
However, while Indian courts increasingly recognize personality and publicity rights, they have consistently clarified that such rights are not absolute. Judicial decisions demonstrate a careful balancing exercise between a celebrity’s proprietary interests in their persona and competing constitutional and legal principles such as freedom of expression, the “own-name” defense, artistic creativity, and public interest.
ICC Development (International) Ltd. v. Arvee Enterprises and Anr., 2003 (26) PTC 245[9]: The court held that the right of publicity has evolved from the right of privacy and applies only to an individual or any indicia of an individual’s personality. Therefore, the right of publicity does not extend to non-living entities. An individual may acquire the right of publicity by virtue of association with an event; however, that right does not apply to the event in question, nor the organizer behind the event. Any effort to transfer the right of publicity from the individual to the organizer (a non-human entity) of the event would violate Articles 19 and 21 of the Constitution of India.
Gautam Gambhir v. D.A.P. & Co. & Anr. (Delhi High Court, 13 December 2017)[10]: In this case, the plaintiff claimed that restaurants operating under names such as “Hawalat by Gautam Gambhir” unlawfully exploited the cricketer’s personality and goodwill by implying his endorsement. The court dismissed the suit, holding that the defendant was entitled to run a business in his own personal name, which happened to be identical to the plaintiff’s, and that there was no evidence of deliberate misrepresentation, false endorsement, or widespread public confusion; a single instance of alleged confusion was insufficient. The judgment clarified that in India, personality or publicity rights are not absolute and will not override the “own-name” defence unless there is clear proof that the use of a name actively misleads the public or damages the celebrity’s reputation or commercial interests.
Krishna Kishore Singh v. Sarla A. Saraogi & Ors.[11]: The Delhi High Court dealt with a suit by Krishna Kishore Singh, father of the late actor Sushant Singh Rajput, seeking to restrain filmmakers from making and distributing a film based on his son’s life without permission, claiming it violated privacy, personality, and publicity rights and could prejudice ongoing investigations. The court refused to grant an injunction, holding that the rights to privacy and personality are not inheritable and cease with death, that information widely reported in the public domain can be used in creative works, and that restricting the film’s release would unduly infringe the defendants’ constitutional freedom of speech and expression, while preserving the plaintiff’s right to seek damages if he could establish infringement on the merits.
Jaikishan Kakubhai Saraf v. The Peppy Store & Ors. (Delhi High Court, 15 May 2024)[12]: The Court restrained several defendants from commercially exploiting the plaintiff’s name, image, likeness, voice, persona, and registered trademarks such as “Bhidu” through merchandise, AI chatbots, distorted videos, and online content, holding that such use prima facie violated his personality and publicity rights.
However, it declined interim relief against one defendant, a YouTuber who had created a “Thug Life” edit using clips of the plaintiff, noting that the portrayal introduced no falsehoods and merely amplified an existing public perception of him as a formidable and commendable figure. The Court cautioned that restraining such content without hearing the creator could set a precedent that chills free speech, thereby emphasizing the need to balance the plaintiff’s personality, publicity, and moral rights with the defendant’s right to artistic and economic expression.
Ramgopal Verma & Ors. v. Perumalla Amrutha, 2020 SCC OnLine TS 3018[13]: In this case, the respondent/plaintiff filed a suit seeking a perpetual injunction against the appellants to restrain them from releasing a film titled "Murder," allegedly based on her life events involving her marriage, her husband's murder, and subsequent family tragedies. The respondent claimed that the appellants collected real-life information about her and her deceased family members without consent and intended to portray these events in a film, causing her mental agony and public embarrassment. The court recognized that events such as the respondent's marriage, her husband's murder, and the subsequent legal proceedings were already widely reported in the media and the public domain. It held that once events become a matter of public record, the right to privacy diminishes in relation to those events.
This approach highlights the Court’s emphasis on balancing personality, publicity, and moral rights with the right to artistic, economic, and expressive freedom, particularly where the use is transformative, non-deceptive, and not purely commercial.
HOW DOES LAW PROTECT ONE FROM SUCH MISUSE?
The Copyright Act and the Trade Marks Act protect an individual’s rights and work from being infringed, misused, or passed off. These Acts ensure that if someone’s work, name, or face is used in a wrongful way, the person has the right to seek legal redress.
The following are some of the relevant sections under the Copyright Act and the Trade Marks Act, along with the rights they protect.
The Copyright Act, 1957:
Sections 38, 38A, and 38B grant performers specific rights, including the right to be attributed as the performer and the right to prevent the distortion or modification of their performance in a way that harms their reputation (moral rights).
Section 51 defines copyright infringement, which applies if a deep fake uses a substantial portion of a copyrighted work (e.g., a film or sound recording) without permission.
Section 57 grants moral rights to authors to restrain or claim damages for any distortion or modification of their work.
The Trade Marks Act, 1999:
Individuals, especially celebrities, can register their names, signatures, or catchphrases as trademarks under Section 2(m).
Section 14 restricts the registration of a living person's name as a trademark without their consent.
The common law remedy of "passing off" is frequently used to protect against the unauthorized commercial use of an individual's persona, which creates a false impression of endorsement.
Passing off, in this context, implies the unauthorized use of a person’s name, image, or other similar personal attributes to create a false association with any matter or material with an intent to deceive the public. However, the case of passing off may hold water only if goodwill is demonstrated, which could potentially be done only if the celebrity or the individual’s goodwill has commercial implications in a specific jurisdiction.
RESPONSIBILITY OF AI DEVELOPERS AND PLATFORMS:
AI companies, including those behind tools like Grok, Gemini or ChatGPT, have a responsibility to actively prevent misuse. This includes:
· Strong content moderation and filtering
· Clear bans on sexual and exploitative use
· Rapid response to abuse reports
· Ongoing safety research and audits
· Cooperation with law enforcement when required.
