Even a two-second voice clip or one image is enough to create deepfakes, say experts

With the recent outrage over actor Rashmika Mandanna’s deepfake, HerStory examines the state of women’s safety on the internet and why swift redressal systems are important to counter such abuse.

Saturday November 11, 2023, 8 min Read

Recently, a video of actor Rashmika Mandanna surfaced on different social media platforms, showing her entering an elevator in a black yoga bodysuit and smiling at the camera. The footage was later found to be a deepfake, with Mandanna’s face morphed onto the body of British-Indian influencer Zara Patel.

This incident has brought into focus just how easily AI can be used to manipulate content, and reiterated the lack of agency women have over their online presence.

Deepfakes use artificial intelligence to create realistic-looking fake images and videos.

Reacting to the deepfake, Mandanna said, “I feel really hurt to share this and have to talk about the deepfake video of me being spread online. Something like this is honestly, extremely scary not only for me, but also for each one of us who today is vulnerable to so much harm.”

Following widespread outrage and criticism over this incident, the central government issued an advisory to major social media companies to identify misinformation, deepfakes, and other rule-violating content, and remove it within 36 hours of being reported.

"The Centre today issued an advisory to the significant social media intermediaries to ensure that due diligence is exercised, and reasonable efforts are made to identify misinformation and deepfakes, and in particular, information that violates the provisions of rules and regulations and/or user agreements," the Ministry of Electronics and IT (MeitY) said in a statement.

Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar called deepfakes a “dangerous and damaging form of misinformation”. In a post on X, he reminded social media platforms that, under the IT rules notified in April this year, they are under a legal obligation to ensure no misinformation is posted. Further, if any misinformation is reported by a user or the government, it must be removed within 36 hours. He added that platforms that do not comply can be taken to court under ‘Rule 7’ by the aggrieved person(s) under provisions of the IPC.

Soon after, despite the huge furore, deepfakes of actor Katrina Kaif started doing the rounds on the internet. An original scene from her movie Tiger 3, showing the actor in a fight sequence with Asian-American actor Michelle Lee, was morphed to present her in a different outfit, which many perceived as vulgar and indecent.

Any image or audio can be misused

Pattathil Dhanya Menon - India's first woman cybercrime investigator

So, for the uninitiated, what exactly are deepfakes? A portmanteau of “deep learning” and “fake”, deepfakes use artificial intelligence to create realistic-looking fake images and videos.

The term originated on Reddit around 2017, when some users began sharing fake videos they had created.

Since then, deepfakes have gone mainstream, with commercial apps letting users morph their faces onto those of celebrities for amusement.

In fact, the technology was also used to ‘resurrect’ several deceased celebrities.

However, the technology is now largely being used for harassment, especially targeting women.

A 2021 nationwide survey by Bumble found that 83% of Indian women have experienced some form of online harassment. The study also found that one in three women experiences it weekly.

A LocalCircles survey conducted earlier this year revealed that 76% of urban Indian women use the internet to stay in touch with family, friends, and others, while 57% use it to find information and entertainment.

The rise of deepfakes has raised yet another pertinent question about the safety of women online.

India’s first woman cybercrime investigator, Pattathil Dhanya Menon, says while AI has changed the face of the media and entertainment industry, its misuse is cause for deep concern.

“It’s easy to manipulate a photo or video available on the internet to scam people for money. And, that’s been happening a lot. With more deepfakes like the Rashmika Mandanna one coming out into the open, I hope the issue is amplified and the solutions are immediate,” she says.

She and her peers are studying how to deal with deepfakes because most First Information Reports (FIRs) registered so far still deal with financial issues. She points out that deepfakes can be countered as they fall under “impersonation” under the IT Act.

“It is legally admissible under the IT Act and one can take action,” adds Menon.

Senthil Nayagam, Founder of Muonium AI, which uses generative AI to create content for movies and music, says even a two-second audio clip available on the internet can be cloned and misused.

“Earlier, you needed an hour of audio, but gradually the requirement has come down. Today, a two-second sample is enough to clone a voice,” he says.

Nayagam adds that staying away from social media is not a realistic option; what is needed is a quick redressal system, in terms of reporting and action taken by social media platforms.

Soumen Datta, Associate Partner, Digital Transformation, BDO India, says that in 2019, a staggering 15,000 deepfake videos surfaced online, with 99% of them featuring morphed faces of celebrities.

“To address these challenges, a combination of technological advancements, policy interventions, public awareness campaigns, and ethical considerations are required to mitigate the negative impacts of deepfake technology on our society. It is very important to have specific laws against deep-fakes in countries like India where we are dealing with a massive 140 crore+ population with an emerging presence on social media platforms,” he says.

In an announcement on November 8, Meta said advertisers will have to disclose whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic-sounding audio, that was digitally created or altered to:

  • Depict a real person as saying or doing something they did not say or do; or
  • Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
  • Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.

Menon believes there is, so far, no rule that can be invoked to control any kind of deepfake or AI intervention.

“We have only seen the draft of the Digital India Act until now and it addresses a lot of issues. It is yet to be seen if this will be included in the Act. All these require a lot of deliberations and discussions,” she says.

She adds that social media platforms must be made responsible and accountable for their content.

How safe are women on the internet?

Shruthi Manjula Mohan, a regular Instagram user, says, “If an actress like Rashmika Mandanna with access to resources and legal options is worried about her safety, imagine the plight of the larger public and the women. It's threatening to even note how consent and regulations are bypassed by these newer technologies.”

As someone who posts personal images and videos regularly, Mohan says her first reaction would be to curl into a shell and delete her presence on social media platforms or go private.

But the larger question, she believes, is the safety of women on the internet.

“If I ever am in such a situation, I will seek legal help. I will educate myself about the gaps in the platform, as opposed to withdrawing myself from the joy of a virtual presence,” she says.

She emphasises that the rise of deepfakes calls for stringent laws to punish the perpetrators, and severe penalties for platforms that compromise the safety of their users.

“Social media platforms have to take cognisance of the loopholes in their systems before putting the public through the evils of their platforms,” she adds.

Riya Mehta, another Instagram user, has stopped posting regularly and instead keeps checking if anyone is misusing her images on fake accounts.

“This Rashmika Mandanna incident is shocking because, with AI, only an expert can determine if an image is fake or real. These incidents are becoming common and are taking the joy out of posting on social media,” she says.

So, how can one be safe from miscreants on the internet?

Menon says there’s no way one can be safe on the internet; it all lies in the hands of the person who wants to use your content.

“The less you are exposed, the safer you are. But even one picture of yours on the internet can be used wrongfully. When you create AI deepfakes, you need multiple images of a person to make it look very convincing. Celebrity deepfakes are more common because there are high-definition images and videos of them available online. Is it impossible to create deepfakes of others? No,” she says.

What then, is the solution?

“The solutions are at a very nascent stage as far as forensic investigations are concerned, from my side. Until now, we have been studying the possibilities of AI and deepfakes. But on the other side, we are just starting to explore and learn more about it. Not only stricter laws, but swift implementation and an understanding of the technologies by people in the legal system is very important,” she signs off.


Edited by Saheli Sen Gupta