Deepfakes, doxing and digital harm: New report maps India’s gendered online abuse crisis
From manipulated images and online stalking to caste- and sexuality-based targeting, digital harm has deepened even as redress systems lag. The latest data and survivor accounts reveal a crisis where access has expanded, but protection has not.
For survivors of online abuse, the internet often feels like an extension of the same patriarchal control they face offline, where fear, shame, and a lack of trust in institutions make reporting difficult. A new study by Breakthrough India, a Delhi-based NGO, and Equality Now, a global feminist legal advocacy organisation, released on November 4, documents how online abuse in India reflects and reinforces existing inequalities of caste, class, gender, and sexuality. Online abuse includes acts like doxing (publicly sharing someone’s personal information without consent), stalking, impersonation, image-based abuse, and the non-consensual sharing of intimate images.
Titled ‘Experiencing Technology-Facilitated Gender-Based Violence in India’, the report noted that women, Dalit women and LGBTQIA+ persons are disproportionately targeted, with harms compounded by the permanence of abusive content online, its virality and “recidivism” (the same content resurfacing). It also flagged slow redressal mechanisms, legal gaps and the limited technical and institutional capacity of law enforcement and the justice system to effectively investigate and prosecute cases of online abuse.
Deepfakes and ‘nudify’ apps are changing the scale and texture of abuse
A recent news analysis by The Guardian reported a surge in AI-manipulated imagery used for extortion and harassment. It detailed how one survivor’s loan-application photo was altered and blasted across WhatsApp with her number, triggering a barrage of sexual calls. The piece, which reported findings from RATI Foundation, a Mumbai-based NGO that works on gender-based violence, child protection, and online safety, and Tattle, a digital rights collective, said 10% of all calls to RATI’s online abuse helpline in 2025 involved deepfakes or AI-manipulated sexual images.
Macro indicators point the same way: according to the Press Information Bureau, reported cybersecurity incidents rose from 1.029 million in 2022 to 2.268 million in 2024.
The Government of India’s Comprehensive Modular Survey – Telecom, 2025, reports that 86.3% of households have internet access within the premises. This has been attributed to massive digital infrastructure expansion, improvements in mobile affordability, and post-pandemic digitisation. This wider digital access, without strong safety checks, is leaving more people exposed to online gender-based abuse.
PIB reports show that cybercrime reporting in India has risen sharply in recent years. Analyses of the National Cyber Crime Reporting Portal and National Crime Records Bureau (NCRB) state trend notes show that offences like impersonation, online stalking, and image-based abuse are among the most common.
Equality Now and Breakthrough have also highlighted slow takedowns and inconsistent platform responses following online crimes; independent policy reviews echo that India lacks deepfake-specific provisions, leaving enforcement to a patchwork of IT Act and IPC sections. The study also notes that recent briefs to courts and regulators call for clearer duties on platforms and due-process-compatible removal timelines.
The Election Commission of India (ECI) and the Ministry of Electronics and IT (MeitY) have issued new advisories asking social media platforms to quickly remove AI-generated or deepfake content that could spread misinformation during elections.
For example, in March 2024, MeitY directed intermediaries to take down deepfakes within 24 hours of receiving complaints. The ECI also reminded political parties not to use manipulated media in campaigns.
What helps, according to the research
The Breakthrough India report urged survivor-centred justice (swift removal and evidence preservation), platform accountability tailored to Indian languages and contexts, capacity-building for police, the judiciary and lawyers on digital evidence, and intersectional data to track harms across caste, class and sexuality.
Global feminist tech groups studying India note that AI-driven gender-based violence is mostly sexual in nature, and call for safer platform design and faster response systems.
In an interview with HerStory, feminist researcher, journalist, and gender consultant Sanjukta Basu spoke about how women with public profiles increasingly withdraw from online life to avoid being targeted, shrinking their voice and visibility. She said solutions must therefore be structural too: enforceable duties for platforms, deepfake-aware law and policing, and survivor pathways that restore agency rather than retraumatise.
The Breakthrough India–Equality Now report recommended safety-by-design measures in technology platforms, mandatory transparency reports on takedown timelines, and the establishment of a dedicated nodal agency to coordinate between tech companies, law enforcement, and survivors.
It urged that digital rights be recognised as part of gender justice, ensuring that policies on data protection, AI use, and online safety are shaped by the lived experiences of women and marginalised communities. The study emphasised that training frontline responders, including police, lawyers, and judges, on gendered and caste-aware digital harm is key to preventing secondary victimisation during reporting.
Most importantly, it called for intersectional data collection on online abuse, disaggregated by gender, caste, class, region, and sexuality, arguing that India cannot address what it does not measure.
“The internet cannot be gender-equal in an unequal world,” it said.
The way forward lies not only in faster takedowns and stronger laws, but in redesigning systems of technology and justice so that women and queer users can inhabit digital spaces without fear, and with full agency.
Edited by Jyoti Narayan


