AI-generated deepfake IDs: A looming threat to KYC
The growing sophistication of deepfake technology, combined with the increasing accessibility of these tools, has created an endemic challenge, feeding fraud rings and organised crime at scale.
Know Your Customer (KYC) is the bedrock of financial security. Every time a user opens a bank account, applies for a loan, or signs up for a new digital service, they undergo KYC verification. This process, which typically involves government-issued IDs, proof of address, and sometimes even a video call, is designed to prevent money laundering and fraud, and has historically been a critical barrier against illicit activity.
However, current Anti-Money Laundering (AML) and KYC processes, which rely on components such as document verification, selfie capture, and basic liveness detection, are fundamentally vulnerable. Each of these components, meticulously designed as a safeguard, can now be defeated with a high degree of accuracy by deepfake technology.
Consider real-time face-swapping apps that are now readily available online. These allow attackers to impersonate a legitimate user with uncanny realism, matching expressions, blink rates, and lip movements.
According to Entrust’s 2025 Identity Fraud Report, deepfake attacks occurred on average once every five minutes in 2024, while digital document forgeries surged 244% year-over-year, now comprising 57% of all document fraud.
A report by Sensity reveals that over 10,000 tools are available for image generation, more than 2,200 tools for replacing faces in videos and creating digital avatars, and over 1,000 tools for generating or cloning voices. This isn't niche technology confined to experts; it's widely accessible, democratised deception.
In a Video KYC context, these sophisticated synthetic threats can:
● Pass both passive and active liveness checks
● Evade biometric systems entirely
● Fool both automated and manual review workflows
Most KYC systems are simply not equipped to detect such advanced attacks, leaving them exposed to high-confidence fraud. Experts now recognise that traditional liveness detection alone can no longer serve as a dependable safeguard. The fight against deepfakes is a direct test of the adaptability of fraud detection systems.
Financial institutions need sophisticated AI-powered detection systems that go beyond simple gestures to analyse details often imperceptible to the human eye or ear.
This demands machine learning models trained on large, diverse datasets of both real and synthetic content, capable of identifying anomalies in facial movements, eye blinks, lip synchronisation, vocal inflections, and even the minute pixel-level details of an image or video.
This depth of technical capability is what stops deepfakes from slipping through.
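To make the pixel-level point concrete, here is a minimal illustrative sketch in Python of one such signal: synthetically generated frames often carry tell-tale high-frequency energy in the Fourier spectrum. The function names, the radial cutoff, and the reliance on a single spectral heuristic are assumptions for illustration; a production detector would combine many learned features.

```python
# Illustrative sketch: one pixel-level cue a deepfake detector might use.
# GAN-generated frames often show anomalous high-frequency spectral energy.
# This is a toy heuristic, not a production model.
import numpy as np

def high_freq_energy_ratio(gray_frame: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy beyond a radial frequency cutoff (0..1)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_frame))) ** 2
    h, w = gray_frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalised radial distance from the centre of the shifted spectrum
    radius = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())

def frame_anomaly_score(frames: list) -> float:
    """Mean high-frequency ratio across frames; higher can suggest synthesis."""
    return float(np.mean([high_freq_energy_ratio(f) for f in frames]))

# Usage with dummy data; a real system would decode actual video frames.
frames = [np.random.rand(128, 128) for _ in range(8)]
print(f"anomaly score: {frame_anomaly_score(frames):.3f}")
```

In practice, hand-crafted cues like this are only features; they would feed a model trained, as described above, on large labelled sets of real and synthetic media.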
The systemic threat that deepfakes pose
Beyond individual account fraud, the accessibility of deepfake tooling now feeds fraud rings and organised crime at scale.
Consider the rise of Fraud-as-a-Service operations: underground markets that sell the components needed to construct synthetic identities. For instance, one can find photo sets with hundreds of images across different ethnicities, tailored to specific fraudulent needs.
In the context of money laundering, these synthetic identities can also act as ‘mules’. Recruiting human mules has often been a logistical challenge for extensive fraud rings; with deepfakes and synthetic identities, fraudsters can now generate entirely fictitious individuals to serve that role.
For instance, in one high-profile fraud investigation, a digital bank reported a surge in accounts that had passed selfie checks and document verification but were later identified as synthetic deepfakes. The institution noted that the attackers had used commercially available tools to craft entire synthetic profiles, circumventing all traditional verification protocols.
Next-gen defences, minus the friction
Modern fraud detection strategies are being recalibrated for this very challenge. It’s no longer enough to detect whether a face is live; organisations must also ensure that the face hasn’t been synthetically generated or tampered with.
Rather than relying on simple gestures like head turns or blinks, which are easily spoofed by deepfakes, advanced systems now employ passive liveness detection techniques. This involves a multi-layered analysis of:
● Spatial distortions around facial features
● Temporal inconsistencies in expressions
● Lighting anomalies in synthetic frames
● Morphing artifacts invisible to the human eye
● Authenticity of camera source (Injection Attack Prevention)
All of this happens without introducing unnecessary user friction, with response times under one second, suitable for real-time onboarding environments. A simplified sketch of how such signals might be combined follows below.
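As a hedged illustration of that layered idea, this Python sketch fuses the signals listed above into a single synthetic-risk score. The signal names, weights, and threshold are hypothetical placeholders; real systems learn and calibrate these on labelled attack data.

```python
# Illustrative sketch: fusing passive liveness signals into one risk score.
# Signal extractors, weights, and the threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class LivenessSignals:
    spatial_distortion: float      # 0..1, artefacts around facial features
    temporal_inconsistency: float  # 0..1, frame-to-frame expression jumps
    lighting_anomaly: float        # 0..1, implausible illumination in frames
    morphing_artifact: float       # 0..1, blend/morph residue
    injected_source: float         # 0..1, likelihood the feed is not a real camera

WEIGHTS = {
    "spatial_distortion": 0.25,
    "temporal_inconsistency": 0.20,
    "lighting_anomaly": 0.15,
    "morphing_artifact": 0.20,
    "injected_source": 0.20,
}

def synthetic_risk(signals: LivenessSignals) -> float:
    """Weighted sum of per-signal scores; higher means more likely synthetic."""
    return sum(w * getattr(signals, name) for name, w in WEIGHTS.items())

# Example: weak signals everywhere except a strong morphing artefact.
signals = LivenessSignals(0.1, 0.05, 0.2, 0.7, 0.0)
risk = synthetic_risk(signals)
print("reject" if risk > 0.2 else "accept", f"(risk={risk:.2f})")
```

Because the analysis runs passively on the frames the user already submits, no extra gestures are requested and the sub-second budget can hold.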
According to the Financial Action Task Force (FATF), digital ID systems that combine layered authentication, robust data integrity checks, and passive behavioural analysis are more effective in countering synthetic identity fraud in real-time onboarding scenarios.
With a defence-in-depth approach, the most robust identity decisioning systems don’t look at signals in isolation. Superior liveness detection is reinforced by comprehensive document verification, global AML checks, and device intelligence that work in layers to detect even the most convincing synthetic identities.
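To illustrate defence-in-depth in code, here is a hedged Python sketch of layered decisioning: any single very high risk score, or several moderate scores in combination, blocks or escalates the onboarding. The layer functions, session fields, and thresholds are hypothetical stand-ins for real verification services.

```python
# Illustrative sketch of defence-in-depth identity decisioning.
# Layer functions, session fields, and thresholds are hypothetical.
from typing import Callable

# Each layer returns a risk score in 0..1 (stubs standing in for real services).
def liveness_check(session: dict) -> float: return session.get("liveness_risk", 0.0)
def document_check(session: dict) -> float: return session.get("doc_risk", 0.0)
def aml_screening(session: dict) -> float: return session.get("aml_risk", 0.0)
def device_intel(session: dict) -> float: return session.get("device_risk", 0.0)

LAYERS: list[Callable[[dict], float]] = [
    liveness_check, document_check, aml_screening, device_intel,
]

def decide(session: dict, hard_limit: float = 0.8, soft_limit: float = 1.5) -> str:
    """Reject on one confident layer; escalate when weaker signals accumulate."""
    scores = [layer(session) for layer in LAYERS]
    if max(scores) >= hard_limit:
        return "reject"         # a single layer is confident on its own
    if sum(scores) >= soft_limit:
        return "manual_review"  # no layer fires alone, but the sum is suspicious
    return "approve"

session = {"liveness_risk": 0.4, "doc_risk": 0.5, "aml_risk": 0.3, "device_risk": 0.4}
print(decide(session))  # "manual_review": moderate risk across every layer
```

This is the sense in which signals are not read in isolation: a deepfake that narrowly beats the liveness layer still has to look clean to the document, AML, and device layers simultaneously.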
By Sandesh GS, CTO, Bureau, a no-code fraud prevention and identity decisioning platform.
(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)