Know Your Customer (KYC) is an essential step in fraud prevention. Methods of verifying that an individual is who they say they are have come a long way, with biometrics, facial recognition, and other digital checks now used to confirm a person's status and legitimacy. However, like all processes, KYC has points of failure, and cybercriminals and fraudsters actively work to circumvent it.
Knowing who you are dealing with is essential to reducing financial crimes like fraud and money laundering. Typically, KYC fraud involves increasingly sophisticated synthetic IDs, with McKinsey finding that synthetic ID fraud is the fastest-growing financial crime in the USA. Now, however, a new threat has arrived to make KYC checks even more challenging. Digitization has given fraudsters ample and novel ways to circumvent KYC processes, and deepfakes and generative-AI-enabled document fraud pose an existential threat to knowing who your customer is. As AI provides ever more sophisticated ways to build synthetic IDs, what does the future hold for KYC?
Deepfakes and Synthetic ID
While deepfake fraud may seem cutting-edge, the technique is not new. In 2019, a British CEO transferred $243,000 to a fraudster after the scammer tricked him into thinking he was talking to the head of the company's parent organization, using a "deepfake" that simulated the executive's voice. Deepfakes are now a ubiquitous problem, with an estimated 500,000 video and voice deepfakes shared on social media sites globally in 2023. They are cheap to make, too: a tool from Tencent creates a three-minute deepfake video for around $145. Importantly, fake identity documents and KYC fraud are used to facilitate organized crime, such as money laundering and terrorist financing. If these ID documents and the associated verification elements can be made cheaply and simply, the method quickly becomes a favorite tactic for cybercriminals.
From deepfaked IDs to KYC fraud
Know Your Customer (KYC) has always been a risky business; as the first step in verifying the relationship between a service and a human, it is a natural target. Synthetic IDs are created by mixing real identifiers, such as stolen Social Security Numbers, with fabricated names and details. Even conventional, non-AI-generated synthetic identities are hard to detect: estimates cited by Reuters show a staggering 95% of synthetic identities used for KYC go undetected during onboarding. Deepfake-enabled synthetic IDs will be extremely difficult to detect unless verification is ongoing and tied to monitoring. Effective identification is no longer a one-off check; it requires ongoing, in-depth analysis of transactions and behavior, as the sketch below illustrates.
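To make the contrast with a one-off onboarding check concrete, here is a minimal, illustrative sketch of continuous behavioral monitoring: each customer gets a rolling baseline of recent transaction amounts, and every new transaction is scored against it. The class, window size, and thresholds are hypothetical examples for this article, not a description of any vendor's product.

```python
from collections import deque
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class CustomerProfile:
    """Rolling window of recent transaction amounts for one customer."""
    window: deque = field(default_factory=lambda: deque(maxlen=50))

    def risk_score(self, amount: float) -> float:
        """Z-score of the new amount against the customer's recent history.

        A one-off KYC check ends at onboarding; this score is recomputed
        on every transaction, so a synthetic identity that slipped through
        onboarding still has to behave plausibly ever after.
        """
        if len(self.window) < 10:          # not enough history yet
            self.window.append(amount)
            return 0.0
        mu, sigma = mean(self.window), stdev(self.window)
        self.window.append(amount)
        if sigma == 0:
            return 0.0
        return abs(amount - mu) / sigma    # how unusual is this amount?

profile = CustomerProfile()
for amt in [20, 25, 18, 22, 30, 19, 24, 21, 27, 23]:
    profile.risk_score(amt)                # build up a behavioral baseline
print(profile.risk_score(5000))            # huge z-score -> flag for review
```

The point of the design is that detection shifts from a single gate at account opening to a signal that is refreshed on every interaction.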
Two core verification methods used during KYC are most at risk from deepfake-equipped fraudsters:
Facial recognition
The market for facial recognition deployments is expected to reach $19.3 billion by 2032, reflecting how facial recognition is becoming standard KYC practice in many onboarding use cases. For example, challenger bank apps may require facial recognition during account setup. Deepfakes are an ideal way to trick the KYC processes used by FinTech vendors such as payment apps. Even processes that require liveness checks can be spoofed by deepfake videos.
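A common countermeasure is a randomized challenge-response liveness check: the user must perform unpredictable actions within a tight deadline, which defeats pre-rendered deepfake videos even though a real-time face-swap pipeline may still keep up. The sketch below is purely illustrative; the stub stream, prompts, and timings are hypothetical stand-ins for a real computer-vision pipeline.

```python
import random
import time

ACTIONS = ["blink twice", "turn head left", "turn head right", "smile"]

class StubVideoStream:
    """Stand-in for a live camera feed; a real system would run a
    face/gesture detection model over actual video frames."""
    def performed(self, action: str, deadline_s: float) -> bool:
        time.sleep(0.01)   # simulate a genuine user completing the action
        return True

def liveness_challenge(stream, rounds: int = 3, deadline_s: float = 2.0) -> bool:
    """Randomized challenge-response liveness check (illustrative sketch).

    A pre-rendered deepfake video cannot anticipate a random action
    sequence, and a live face-swap pipeline must respond within the
    deadline. This raises, but does not eliminate, the cost of spoofing,
    which is why liveness should be one signal among many.
    """
    for _ in range(rounds):
        action = random.choice(ACTIONS)
        print(f"Please {action} within {deadline_s:.0f}s")  # user prompt
        start = time.monotonic()
        if not stream.performed(action, deadline_s):
            return False                  # wrong action detected
        if time.monotonic() - start > deadline_s:
            return False                  # response arrived too late
    return True

print(liveness_challenge(StubVideoStream()))  # True for the simulated user
```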
Identity documents
Fake identity documents such as passports and driver's licenses have also been used to trick KYC checks. As far back as 2003, the FBI warned about driver's licenses being issued without due diligence and verification. Fast forward to 2022, when a report from Onfido found a shift away from synthetic IDs, with more than 90% of ID fraud based on a "complete reproduction of an original document." Generative AI is likely to be applied to producing realistic-looking identity documents, such as passports and identity cards, and the features typically used to authenticate these documents, such as hallmarks, will also be replicated.
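Part of the problem is that many document features are machine-verifiable but trivially computable, so a convincing forgery can pass them. For example, the check digits in a passport's machine-readable zone (MRZ) follow the public ICAO Doc 9303 algorithm sketched below; a generated document can carry perfectly valid check digits, so passing this check proves internal consistency, not authenticity.

```python
def mrz_check_digit(field: str) -> int:
    """ICAO Doc 9303 check digit over a passport MRZ field.

    Digits keep their value, letters map to A=10 .. Z=35, and the
    filler character '<' counts as 0; weights cycle 7, 3, 1.
    """
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field.upper()):
        if ch.isdigit():
            value = int(ch)
        elif ch.isalpha():
            value = ord(ch) - ord("A") + 10
        elif ch == "<":
            value = 0
        else:
            raise ValueError(f"invalid MRZ character: {ch!r}")
        total += value * weights[i % 3]
    return total % 10

# The ICAO 9303 specimen document number "L898902C3" has check digit 6.
print(mrz_check_digit("L898902C3"))  # -> 6
```

Because the algorithm is public, a forger generating a document from scratch can emit valid check digits as easily as a border system can verify them.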
AI-enabled fraud leaves the KYC process at serious risk. Once KYC is broken, money laundering and terrorist financing can soar. Nor is the threat limited to organized crime: even small-time cybercriminals are likely to take advantage, as successful cyber-attack methods often end up packaged "as-a-Service." If this happens with deepfakes for KYC, then banks, FinTechs, and eCommerce companies will find it increasingly difficult to identify customers with any degree of assurance.
An integrated view to fight deepfake KYC
Financial criminals are no strangers to novel methods of obfuscating and circumventing checks. When a new technology comes on the scene, a new payment mechanism is introduced, or a new way of doing business enters the arena, financial fraudsters will find ways to abuse it. Deepfake KYC fraud is simply the latest channel for carrying out financial crimes.
Deepfake KYC fraud is a significant threat that will help financial crime propagate and succeed. However, this level of intelligent fraud can be fought by applying equally intelligent technologies across the layers of human interactions and payment systems. Detecting and preventing deepfake KYC fraud requires an integrated view of financial crime; a one-shot solution that detects a deepfake video cannot fix the issue alone. The threat of deepfakes in KYC weaves through the multiple pathways of obfuscation associated with complex financial crimes such as money laundering, and these multi-part, complicated chains are only made apparent by using AI and machine learning to provide deep, real-time, ongoing monitoring of customers. Real-time monitoring is one of the most critical features of anti-financial crime solutions: establishing that an image or voice is fake is not, by itself, enough to stop financial crime. A simplified sketch of such an integrated view follows.
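As a purely illustrative example, an integrated view might combine independent risk signals (document analysis, deepfake detection, behavioral monitoring, network links) so that no single clean check can vouch for a customer. The signal names, weights, and escalation rule below are hypothetical, chosen for this sketch rather than drawn from any product.

```python
from dataclasses import dataclass

@dataclass
class RiskSignals:
    """Independent signals, each normalized to 0.0 (benign) - 1.0 (high risk).
    Names and weights are illustrative, not a production model."""
    document_forgery: float    # e.g. output of a document-analysis model
    deepfake_media: float      # e.g. output of a face/voice deepfake detector
    transaction_anomaly: float # e.g. behavioral score from ongoing monitoring
    network_links: float       # e.g. proximity to known mule accounts

def integrated_risk(s: RiskSignals) -> float:
    """Combine signals so no single clean check can clear a customer.

    A convincing deepfake may drive deepfake_media toward 0, but the
    composite still escalates if behavior or network signals look wrong;
    this is the 'integrated view' as opposed to a one-shot detector.
    """
    weights = {
        "document_forgery": 0.25,
        "deepfake_media": 0.25,
        "transaction_anomaly": 0.30,
        "network_links": 0.20,
    }
    weighted = sum(getattr(s, k) * w for k, w in weights.items())
    # Escalate on any single very strong signal, even if the rest look clean.
    strongest = max(s.document_forgery, s.deepfake_media,
                    s.transaction_anomaly, s.network_links)
    return max(weighted, strongest * 0.9)

# A flawless deepfake (media score 0.05) still trips review on behavior.
print(integrated_risk(RiskSignals(0.1, 0.05, 0.9, 0.6)))  # -> 0.81
```

The design point is that a flawless deepfake drives only one signal to zero; the composite still escalates when behavior or network signals look wrong.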
This integrated approach is why Eastnets has developed a suite, rather than a point solution, to tackle complex financial crimes. Our mantra is, "The need for good due diligence never stops."
If you want to stop deepfake fraudsters from harming your business, talk to Eastnets about how you can apply our anti-financial crime solutions.