Fraud is surging in Europe’s financial services sector, with both the scale and sophistication of attacks growing on the back of advancements in artificial intelligence (AI).
A survey by Signicat, involving 1,206 fraud decision-makers at financial institutions across seven European countries, reveals that 76% of respondents believe fraud is now a bigger threat than three years ago, and 66% specifically view AI-driven identity fraud as a greater concern.
The study, conducted by Censuswide in March and April 2024, found that while fraud is increasing overall, some types are growing faster than others. Deepfakes and impersonation, in particular, have emerged among the top three most common types of identity fraud reported by financial institutions.
This trend aligns with Signicat’s own experience in detecting deepfake fraud. In 2021, deepfakes accounted for a mere 0.1% of fraud attempts; by 2024, the figure had soared to 6.5%, a staggering 2,137% increase over the period. This sharp rise in attempted deepfake fraud is part of a broader trend, with overall fraud attempts tracked by Signicat up 80% over the past three years.
The rise of deepfakes
More specifically, the research found that deepfakes have risen to become the most prevalent threat in eID fraud. These schemes aim to undermine national digital identity systems either by taking over an existing account or creating a false identity.
eID systems are digital platforms that allow individuals or organizations to prove their identity online, securely and efficiently. These systems are designed to verify a person’s identity electronically, enabling them to access various services, both public and private, without needing physical documents.
While eIDs are effective against many types of fraud, some criminals are able to subvert these systems, leveraging AI to clone voices and video with striking accuracy to deceive organizations or individuals. For example, a deepfake might prompt a victim to authorize a payment where no ID check is required on the fraudster’s end, rendering eIDs ineffective in such scenarios. Deepfakes can also be used to direct victims to fake eID verification sites that steal logins, or as part of a “man-in-the-middle” attack.
Some 60% of the fraud executives polled for the Signicat study said that eID fraud is a bigger threat today than three years ago, compared to 74% saying the same of fraud in general. This suggests that while eIDs are helping slow the growth of fraud, they are not enough to fully stop it.
The cost of AI-driven fraud
Overall, findings from the Signicat survey reveal that criminals are increasingly relying on AI to conduct fraud. An estimated 42.5% of detected fraud attempts now use AI, the study found, with 29% of them considered successful. For some respondents, that rate is even higher: one in nine estimated that AI is used in as much as 70% of fraud attempts against their organization.
These AI-driven frauds are becoming an increasingly expensive challenge. Respondents estimate that 38% of the revenue lost to fraud was due to AI-driven identity fraud. This suggests that, while AI-driven identity fraud is not yet more successful than other forms of identity fraud, it is more lucrative, being used for more sophisticated, high-value scams.
A separate analysis by Deloitte’s Center for Financial Services predicts that generative AI (genAI) could enable fraud losses to reach US$40 billion in the US alone by 2027, rising at a compound annual growth rate of 32% between 2023 and 2027. This growth is expected to be fueled by advancements in genAI, which will enable the creation of highly realistic images and videos that can convincingly impersonate individuals.
Businesses embrace technology to address evolving fraud landscape
Results from the Signicat study show that financial services providers are aware of the threat of AI. Over three-quarters of the businesses polled have specialist teams dedicated to the issue of AI-driven identity fraud, are upgrading their fraud prevention technology stack, and expect to have more budget to do so.
However, despite this awareness, just under a quarter of respondents have actually started implementing measures; most plan to do so in the next year. Smaller organizations are further behind, with only 18% having mitigation in place. This points to a delay in deploying the necessary defenses against AI-driven identity fraud.
Financial services companies are also adopting advanced technologies to combat this evolving threat. JP Morgan, for example, uses AI to reduce fraud, streamline payment validations, and improve overall customer satisfaction.
Additionally, companies are partnering with tech firms to enhance their fraud detection capabilities. For example, Microsoft has partnered with risk assessment firm Moody’s to develop enhanced risk, data, analytics, research and collaboration solutions powered by genAI.