Deepfake Gains Momentum as a Powerful Tool for Fraud and Deception

The rise of artificial intelligence (AI) and machine learning (ML) has given threat actors new and sophisticated tools to scam and defraud individuals. One particular concern is the emergence of deepfake technology, which allows the creation of fake videos, images and audio to impersonate people. The technology has become increasingly popular over the past couple of years, fueled by its growing accessibility.

New research by Kaspersky reveals how readily available deepfake tools have become on darknet marketplaces, with prices for creating fake videos starting as low as US$300 per minute. This accessibility poses a significant threat to businesses and individuals alike, as cybercriminals now possess the means to impersonate others, carry out fraudulent schemes, and steal sensitive data.

Despite these risks, there’s a notable gap in digital literacy among Internet users. A survey conducted by the Russian cybersecurity firm shows that while 51% of employees in the Middle East, Turkey and Africa claimed they could distinguish a deepfake from a real image, only 25% could actually do so in a test.

Similarly, research by University College London shows that people struggle to identify deepfake audio, even after training. A 2023 study of 529 individuals found that participants could correctly identify fake speech only 73% of the time. This figure improved only slightly after participants received training to recognize aspects of deepfake speech, highlighting the need for advanced detection systems to combat the growing threat posed by deepfakes.
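Automated detectors typically frame this as a binary classification problem over acoustic features extracted from speech. The snippet below is a minimal, illustrative sketch of that idea using scikit-learn; the random feature matrices stand in for real MFCC-style features from labelled clips, and none of it reflects the specific systems evaluated in the UCL study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: in practice, each row would hold acoustic features
# (e.g., MFCCs) extracted from a labelled genuine or synthetic clip.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 20))  # label 0: genuine speech
fake = rng.normal(0.5, 1.0, size=(500, 20))  # label 1: deepfake speech
X = np.vstack([real, fake])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# A simple linear classifier; production detectors use far richer
# features and deep models, but the real/fake framing is the same.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```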

The rise of deepfakes

Deepfakes use deep learning algorithms to generate highly realistic simulations of people saying or doing things they never actually did. The technology has drawn attention for its potential to enable various malicious activities, including spreading false information, blackmailing people and even disrupting elections.
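To make the mechanism concrete: the classic face-swap approach trains one shared encoder with a separate decoder per identity, so a face encoded from person A can be decoded through person B's decoder. The following is a minimal, illustrative PyTorch sketch of that architecture only; the layer sizes and the `SwapNet` class are hypothetical choices for illustration, not any particular tool's implementation, and real systems add adversarial losses, face alignment and blending.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent vector."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one identity's face from the shared latent space."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

class SwapNet(nn.Module):
    """One shared encoder, one decoder per identity.

    Training reconstructs each person through their own decoder;
    swapping routes person A's latent code through B's decoder.
    """
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.decoder_a = Decoder()
        self.decoder_b = Decoder()

    def swap_a_to_b(self, face_a):
        return self.decoder_b(self.encoder(face_a))

if __name__ == "__main__":
    model = SwapNet()
    fake = model.swap_a_to_b(torch.rand(1, 3, 64, 64))
    print(fake.shape)  # torch.Size([1, 3, 64, 64])
```

Because the encoder must represent both faces in one latent space, it learns pose and expression; each decoder learns one identity's appearance, which is what makes the swap possible.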

Ahead of the Indonesian elections on February 14, 2024, an AI-generated deepfake video depicting the late Indonesian president Suharto advocating for the political party he once presided over went viral. The video, which cloned his face and voice, racked up 4.7 million views on X.

Most recently, a finance worker at a multinational company's Hong Kong office was tricked into paying out US$25 million to fraudsters who used deepfake technology to pose as the company's chief financial officer (CFO) in a video conference call.

Deepfakes are also widely used to create pornographic videos featuring the faces of celebrities or non-consenting individuals. One recent study found that 98% of deepfake videos online were pornographic and that 99% of those targeted were women or girls. This raises concerns about privacy breaches, emotional distress and the exploitation of individuals for malicious purposes.

Europe takes steps to tackle the risks associated with deepfakes

In response to the risks posed by generative AI, including deepfakes, the European Commission (EC) launched a probe in March into big tech's use of AI and the companies' handling of computer-generated media.

The EC’s requests, which are aimed at companies including Meta, Microsoft, Snap, TikTok and X, seek detailed information on the platforms’ mitigation measures for various risks linked to generative AI, including the dissemination of false information, the viral spread of deepfakes, and the automated manipulation of services that could influence voters.

Additionally, the EC is seeking information on risk assessments and mitigation measures related to a range of issues, including electoral processes, illegal content, fundamental rights protection, gender-based violence, minors’ protection, mental well-being, personal data protection, consumer protection, and intellectual property.

The probe demonstrates the EC's commitment to addressing the challenges posed by generative AI and follows the approval of the AI Act. The legislation, the first comprehensive AI regulation introduced by a major regulator anywhere, assigns AI applications to three risk categories: applications and systems that create an unacceptable risk, such as government-run social scoring, are banned; high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements; and applications that are neither banned nor listed as high-risk are largely left unregulated.

Key provisions of the AI Act include proper data governance and an appropriate risk management system for high-risk systems, as well as transparency requirements for deepfake material.

The AI Act is part of the EU’s digital strategy, the bloc’s comprehensive approach to digital transformation, encompassing various aspects such as connectivity, digital skills, cybersecurity, and the digital economy.

The number of deepfakes has soared in recent years, increasing tenfold globally between 2022 and 2023 alone, according to a report released last year by identity verification company Sumsub. North America witnessed the strongest growth during the period, recording a 1,740% increase in deepfakes, followed by Asia-Pacific (+1,530%), Europe (+780%), the Middle East and Africa (+450%) and Latin America (+410%).

Featured image credit: Edited from Freepik
