Generative AI is industrializing digital fraud
Advanced digital fraud cases have risen by 180% this year compared with the same period in 2024, driven by the widespread use of artificial intelligence for fake identities, deepfakes, and autonomous systems capable of executing complex attacks without human intervention, according to a recently published specialist report.
According to data compiled by Sumsub and cited by the National Cyber Security Directorate (DNSC), the share of sophisticated fraud grew from 10% to 28% in a single year, signaling a shift from simple attacks to “precision operations,” CE Report notes, citing AGERPRES.
Phishing remains the most common method (45%), but a growing share of fraud originates from supply-chain breaches (36%), and synthetic identities are increasingly used.
"The global fight against digital fraud has become much more complicated, as cybercriminals have moved from high-volume opportunistic attacks to sophisticated, AI-driven operations; these are not only harder to detect but can also cause substantially greater damage. An analysis of data from more than four million fraud attempts, as well as surveys of 300 fraud and risk professionals and another 1,200 end users, (...) highlighted what the identity verification company described as a ‘notable shift toward sophistication’ over the past year. Frauds involving advanced deception techniques, social engineering, AI-generated identities, and telemetry manipulation increased by 180% from the previous year, and the share of such incidents in the total volume of fraud rose from 10% in 2024 to 28% in 2025," the Sumsub specialists noted.
They also found that attackers increasingly rely on autonomous systems capable of executing multi-step fraud with minimal human intervention. AI-generated documents accounted for only 2% of all fake IDs and records used in digital fraud last year, but this seemingly small share, produced with tools such as ChatGPT, Grok, and Gemini, represents a worrying upward trend.
In the U.S., overall fraud rates fell 15% year over year. Even so, 21% of incidents involved synthetic identities or AI-generated individuals, followed by account takeovers (19%) and chargeback abuse (16%).
One of the main themes of Sumsub’s report is how AI tools have industrialized digital fraud in 2025. The company found that scammers are using generative AI models to create near-perfect fraudulent identity documents (passports, driver’s licenses, utility bills) with accurate holograms, realistic fonts, and textures.
“In many cases, scammers used text-to-video systems to create highly convincing deepfakes meant to bypass liveness checks,” the report states.
Even more worryingly, 2025 saw the emergence of AI agents capable of executing the entire fraud chain autonomously.
"These are not traditional bots. They combine generative AI, automation frameworks, and reinforcement learning to create synthetic identities, interact in real time with verification systems, and adjust their behavior based on outcomes. They are still in early stages today, but the current trajectory indicates they could become mainstream in the next 18 months, especially in organized fraud networks. This is evolution. We have moved from high-volume, low-skill scams — which defenses can filter — to precision-designed attacks built specifically to bypass advanced verification systems," explained Sumsub’s head of AI, Pavel Goldman-Kalaydin.
The Sumsub report also highlighted several measures organizations will need to take to protect themselves from the AI-driven fraud wave. The list includes multi-layer identity verification mechanisms, AI-based fraud-detection tools, behavioral analytics, and threat-intelligence sharing.
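To make one of those recommendations concrete, here is a minimal sketch of the kind of behavioral-analytics scoring such a defense layer might perform. Everything in it, the signal names, thresholds, and weights, is an illustrative assumption; it is not taken from Sumsub’s report or products.

```python
# Illustrative sketch only: a toy multi-signal risk scorer of the kind the
# report's recommendations (behavioral analytics layered on top of identity
# verification) describe. All field names, thresholds, and weights are
# hypothetical assumptions made for this example.

from dataclasses import dataclass


@dataclass
class VerificationAttempt:
    form_fill_seconds: float      # time spent completing the signup form
    device_seen_identities: int   # distinct identities seen from this device
    document_ai_score: float      # 0..1 output of a (hypothetical) forgery model
    liveness_passed: bool         # result of the liveness check


def risk_score(attempt: VerificationAttempt) -> float:
    """Combine behavioral and document signals into a 0..1 risk score."""
    score = 0.0
    # Forms filled implausibly fast suggest automation rather than a human.
    if attempt.form_fill_seconds < 3.0:
        score += 0.35
    # One device cycling through many identities is a classic fraud-ring signal.
    if attempt.device_seen_identities > 5:
        score += 0.30
    # Weight in the document-forgery model's own confidence.
    score += 0.25 * attempt.document_ai_score
    # A failed liveness check (e.g., a deepfake video) is a strong signal alone.
    if not attempt.liveness_passed:
        score += 0.40
    return min(score, 1.0)


if __name__ == "__main__":
    suspicious = VerificationAttempt(
        form_fill_seconds=1.2,
        device_seen_identities=9,
        document_ai_score=0.8,
        liveness_passed=False,
    )
    # A real system would route high scores to manual review; 0.7 is arbitrary.
    print(f"risk = {risk_score(suspicious):.2f}")
```

In practice, such hand-tuned rules would be one layer among several, sitting alongside trained fraud-detection models and shared threat intelligence, which is precisely the multi-layer approach the report recommends.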