In 2024, we saw an alarming evolution of financial crime, driven by AI, generative tools, and the exploitation of systemic vulnerabilities. From AI-powered fraud and deepfake scams to audacious fraud-as-a-service operations, the year showcased not only the scale but the creativity of fraudsters worldwide. In this inaugural year of the Fraud Awards, we spotlight the most disruptive, costly, and creative fraud schemes of 2024. These cases serve as both cautionary tales and urgent calls for action as organizations look to stay one step ahead of increasingly sophisticated fraud tactics.

Most Disruptive Fraud Trend of 2024
AI-Powered Fraud: Redefining Financial Crime

AI-powered fraud is not new, but in 2024 it emerged as the most disruptive fraud trend, leveraging generative AI to fuel hyper-personalized, scalable, and realistic attacks. From deepfake scams to automated phishing campaigns, fraudsters are exploiting AI at an alarming scale, posing severe challenges to businesses and financial institutions worldwide. AI-driven fraud now accounts for 42.5% of all detected fraud attempts, making it a top priority for organizations looking to protect themselves against evolving threats.

Why AI Fraud Is the Most Disruptive Trend in 2024

In 2024, AI fraud reached a tipping point, transforming the landscape of financial crime. From deepfake technology enabling hyper-realistic impersonation to AI-enhanced phishing campaigns that mimic human behavior, the threat is evolving rapidly, forcing organizations to adapt or risk devastating consequences. Below, we explore why AI-powered fraud has earned its title as the most disruptive trend of the year.

Deepfakes in Fraud Attacks

Deepfake incidents have surged by 700% this year, becoming a key weapon for fraudsters. They are being used in:

- CEO impersonation scams to authorize fraudulent transactions.
- Investment schemes and romance scams with hyper-realistic videos and audio.
- Fake job interviews or identity verification processes to bypass security systems.

AI-Enhanced Phishing and Social Engineering

Generative AI tools have transformed phishing and social engineering tactics:

- Hyper-Personalized Attacks: AI analyzes victims' behaviors to craft convincing, customized emails or messages.
- AI Chatbots: Fraudsters automate thousands of phishing campaigns using AI-powered chatbots that mimic human responses.
- Customer Service Impersonations: Fake AI-driven agents trick individuals into revealing sensitive information.

Key Factors Driving the Surge of AI Fraud

The rise of AI-powered fraud in 2024 is no accident. As generative AI tools become more accessible, fraudsters have found new ways to exploit their capabilities to scale attacks, automate scams, and bypass traditional fraud detection systems. What was once the domain of highly skilled cybercriminals is now within reach for even low-level bad actors, thanks to easy-to-use AI tools. These advancements have allowed fraud to occur at speed and scale, with techniques that are increasingly sophisticated and difficult to detect. Below, we break down the key drivers fueling the rapid surge of AI-driven fraud.

- Accessibility of Generative AI: Widely available tools lower the barrier for executing sophisticated scams.
- Speed and Scale: AI enables fraud to occur at unprecedented speed, targeting victims globally in real time.
- Advanced Tactics: AI tools produce nearly undetectable deepfake content, bypassing traditional fraud detection systems.
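The "speed and scale" driver is usually the first one defenders feel: attacks arrive in coordinated bursts rather than as one-off attempts. As a minimal, hedged sketch of the kind of velocity check teams often layer in front of richer models (the event fields, window, and threshold are illustrative assumptions, not DataVisor's implementation):

```python
from collections import defaultdict, deque
from datetime import timedelta

# Assumed event shape: {"account_id": "a1", "event_type": "login", "ts": <datetime>}
WINDOW = timedelta(minutes=5)   # sliding window length (illustrative)
MAX_EVENTS = 20                 # per-account burst threshold (illustrative)

def flag_bursts(events):
    """Return account IDs whose activity exceeds MAX_EVENTS within WINDOW."""
    recent = defaultdict(deque)   # account_id -> timestamps still inside the window
    flagged = set()
    for event in sorted(events, key=lambda e: e["ts"]):
        q = recent[event["account_id"]]
        q.append(event["ts"])
        while q and event["ts"] - q[0] > WINDOW:
            q.popleft()           # drop timestamps that aged out of the window
        if len(q) > MAX_EVENTS:
            flagged.add(event["account_id"])
    return flagged
```

A rule this simple only catches the crudest bursts; its value is as a cheap first filter ahead of behavioral models, not as a standalone defense.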
Financial and Operational Impact

The financial toll of AI-driven fraud is staggering, with projected global losses reaching $40 billion by 2027 (a 32% compound annual growth rate). Businesses face direct financial losses, reputational damage from deepfake impersonation or phishing attacks, and rising compliance costs as regulatory bodies adapt to AI's risks.

How Businesses Can Stay Ahead of AI-Powered Fraud

To mitigate AI-driven fraud, organizations must adopt a multi-layered security approach:

- AI-Powered Fraud Detection: Leverage advanced machine learning tools for real-time anomaly detection.
- Deepfake Detection Tools: Invest in technology that identifies manipulated video, audio, or imagery.
- Behavioral Analytics: Use AI to track behavioral patterns and spot irregularities.
- Employee Training: Educate teams on AI-enhanced phishing attacks, impersonations, and evolving social engineering threats.

Key Takeaways: Why AI Fraud Tops the List in 2024

AI-powered fraud has reached a new level of sophistication, scaling faster and bypassing defenses with deepfakes, AI-driven phishing, and automated scams. Organizations must act now by combining cutting-edge AI detection tools, behavioral analytics, and comprehensive security strategies to stay one step ahead of fraudsters.

Fastest-Growing Fraud
Synthetic Identity Fraud

Synthetic identity fraud has rapidly emerged as the fastest-growing financial crime in the United States, surging by 18% in 2024. Powered by generative AI and widespread data breaches, fraudsters can now create synthetic identities at scale, blending stolen personally identifiable information (PII) with fictitious data to bypass traditional verification systems. Once fabricated, these synthetic identities are used to establish accounts, secure credit, and commit financial fraud, often going unnoticed until significant damage occurs.

Alarming Growth and Financial Impact

Synthetic identity fraud now accounts for 85-95% of all fraud losses, reflecting its dominant role in financial crime. Key statistics highlight its staggering growth:

- In the first half of 2024, lender exposure in the U.S. reached $3.2 billion, the highest recorded level to date.
- Losses from synthetic identity fraud are projected to reach $23 billion by 2030, a sharp increase from $6 billion in 2016.

What Is Synthetic Identity Fraud and Why Is It Exploding?

Synthetic identity fraud combines real and fictitious personal data, such as names, Social Security numbers, addresses, and even biometric data, to create entirely new identities that appear legitimate.

Key Drivers of Its Growth

The rapid rise of synthetic identity fraud can be traced to a combination of technological advancements and systemic vulnerabilities. From the use of generative AI to automate identity creation to the exploitation of widespread data breaches and digital platforms, fraudsters now have the tools and opportunities to operate at an unprecedented scale. Here's a closer look at the key factors driving this explosive growth.

- AI and Automation: Generative AI enables fraudsters to produce synthetic PII at scale, increasing the speed and efficiency of identity fabrication.
- Data Breaches: The rise in large-scale breaches provides fraudsters with access to stolen personal information, fueling synthetic identity creation.
- Digital Banking and E-Commerce: The rapid growth of digital platforms gives fraudsters more opportunities to open accounts and exploit weaknesses in online verification systems.
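One practical consequence of how these identities are assembled is that the same stolen Social Security number tends to resurface under several different names or dates of birth. Below is a minimal, hedged sketch of that single signal; the field names and threshold are illustrative assumptions, not a production rule.

```python
from collections import defaultdict

# Assumed application shape: {"ssn": "123-45-6789", "name": "Jane Doe", "dob": "1990-01-01"}
def flag_reused_ssns(applications, max_identities=1):
    """Return SSNs seen with more than max_identities distinct name/date-of-birth pairs."""
    identities_per_ssn = defaultdict(set)
    for app in applications:
        identities_per_ssn[app["ssn"]].add((app["name"].strip().lower(), app["dob"]))
    return {ssn for ssn, identities in identities_per_ssn.items()
            if len(identities) > max_identities}
```

In practice this would be one feature among many (device, address, and velocity signals) rather than a standalone rule, since typos and legitimate name changes also produce mismatches.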
Challenges in Detection

Synthetic identity fraud is notoriously difficult to detect, as fraudsters have mastered the art of blending real and fabricated information to appear legitimate. Traditional verification systems often fail to spot subtle inconsistencies, allowing these identities to slip through undetected. As a result, financial institutions and retailers alike face mounting challenges in identifying and preventing this growing threat:

- Traditional Systems Fall Short: Legacy fraud detection tools are often unable to spot the subtle inconsistencies that expose synthetic identities.
- Complex Verification: Fraudsters create authentic-looking social media profiles and establish credit histories over time, making detection even harder.
- Industry Struggles: 48% of retailers and 53% of financial services firms cite synthetic identity fraud as the leading challenge in online identity verification.

Sectors at Highest Risk

While synthetic identity fraud poses a threat across industries, certain sectors face disproportionate exposure due to their reliance on digital transactions and credit approvals. Auto lending and consumer lending, in particular, have become prime targets, with staggering financial losses highlighting the urgent need for stronger fraud prevention measures:

- Auto Loans: Auto lending sees the highest exposure, with fraud balances nearly double those of bankcard sectors.
- Consumer Lending: Lenders faced $3.2 billion in synthetic identity exposure in just the first half of 2024, a 7% year-over-year increase.

Prevention Strategies: How to Combat Synthetic Identity Fraud

Financial institutions and businesses must adopt multi-layered fraud prevention strategies to detect and stop synthetic identities:

- AI-Powered Fraud Detection: Use advanced machine learning tools to analyze behavioral data and flag suspicious activity in real time.
- Omnichannel Verification: Vet users through multiple methods, such as verbal, digital, and geographical checks, for robust identity validation.
- Digital Footprint Analysis: Invest in tools that analyze social and digital behavior to uncover synthetic identities early.
- Behavioral Analytics: Track user behaviors and interactions to identify anomalies that signal synthetic accounts.

The Bottom Line

As synthetic identity fraud continues to evolve, financial institutions must remain vigilant. By integrating AI-driven technologies, behavioral analytics, and comprehensive verification methods, businesses can mitigate risk and stay ahead of fraudsters exploiting this silent epidemic.

Most Costly Scheme
A $200-300 Million Attack on a Leading Airline

This sophisticated fraud scheme targeted a major airline's reservation system, but by implementing DataVisor's Unsupervised Machine Learning (UML), the airline prevented $200-300 million in potential fraud losses in just one year. What makes this attack particularly interesting is its scale and coordination across multiple dimensions of the ticket booking process.

Inside the Attack: How Fraudsters Exploited the Airline

The fraudsters executed a complex ticket reservation fraud operation, systematically exploiting the airline's booking system and loyalty program. The scheme involved:

- Coordinated Bookings: Multiple accounts and flight routes were used to obfuscate fraudulent activity.
- Loyalty Point Exploitation: Fraudsters rapidly accumulated and transferred loyalty points to evade detection.
- Stolen and Compromised Accounts: A mix of stolen credit cards and breached loyalty program accounts fueled the scheme.
- Rapid Booking and Cancellation: Fraudsters used repeated booking and cancellation patterns to avoid triggering traditional fraud detection rules.

How DataVisor's UML Stopped the Fraud

DataVisor's UML model played a pivotal role in detecting and preventing this complex fraud scheme. Unlike traditional systems, which rely on historical data or predefined rules, DataVisor's UML analyzed correlated events in real time, identifying suspicious activity across multiple dimensions:

- Booking Behavior: Clustering irregular patterns across accounts and routes.
- Account Creation: Spotting anomalies in new accounts and behaviors.
- Loyalty Transactions: Flagging suspicious point transfers and usage patterns.

To further strengthen the investigation, DataVisor's Knowledge Graph enabled investigators to visualize the fraud network, linking compromised accounts, high-risk bookings, and loyalty point abuse. This real-time, holistic view allowed the airline to quickly identify and respond to emerging fraud rings.

Results: Protecting the Airline from Massive Losses

By leveraging DataVisor's UML and Knowledge Graph tools, the airline prevented $200-300 million in potential fraud losses over a 12-month period. The ability to detect evolving fraud patterns without relying on historical labels or static rules gave the airline a significant advantage in staying ahead of increasingly sophisticated fraud tactics.

This case highlights the importance of advanced fraud detection systems in defending businesses from coordinated, large-scale attacks. For airlines and other industries, solutions like unsupervised machine learning are critical to mitigating financial losses and protecting reputational integrity in an era of ever-evolving fraud.

Runner-up: Microsoft Impersonation, $60 Million in Losses

Microsoft was the most impersonated brand in phishing attacks during Q3 2024, accounting for 61% of brand phishing attempts, according to Check Point Research's latest Brand Phishing Ranking.

Most Coordinated Fraud Scheme of 2024
BNPL Coordinated Loan Application Attack

The BNPL (Buy Now, Pay Later) coordinated loan application attack stands out for its sheer scale and intricate organization, highlighting the power of fraud networks to exploit financial platforms. With clusters ranging from fewer than 10 to over 10,000 applications, fraudsters created a complex web of interconnected submissions that traditional fraud detection systems struggled to identify.

Inside the Attack: How Fraudsters Exploited BNPL Platforms

Fraudsters systematically coordinated their attack by:

- Sharing Personal Information: Social Security numbers were reused across multiple users and accounts.
- Reusing Addresses and Devices: Shared devices and addresses created the illusion of distinct applications.
- Varying Loan Amounts: Loan values ranged from small amounts to as much as $10,000, further complicating detection.
- Clusters of Fraud: Organized clusters ranged from fewer than 10 applications to over 10,000, creating a significant challenge for manual verification processes.

This attack resulted in potential monthly fraud losses of $3 million for the BNPL provider.

How DataVisor's Technology Stopped the Fraud

DataVisor's UML was instrumental in detecting and preventing this sophisticated fraud ring:

- Real-Time Analysis: The UML analyzed multiple dimensions, such as shared devices, reused personal information, and application behavior, to uncover coordinated patterns.
- Fraud Network Visualization: Using DataVisor's Knowledge Graph, investigators visualized the interconnected fraud network, linking non-labeled transactions, compromised accounts, and high-risk applicants.
- Proactive Detection: By identifying unusual activity spikes without relying on historical data or predefined rules, the BNPL provider was able to respond rapidly and halt the fraud before further losses occurred.

The Results: Preventing Significant Losses

By implementing DataVisor's UML, the BNPL provider prevented over $3 million in potential monthly loan fraud losses. The system's ability to detect emerging fraud patterns in real time gave the provider a decisive advantage, particularly during peak periods when fraud often surges.

Key Takeaway

This case underscores the growing sophistication of coordinated fraud rings and the critical role of advanced fraud detection systems like DataVisor's UML in combating these threats. As fraud attacks continue to evolve, businesses must adopt technologies that analyze behavior holistically and detect anomalies in real time to stay ahead of organized fraud networks.

Most Creative Scheme
The Deepfake Video Conference Scam

In one of the boldest and most creative fraud schemes of 2024, a multinational company in Hong Kong fell victim to an elaborate deepfake operation that resulted in HK$200 million (US$25.6 million) in financial losses. This scam marks a chilling evolution in the use of AI: fraudsters successfully replicated not just one individual but an entire group of participants on a video conference call to authorize fraudulent money transfers.

How the Deepfake Scam Worked

Fraudsters used advanced AI-driven deepfake technology to digitally impersonate the company's Chief Financial Officer (CFO) and other employees during a seemingly routine virtual meeting. The scammers' attention to detail made the replicas nearly indistinguishable:

- Visual Replication: Hyper-realistic deepfake videos mimicked the facial expressions and mannerisms of real participants.
- Audio Duplication: AI-generated voices matched the tones and speech patterns of the CFO and colleagues.
- Social Engineering: Fraudsters created a sense of urgency, convincing an unsuspecting employee to authorize multiple wire transfers.

The scheme's sophistication lay not only in the convincing nature of the deepfakes but also in the careful orchestration of the setup: the meeting seemed legitimate, complete with participation from supposed senior executives and a plausible financial request.

Why This Scam Is So Significant

This deepfake scam represents a dangerous turning point in financial fraud:

- Unprecedented Creativity: The use of AI to impersonate multiple participants in real time demonstrates the growing capability of deepfake technology.
- Bold Execution: Fraudsters targeted a high-stakes scenario (approving large financial transfers during a live virtual meeting), relying on trust and authority to bypass suspicions.
- Global Impact: This is one of the first reported deepfake scams involving such significant financial losses, making it a case study for organizations worldwide.

Lessons Learned: How to Combat Deepfake Fraud

The success of this scheme highlights the urgency for businesses to adapt their fraud prevention strategies to counter the evolving risks of AI-driven scams. Key measures include:

- Implementing Deepfake Detection Tools: Advanced technologies can analyze video and audio for inconsistencies to identify manipulated content.
- Strengthening Verification Protocols: Require multi-factor verification (e.g., verbal, written, and biometric confirmation) for sensitive financial approvals.
- Employee Awareness and Training: Educate employees on the risks of deepfake scams, social engineering tactics, and how to verify participants during virtual meetings.
- Using Secure Communication Channels: Rely on encrypted, verified platforms for high-stakes meetings to reduce the risk of unauthorized access.

Key Takeaway

The deepfake video conference scam highlights how fraudsters are leveraging AI creativity to bypass trust and security in unprecedented ways. With hyper-realistic deepfakes now targeting live interactions, businesses must act quickly to integrate advanced detection tools and strengthen verification processes to safeguard against this bold new frontier in financial fraud.

Most Audacious Fraud Scheme
FaaS: Credit Card Fraud Webinars

DataVisor detected this scheme, which exposed a credit union to roughly $900,000 in potential losses, a figure greater than the infamous TikTok "infinite money glitch." Fraud-as-a-Service (FaaS) operations like this one are a growing sector of the underground financial crime ecosystem: when committing fraud directly was no longer enough, the ring expanded its business model to teaching and selling fraud tactics to aspiring fraudsters.

The fraud ring conducted a series of webinars teaching participants how to deceive specific financial institutions and obtain credit lines:

- Falsifying Information: Instructions on manipulating income details, employment status, and personal data.
- Dark Web Purchases: Encouraging attendees to buy compromised email addresses to pass third-party signal detection.

To learn how this fraud ring operated, see the Fraud Ring Gallery.

How DataVisor Stopped the Fraud

DataVisor's UML caught this fraud in real time by analyzing multiple dimensions of loan and credit applications, including:

- Referral links
- Declared income and employment status
- Application timing and suspicious email domains

Traditional systems would likely have missed these signals, but DataVisor's UML enabled the credit union to achieve a 3x improvement in fraud detection rates and save close to $900,000 annually in potential losses. To learn more about fraud ring tactics like this, visit our Fraud Ring Gallery.

Most Viral Fraud Scheme
The TikTok "Infinite Money Glitch"

In late August 2024, TikTok users discovered and rapidly shared information about a vulnerability in JPMorgan Chase's ATM system. Customers who exploited it now allegedly owe the bank nearly $662,000 in total. Dubbed the "Infinite Money Glitch," this viral scheme turned out to be a digital-age version of the age-old check-kiting scam.

How the Scheme Worked

Participants exploited ATM deposit systems to withdraw funds before checks cleared:

- Wrote large checks to themselves.
- Deposited these checks at Chase ATMs.
- Withdrew substantial amounts before the checks bounced.

The vulnerability allowed access to more funds than typically permitted before check clearance (see the sketch below), and TikTok's viral nature amplified the scheme, spreading it across multiple states.

JPMorgan Chase's Response

In a startling twist that blends social media virality with financial crime, JPMorgan Chase has taken legal action against its customers. This case, dubbed the "infinite money glitch" by social media users, represents a modern intersection of technology, social media influence, and financial fraud.
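To make the mechanics referenced above concrete: the scheme exploited the gap between when a check is deposited and when it actually clears. The following minimal sketch shows the kind of funds-availability logic at issue; the hold period, immediate-availability amount, and field names are illustrative assumptions, not Chase's actual policy or DataVisor's product.

```python
from datetime import date

CHECK_HOLD_DAYS = 2           # assumed hold before a deposited check is fully available
IMMEDIATE_AVAILABILITY = 225  # assumed dollars of a pending check released right away

def available_balance(cleared_cash, pending_checks, today):
    """Funds a customer may withdraw: cleared cash plus only the released slice of uncleared checks."""
    available = cleared_cash
    for deposit_date, amount in pending_checks:
        if (today - deposit_date).days >= CHECK_HOLD_DAYS:
            available += amount                               # hold elapsed: whole check counts
        else:
            available += min(amount, IMMEDIATE_AVAILABILITY)  # otherwise only a small slice counts
    return available

# A $5,000 self-written check deposited yesterday releases only $225 today,
# so a large withdrawal request the same week should be declined rather than paid out.
print(available_balance(100.0, [(date(2024, 8, 30), 5000.0)], date(2024, 8, 31)))  # 325.0
```

The reported glitch effectively let withdrawals draw on the full face value of uncleared checks; a correctly enforced availability schedule like the one above is what normally blunts check kiting.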
Facing potential losses close to half a million dollars, JPMorgan Chase has initiated a series of legal actions:

- The bank filed lawsuits against four customers in federal courts across Texas, California, and Florida.
- Chase is demanding repayment of the fraudulently withdrawn funds, plus legal fees and other expenses.
- JPMorgan Chase spokesperson Drew Pusateri emphasized the seriousness of the situation, stating, "Fraud is a crime that impacts everyone and undermines trust in the banking system."
- The bank is actively cooperating with law enforcement and remains open to pursuing other individuals or entities involved in this scheme.

This case highlights the evolving landscape of financial fraud in the digital age, where social media can rapidly spread information about vulnerabilities in banking systems. It also underscores the need for financial institutions to remain vigilant and adaptive in their security measures to combat increasingly sophisticated and viral fraud schemes.

Bonus Awards

Lifetime Achievement in Fraud
Winner: The Return of Check Fraud in the Digital Age

While checks may seem outdated, check fraud made a surprising comeback in 2024, driven by technological advancements, AI-enabled tools, and social media trends. Despite the decline in paper check usage, fraudsters have revived this "old-school" tactic, adapting established techniques to exploit modern banking systems and proving that traditional scams can still wreak havoc. The American Bankers Association's 2024 Fraud Insights Report highlights that fraudsters are using established accounts to bypass controls on new accounts, contributing to persistent check fraud.

Most Unexpected Fraud Scheme
Winner: Fraud Rings Exploiting Charity and Disaster Relief Funds

In a year marked by humanitarian crises, fraudsters targeted charity programs and disaster relief funds. Exploiting fake identities and AI-generated applications, they diverted resources intended for those in need, underscoring the need for stronger fraud prevention in the non-profit sector. The 2024 Global Identity & Fraud Report by Experian notes that fraudsters are employing new methods that provoke an emotional response from consumers, using cause-related asks to gain access to personal information.

Fraud Scheme We Never Thought Would Happen
Winner: Deepfake Job Interviews to Land High-Salary Positions

Fraudsters began impersonating job applicants using deepfake technology in virtual interviews, securing employment at companies and then stealing sensitive corporate data or siphoning salaries. This innovative scam blends AI and social engineering to manipulate hiring processes. Experian's Future of Fraud Forecast for 2024 discusses how generative AI accelerates DIY fraud, including the deepfake content used in such schemes.

Emerging Threat of the Year
Winner: Deepfake Voice Cloning in Financial Authorization Scams

Voice cloning fraud has surged, allowing criminals to impersonate executives, family members, or customer service agents and push through unauthorized financial transactions. As AI tools become more accessible, this emerging threat is growing rapidly. The Journal of Accountancy discusses how fraudsters are using sophisticated methods, including deepfakes, to commit fraud.

Most Resilient Fraud Scheme
Winner: Romance Scams and Social Engineering Fraud

Despite increased awareness, romance scams remain one of the most effective and emotionally manipulative forms of fraud.
In 2024, scammers exploited deepfakes and AI tools to create more convincing personas, leading to billions in losses globally. The Payments Association reports a rise in unauthorized payment fraud, driven largely by card-related scams, indicating the resilience of such fraud schemes.

The "Fast and Furious" Award for Rapid Execution
Winner: Coordinated Loan Application Fraud in BNPL Platforms

This BNPL fraud scheme showcased lightning-fast coordination, with fraudsters submitting thousands of applications in a short period and exploiting systems before traditional defenses could respond. Datos Insights' 2024 Impact Awards in Fraud recognize innovative solutions addressing such challenges.

Stay a Step Ahead of Scammers in 2025

The 2024 Fraud Awards shine a light on how fraudsters continue to evolve, using creativity, technology, and coordinated attacks to target organizations globally. From deepfake scams to synthetic identities and viral fraud schemes, it's clear that fraud is no longer limited to conventional methods.

Looking ahead to 2025, businesses must adopt proactive fraud detection systems to combat emerging threats, leveraging advanced technologies like AI, machine learning, and behavioral analytics. By staying vigilant and adaptive, organizations can safeguard their assets and reputation in an increasingly complex fraud landscape.