Deepfake Cyber Threats: Understanding the Risks of AI-Powered Fraud and Scams

I. Targeted Entities
Deepfake technologies pose a threat to a wide range of entities, including but not limited to:
- Individuals / General Public
- Politicians and Political Processes
- Celebrities and Public Figures
- Organizations and Corporations:
  - Senior Executives
  - Financial Sector
- Government Officials and Agencies
II. Introduction and Key Threat Details
Introduction
Synthetic media generated by Artificial Intelligence (AI), commonly known as deepfakes, are rapidly multiplying and increasing in sophistication. We are currently witnessing a significant surge in deepfake incidents; for instance, recorded incidents rose 257% from 2023 to 2024, and the first quarter of 2025 alone surpassed the total recorded for all of the previous year.

The potential impacts are severe and varied. They include substantial financial losses for organizations and individuals, as demonstrated by the $25 million fraud at Arup, where executives were impersonated via deepfake video. Deepfakes also play a key role in disinformation campaigns that erode public trust and can influence political outcomes, such as through fake calls targeting voters. Furthermore, the technology is used to create non-consensual explicit content and to increase the effectiveness of social engineering attacks.
As outlined in Section I, targets span from the general public and public figures to corporations (particularly in finance) and government entities. Addressing this emerging threat requires a multi-layered strategy. Organizations must implement robust cybersecurity policies, conduct continuous employee awareness training, deploy technical safeguards, and enforce strict verification protocols. Individuals, in turn, need to develop media literacy, strengthen personal data security, and remain appropriately skeptical of online information. Official bodies, such as the FBI, are increasingly issuing warnings and guidance, indicating a move toward more collaborative defense.
Key Threat Details
Threat Type: The threat involves the malicious use of deepfakes, which are AI-generated synthetic media (audio, video, or images) crafted to impersonate real individuals or fabricate events that never occurred. The primary technology underpinning deepfakes is the Generative Adversarial Network (GAN). A GAN consists of two neural networks: a ‘generator’ that creates the fake content and a ‘discriminator’ that attempts to distinguish the fake content from authentic examples. Through an iterative, adversarial training process, the generator becomes progressively better at producing realistic fakes that can deceive the discriminator and, ultimately, human perception. The technology is packaged in increasingly accessible software, including tools such as Iperov’s DeepFaceLab and FaceSwap for face swapping, and services such as Voice.ai, Mur.ai, and Elevenlabs.io for voice cloning.
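To make the adversarial training loop described above concrete, the following is a minimal, illustrative GAN sketch in PyTorch. It is not taken from any deepfake tool: it trains on toy two-dimensional data, and all layer sizes, learning rates, and iteration counts are arbitrary assumptions chosen only to show the generator-versus-discriminator dynamic.

```python
# Minimal GAN training loop (illustrative only; toy 2-D data, no deepfake output).
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: outputs a logit scoring how "real" a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for authentic training data (e.g., genuine face crops or voice features).
    return torch.randn(n, data_dim) * 0.5 + 2.0

for step in range(2000):
    # Discriminator update: push real samples toward label 1, generated samples toward 0.
    real = real_batch()
    fake = G(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + bce(D(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator label fresh fakes as real.
    fake = G(torch.randn(64, latent_dim))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Real deepfake generators apply far larger networks to image or audio data, but the core dynamic, a generator improving until the discriminator (and eventually a human viewer) can no longer reliably tell real from fake, is the one sketched here.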
Targets
- Individuals (General Public): Targeted for fraud, non-consensual explicit content, and harassment.
- Politicians and Political Processes: Disinformation campaigns, impersonation to influence elections, and reputational attacks.
- Celebrities and Public Figures: Often targeted for non-consensual explicit content, endorsement scams, and reputational damage.
- Organizations and Corporations:
  - Senior Executives (CEOs, CFOs): Impersonated in financial fraud schemes.
  - Financial Sector: Targeted for large-scale fraud, market manipulation through disinformation, and undermining customer trust.
- Government Officials and Agencies: Impersonated to obtain sensitive information, spread disinformation, or authorize fraudulent actions.
Impact
If successful, deepfake attacks can lead to:
- Financial Fraud: Significant monetary losses through impersonation of executives or trusted parties to authorize fraudulent transactions, often via voice phishing (vishing).
- Disinformation and Political Destabilization: Manipulation of public opinion, interference in elections, incitement of social unrest, and damage to democratic processes.
- Reputational Harm: Severe damage to personal or corporate reputations through the creation and dissemination of non-consensual explicit material, defamatory statements, or fabricated incriminating evidence.
- Social Engineering and Data Breaches: Gaining unauthorized access to sensitive systems or information by impersonating trusted individuals and deceiving employees.
- Erosion of Trust: Diminished public trust in authentic media, institutions, and digital communication (“liar’s dividend”).
- Operational Disruption: Business operations can be disrupted by disinformation campaigns or internal fraud incidents.
Contextual Info
Deepfake technology is accessible to a wide spectrum of malicious actors. This includes individual fraudsters, online harassers, organized criminal enterprises focused on financial gain, and potentially state-sponsored groups deploying deepfakes for complex disinformation campaigns and political interference.
Related Campaigns/Past Activity
The versatility of deepfakes is evident in several high-profile incidents:
- The $25 million financial fraud at Arup, where attackers used deepfake video and audio to impersonate senior executives in a conference call, compelling an employee to make unauthorized transfers.
- AI-generated calls impersonating U.S. President Joe Biden, which urged voters in New Hampshire not to participate in the primary election, representing a direct attempt at election interference.
- The widespread creation and distribution of non-consensual explicit deepfake images of public figures like Taylor Swift, highlighting the potential for severe personal and reputational harm.
MITRE ATT&CK TTPs
T1566 Phishing: Deepfakes, especially audio (voice clones), are used in vishing (voice phishing) campaigns, aligning with the sub-technique T1566.004 Spearphishing Voice.
T1656 Impersonation: Deepfake audio and video are used to impersonate trusted individuals, supporting financial fraud, social engineering, and broader disinformation or influence operations.
IV. Recommendations
For Organizations
Policies:
- Develop and enforce robust cybersecurity policies that address the risks of deepfake attacks. Integrate deepfake scenarios into incident response plans and rehearse them through regular exercises.
- Establish clear guidelines on the acceptable use of AI and synthetic media tools within the organization.
Awareness/Training:
- Implement continuous security awareness training for all employees, leadership, and relevant third parties. Training should cover deepfake identification, the psychological tactics used by attackers (e.g., urgency, authority bias), and established reporting procedures.
Technical Safeguards:
- Enforce strong Multi-Factor Authentication (MFA) across all systems and users, prioritizing stronger methods for critical access points.
- Deploy AI-powered detection tools for high-risk communication channels (e.g., video conferencing, customer service calls).
- Adopt a Zero Trust security architecture, assuming no user or device is inherently trustworthy without continuous verification.
- Monitor for Virtual Camera Software in Logs: For live deepfake attacks, attackers may use virtual camera software like Open Broadcaster Software (OBS) to feed the manipulated video into the meeting application. If logging is enabled for platforms like Zoom or Microsoft Teams, security teams can review logs for camera device names. The presence of uncommon camera names like ‘OBS Virtual Camera’ can be a strong indicator of a deepfake attempt, since this software is not typically used by employees for standard meetings; a minimal log-review sketch follows this list.
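As a minimal sketch of that log-review idea: the script below assumes meeting telemetry has already been exported to a CSV file with columns naming each participant and the camera device they used. The file name, column names, and the list of suspicious device strings are hypothetical placeholders; real exports from Zoom, Teams, or other platforms will differ.

```python
# Sketch: flag meeting participants whose camera device name suggests virtual-camera software.
# Assumes a CSV export with hypothetical columns "user_email" and "camera_device".
import csv

SUSPECT_MARKERS = ("obs virtual camera", "obs-camera", "manycam", "snap camera", "virtual cam")

def flag_virtual_cameras(csv_path, camera_field="camera_device", user_field="user_email"):
    """Return (user, device) pairs whose camera name matches a suspicious marker."""
    hits = []
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            device = (row.get(camera_field) or "").strip().lower()
            if any(marker in device for marker in SUSPECT_MARKERS):
                hits.append((row.get(user_field, "unknown"), device))
    return hits

if __name__ == "__main__":
    for user, device in flag_virtual_cameras("meeting_device_log.csv"):
        print(f"Review: {user} joined with camera '{device}'")
```

Virtual cameras also have legitimate uses (streaming overlays, background-effect tools), so any hit should be treated as a lead for analyst review, not proof of a deepfake.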
Verification and Controls:
- Implement strict out-of-band verification (e.g., a call back to a known phone number) for any unusual or high-value requests, particularly those involving financial transfers, changes to payment details, or disclosure of sensitive information over digital channels.
- Implement “master passcodes” or challenge questions for authenticating identities during sensitive communications.
- Enforce dual approvals for significant decisions and transactions; a minimal workflow sketch follows this list.
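As a rough illustration of how the callback-verification and dual-approval controls above can be enforced in code rather than left to individual judgment, the sketch below models a payment-change request that cannot be released until it has been verified out of band and, above a threshold, approved by two distinct people. The class, field names, and dollar threshold are hypothetical.

```python
# Sketch: a payment-change request that requires out-of-band callback verification
# and, above a (hypothetical) threshold, two distinct approvers before release.
from dataclasses import dataclass, field

HIGH_RISK_THRESHOLD = 10_000  # illustrative dollar amount

@dataclass
class PaymentChangeRequest:
    requester: str
    amount: float
    callback_verified: bool = False          # set only after calling a known-good number
    approvers: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        if approver == self.requester:
            raise ValueError("requester cannot approve their own request")
        self.approvers.add(approver)

    def can_release(self) -> bool:
        if not self.callback_verified:
            return False
        needed = 2 if self.amount >= HIGH_RISK_THRESHOLD else 1
        return len(self.approvers) >= needed

req = PaymentChangeRequest(requester="alice@example.com", amount=250_000)
req.callback_verified = True                 # confirmed via a call back to a known number
req.approve("bob@example.com")
req.approve("carol@example.com")
print("Release allowed:", req.can_release())  # True only after verification and both approvals
```

The design point is that release is blocked by policy rather than by one employee's in-the-moment judgment, which is precisely the pressure an urgent deepfake call tries to exploit.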
Preventative Measures:
- Minimize the public availability of audiovisual material of executives/employees to limit training data for attackers.
- Assess organizational susceptibility to deepfake attacks, identifying vulnerable processes and personnel.
For Individuals
Increase Media Literacy and Critical Thinking:
- Approach online content with healthy skepticism. Question the authenticity of unexpected, sensational, or emotionally manipulative videos, audio messages, or images.
- Always consider the source of information. Verify claims through multiple reputable sources before accepting them as true.
Recognize Potential Red Flags:
- Be aware of common visual indicators such as unnatural eye movements, mismatched lighting, a face that flickers when an object passes in front of it, or an unwillingness from the person to show their side profile. For audio, listen for robotic cadence, unnatural pitch, or a lack of emotional inflection [17]. However, understand that sophisticated deepfakes may not exhibit obvious flaws.
Protect Personal Data:
- Review and tighten privacy settings on all social media accounts to limit public access to personal images, videos, and information.
- Be mindful of the amount of personal audiovisual data shared online.
Verify and Report:
- If you receive a suspicious or urgent request, even if it appears to be from a known contact, verify it through a separate, trusted communication channel (e.g., call a known phone number).
- Report suspected deepfakes immediately to the platform where they are hosted. If the deepfake is being used for malicious purposes (e.g., fraud, harassment, defamation, non-consensual explicit content), report it to law enforcement agencies.
VII. References
Works cited
1. Deepfake statistics 2025: how frequently are celebrities targeted?, accessed June 7, 2025, https://surfshark.com/research/study/deepfake-statistics
2. Cybercrime: Lessons learned from a $25m deepfake attack | World …, accessed June 7, 2025, https://www.weforum.org/stories/2025/02/deepfake-ai-cybercrime-arup/
3. Understanding the Hidden Costs of Deepfake Fraud in Finance – Reality Defender, accessed June 7, 2025, https://www.realitydefender.com/insights/understanding-the-hidden-costs-of-deepfake-fraud-in-finance
4. Top 5 Cases of AI Deepfake Fraud From 2024 Exposed | Blog – Incode, accessed June 7, 2025, https://incode.com/blog/top-5-cases-of-ai-deepfake-fraud-from-2024-exposed/
5. Gauging the AI Threat to Free and Fair Elections | Brennan Center for Justice, accessed June 7, 2025, https://www.brennancenter.org/our-work/analysis-opinion/gauging-ai-threat-free-and-fair-elections
6. FBI warns of fake texts, deepfake calls impersonating senior U.S. …, accessed June 7, 2025, https://cyberscoop.com/fbi-warns-of-ai-deepfake-phishing-impersonating-government-officials/
7. Top 10 Terrifying Deepfake Examples – Arya.ai, accessed June 7, 2025, https://arya.ai/blog/top-deepfake-incidents
8. Deepfake threats to companies – KPMG International, accessed June 7, 2025, https://kpmg.com/xx/en/our-insights/risk-and-regulation/deepfake-threats.html
9. Cybercrime Trends: Social Engineering via Deepfakes | Lumi Cybersecurity, accessed June 7, 2025, https://www.lumicyber.com/blog/cybercrime-trends-social-engineering-via-deepfakes/
10. Investigation finds social media companies help enable explicit deepfakes with ads for AI tools – CBS News, accessed June 7, 2025, https://www.cbsnews.com/video/investigation-finds-social-media-companies-help-enable-explicit-deepfakes-with-ads-for-ai-tools/
11. How to Mitigate Deepfake Threats: A Security Awareness Guide – TitanHQ, accessed June 7, 2025, https://www.titanhq.com/security-awareness-training/guide-mitigate-deepfakes/
12. Deepfake Defense: Your Shield Against Digital Deceit | McAfee AI Hub, accessed June 7, 2025, https://www.mcafee.com/ai/news/deepfake-defense-your-8-step-shield-against-digital-deceit/
13. FBI Warns of Deepfake Messages Impersonating Senior Officials …, accessed June 7, 2025, https://www.securityweek.com/fbi-warns-of-deepfake-messages-impersonating-senior-officials/
14. FBI Alert of Malicious Campaign Impersonating U.S. Officials Points to the Urgent Need for Identity Verification – BlackCloak | Protect Your Digital Life™, accessed June 7, 2025, https://blackcloak.io/fbi-alert-of-malicious-campaign-impersonating-u-s-officials-points-to-the-urgent-need-for-identity-verification/
15. AI’s Role in Deepfake Countermeasures and Detection Essentials from Tonex, Inc. | NICCS, accessed June 7, 2025, https://niccs.cisa.gov/training/catalog/tonex/ais-role-deepfake-countermeasures-and-detection-essentials
16. What is a Deepfake Attack? | CrowdStrike, accessed June 7, 2025, https://www.crowdstrike.com/en-us/cybersecurity-101/social-engineering/deepfake-attack/
17. Determine Credibility (Evaluating): Deepfakes – Milner Library Guides, accessed June 7, 2025, https://guides.library.illinoisstate.edu/evaluating/deepfakes
18. Understanding the Impact of Deepfake Technology – HP.com, accessed June 7, 2025, https://www.hp.com/hk-en/shop/tech-takes/post/understanding-impact-deepfake-technology
19. Deepfakes: Definition, Types & Key Examples – SentinelOne, accessed June 7, 2025, https://www.sentinelone.com/cybersecurity-101/cybersecurity/deepfakes/
20. en.wikipedia.org, accessed June 7, 2025, https://en.wikipedia.org/wiki/Deepfake#:~:text=While%20the%20act%20of%20creating,generative%20adversarial%20networks%20(GANs).
21. What are deepfakes? – Malwarebytes, accessed June 7, 2025, https://www.malwarebytes.com/cybersecurity/basics/deepfakes
22. Complete Guide to Generative Adversarial Network (GAN) – Carmatec, accessed June 7, 2025, https://www.carmatec.com/blog/complete-guide-to-generative-adversarial-network-gan/
23. How to Get Started with GANs: A Step-by-Step Tutorial – Draw My Text – Text-to-Image AI Generator, accessed June 7, 2025, https://drawmytext.com/how-to-get-started-with-gans-a-step-by-step-tutorial/
24. Detection of AI Deepfake and Fraud in Online Payments Using GAN-Based Models – arXiv, accessed June 7, 2025, https://arxiv.org/pdf/2501.07033
25. What is a GAN? – Generative Adversarial Networks Explained – AWS, accessed June 7, 2025, https://aws.amazon.com/what-is/gan/
26. Overview of GAN Structure | Machine Learning – Google for Developers, accessed June 7, 2025, https://developers.google.com/machine-learning/gan/gan_structure
27. Unlocking the Power of GAN Architecture Diagram: A Comprehensive Guide for Developers, accessed June 7, 2025, https://www.byteplus.com/en/topic/110690
28. We Looked at 78 Election Deepfakes. Political Misinformation Is Not an AI Problem., accessed June 7, 2025, https://knightcolumbia.org/blog/we-looked-at-78-election-deepfakes-political-misinformation-is-not-an-ai-problem
29. What is a deepfake? – Internet Matters, accessed June 7, 2025, https://www.internetmatters.org/resources/what-is-a-deepfake/
30. Don’t Be Fooled: 5 Strategies to Defeat Deepfake Fraud – Facia.ai, accessed June 7, 2025, https://facia.ai/blog/dont-be-fooled-5-strategies-to-defeat-deepfake-fraud/
31. Top 10 AI Deepfake Detection Tools to Combat Digital Deception in 2025 – SOCRadar, accessed June 7, 2025, https://socradar.io/top-10-ai-deepfake-detection-tools-2025/
32. How to Spot Deepfakes – Fake News – Dr. Martin Luther King, Jr. Library at San José State University Library, accessed June 7, 2025, https://library.sjsu.edu/fake-news/deepfakes
Threat Advisory created by The Cyber Florida Security Operations Center. Contributing Security Analysts: Derek Kravetsky