Deepfake Attacks on Businesses: Risk Analysis
Janhavi Pathak | 18 July 2024
Summary
The weaponisation of deepfakes for financial fraud and industrial espionage is spiking. The primary technique used by threat actors is synthetic visual and auditory impersonation of top executives. Attacks are increasingly reinforced through multiple channels, such as email paired with cloned audio or video calls, to exploit trust and credibility established in the online space.
The target pool is shifting from larger companies to small and medium enterprises (SMEs). SMEs without adequate in-house cybersecurity measures risk becoming ‘weak’ links and easy targets, exposing their partners to financial losses and supply chain disruptions.
Open-source platforms with user anonymity have facilitated continuous improvement and innovations in deepfakes. Information sharing, ranging from internal source code to tactical ideation, coupled with progress in Artificial Intelligence (AI), has rendered the deepfake technology accessible, cheap and hyperrealistic.
Businesses need to adopt routinised threat scenarios and post-incident training to tackle new threat vectors. In-house cybersecurity measures should be calibrated with evolving AI/ML technology to optimise threat detection and incident response. Cyber awareness across management levels and proper authentication systems are crucial for countering the threat of deepfakes.
Key Drivers Behind Deepfake Attacks
Deepfakes are a critical cybersecurity threat against companies worldwide. The technology uses deep learning techniques to produce synthetic audiovisual media that is often hyperrealistic. While the early deepfake technology required substantial computational resources and technical expertise, the advent of generative Artificial Intelligence (Gen AI) has lowered the technical and economic thresholds. It has transformed synthetic media into a cheap, easy-to-use and accessible commodity, compounding the problem of ‘real vs fake.’
Deepfake creation communities share open-source code repositories on platforms such as GitHub and trade techniques on forums like 4chan, 8chan, and Reddit, constantly improving the technology's quality, efficiency, and usability. The pseudonymity these forums afford provides ‘safe spaces’ to share ideas and step-by-step guides. The growing use of social media has made it easy to harvest targeted individuals' photos, videos, and voice samples. Consequently, people with little or no technical background can learn to create deepfakes, which they often commodify in online marketplaces.
Face swapping and voice cloning with realistic lip-syncing have birthed new threat vectors like Business Identity Compromise (BIC). Threat actors generate synthetic corporate personas with realistic voice, visuals and idiosyncratic behaviour to dupe companies. In 2020, a bank in the United Arab Emirates lost $35 million to a deepfake voice-cloning attack, at the time the largest publicly disclosed loss to a deepfake. Similarly, in 2024 an AI-generated deepfake of a Chief Financial Officer (CFO) on a video call scammed the British engineering group Arup out of $25 million, underscoring the escalating risk of deepfake weaponisation against businesses.
Evolving TTPs of Threat Actors
Deepfakes often target large corporations to undermine their credibility and extort steep payments. As big companies bolster their IT security and deepfake tech becomes increasingly democratised, the target pool may shift towards Small and Medium Enterprises (SMEs) in developing regions.
Deepfake attackers are employing multimodal strategies to outsmart siloed processes. Email remains the top delivery method, but it is now often followed by a convincingly cloned deepfake call (audio or video) that attacks the target's aural and visual senses simultaneously. The combination instils a sense of urgency and credibility, causing targets to overlook or bypass established protocols.
Risk Assessment
Deepfakes emerged as the second most common cybersecurity threat against US businesses in 2024. Close to 10% of companies have witnessed a successful or attempted deepfake fraud delivered through email, reflecting obsolete cybersecurity measures and limited workforce awareness. Near-perfect AI cloning of targeted individuals' text, voices and videos has amplified traditional phishing attacks, leaving businesses vulnerable. Threat actors exploit the trust built online among individuals in the era of remote and hybrid work. Without timely redress, an attack impersonating a trusted figure can damage the trust, integrity and confidence integral to any company's social fabric.
Companies lacking secured communication channels and authentication protocols are particularly vulnerable to deepfake attacks. Although financial gain through fraudulent, unauthorised transfers is the primary objective of threat actors, businesses can also suffer significant reputational damage. Disinformation through deepfakes targeting top executives can erode employee and consumer confidence in management. It can also be used to manipulate a company’s stock during IPOs, mergers, and corporate reorganisations, causing instability in financial markets. Without an effective incident-response strategy, companies risk poor decision-making in the aftermath and further operational losses.
Deepfakes have aggravated the threat of industrial espionage. Limited oversight of the private sector in democratic societies makes it an easy target for cyberattacks. Big companies have diversified and globalised supply chains involving several off-shore partners and third-party vendors, predominantly SMEs with limited resources and poor cybersecurity systems. This can provide a window of opportunity for threat actors aiming to exploit weak links in a business. Any data breaches that lead to the leakage of trade secrets, sensitive information, and Intellectual Property (IP) theft can compromise countries' vital supply chains and, as a result, national security.
Propaganda disseminated through deepfakes to sway popular sentiment against companies from certain countries has spiked. Governments are recalibrating national security strategies to account for state-sponsored actors' malicious use of deepfakes. Companies such as Amazon Web Services (AWS), which hold large government contracts to build critical data centres that facilitate information sharing and interoperability among allied countries, face a heightened risk of lateral movement after an initial breach. Strengthening endpoints and adopting a robust security posture therefore becomes a priority when “intelligence and national security operations become data operations.”
Four Potential Solutions for Industries
Firstly, as deepfake-driven cyberattacks become increasingly common, businesses can benefit from developing and sustaining AI/ML-integrated data security initiatives that flag discrepancies between real and synthetic files. Proactive consolidation of threat detection systems, internal traffic monitoring, and modernised email defences are essential preliminary steps towards building a technical bulwark against deepfakes. Incorporating emerging techniques into routine operations, such as compulsory watermarking of audiovisual files and liveness checks based on photoplethysmography (PPG), which detects the subtle blood-flow signals present in genuine video, can help companies distinguish real from synthetic media.
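To make the detection idea above concrete, the sketch below shows one simple family of AI-assisted media checks: inspecting frequency-domain statistics, since generative models can leave characteristic spectral artifacts. This is an illustrative Python sketch, not a production detector; the function names and the `flag_suspicious` thresholds are hypothetical placeholders that a real system would calibrate on labelled real and synthetic media.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy beyond `cutoff` of the way to the Nyquist
    frequency in a grayscale image. Synthetic media sometimes shows atypical
    energy in this band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalised distance from the spectrum centre (0 = DC component)
    dist = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return float(spectrum[dist > cutoff].sum() / spectrum.sum())

def flag_suspicious(image: np.ndarray, low: float = 0.01, high: float = 0.6) -> bool:
    """Flag images whose high-frequency energy falls outside an expected band.
    The band limits here are illustrative, not calibrated values."""
    ratio = high_freq_energy_ratio(image)
    return not (low <= ratio <= high)
```

In practice such hand-crafted statistics would be one feature among many in a trained classifier, alongside watermark and provenance checks.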
Secondly, fortifying identity verification and validation protocols through two-factor and multifactor authentication can provide stronger coverage for transactions, particularly high-value ones. By creating secondary communication channels to confirm the legitimacy of transactions, companies can implement the ‘trust, but verify’ principle and blunt threat actors' multimodal targeting strategy. In parallel, fostering a healthy security culture within the company should be prioritised to equip and empower employees with the requisite knowledge, awareness, and tools to deal with deepfakes.
Thirdly, a cybersecurity-educated workforce is the first line of defence for any business. Nurturing a positive security culture should therefore balance advanced technology-oriented solutions with routinised cyber awareness campaigns for employees across the board. Employees and leadership must cultivate a working understanding of deepfakes to detect potential risks and adopt mitigation strategies. An effective way to bridge this knowledge gap is regular threat-scenario and incident-response training that simulates deepfake attacks. This instils healthy scepticism towards inconsistencies in suspect media while giving the workforce a streamlined response system in case of a breach.
Lastly, companies need to adopt a multistakeholder security strategy to address deepfake attacks efficiently. This involves establishing contact with social networks to counter defamatory narratives propelled by synthetic media designed to harm a company’s reputation. Similarly, close cooperation with the public sector can optimise threat detection, especially by hostile state actors with larger resources. A robust information-sharing mechanism between the public and private sectors against deepfakes would yield better strategies for long-term resilience and protection of IT security.
Forecasts
Short-term
In the short term, the unbridled progress and proliferation of the AI/ML technology underpinning deepfakes across wider populations and developing regions will likely generate new threat centres. This could drive tactical innovation and cooperation among threat actors exploiting anonymised open-source platforms to share knowledge and expertise.
Medium-term
Deepfakes will likely become more realistic, personalised, and targeted in the medium term. As top industry executives grow wary of being targeted, victims are likely to include more mid- and entry-level employees, ushering in ‘low-impact but high-probability’ attacks. Reputational damage and financial fraud may remain the key threats, followed by the erosion of intra-company trust and integrity.
Long-term
As big companies invest in bolstering their IT security over the long term, attacks will likely shift towards third-party vendors and suppliers (usually SMEs) with limited budgets and little familiarity with new threat vectors. Businesses hesitant to incorporate AI/ML-backed IT security programs will likely experience more attacks, risking supply chain disruptions and market instability.