Dive Brief:
- Just over half of businesses in the U.S. and U.K. have been targeted by a financial scam powered by “deepfake” technology, and 43% have fallen victim to such an attack, according to a survey by finance software provider Medius.
- Of the 1,533 U.S. and U.K. finance professionals polled by Medius, 85% viewed such scams as an “existential” threat to their organization’s financial security, according to a report on the findings published last month. Deepfakes are images, videos or audio recordings manipulated with artificial intelligence to be bogus yet convincing.
- “More and more criminals are seeing deepfake scams as an effective way to get money from businesses,” Ahmed Fessi, chief transformation and information officer at Medius, said in an interview. These scams “combine phishing techniques with social engineering, plus the power of AI.”
Dive Insight:
Generative AI could enable fraud losses to reach $40 billion in the U.S. by 2027, Big Four accounting firm Deloitte said in a May report.
“There is already an entire cottage industry on the dark web that sells scamming software from US$20 to thousands of dollars,” the report said. “This democratization of nefarious software is making a number of current anti-fraud tools less effective.”
In May, British engineering group Arup was in the spotlight after reports that scammers siphoned $25 million from the company by using deepfake technology to pose as the organization’s CFO. Following a video conference with the fake CFO and other AI-generated employees, an Arup staff member made a series of transfers to five different Hong Kong bank accounts before discovering the fraud.
In another example, the Guardian reported in May that advertising group WPP had been the target of an unsuccessful deepfake scam.
Online content such as a YouTube video or podcast featuring a CEO or CFO can give criminals the raw material for a convincing deepfake, which can then be used in an impersonation scam to dupe someone such as a finance team member into turning over company funds, Fessi said. As part of the scam, the fraudster may try to create a false sense of urgency to pressure the unsuspecting employee into acting quickly.
In another type of deepfake scam, the attacker may try to impersonate a vendor or supplier that works with the company, he said.
In light of the rising threats posed by deepfakes, Fessi urged companies to take defensive measures built on three main pillars:
- Education: “Everyone in the organization should have a basic understanding of what a deepfake is, how to spot one, and what steps to take if they are targeted,” he said. Companies should also consider complementing this with specialized training for senior executives and managers as well as employees in high-risk departments.
- Process: Companies need checks and balances in place to minimize the risk of employees inadvertently paying fraudsters, such as requiring at least two people to sign off on a wire transfer, according to Fessi; a simple sketch of such a dual-approval control appears after this list. Organizations also need to plan how they will respond in the event of a successful deepfake attack. “It’s important to have these processes documented and shared with the workforce, especially the finance staff,” he said.
- Technology: Tools such as AI and machine learning, when combined with a multi-level validation process and segregation of duties, can help businesses spot anomalous transactions, Fessi said; a simplified illustration of that kind of screening follows the payment-approval sketch below.
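To make the “process” pillar concrete, here is a minimal sketch of the dual-approval control Fessi describes, in which a wire transfer is released only after two people, neither of them the requester, sign off. The class and function names are illustrative assumptions, not any particular vendor’s API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a dual-approval ("four-eyes") control for wire
# transfers. Names and thresholds are illustrative assumptions.

REQUIRED_APPROVALS = 2  # at least two people must sign off on a transfer


@dataclass
class PaymentRequest:
    requester: str
    beneficiary: str
    amount: float
    approvers: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # Segregation of duties: the requester cannot approve their own payment.
        if approver == self.requester:
            raise PermissionError("Requester may not approve their own transfer")
        self.approvers.add(approver)

    def release(self) -> None:
        # Funds move only once two distinct approvers have signed off.
        if len(self.approvers) < REQUIRED_APPROVALS:
            raise PermissionError(
                f"{REQUIRED_APPROVALS} independent approvals required, "
                f"got {len(self.approvers)}"
            )
        print(f"Releasing ${self.amount:,.2f} to {self.beneficiary}")


# Usage: a payment requested by one employee needs two other sign-offs.
req = PaymentRequest(requester="alice", beneficiary="Vendor Ltd", amount=50_000)
req.approve("bob")
req.approve("carol")
req.release()
```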
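And as a simplified statistical stand-in for the ML-driven monitoring named in the “technology” pillar, the sketch below holds a payment for human review when it is directed to a previously unseen bank account or deviates sharply from a vendor’s payment history. The threshold and sample data are illustrative assumptions, not Medius’s actual product logic.

```python
import statistics

# Illustrative rule-plus-statistics screening for outgoing payments.
Z_THRESHOLD = 3.0  # flag amounts more than 3 standard deviations from the norm


def flag_anomalies(history: list[float], new_amount: float, new_account: bool) -> list[str]:
    """Return the reasons a payment should be held for manual review."""
    reasons = []
    if new_account:
        # A first-time beneficiary account is a classic payment-fraud marker.
        reasons.append("payment directed to a previously unseen bank account")
    if len(history) >= 2:
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev > 0 and abs(new_amount - mean) / stdev > Z_THRESHOLD:
            reasons.append(
                f"amount {new_amount:,.2f} deviates sharply from vendor history"
            )
    return reasons


# Usage: a vendor usually invoices ~$10,000, so a $250,000 transfer
# to a new account gets held for review rather than paid automatically.
past_payments = [9_800.0, 10_200.0, 10_050.0, 9_900.0, 10_100.0]
issues = flag_anomalies(past_payments, 250_000.0, new_account=True)
if issues:
    print("HOLD for manual review:", "; ".join(issues))
```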