A recent study conducted by digital identity firm Signicat and consultancy Consult Hyperion has brought to light a concerning trend: more than a third of fraud attempts targeting financial institutions now involve the use of artificial intelligence (AI).
The research underscores a rapidly evolving threat landscape, with fraud-prevention decision-makers acknowledging that AI is likely to drive the majority of identity fraud in the future, leaving a growing number of victims in its wake.
Notably, approximately 75% of organizations cite a lack of expertise, time, and financial resources as obstacles to detecting and combating AI-driven fraud.
This deficit in expertise and organizational commitment is particularly troubling when juxtaposed with a startling statistic from the report: incidents of deepfake fraud have skyrocketed by 2137% over the past three years.
Account Takeovers
The report reveals a troubling shift in the strategies employed by fraudsters. While AI was previously utilized primarily for creating synthetic identities and forging documents, it is now being leveraged on a larger scale for deepfake and social engineering attacks.
Account takeovers, previously considered a consumer-focused issue, have now become the most prevalent form of fraud for business-to-business entities. Cybercriminals exploit weak or reused passwords to gain access to existing accounts, often utilizing deepfake technology to impersonate legitimate account holders.
Insights on Deepfakes
Deepfake technology, which uses AI to generate authentic-looking but fabricated audio and video content, now accounts for 6.5% of all fraud attempts, reflecting a dramatic surge in incidents over the past three years.
The banking sector, in particular, is alarmed by deepfake attacks, with 92% of cybersecurity professionals in the industry worried about their potential for fraudulent misuse.
Furthermore, the financial ramifications of deepfake fraud extend beyond the banking industry. In 2023, 26% of small companies and 38% of large corporations reported losses of up to US$480,000 due to deepfake fraud.
Regulators have also taken note. The UK’s Financial Conduct Authority (FCA) recently issued a warning about the risks associated with deepfake fraud, emphasizing the potential for AI to disrupt the financial services sector on an unprecedented scale.
The severity of the threat posed by deepfake attacks was exemplified in a recent incident involving engineering firm Arup, which fell victim to a deepfake fraud amounting to £20 million (US$25,486,000) after an employee unwittingly participated in a video conference featuring a digitally manipulated version of the company’s CFO.
This incident underscores the sophistication and potential impact of deepfake attacks on even the largest organizations.