Billions of dollars are disappearing each year as cybercriminals supercharge their scams with the help of artificial intelligence.
Last year alone, more than 859,000 complaints poured into the FBI's Internet Crime Complaint Center, revealing losses of $16.6 billion, a staggering jump from the previous year. Phishing and spoofing dominated the landscape, with more than 193,000 reported incidents.
New tools like voice cloning and generative AI are making social engineering attacks not just more frequent but alarmingly convincing. The days when phishing arrived only as poorly written emails are over. Now, scammers impersonate bank staff or government agents with cloned voices that are hard to distinguish from the real thing, according to a recent Consumer Reports investigation.
Scams Today Are Slicker Than Ever
The sophistication of vishing, or voice phishing conducted over the phone, means even a seasoned professional can be fooled. “Vishing attacks use social engineering techniques to impersonate legitimate callers, such as bank representatives, tech support agents or government officials, in order to trick victims into sharing sensitive information, such as login credentials or credit card numbers,” according to the Oct. 16 analysis by Kaufman Rossin.
It is not only established employees who are at risk. So-called boss scams now target new hires, using their social media footprints to sound credible, often before IT teams can intervene or even detect that something is wrong.
Criminals have now weaponized AI to mimic automated phone menus convincingly and to shift their approach based on a victim's responses. In 2024, the FBI attributed 83% of total cybercrime losses, roughly $13.7 billion, to attacks that rely on manipulating trust.
Defenders are forced to step up their game. Multifactor authentication and encrypted communication are now baseline requirements, but many organizations are taking it further.
The Financial Services Information Sharing and Analysis Center (FS-ISAC) has urged companies to use AI to monitor transactions for unusual behavior and stop questionable payments before they complete. Guidance accompanying the latest crypto-related cybercrime statistics likewise recommended testing cyber defenses with simulated AI-driven phishing campaigns to prepare staff for the new wave of synthetic threats.
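To make that concrete, the sketch below shows what AI-assisted transaction screening can look like: an anomaly detector is trained on a customer's payment history, and each new payment is scored before it is released. This is a minimal illustration only, assuming scikit-learn's IsolationForest as the model; the features (amount, hour of day, payee novelty), the threshold, and the function name screen_payment are hypothetical choices, not anything prescribed by FS-ISAC.

```python
# Minimal sketch of AI-assisted transaction screening: train an anomaly
# detector on a customer's historical payments, then score each new payment
# before the money moves. Features and thresholds here are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic history: typical payments cluster around modest amounts,
# business hours, and previously seen payees (novelty flag = 0).
history = np.column_stack([
    rng.normal(200, 50, 500),   # amount in dollars
    rng.normal(13, 2, 500),     # hour of day
    np.zeros(500),              # 0 = previously seen payee
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(history)

def screen_payment(amount: float, hour: float, new_payee: bool) -> str:
    """Return 'release' or 'hold' for a pending payment (illustrative rule)."""
    features = np.array([[amount, hour, float(new_payee)]])
    # decision_function: positive = resembles history, negative = anomalous
    if detector.decision_function(features)[0] < 0:
        return "hold"   # route to manual review before the payment completes
    return "release"

# A routine payment should pass; a large, late-night payment to a
# never-before-seen payee should be held for review.
print(screen_payment(180, 14, new_payee=False))   # expected: release
print(screen_payment(9500, 3, new_payee=True))    # expected: hold
```

In practice, a flagged payment would typically land in a manual-review queue rather than be rejected outright, and the model would be retrained as spending patterns change.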
Companies that have taken these steps are already seeing results. More than half of large firms adopting AI security tools report lower fraud losses and faster incident response, signaling a shift from pure awareness toward real resilience.
Incident response planning is no longer buried in the IT department—it is now a top priority for boards and executives.
For those in charge of company finances and risk, the threat has shifted from technical firewalls to direct, personal manipulation. In financial services, a digital identity can be hijacked through nothing more than a convincing AI-generated phone call.
In an era of synthetic deception, from deepfakes to AI-assisted phishing, trust has become the most precarious asset to defend.