‘It is hard to see how you can prevent the bad actors from using [AI] for bad things.’
Every day, it seems, we learn of a paradigm-shifting way AI is altering how we live and work. And with each new advancement comes another possible security threat. Artificial intelligence? Try artificial identities.
According to a survey by forensics firm Regula, over the past year 46% of organizations globally have experienced what’s known as synthetic identity fraud, which involves scammers stitching together authentic and fake ID information to create new personas. These “Frankenstein” identities are then used to make fraudulent purchases or set up bank accounts. AI is enhancing these efforts.
A 2022 story by Stateline chronicled crooks who “combined real Social Security numbers with mismatched or phony names to create new identities.” They were eventually charged “with 108 counts of illegal financial activity, mostly borrowing huge amounts of money they never intended to pay back.”
Banks are a big target. The Regula survey found that financial institutions are the most popular victims of synthetic identity fraud: more than 90% of banking industry respondents said they perceive it as a real threat, and 49% said they have already experienced it.
Meanwhile, AI-generated “deepfakes”—both voice and video—are a growing concern. Several recent articles have documented how AI voice clones can successfully bypass biometric login platforms commonly used by banks. There are even how-to videos on YouTube.
Deepfake defenses. Regula’s chief technology officer, Ihar Kliashchou, says neural networks can help detect deepfakes, but “they should be used in conjunction with other antifraud measures that focus on physical and dynamic parameters, such as face liveness checks, document liveness checks via optically variable security elements, etc.”
We are almost certainly just seeing the tip of the AI iceberg. Geoffrey Hinton, a pioneer of the neural-network techniques behind generative AI who recently left his post at Google, now looks back on his half century in the field with a touch of regret. “It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton says.