In the wildly popular and award-winning HBO series “Game of Thrones,” a common warning was that “the White Walkers are coming” — referring to a race of ice creatures that posed a severe threat to humanity.
We should consider deepfakes the same way, contends Ajay Amlani, president and head of the Americas at biometric authentication company iProov.
“There’s been general concern about deepfakes over the last few years,” he told VentureBeat. “What we’re seeing now is that the winter is here.”
Indeed, roughly half of organizations (47%) recently polled by iProov say they have encountered a deepfake. The company’s new survey out today also revealed that 70% of organizations believe generative AI-created deepfakes will have a high impact on their organization. At the same time, though, just 62% say their company is taking the threat seriously.
“This is becoming a real concern,” said Amlani. “Literally you can create a completely fictitious person, make them look like you want, sound like you want, react in real-time.”
Deepfakes up there with social engineering, ransomware, password breaches
In just a short period, deepfakes — false, concocted avatars, images, voices and other media delivered via photos, videos, phone and Zoom calls, typically with malicious intent — have become incredibly sophisticated and often undetectable.
This has posed a great threat to organizations and governments. For instance, a finance worker at a multinational firm paid out $25 million after being duped by a deepfake video call with their company’s “chief financial officer.” In another glaring instance, cybersecurity company KnowBe4 discovered that a new employee was actually a North Korean hacker who made it through the hiring process using deepfake technology.
“We can create fictionalized worlds now that are completely undetected,” said Amlani, adding that the findings of iProov’s research were “quite staggering.”
Interestingly, there are regional differences when it comes to deepfakes. For instance, organizations in Asia Pacific (51%), Europe (53%) and Latin America (53%) are significantly more likely than those in North America (34%) to have encountered a deepfake.
Amlani pointed out that many malicious actors are based internationally and go after local areas first. “That’s growing globally, especially because the internet is not geographically bound,” he said.
The survey also found that deepfakes are now tied for third place among the greatest security concerns. Password breaches ranked highest (64%), followed closely by ransomware (63%), with phishing/social engineering attacks and deepfakes tied at 61%.
“It’s very hard to trust anything digital,” said Amlani. “We need to question everything we see online. The call to action here is that people really need to start building defenses to prove that the person is the right person.”
Threat actors have gotten so good at creating deepfakes thanks to increased processing speeds and bandwidth, the greater and faster ability to share information and code via social media and other channels — and, of course, generative AI, Amlani pointed out.
While there are some simplistic measures in place to address threats — such as embedded software on video-sharing platforms that attempt to flag AI-altered content — “that’s only going one step into a very deep pond,” said Amlani. On the other hand, there are “crazy systems” like captchas that keep getting more and more challenging.
“The concept is a randomized challenge to prove that you’re a live human being,” he said. But they’re becoming increasingly difficult for humans to even verify themselves, especially the elderly and those with cognitive, sight or other issues (or people who just can’t identify, say, a seaplane when challenged because they’ve never seen one).
Instead, “biometrics are easy ways to be able to solve for those,” said Amlani.
In fact, iProov found that three-quarters of organizations are turning to facial biometrics as a primary defense against deepfakes. This is followed by multifactor authentication and device-based biometrics tools (67%). Enterprises are also educating employees on how to spot deepfakes and the potential risks (63%) associated with them. Additionally, they are conducting regular audits on security measures (57%) and regularly updating systems (54%) to address threats from deepfakes.
iProov also assessed the effectiveness of different biometric methods in fighting deepfakes. Their ranking:
- Fingerprint 81%
- Iris 68%
- Facial 67%
- Advanced behavioral 65%
- Palm 63%
- Basic behavioral 50%
- Voice 48%
But not all authentication tools are equal, Amlani noted. Some are cumbersome and not that comprehensive — requiring users to move their heads left and right, for instance, or raise and lower their eyebrows. But threat actors using deepfakes can easily get around this, he pointed out.
iProov’s AI-powered tool, by contrast, uses the light from the device screen that reflects 10 randomized colors on the human face. This scientific approach analyzes skin, lips, eyes, nose, pores, sweat glands, follicles and other details of true humanness. If the result doesn’t come back as expected, Amlani explained, it could be a threat actor holding up a physical photo or an image on a cell phone, or they could be wearing a mask, which can’t reflect light the way human skin does.
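As a rough illustration of the challenge-response idea behind this kind of liveness check — not iProov’s actual implementation — the sketch below generates a randomized sequence of screen colors and verifies that the reflection observed by the camera matches it. The color palette, matching logic and threshold are all hypothetical stand-ins.

```python
import random

# Hypothetical palette of screen-flash colors; a real system would work with
# precise color values and measured skin reflectance, not names.
PALETTE = ["red", "green", "blue", "yellow", "cyan", "magenta",
           "white", "orange", "purple", "pink"]


def make_challenge(length: int = 10) -> list[str]:
    """Pick an unpredictable, randomized sequence of colors to flash on screen."""
    return random.sample(PALETTE, k=length)


def observed_reflection(challenge: list[str]) -> list[str]:
    """Stand-in for camera analysis: in a real system this would estimate the
    dominant color reflected off the user's face for each flashed frame.
    Here we simply echo the challenge to simulate a live subject."""
    return list(challenge)


def is_live(challenge: list[str], reflection: list[str],
            threshold: float = 0.9) -> bool:
    """Treat the subject as live only if the reflected colors track the
    randomized challenge closely enough. A printed photo, a phone screen or a
    mask would not reflect the sequence the way real skin does."""
    if len(reflection) != len(challenge):
        return False
    matches = sum(c == r for c, r in zip(challenge, reflection))
    return matches / len(challenge) >= threshold


if __name__ == "__main__":
    challenge = make_challenge()
    reflection = observed_reflection(challenge)
    print("Challenge:", challenge)
    print("Live subject?", is_live(challenge, reflection))
```

Because the color sequence is randomized per session, a replayed recording or a static image cannot anticipate it — which is the core of the challenge-response design Amlani describes.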
The company is deploying its tool across commercial and government sectors, he noted, calling it easy and quick yet still “highly secured.” It has what he called an “extremely high pass rate” (north of 98%).
All told, “there is a global realization that this is a massive problem,” said Amlani. “There needs to be a global effort to fight against deepfakes, because the bad actors are global. It’s time to arm ourselves and fight against this threat.”