From cheapfakes to deepfakes

I was listening to someone on the radio talking about AI. At first I was skeptical, as it sounded like the classic hand-waving of “machines will never be able to replace humans” without any specifics. However, they went on to provide more detail, mentioning how quickly we can tell, for example, when someone’s tone of voice says “I’m not really OK, but I’m pretending to be.”
We spot when something isn’t right, which is why it’s interesting to me that, although I got 10/10 on my first attempt at a deepfake quiz, that result is very much an outlier. I’m obviously not claiming some magical ability to spot what others can’t; rather, spending time with these technologies and understanding how they work and what their output looks like is part of AI Literacies.
All of this reminds me of the 30,000 World War 2 volunteers who helped during the Battle of Britain by learning to tell, for example, a Messerschmitt Bf 109 from a Spitfire using sound recordings, silhouettes, and so on.
Deepfakes have become alarmingly difficult to detect. So difficult that only 0.1% of people today can identify them.
That’s according to iProov, a British biometric authentication firm. The company tested the public’s AI detective skills by showing 2,000 UK and US consumers a collection of both genuine and synthetic content.
[…]
Last year, a deepfake attack happened every five minutes, according to ID verification firm Onfido.
The content is frequently weaponised for fraud. A recent study estimated that AI drives almost half (43%) of all fraud attempts.
Andrew Bud, the founder and CEO of iProov, attributes the escalation to three converging trends:
The rapid evolution of AI and its ability to produce realistic deepfakes
The growth of Crime-as-a-Service (CaaS) networks that offer cheaper access to sophisticated, purpose-built attack technologies
The vulnerability of traditional ID verification practices
Bud also pointed to the lower barriers to entry for creating deepfakes. Attackers have progressed from simple “cheapfakes” to powerful tools that can produce convincing synthetic media within minutes.
Source: The Next Web
Image: Markus Spiske