A nationwide study conducted by biometric facial authentication specialist iProov has revealed a stark lack of awareness and education around deepfake technology among the UK public, with almost three-quarters (72%) of respondents saying they have never even heard of a deepfake video.
A deepfake is a form of AI-based human image synthesis: existing images and videos are combined with, and superimposed onto, source material using a machine learning technique known as a generative adversarial network (GAN). These capabilities make deepfakes well suited to producing convincing fake videos, fake news and malicious hoaxes.
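To make the "adversarial" part of a GAN concrete, the toy sketch below (an illustrative example, not iProov's technology or a real deepfake model) pits two tiny models against each other on one-dimensional data instead of images: a generator learns to mimic "real" samples drawn from a Gaussian, while a discriminator learns to tell real from fake. All names and hyperparameters here are hypothetical choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_real(n):
    # "Real" data: samples from N(4, 0.5) stand in for genuine images.
    return rng.normal(4.0, 0.5, n)

# Generator g(z) = a*z + b maps random noise z to fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" x looks.
w, c = 0.0, 0.0

lr = 0.05
for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    xr = sample_real(32)
    z = rng.normal(0.0, 1.0, 32)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr * np.mean(-(1 - dr) * xr + df * xf)   # d(loss_D)/dw
    c -= lr * np.mean(-(1 - dr) + df)             # d(loss_D)/dc

    # Generator update: push D(fake) toward 1 (non-saturating loss).
    z = rng.normal(0.0, 1.0, 32)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    upstream = -(1 - df) * w                      # d(-log D(xf))/d(xf)
    a -= lr * np.mean(upstream * z)
    b -= lr * np.mean(upstream)

# Since E[z] = 0, the generator's mean output is simply b; adversarial
# pressure drags it from 0 toward the real data's mean of 4.
print(f"generator mean after training: {b:.2f}")
```

Full deepfake systems apply the same generator-versus-discriminator loop to deep convolutional networks over video frames rather than a two-parameter linear model, which is why the results can be hard for both humans and software to distinguish from genuine footage.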
The research polled 2,000 respondents across the UK to reveal their attitudes towards (and understanding of) deepfake technology. The results highlight a need for awareness and education on the impact of deepfakes which, if not addressed, will have huge implications for personal and professional security.
Commenting on the findings, Andrew Bud (founder and CEO at iProov) said: “Awareness is the first defence against any cyber security threat, as we’ve already seen with attacks like phishing and ransomware. Deepfakes, however, represent a whole new kind of danger to businesses and individuals. Technology also has a big role to play in combating the threat, yet if the vast majority of people in the UK have such little awareness of deepfakes right now, they simply cannot begin to prepare themselves as they need to.”
Underlying societal threat
Until recently, deepfakes were a nascent concept. Today, the technology behind them is increasingly accessible and threatens to undermine trust in moving images altogether, whether through the creation of fake news or the spoofing of the identity checks required to log into a bank account.
However, the research has revealed members of the public to be largely unaware of the threats. Even once respondents were provided with the definition of a deepfake video, more than a quarter (28%) said they believed them to be completely harmless. More than two-thirds of respondents (70%) confessed they didn't believe they would be able to tell the difference between a deepfake and a real video.
Detecting fraudulent identities
Interestingly, once those surveyed were provided with a definition of a deepfake video, they began to recognise the technology’s mounting threat. In fact, just under two-thirds (65%) of people said that their newfound knowledge of the existence of deepfakes undermined their trust in the Internet.
Notably, consumers went on to cite identity theft as the biggest concern (42%) for how they believed deepfake technology could be misused. Almost three-quarters (72%) of respondents also said they would be far more likely to use an online service with preventative measures in place to mitigate the use of deepfakes.
Despite the security implications of the specific concerns raised around identity theft, more than half of all respondents (55%) surprisingly named social networks as the party most responsible for dealing with deepfake videos.
Bud added: “Taking the fight to this new wave of fraud means that security measures in this new post-truth era simply have to be as creative, sophisticated and fast-moving as the fraudsters. While the adoption of biometric technology to crack down on deepfakes is growing among financial institutions, Governments and large-scale enterprises, the challenge ahead lies in the effective detection of a genuine human presence. It’s a challenge that should not be underestimated.”