An experiment conducted in the UK has shown that people generally struggle to distinguish deepfake videos from authentic ones. Participants watching all authentic videos were almost as likely to report something unusual as those who watched a mix of real and deepfake content. When asked to select the deepfake video from a set of five, only 21.6% of participants correctly identified the manipulated video. The research was published in Royal Society Open Science.
Deepfake videos are artificially manipulated to appear real using deep learning techniques. They superimpose faces, mimic voices, and create hyper-realistic imitations of real people, making it difficult to distinguish genuine footage from fabricated content.
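To make that concrete, many face-swap deepfakes rest on a simple architecture: a single shared encoder that maps any face to a compact latent code, plus one decoder per identity. The sketch below illustrates that idea with a bare-bones linear autoencoder and random stand-in data; every name, dimension, and dataset here is invented for illustration, and real systems use deep convolutional networks trained on large sets of aligned face images.

```python
# A deliberately simplified, linear stand-in for the face-swap idea:
# one shared encoder, one decoder per identity. All data is random
# noise standing in for flattened face images.
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT, LR, STEPS = 64, 8, 0.01, 2000

# Hypothetical stand-in datasets: "faces" of person A and person B.
faces_a = rng.normal(size=(200, DIM))
faces_b = rng.normal(size=(200, DIM))

# Shared encoder, identity-specific decoders (plain matrices here).
enc = rng.normal(scale=0.1, size=(DIM, LATENT))
dec_a = rng.normal(scale=0.1, size=(LATENT, DIM))
dec_b = rng.normal(scale=0.1, size=(LATENT, DIM))

for _ in range(STEPS):
    for faces, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        z = faces @ enc          # encode into the shared latent space
        err = z @ dec - faces    # reconstruction error for this identity
        # Gradient descent on the mean squared reconstruction loss.
        grad_dec = z.T @ err / len(faces)
        grad_enc = faces.T @ (err @ dec.T) / len(faces)
        dec -= LR * grad_dec     # in-place update keeps dec_a/dec_b bound
        enc -= LR * grad_enc

# The "swap": encode one of A's faces, decode with B's decoder. The
# latent code carries pose and expression; the decoder supplies identity.
fake = (faces_a[0] @ enc) @ dec_b
print("swapped-face vector shape:", fake.shape)
```

Training alternates between the two identities so the encoder learns features common to both, which is what lets a code extracted from one person's face be rendered by the other person's decoder.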
Initially developed for entertainment and creative purposes, deepfakes are now raising ethical and security concerns due to their potential for misuse. They can be employed to manipulate public opinion, harm reputations, or commit fraud by placing individuals in fabricated scenarios. Despite their risks, deepfakes also have legitimate applications in film, education, and digital content creation.
Study author Andrew Lewis and his colleagues wanted to explore whether people can recognize deepfake videos. They were interested in two questions: whether people spot deepfakes unprompted (with no warning that manipulated videos might be among the content they are viewing), and whether an explicit warning about possible deepfakes changes detection. For example, the researchers wanted to know if participants could identify which video in a series used deepfake technology when told that at least one video was altered. To test this, they designed a controlled experiment.
The study recruited 1,093 UK residents through Lucid Marketplace, an online platform for gathering survey participants. The participants were divided into three experimental groups, and the survey was conducted via Qualtrics.
In the first group, participants watched five authentic videos with no deepfakes. The second group viewed the same set of videos, but one of them was a deepfake, without the participants being warned about its presence. After watching the videos, participants were asked if they noticed anything unusual.
The third group also watched the same video set with one deepfake, but they were informed beforehand that at least one of the videos would be manipulated. They were given a brief explanation of deepfakes, described as “manipulated videos that use deep learning artificial intelligence to make fake videos that appear real,” and were explicitly told, “On the following pages are a series of five additional videos of Mr. Cruise, at least one of which is a deepfake video.” After watching, participants were asked to select which video or videos they believed to be fake.
The deepfake video in the study featured the actor Tom Cruise, with the other videos being genuine clips of him sourced from YouTube. To establish a baseline familiarity with the actor, all participants first watched a one-minute interview excerpt of Tom Cruise, giving them a reference for his appearance and speech patterns.
The results showed that participants were largely unable to detect deepfakes. In the group that watched only authentic videos, 34% reported noticing something unusual, compared to 33% in the group that unknowingly watched a deepfake. This one-percentage-point difference suggests that the deepfake raised no more suspicion than ordinary irregularities in authentic footage.
In the group that received a warning about deepfakes, 78.4% still failed to correctly identify the manipulated video. Participants were more likely to mistake a genuine video for a deepfake than to pick out the actual fake. However, among those who selected only one video, 39% identified the deepfake, a rate somewhat higher than random guessing.
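For context on that baseline: with five videos and exactly one deepfake, a uniform random guess picks the fake one time in five. A quick back-of-envelope check (the percentages are those reported in the study; the script itself is only illustrative):

```python
# Compare the reported single-selection accuracy to the chance baseline
# of picking one video uniformly at random out of five.
chance = 1 / 5          # one deepfake among five videos
observed = 0.39         # reported rate among single-video selectors

print(f"chance: {chance:.0%}  observed: {observed:.0%}")
print(f"observed rate is {observed / chance:.2f}x chance")
```

The observed 39% is nearly double the 20% chance level, though still far from reliable detection.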
“We show that in natural browsing contexts, individuals are unlikely to note something unusual when they encounter a deepfake. This aligns with some previous findings indicating individuals struggle to detect high-quality deepfakes,” the study authors concluded.
“Second, we present results on the effect of content warnings on detection, showing that the majority of individuals are still unable to spot a deepfake from a genuine video, even when they are told that at least one video in a series of videos they will view has been altered. Successful content moderation—for example, with specific videos flagged as fake by social media platforms—may therefore depend not on enhancing individuals’ ability to detect irregularities in altered videos on their own, but instead on fostering trust in external sources of content authentication (particularly automated systems for deepfake detection),” they added.
The study sheds light on the general population’s limited ability to detect deepfake videos. However, it is important to note that deepfakes are a relatively new phenomenon, and most people have little experience identifying them. As deepfakes become more common, people may develop greater skill at spotting them.
The paper, “Deepfake detection with and without content warnings,” was authored by Andrew Lewis, Patrick Vu, Raymond M. Duch, and Areeq Chowdhury.