
Apr 7, 2024

The Reality Defender Guide to Detecting Deepfakes


We often receive inquiries about how individuals can detect deepfakes without the use of robust detection tools. While it's understandable that people may hope to develop a keen eye for spotting fakes, the reality is that identifying deepfakes without specialized tools is incredibly difficult, even for the experts.

Though low-quality "cheapfakes" may be easier to recognize due to obvious flaws, relying on human perception alone is not a reliable method for distinguishing genuine media from sophisticated deepfakes. Even the most experienced professionals in the field can be misled by convincing fakes, which is why they rely on advanced deepfake detection tools to make accurate assessments.

Reality Defender's team of highly respected experts in artificial intelligence, machine learning, and deepfake research has decades of combined experience working with advanced artificial intelligence models and systems. Yet despite our collective knowledge and skills, we acknowledge that no individual or group can consistently outperform robust deepfake detection systems. This is why we built Reality Defender.

About This Guide

Manually detecting deepfakes can be challenging, especially as generative AI technology advances and produces increasingly realistic content. While there are certain indicators that might suggest a piece of media is a deepfake, relying solely on human perception to classify media accurately is becoming increasingly difficult.

In an effort to contribute to overall digital media literacy, my team and I compiled a list of potential signs and signifiers to look for when attempting to identify deepfakes across various media types. It is crucial to note that these tips are not foolproof and may quickly become outdated, even as this guide is published, given the rapid pace of technological development in this field.

Our intention in presenting these guidelines is not to suggest that everyday users should be expected to detect deepfakes with a high degree of accuracy on their own. Rather, we hope to highlight the complexities involved in manual deepfake detection, emphasize the importance of using AI-assisted deepfake detection like Reality Defender, and underscore the need for caution when consuming digital media.

Images 

  • When manually checking still images, one should look for blurring or distortions around the face, body, or clothing, as these may indicate tampering. A high-quality faceswap aims to seamlessly fit a source face onto a target face, but the process leaves discrepancies around the peripheral (forehead, temples, jawline) and central (mouth, nose, eyes) regions of the face. (A crude way to automate this blur check is sketched after this list.)
  • Watch the hair, as an absence of frizz and individual strands can point to a manipulated image. Algorithms also struggle to render individual teeth and fingers.
  • Deepfakes can contain inconsistencies within the background of the image, such as unnatural lighting or shadows. If parts of the image are oddly pixelated or suddenly lack detail or clarity, this can be a warning sign, too. 
  • If the picture displays an unnatural sheen or gloss, or a color balance that seems off given the scene it displays, it can be a sign that it was created with a diffusion AI model.
  • AI struggles to correctly portray the alignment of body parts, posture, and facial structures, especially when it comes to eye and lip movements.
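
For readers comfortable with code, the blur cue in the first bullet can be partially automated. The following is a minimal sketch, not Reality Defender's method and far weaker than dedicated detection models: it uses OpenCV's bundled Haar cascade face detector and the variance of the Laplacian as a crude sharpness score, then compares the face region against the whole frame. The file path is hypothetical.

```python
# Minimal sketch (assumes opencv-python is installed): compare local
# sharpness inside a detected face region against the whole frame.
# A strong mismatch *may* hint at compositing; it is a weak signal
# with many false positives, not a verdict.
import cv2

def face_vs_frame_sharpness(image_path: str):
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Haar cascade bundled with OpenCV: a crude face detector, fine for a sketch.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None

    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]

    # Variance of the Laplacian is a standard single-number sharpness proxy.
    face_sharpness = cv2.Laplacian(face, cv2.CV_64F).var()
    frame_sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return face_sharpness, frame_sharpness

# Usage: a face far blurrier (or suspiciously sharper) than its
# surroundings is one more reason to look closer -- never proof.
print(face_vs_frame_sharpness("suspect_image.jpg"))  # hypothetical file
```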

Video

  • Most of the detection tips related to images can also be applied to a frame-by-frame analysis of videos: one should pay attention to artifacts or blurring around the face, body, clothing, and hair; to posture and alignment; and to shadows and lighting.
  • AI’s attempts to render objects and people in motion can create temporal inconsistencies, such as abrupt transitions, continuity errors, or unnatural movements that persist over time. Evaluate the speed and motion of objects within the video for any irregularities.
  • Deepfakes may be unable to match the elements of shadows and lighting. Watching how shadows move across a human face might be especially helpful. 
  • AI algorithms also struggle to recreate natural blinking, or may forgo blinking altogether. (A common heuristic for checking this is sketched after this list.)
  • Assess whether the audio syncs naturally with the movements of the lips and facial expressions. Deepfake videos may exhibit slight delays or mismatches between audio and visual cues. (Of course, this can also indicate a simple audio-video sync delay.)
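
The blinking cue has a well-known heuristic formulation: the eye aspect ratio (EAR) of Soukupova and Cech, which collapses when the eye closes. The sketch below assumes you already have six eye-contour landmarks per frame from some facial landmark detector (not shown here); the 0.2 closed-eye threshold is a commonly cited but ultimately arbitrary assumption.

```python
# Minimal EAR sketch: a video whose per-frame EAR never dips below the
# threshold may lack natural blinking -- one weak signal among many.
from math import dist

def eye_aspect_ratio(eye: list[tuple[float, float]]) -> float:
    """eye: six (x, y) landmarks ordered around the eye contour
    in the Soukupova & Cech layout p1..p6."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = dist(p2, p6) + dist(p3, p5)  # eyelid openings
    horizontal = 2.0 * dist(p1, p4)         # eye width
    return vertical / horizontal

def blink_count(ear_per_frame: list[float], closed_thresh: float = 0.2) -> int:
    """Count open -> closed transitions across a sequence of EAR values."""
    blinks, was_open = 0, True
    for ear in ear_per_frame:
        if was_open and ear < closed_thresh:
            blinks += 1
            was_open = False
        elif ear >= closed_thresh:
            was_open = True
    return blinks
```

People typically blink many times per minute, so a minutes-long talking-head clip with zero detected blinks deserves extra scrutiny, though lighting, landmark errors, and editing can all confound this measure.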

Audio

  • Deepfake speech often exhibits anomalies such as unnatural pauses, fluctuations in tone, pitch, and rhythm, as well as peculiar hesitations, robotic intonations, or irregular pacing. Additionally, generative AI algorithms often struggle to replicate certain phonetic sounds.
  • Deepfakes often lack authentic emotional cues or reactions. It pays to listen for inconsistencies in the emotional tone or reactions that do not align with the context of the conversation.
  • Deepfake algorithms may struggle to replicate authentic background noises or ambiance. One should always pay attention to any pauses, distortions, and other unnatural breaks beyond speech. (The sketch after this list shows one way to make these easier to spot.)
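
One way to make pauses, hard cut-offs, and missing room tone easier to inspect is to plot a spectrogram of the audio. This hedged sketch only surfaces features for a human to examine; it classifies nothing, and the filename is hypothetical.

```python
# Minimal sketch (assumes librosa and matplotlib are installed):
# render a log-frequency spectrogram of a suspect clip.
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

y, sr = librosa.load("suspect_clip.wav", sr=None)  # hypothetical file
S = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)

fig, ax = plt.subplots(figsize=(10, 4))
librosa.display.specshow(S, sr=sr, x_axis="time", y_axis="log", ax=ax)
ax.set_title("Look for abrupt breaks, dead-silent gaps with no room tone, "
             "or oddly uniform banded artifacts")
plt.tight_layout()
plt.show()
```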

Text

  • As deepfakes may struggle to mimic the exact nuances of individual writing styles, it is important to note any changes in tone or writing style inconsistent with the writer's usual communications. If the text seems generic or forgoes any personalization, this can also be a red flag. (A toy version of this style comparison is sketched after this list.)
  • Because LLMs learn from massive datasets riddled with errors, they can end up emulating those errors. Look for linguistic anomalies, such as illogical syntax, grammatical errors presented as correct writing, and factual errors. These models are also imperfect when it comes to understanding the subject matter. 
  • It is always important to assess whether the text shows a robust (and human) understanding of the topic or context. (This applies to emotional understanding, too.)
  • One should always consider the source the text came from. Does it come from a reputable website? Can corroborating evidence for claims within the text be found elsewhere? 
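
As a toy illustration of the style-drift cue in the first bullet, the sketch below compares a few crude statistics between a known writing sample and a suspect text. Real stylometry and machine-generated-text detection are far more sophisticated; any threshold applied to these numbers would be an arbitrary assumption.

```python
# Toy stylometric sketch using only the standard library: compute a few
# crude style statistics and their relative drift between two texts.
import re

def style_stats(text: str) -> dict[str, float]:
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
    }

def style_drift(known: str, suspect: str) -> dict[str, float]:
    a, b = style_stats(known), style_stats(suspect)
    return {k: abs(a[k] - b[k]) / max(a[k], 1e-9) for k in a}

# Usage: large drift across several statistics is, at best, a prompt
# to read more carefully and verify the source -- never evidence alone.
```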

Context

Beyond these clues, it is always helpful to ponder the context behind the video, image, audio, or text.

  • Which media platform was the original piece of content posted on? 
  • Is there consistency in the story and behavior shown in the piece of questionable media? 
  • Does the behavior on display deviate extremely from what is expected? For example, is a video advertising a new meat product using the likeness of a celebrity known for their lifelong vegan advocacy?

In Case of Fraud

AI-powered fraud affects the lives and livelihoods of millions across the globe. According to the Sumsub 2023 Fraud Report, the most common type of identity fraud in 2023 utilized generative AI and deepfakes. The number of detected deepfakes in fraud attempts increased tenfold between 2022 and 2023, and experts predict this number will continue to rise sharply over the next few years.

Individual users, workers, and customers in the financial, media, professional services, and healthcare industries will be particularly affected by these trends, but as the skills and methods of fraudsters evolve, no person in the digital space will be safe. Fraudsters will look to hijack bank accounts and social media profiles, create fake job opportunities and interviews, and use phishing and fake celebrity endorsements to lure people into bogus financial and product schemes. In the most extreme cases, fraudsters have used deepfakes to stage non-existent work meetings that convinced employees to transfer vast sums of money, and to place deepfake phone calls convincing people that their loved ones had been kidnapped.

While the responsibility to protect individuals from such schemes belongs to companies and institutions — banks and governments, social media and tech corporations, employers and service providers — it doesn’t hurt for users to know what to look out for in this new world of deepfake deception. Below are a few more tips for individuals to employ to protect themselves from cybercriminals.

Verifying Requests

It is always a good idea not to take requests at face value, even when deepfakes depict a person familiar to us: a boss, a colleague, or a loved one. Every request submitted via image, text, or video, especially a request of a financial nature, should be independently verified and scrutinized. As tragic as it is, because of deepfakes, seeing and hearing can no longer translate to immediate believing when it comes to digital communication.

Social Engineering Attacks

Beware of taking the phishing bait. Emails, messages, and other digital communications created and distributed with LLMs are designed to elicit panic, sympathy, and other emotions that lead to rash actions. One should never feel compelled to act right away without verifying the source and veracity of the text or media. Scrutinize email addresses for subtle inconsistencies, such as a single letter dropped from an address you usually trust, and never click on links without being certain of their source.

Security is Key

It pays to protect your online accounts with as many steps and measures as possible: two-step verification, fingerprint scans, and other biometric security. A big part of the problem is that deepfakes can overcome these measures (fraudsters can generate fake voiceprints and videos of users to satisfy security requirements, which is why it is crucial for companies and institutions to implement real-time deepfake detection tools in their verification frameworks). Still, the more steps users put between their accounts and cybercriminals, the better.

If these suggestions seem obvious or insufficient, it is because they are. As with all cases of manual detection, these basic safety tips will not be enough to protect individuals from the elaborate deepfake fraud techniques just around the corner. This is why effective deepfake detection starts at the top.

Manual Detection Isn’t Effective

As is clear from these suggestions, it is unlikely that we can keep up with the proliferation of increasingly sophisticated deepfakes through casual, unaided observation. Manual detection often requires expertise in domains such as image and video processing, computer graphics, and human behavior analysis.

At the same time, human perception and judgment are subjective and prone to fatigue, distraction, and bias, leading to mixed results. Science unfortunately confirms that humans are not very good at spotting deepfakes: in a study published in Scientific Reports in August 2023, up to 50% of respondents were unable to distinguish between a deepfake video and real footage. Another study, published in iScience, showed that respondents were unable to distinguish between authentic and deepfake videos yet remained fully confident in their ability to do so.

Manual detection is not only unreliable, but impractical in terms of labor and limited in scalability. Considering the nearly infinite amounts of content created and distributed in digital spaces daily — a number that is bound to become even more astronomical with low-rent AI-generated content flooding the Internet — human moderators and casual users alike cannot be expected to manually scrutinize every piece of content they come across. Yet we do believe that every user is entitled to know whether the content they are viewing was created by a human or a machine, without needing a degree in digital forensics and unlimited free time. 

Constant Worry vs. Programmatic Safety

Early on in the development of the Reality Defender platform, we asked ourselves: who should bear this constant burden of worrying about deepfakes?

While we advocate for widespread awareness of the potential misuse of generative AI as part of everyone's media literacy education, we also believe that the average citizen should not be burdened with the responsibility of constantly pondering and verifying the authenticity of the media consumed on their chosen platforms. This is especially true given the overwhelming volume of content we encounter on our devices daily, making the task of verifying every post and video daunting, exhausting, and unreasonable.

To address this inequality of truth without plunging every user into the depths of paranoia, we collaborate with some of the world's largest organizations, governments, platforms, and institutions to implement Reality Defender's deepfake detection tools, using AI to detect AI, overcoming the fallibility of manual detection at the highest levels. By integrating deepfake detection into newsroom fact-checking or call center structures, we can ensure that everyday users don’t need to worry that their beloved platforms and services are serving up bogus, misleading media, or allowing fraudsters to hijack accounts. Instead, the platforms vulnerable to deepfake-based attacks are proactively protecting their users against this content from the get-go.

We designed our deepfake detection platform with the goal of providing equitable access to truth for all people, aiming to protect as many users as possible while consciously choosing not to directly offer our tools to individual consumers. This approach allows Reality Defender to cover potentially billions of consumers and users at every turn by shielding the platforms they use and making deepfake detection consistent and systematic, instead of placing the onus on users themselves via fragmented, do-it-yourself manual methods that are demonstrably bound to fail.
