If you have ever received an image or video and were unsure whether it was real, know that this kind of uncertainty is increasingly common. Artificial intelligence tools have evolved quickly and can now create content that is strikingly close to reality, preserving faces, scenes and even small details, sometimes changing just a single element, such as the color of a piece of clothing.
The problem is that all this quality makes it harder to separate what is true from what has been manipulated. In sensitive situations, such as elections, conflicts or public crises, this distinction is no longer a mere curiosity; it directly affects how information circulates.
And there is an important point: even when we pay attention, it is not always easy to get it right. People without technical training can identify false content only slightly better than chance, and even that margin shrinks when the material is well made or appears out of context.
But despite advances, artificial intelligence still has limitations. According to Hany Farid, professor at the University of California, Berkeley and specialist in digital forensics, these systems do not understand the physical world in the same way as humans.
In practice, this means that AI can simulate very well, but it does not “reason” about space, depth or physical coherence. And that is precisely where clues emerge that content may not be real.
One of the practical guidelines for identifying content generated by AI comes from academic research led by Lele Cao, a researcher linked to Microsoft Research’s AI laboratories, with collaboration from researchers in Europe.
The study proposes a process aimed at ordinary users, focused on observing patterns and inconsistencies. The idea is to increase the chances of spotting signs of manipulation in everyday life.
Images
In photos, the signs usually appear in the details. Hands with too many (or too few) fingers, misaligned eyes or slightly distorted facial elements are still relatively common mistakes. Another point that draws attention is the background. Sometimes it looks too blurry or has strangely repeating patterns, like trees or nearly identical objects.
It’s also worth asking a simple question: does this scene make sense? When something seems too perfect or implausible, it may be an indication that it was artificially generated. A useful check is to use tools like Google Images to do a reverse search and see if that photo has appeared before in another context.
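The reverse-search step above can even be scripted. The sketch below builds a reverse-image-search link for a publicly hosted photo; note that the `searchbyimage` endpoint is an informal, undocumented Google URL pattern (there is no official API for this), and the example image URL is purely illustrative.

```python
from urllib.parse import quote_plus

def reverse_search_url(image_url: str) -> str:
    """Build a Google reverse-image-search URL for a publicly hosted image.

    Caveat: 'searchbyimage' is an undocumented endpoint and may change
    without notice; opening the URL in a browser shows where else the
    photo has appeared.
    """
    return "https://www.google.com/searchbyimage?image_url=" + quote_plus(image_url)

# Hypothetical example image; paste the result into a browser.
print(reverse_search_url("https://example.com/suspicious-photo.jpg"))
```

If the same photo turns up in older posts under a different caption, that is a strong hint the current framing is misleading, even when the image itself is genuine.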
Audio
In audio, the main clue is the naturalness of speech. AI-generated content can have very uniform intonation, without emotional variation or natural pauses. It is also common for these audios to be too clean, without background noise. Real recordings often have ambient sounds, even if discreet.
Extra care applies to unexpected messages, especially those involving urgency or requests for money. Even if the voice sounds familiar, the safest course is to confirm through another channel before making any decision. Music-recognition apps such as Shazam can also help reveal when a clip is actually a previously published recording being reused.
Videos
Manipulated videos, especially deepfakes, require a closer look. The face is usually the first point to observe. Unnatural blinks, strange facial expressions or lack of synchronization between speech and lip movement may indicate editing. In some cases, the mouth does not follow the audio exactly.
Body movements may also look a little “stiff” or artificial, especially in the hands and head. Another important detail is lighting: inconsistent shadows or sudden changes in light within the same scene can be a sign of manipulation. The origin of the video also matters. Content that appears without clear context or circulates on unreliable channels calls for extra caution.
Checking is still the safest way
As these technologies evolve, identifying false content has become less intuitive. There is no single definitive sign, and even careful analysis may not be sufficient in every case.
That is why verification remains essential. Tracing the origin of the content, checking whether it was published by reliable sources and using support tools all help reduce the risk of error. In the end, the landscape has changed: looking is no longer enough; you need to investigate.