The idea of an image as evidence of fact is coming to an end. Artificial intelligence tools like Veo 3 (Google), Sora (OpenAI) and Runway are no longer just video generators. They are erasing the border between the real and the artificial. From now on, reality has become design.
For a long time, we trusted the image as a witness. It was the "I saw it with my own eyes" of journalism. In a way that today seems almost innocent, Photoshop killed the photograph as proof years ago. Now AI is killing video as truth.
And all of this happens with frightening efficiency, changing a great deal for journalism and content platforms.
The impact of AI on journalism and the investigation of facts
How can abuse be denounced if a scene of violence can be perfectly fabricated? How can history be recorded if the present can be invented with Hollywood quality?
It is not a futuristic debate. It’s already happening.
Newsrooms will need a new type of professional: one who can identify digital artifacts, verify visual provenance, and interpret synthetic images.
At the same time, the press itself will be tempted to use these technologies. Will it be acceptable to generate AI simulations to illustrate crimes or wars? If so, how do you make it clear that this is a reconstruction and not a record? It is an urgent dilemma.
Trust will become a critical asset. A new kind of education will be essential: learning to read images as language.
C2PA: an attempt to ensure authenticity
The C2PA standard (from the Coalition for Content Provenance and Authenticity) is an initiative created by large technology and media companies to combat visual misinformation. The idea is that images and videos come with a kind of built-in "history of origin": who created them, when they were made, and what edits they went through. Like a digital ID.
This makes it easier for platforms and users to know whether a given piece of content is authentic or manipulated.
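To make the "history of origin" idea concrete, here is a minimal sketch of what reading such a provenance record could look like. Note the assumptions: real C2PA manifests are signed binary structures embedded in the file (not plain JSON), and all field names, the `manifest` data, and the `summarize_provenance` helper below are hypothetical illustrations of the concept, not the actual C2PA format or API.

```python
# Hypothetical, simplified provenance record inspired by the C2PA idea:
# who created the asset, when, and what edits it has been through.
# Real C2PA data is cryptographically signed and embedded in the media
# file itself; this dictionary only illustrates the concept.
manifest = {
    "claim_generator": "ExampleCam v1.0",  # tool that produced the asset (hypothetical)
    "created": "2024-05-01T12:00:00Z",
    "author": "Jane Reporter",
    "edit_history": [
        {"action": "created", "when": "2024-05-01T12:00:00Z"},
        {"action": "cropped", "when": "2024-05-02T09:30:00Z"},
    ],
}

def summarize_provenance(m: dict) -> str:
    """Turn a provenance record into a short, human-readable summary."""
    edits = len(m["edit_history"]) - 1  # first entry is the creation itself
    return (f"Created by {m['author']} with {m['claim_generator']} "
            f"on {m['created']}; {edits} edit(s) recorded.")

print(summarize_provenance(manifest))
```

A newsroom tool built on the real standard would additionally verify the manifest's digital signature, so that a tampered history is detectable rather than merely absent.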
Digital platforms: neutral showcase or real curators?
Another way out may be to force platforms to label AI-generated content. But that depends on regulation and political will, and those who profit from misinformation know how to exploit the gap.
Are TikTok, Instagram, YouTube and the rest just showcases where anyone can display whatever they want? Or do these platforms have an obligation to curate what they deliver to billions of people every day?
For years, these networks have positioned themselves as neutral channels, merely connecting those who publish with those who consume. But when the content being distributed can be a realistic lie generated by AI, responsibility moves to another level.
If they have algorithms to choose what appears first in your feed, why wouldn't they have an obligation to filter out what can cause real harm?
This is not just a matter of content moderation. It is editorial responsibility. And perhaps it is time for platforms to be held accountable as such.