AI-Generated Videos


The illusion of credibility: how artificial intelligence imitates, but cannot replace, journalism

In a world where videos can be created in minutes from a simple text prompt, the boundary between reality and fabrication dissolves at alarming speed. Text-to-video AI generators — a technological marvel — currently function more as engines of disinformation than sources of truth. But why is this content fundamentally unreliable, even when it looks realistic and convincing?

Key findings:

  • Exponential growth of deepfake scams: According to Forbes Africa, deepfake and AI-generated scams have surged by more than 1,800% in a single year, a spike that followed the widespread availability of these technologies.
  • Legal challenges: Courts increasingly struggle to determine the authenticity of AI-generated evidence. Rapid technological advancement is outpacing legal and forensic expertise, creating serious barriers to verifying digital proof.
  • Regulatory responses: The Internet Freedom Foundation and other organizations are actively developing policies for AI governance and combating deepfakes, acknowledging the emerging threats.

Key Takeaways:

  • AI-generated videos can appear convincing but often lack verifiable provenance and subject-matter expertise.
  • Human editorial oversight and provenance metadata are required to treat synthetic media as trustworthy.
  • Publishers that use unverified mass-produced AI videos risk user dissatisfaction and potential algorithmic or manual ranking penalties.
  • Use an actionable verification checklist: check source, cross‑reference, look for disclosure labels, and inspect technical inconsistencies.

Mechanisms for simulating reality in videos created by AI

AI videos can easily mimic a reality that never existed or “recreate” an event that never happened. Viewers see, hear — and believe. But what they witness is not a fact, it is a simulation. Deepfake misuse is no longer rare: fabricated interviews, fake public addresses, even blackmail.

All of these share a single pattern — leveraging technology to manipulate human emotions and behavior. With the speed of social-media distribution, fact-checking simply can’t keep up.

Statistics on the accuracy of human detection of deepfakes

A systematic review and meta-analysis of 56 studies (86,155 participants), conducted using the system at consensus.app, shows that average human accuracy in detecting deepfakes is only 55.5% (95% CI: 48.9–62.1%), nearly equivalent to random guessing. Accuracy is 53.2% for images (95% CI: 42.1–64.6%) and 57.3% for videos (95% CI: 47.8–66.6%).

What’s striking is that providing detection hints barely improves performance: even with guidance, accuracy reaches only about 60% (Testing Human Ability To Detect Deepfake Images of Human Faces, Sergi D. Bray et al.; Providing detection strategies to improve human detection of deepfakes, K. Somoray et al.).
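The claim that detection is "nearly equivalent to random guessing" follows directly from the confidence intervals: every reported 95% CI contains the 50% chance level. A minimal sketch of that check, using the figures quoted from the meta-analysis above:

```python
def ci_includes_chance(lower: float, upper: float, chance: float = 50.0) -> bool:
    """Return True if the chance level falls inside the reported 95% CI."""
    return lower <= chance <= upper

# Figures reported in the meta-analysis cited above (percent accuracy):
overall = ci_includes_chance(48.9, 62.1)  # overall: 55.5% (48.9-62.1)
images  = ci_includes_chance(42.1, 64.6)  # images:  53.2% (42.1-64.6)
videos  = ci_includes_chance(47.8, 66.6)  # videos:  57.3% (47.8-66.6)

print(overall, images, videos)  # True True True: no CI excludes 50%
```

Because none of the intervals excludes 50%, the data cannot rule out that human judges are simply guessing.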

AI-generated videos: the illusion of truth

Models without experience: the missing ingredient — expertise

Will AI videos hurt your site’s ranking? Under Google’s E-E-A-T standards, content quality is defined by four criteria: Experience, Expertise, Authoritativeness, and Trustworthiness. AI generators possess none of these. They don’t understand topics, have no field experience, ask no clarifying questions, and verify nothing. They merely assemble probabilistic video patterns. The result is a “hallucination” — a fabricated narrative easily mistaken for truth. This systemic flaw makes such videos inherently unreliable.

This unreliability directly damages Trust — the foundation of E-E-A-T. Mass production of inaccurate content can trigger technical penalties. Content created with minimal effort often lands in the lowest indexing tiers. If AI videos are produced en masse, this spam-like behavior can even activate pandaDemotion, a domain-wide penalty for low-quality, duplicate, or thin content.

Imitation instead of reality: the “original” fake without effort

Google also evaluates the level of effort behind a piece of content via internal signals such as contentEffort and originalContentScore.

Insufficient effort:

The contentEffort attribute is an LLM-based estimate of the human effort invested in a piece of content and acts as an anti-commoditization metric. AI-generated videos inherently score low because they do not require meaningful human work. Low effort is a hallmark of Lowest-Quality content: material produced “with little or no effort, originality, or added value.”

Lack of originality:

A low originalContentScore is assigned to content lacking unique ideas, personal anecdotes, or details that could come only from direct human experience. AI videos are simulations, not lived reality, and therefore cannot demonstrate genuine originality.

If a video is created automatically, without human editing, journalistic verification, or authorial analysis, it signals “low quality” or even “spam.” Stories with no verified sources, no analysis, no contextual understanding — these are merely flashy visuals. And the problem is that such material increasingly circulates as “news.”

Algorithmic penalties driven by user dissatisfaction

Even if an AI video looks striking, its unreliability becomes obvious through user behavior. When viewers click on an AI-generated video in search results but quickly return to the SERP (the “pogo-sticking” effect), the system records this as BadClicks — a direct negative signal indicating unmet user needs.

Aggregated through Chrome browser data, these signals feed into NavBoost, which adjusts rankings based on actual engagement. Low-quality main content, produced without effort, inevitably results in poor on-page engagement time — another indicator of unmet user needs.

How to recognize a fake?

Algorithmic deception is becoming subtler. Early deepfakes gave themselves away through warped faces or inconsistent shadows; today even audio synchronization is nearly flawless, so the burden of verification increasingly falls on the viewer. What helps:

  • Source: trust only authoritative, verified outlets.
  • Cross-checking: look for confirmations in independent media.
  • Disclosure: watch for AI-content labels.
  • Detail analysis: search for small inconsistencies — often the only remaining clues.
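As an illustration only, the four checks above can be recorded as a simple triage structure. All field names and flag strings here are hypothetical, and the output is a prompt for human judgment, not an automated verdict:

```python
from dataclasses import dataclass

@dataclass
class VideoChecks:
    """Results of the four manual checks from the list above (hypothetical fields)."""
    from_verified_outlet: bool      # source: is the outlet authoritative and verified?
    independently_confirmed: bool   # cross-checking: confirmed by independent media?
    has_ai_disclosure_label: bool   # disclosure: carries an AI-content label?
    visual_inconsistencies: bool    # detail analysis: small artifacts spotted?

def credibility_flags(c: VideoChecks) -> list[str]:
    """Collect red flags; an empty list means the checklist raised no concerns."""
    flags = []
    if not c.from_verified_outlet:
        flags.append("unverified source")
    if not c.independently_confirmed:
        flags.append("no independent confirmation")
    if c.has_ai_disclosure_label:
        flags.append("labeled as AI-generated")
    if c.visual_inconsistencies:
        flags.append("visual inconsistencies detected")
    return flags

print(credibility_flags(VideoChecks(False, True, False, True)))
# ['unverified source', 'visual inconsistencies detected']
```

A video that raises any flag deserves a closer look before being shared or cited.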

Ultimately, mass production of AI videos harms not just individual stories but the reputation of the entire platform. Continuous publication of such content lowers the domain-wide siteAuthority metric, which reflects perceived importance and authority. If the site is flooded with unrelated AI-generated videos, it also decreases siteFocusScore, undermining thematic specialization — a key signal for perceived expertise.

What tools are best suited for verifying the provenance of AI videos?

Two complementary approaches work best for verifying the provenance of AI videos: metadata analysis based on the C2PA standard, and anomaly detection using machine learning.

Content Authenticity Initiative (CAI) Verify Tool: This free online tool, based on the Coalition for Content Provenance and Authenticity (C2PA) standard, allows you to upload a file (video, image) and verify its metadata. If the video was generated by an AI model that supports this standard (e.g., OpenAI Sora), the report will indicate that it was “released by OpenAI” and is AI-generated. You can verify content at verify.contentauthenticity.org.
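For videos that do carry C2PA metadata, the manifest can also be inspected programmatically. The sketch below assumes a manifest report already parsed into a Python dict (for example, JSON printed by the open-source c2patool CLI) and looks for the IPTC "trainedAlgorithmicMedia" digital source type, which C2PA-aware generators use to mark AI output. The sample manifest values are illustrative, not taken from a real file:

```python
# IPTC digital source type that C2PA-aware AI generators attach to their output
AI_SOURCE_TYPE = "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def is_ai_generated(manifest: dict) -> bool:
    """Check a parsed C2PA manifest for an AI-generation provenance marker."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "c2pa.actions":
            for action in assertion.get("data", {}).get("actions", []):
                if action.get("digitalSourceType") == AI_SOURCE_TYPE:
                    return True
    return False

# Abbreviated example manifest (structure follows the C2PA spec; values illustrative)
sample = {
    "claim_generator": "Example-AI-Video-Generator/1.0",
    "assertions": [
        {"label": "c2pa.actions",
         "data": {"actions": [{"action": "c2pa.created",
                               "digitalSourceType": AI_SOURCE_TYPE}]}},
    ],
}
print(is_ai_generated(sample))  # True
```

Note that the absence of a C2PA manifest proves nothing either way: most generators and editing pipelines still strip or never add provenance metadata.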

Attestiv: Offers software for forensic video authentication and deepfake detection using a combination of AI and forensic analysis.

Without humans, there is no journalism

A video created by artificial intelligence cannot be credible without active human involvement. Without scriptwriting, editing, analysis, and editorial responsibility, it remains a simulated image — not journalism.

Even if it “looks real,” it is nothing more than smoke without fire. And the greatest danger lies in the temptation to trust that smoke. In an era of information warfare, misinformation, and manipulation, automated videos are not a solution. They are another threat we must learn to detect and overcome.

By John Morris

John Morris is an experienced writer and editor, specializing in AI, machine learning, and science education. He is the Editor-in-Chief at Vproexpert, a reputable site dedicated to these topics. Morris has over five years of experience in the field and is recognized for his expertise in content strategy. You can reach him at jm@vproexpert.com.