TV Futurist

TrendCore 03/26

📚 Why Media Literacy Beats Regulation in the AI Era

Sandra Lehner
Mar 02, 2026

Happy Monday, Wavemakers!

A week ago in Weekend Waves, we talked about governments trying to ban or heavily restrict social media.

Australia raising age limits.
European policymakers debating tighter platform controls.
Politicians framing TikTok as a national security threat rather than a cultural one.

The instinct is understandable:
If something feels chaotic, regulate it.
If something feels dangerous, remove it.

But here’s the uncomfortable truth:

In an AI world, banning platforms doesn’t solve the problem.
Because the problem isn’t access.
It’s literacy.

And literacy can’t just be mandated. It has to be built.


🧠 Truth Is No Longer a Default Setting

For most of broadcast history, truth was infrastructural.

If something aired on television, it passed through:

  • Editors

  • Fact-checkers

  • Legal teams

  • Institutional accountability

Now?

Content flows first.
Verification comes later - if at all.

Add AI to that equation and the environment shifts again:

  • Photorealistic images generated in seconds

  • Synthetic voices indistinguishable from real ones

  • Deepfake videos that don’t look “fake” anymore

We’ve moved from a world where truth was filtered to a world where truth is forensically reconstructed.

💡Truth isn’t a given anymore. It’s a skill. A skill we all need to learn.


🇩🇪 When Even Public Broadcasters Get It Wrong

And what I’m saying here isn’t theoretical.

Just last month, German public broadcaster ZDF faced criticism after an AI-generated image was used in a segment on its news show heute journal. The visual was meant as illustrative material, but it wasn’t clearly labelled as synthetic, so viewers could reasonably assume it was real footage.

The backlash was immediate. ZDF apologised and removed the image, but the damage wasn’t about one picture. It was about trust.

This is exactly the point:
Even institutions built on editorial standards can stumble in the AI era.

💡When synthetic media enters trusted news environments, clarity isn’t optional, it’s structural.


🚫 Why Regulation Feels Powerful (But Isn’t Enough)

Let’s move on to social media. When governments try to ban or limit it, they’re reacting to symptoms:

  • Misinformation

  • Polarisation

  • Algorithmic amplification

  • Youth exposure

But banning access doesn’t teach discernment.

A 14-year-old who is blocked from one platform will still:

  • Encounter AI content elsewhere

  • See manipulated visuals

  • Receive synthetic audio

  • Navigate algorithmic feeds

The distribution layer changes, but the literacy requirement doesn’t.

💡You can regulate platforms. You can’t regulate perception.


🏷️ Watermarking Won’t Save Us Either

There’s a growing belief that AI watermarking will restore order.

Label the image.
Tag the video.
Signal what’s synthetic.

That helps - technically. But it assumes two things:

  1. That labels won’t be removed or faked

  2. That audiences automatically understand what the label means

Watermarking addresses origin. But it doesn’t build interpretation skills.

In fact, we’re entering a paradoxical phase: The more “AI” is labelled, the more unlabelled content may feel implicitly authentic - even when it isn’t.

💡Trust can’t be automated.


🔍 Pre-Bunking > De-Bunking

Therefore, I would argue: One of the most promising shifts isn’t technological; it’s educational.

Pre-bunking builds pattern recognition.
Instead of reacting after deception spreads, it teaches audiences how manipulation is structured, so they can spot it in real time.

So, instead of saying: “This video is fake.”

You teach:
👉 “Here’s how fake videos are structured.”
👉 “Here’s how outrage gets engineered.”
👉 “Here’s how visual framing alters perception.”

This builds cognitive friction, and friction is healthy. Because in an AI-saturated world, speed is the enemy of truth.


📱 Why Gen Z Feels Confident - But Is Structurally Vulnerable

Gen Z often scores high on digital confidence.

They know:

  • How to edit

  • How to remix

  • How to detect obvious memes

  • How to navigate online culture

But AI changes the game.

The next wave of misinformation isn’t low-effort Photoshop.
It’s high-resolution, emotionally engineered content.

The vulnerability isn’t stupidity. It’s overconfidence. Because when you grow up fluent in the medium, you assume mastery of the message.

💡Digital natives are not automatically AI natives.


🔓 Deep Dive: The New Truth Infrastructure

Behind the paywall, we move from diagnosis to strategy:

👉 The 3-layer system that will define media trust in the AI era
👉 Why regulation will plateau while literacy compounds
👉 What this means for TV platforms
👉 How brands should operate when credibility becomes scarce

This is where the structural shift begins.


© 2026 Sandra Lehner