At the Media Lab Innovation Festival, the AI for Media Network showcased where AI is already genuinely useful in journalism – and where it hits its limits. In the AI Clinic, we discussed concrete use cases, from documentary experiments to resonance management. In the “B(a)d Time Stories” session, four speakers openly shared cases where AI created more problems than it solved. The core insight: AI rarely “just” saves time; it shifts tasks and creates new editorial decision points.
On March 18, 2026, the AI for Media Network curated two formats at the Media Lab Innovation Festival at the University of Television and Film Munich: First, the AI Clinic, where participants could present concrete AI use cases that are not yet working as intended. Second, the “B(a)d Time Stories,” where speakers talked frankly about failed or bumpy AI projects – and what can be learned from them for the journalistic use of AI.
Documentary experiment: When AI generates missing images

The AI Clinic opened with an example that illustrates how AI can address an editorial challenge. Johannes Schiller from MDR Next presented how the broadcaster used generative AI in a documentary about an escape from the GDR to create missing images and scenes. For instance, black-and-white photos of the protagonist were colorized and animated using AI.
One thing became clear in this as-yet-unpublished experiment: AI is anything but a pure cost‑saving tool. It took countless generation attempts, hours of sorting through hundreds of clips, and painstaking refinement of historical details – overall, it meant more work rather than less. The interdisciplinary team tested custom models and a fixed “style DNA” to ensure visual consistency for the protagonists and the GDR look. According to Schiller, test audiences accepted AI when it was clearly labeled as a gap filler or used for abstract scenes, but not as a full replacement for original footage.
Who owns the time freed up by AI?

The AI Clinic then tackled two pre-submitted problems involving AI deployment. Anja Keber and Lukas Graw from the BR business desk took a meta‑perspective by asking: Who actually owns the time that AI frees up in everyday work? Answers ranged from “We’re paid by the hour, so we keep working” to the idea of using the freed‑up half hour for more in‑depth research and higher‑quality journalism. Lukas Graw notes: “What we found interesting was that no one wanted to go home earlier because of any potential time savings.” At the same time, it became clear that many people are not yet seeing real time savings: mastering the tools and working carefully with AI is itself time‑consuming. And without a structured dialogue with employers, it remains unclear how any time gained can be put to meaningful use and properly acknowledged.
Keber and Graw are currently researching the topic of “time savings through AI” in greater depth. Their radio feature “Wem gehört die halbe Stunde” (“Who Owns the Half Hour”) will air on April 22, 2026, between 11:00 a.m. and 12:00 p.m. on Bayern 2 and will subsequently be available on ARD Sounds.
AI in resonance management: most useful for structuring and clustering

The second use case focused on AI‑supported issues and resonance management, presented by Kim Ceesay and Tom Klein from Hessischer Rundfunk. The HR team wants to better read and interpret online debates about HR itself and about topics relevant to the broadcaster.
The discussion made it clear that the core task for successful monitoring is strategic and editorial in nature: “Before we decide on tools, models, or automation, we need a clear definition of which signals we want to detect in the first place, what we want to be alerted to, and which notifications are actually relevant to us,” summarizes Community Management Coordinator Klein, reflecting the feedback from workshop participants.
Kim Ceesay, a Research Consultant at HR Data, adds that AI hits its limits in monitoring: weak, quiet, or diffuse signals are still only partially detectable, whereas condensed tipping points, clusters, and patterns appear more accessible. “For us, the implication is that AI can primarily support structuring, aggregation, clustering, and escalation alerts, while human interpretation, editorial contextual expertise, and methodological oversight remain central,” says Ceesay.
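Ceesay’s division of labor, AI for structuring and clustering, humans for interpretation, can be illustrated with a deliberately simple sketch. The code below is our own illustration, not HR’s actual monitoring pipeline: it groups short audience comments with a bag‑of‑words cosine similarity and a greedy threshold. A production system would use embeddings and a proper clustering algorithm, but the point stands either way: the machine surfaces the clusters, and an editor still has to decide what they mean.

```python
from collections import Counter
import math

def bow(text):
    # simple bag-of-words vector as a Counter of lowercase tokens
    return Counter(text.lower().split())

def cosine(a, b):
    # cosine similarity between two bag-of-words Counters
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(comments, threshold=0.3):
    # greedy single-pass clustering: attach each comment to the first
    # cluster whose seed is similar enough, otherwise start a new cluster
    clusters = []  # list of (seed_vector, [member_comments]) pairs
    for c in comments:
        v = bow(c)
        for seed, members in clusters:
            if cosine(seed, v) >= threshold:
                members.append(c)
                break
        else:
            clusters.append((v, [c]))
    return [members for _, members in clusters]

# Invented example comments, purely for demonstration
comments = [
    "the new podcast format is great",
    "great new podcast episode today",
    "why was the evening news shortened",
    "evening news felt too short tonight",
]
for group in cluster(comments):
    print(group)
```

Even in this toy version, the hard part Ceesay describes is visible: the threshold determines which “weak, diffuse signals” ever surface, and no number in the script tells you whether a cluster matters.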

B(a)d Time Stories: A stage for everyday AI fails
In the evening, the “B(a)d Time Stories” shifted the spotlight to learning moments that are usually swept under the rug: failures, dead ends, and unexpected side effects of AI projects. Four speakers illustrated how broad the spectrum of challenges really is:
- Dominik Meissner (RoomPal) showed what can go wrong when using AI to translate a travel chatbot into German – and why good prompts, guardrails, and clear quality control processes are indispensable in production environments.
- Dr. Yulia Rönsch (Supportive Stranger App) used her experience as a technical writer to illustrate that AI cannot “magic away” chaotic documentation of software features: without consistent terminology, structure, and context, even the best models fail at producing a comprehensible user guide.
- Robert Kowalski (Jambit) focused on the strategic layer: AI projects rarely fail because of the technology itself, but rather due to unclear objectives, legal constraints, internal policies, and a corporate culture that refuses to adapt.
- Kevin Schramm (BR) demonstrated his vibe‑coded workflow for filtering newsletter overload. His conclusion: working filters are worth their weight in gold – but it is challenging to train an AI system to reliably separate genuinely relevant from irrelevant content.
A common insight ran through all presentations: AI is currently not so much taking work off our hands as changing it. Working with AI requires new competencies: targeted prompting, clean data foundations, and rigorous quality assurance through thorough review of AI‑generated content.
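Schramm’s filtering problem can be made concrete with a toy stand‑in. The sketch below is not his vibe‑coded workflow (which we assume involves an LLM); it is a minimal keyword‑scoring filter that shows why “reliably separating relevant from irrelevant” is the hard part: the weights and the threshold are guesses that need constant human tuning, exactly the kind of quality‑assurance work the speakers described.

```python
# Hypothetical keyword weights, invented for illustration; a real setup
# would learn relevance from labeled examples, not hard-code it.
RELEVANT = {"ai": 2.0, "newsroom": 1.5, "tool": 1.0, "workflow": 1.0}
NOISE = {"webinar": 1.0, "discount": 2.0, "sponsored": 2.0}

def relevance_score(text):
    # sum positive weights for relevant tokens, subtract noise weights
    tokens = text.lower().split()
    score = sum(RELEVANT.get(t, 0.0) for t in tokens)
    score -= sum(NOISE.get(t, 0.0) for t in tokens)
    return score

def filter_items(items, min_score=1.0):
    # keep only newsletter items whose score clears the threshold
    return [i for i in items if relevance_score(i) >= min_score]

# Invented newsletter headlines, purely for demonstration
items = [
    "new ai tool speeds up newsroom workflow",
    "sponsored webinar discount ends friday",
]
print(filter_items(items))
```

The brittleness is the lesson: one unexpected phrasing and a relevant item scores zero, which is why working filters, as Schramm put it, are worth their weight in gold.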
Next AI for Media Meetup on vibe coding on May 12
On May 12, 2026, we will host our next meetup at the BR Funkhaus. The 8th edition will focus on “vibe coding.” We will look at examples from U.S. newsrooms, present concrete vibe‑coding use cases from German media organizations, and offer a practical introduction to targeted vibe coding. You can find the preliminary agenda and the link to the registration page here.