Bots & Beers – An AI Evening for Journalism Trainees

How does AI work? How are newsrooms using it? And can I try it myself? These were the key questions at Bots & Beers on January 23, a joint event by the AI for Media Network, Media Lab Bayern, and the ifp – Institute for Journalistic Training. A report by ifp fellow Marina Schepetow

AI for Media Network Manager Bernd Oswald gesturing during his welcome address at Bots & Beers. Photo: Erol Gurian/ifp

On January 23, around 40 young journalists accepted the invitation from ifp, Media Lab Bayern, SWR X Lab, and the AI for Media Network to learn more about the use of AI in journalism through hands-on demonstrations and discussions.

At the opening, ifp director Isolde Fugunt noted that AI has already become part of everyday newsroom work, while at the same time creating fears and doubts among many colleagues. In times of rapid technological change, she said, it is especially important to reflect on what defines journalism and what cannot be replaced by AI. According to an observation by media researcher Alexandra Borchardt, shared in the BR24 media podcast, AI is currently not eliminating jobs in newsrooms; instead, the number of positions focused on AI is growing.

Bernd Oswald, Manager of the AI for Media Network, briefly introduced the work of the network, which focuses on demonstrating journalistic use cases in which AI is applied.


Five Use Case Stations

The network also contributed one of these use cases to Bots & Beers:
Luis Mayerhofer from the BR AI + Automation Lab presented the AI-supported video editing pipeline “SchnittmAIster.” A language model such as Google Gemini analyzes raw video material and suggests images that match the spoken script. Using the example of an accident video, Mayerhofer showed how a rough cut can be created in a short time. The longer the footage, the greater the time savings for the newsroom. With “SchnittmAIster,” Mayerhofer and five ARD colleagues won the 2025 AI for Media Hackathon on AI in video production.
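The core idea of such a pipeline, aligning script sentences with matching footage, can be illustrated with a toy sketch. This is not the actual SchnittmAIster code; it replaces the language model's judgment with simple keyword overlap between a script line and hypothetical shot descriptions, purely to show the kind of alignment being automated.

```python
def match_shots(script_lines, shot_descriptions):
    """For each script line, pick the shot whose description shares
    the most words with it (a crude stand-in for the LLM's judgment)."""
    rough_cut = []
    for line in script_lines:
        line_words = set(line.lower().split())
        # Choose the description with the largest word overlap
        best = max(shot_descriptions,
                   key=lambda shot: len(line_words & set(shot.lower().split())))
        rough_cut.append((line, best))
    return rough_cut

# Invented example data for the accident scenario mentioned above
script = ["A truck collided with a car at the intersection",
          "Police officers secured the scene"]
shots = ["wide shot of the intersection with the damaged truck and car",
         "close-up of police officers talking to witnesses"]
print(match_shots(script, shots))
```

In the real system, the language model evaluates the footage itself rather than hand-written descriptions, but the output is the same in spirit: an ordered list pairing each script passage with a suggested shot, which an editor can then refine.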

Four additional use cases were presented:

  • SWR X Lab introduced Whatsupdate, a personalized news chatbot that allows users to tailor their news updates to their own regional interests. They can also ask follow-up questions, which are answered with references to SWR articles.
  • Süddeutsche Zeitung presented the Film Search Assistant, an AI-powered search engine that recommends films from the SZ’s extensive review database based on a user’s interests and preferred genres. They also demonstrated the Federal Election Assistant, which answers questions about the German federal election in a chatbot interface based on SZ reporting. For article summaries and rewriting texts in plain language, SZ uses the language model Claude.
  • The startup WeDaVinci, funded by Media Lab Bayern, connects the publishing and film industries. Using generative AI, the tool creates storyboards and video trailers from a manuscript, making literary adaptations much easier.
  • Fans of English football can use the Premier League app to generate match summaries as podcasts or short articles and customize them according to their interests. This is enabled by the AI assistant Microsoft Copilot integrated into the app.

AI Cannot Replace Journalistic Observation

Following the use-case presentations, Anna Künster from Deutsche Welle took a critical look at the opportunities, risks, and ethical aspects of AI in journalism in her keynote. She introduced the “stochastic parrot,” a metaphor for large language models: mathematically based systems that generate text by predicting probabilities, imitating language like a parrot without actually understanding it. Newer reasoning models such as OpenAI’s o3 take more “thinking time” than traditional LLMs and produce more accurate answers, but they are still based on probabilities. Künster concluded that AI cannot replace essential journalistic skills such as sensitive human interaction, real-world observation, and verification.
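The “predicting probabilities” point can be made concrete with a minimal sketch: a bigram model (far simpler than a real LLM, and with an invented mini-corpus) that picks the next word purely from how often it has followed the previous one, with no grasp of meaning.

```python
from collections import Counter, defaultdict

# Tiny invented corpus, just to count word transitions
corpus = ("the reporter checks the facts and "
          "the reporter verifies the sources").split()

# Count which word follows which
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))  # → ('reporter', 0.5)
```

A real LLM conditions on far longer contexts and billions of parameters, but the principle is the same: the output is the statistically likely continuation, not a verified statement about the world.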

Creative Vibe-Coding Workshop

In the final program session, participants were invited to get creative in a vibe-coding workshop. Their task: develop a group project that makes coverage of the Olympic Games interactive for users. After 40 minutes of coding with Google AI Studio, the results ranged from a quiz with true-and-false statements about records to interactive maps for exploring venues to a Pokémon-style game with information about the athletes.

This “bot” part was followed by the “beers” part: a relaxed evening with drinks, pizza, and many conversations, accompanied by a DJ set from Alicea.