How are AI tools changing day-to-day reporting? Where do they help with fact-checking – and where are they dangerously misleading? At the seventh AI for Media Meetup at the SPIEGEL headquarters in Hamburg, one takeaway was clear: AI can reduce workload in verification workflows and surface useful signals – but journalists will still have to decide whether content is authentic and how it should be interpreted.

On 12 February 2026, DER SPIEGEL hosted the seventh AI for Media Meetup. Around 120 participants from journalism, software engineering, product development and academia met at SPIEGEL’s high-rise in Hamburg to discuss how AI can support reporting and verification – from manuscript checks and large-scale data analyses to the detection of synthetic content.
Ole Reißmann: By 2026, you can no longer say whether an image is real or manipulated

In his keynote “The Trust Question: Technical Solutions or Social Problems?”, Ole Reißmann, Head of AI at SPIEGEL Group, demonstrated how easily watermarks and labels on AI-generated images can be removed or spoofed – even for approaches like Google’s SynthID watermarking or C2PA provenance metadata. Neither visible labels nor technical fingerprints will reliably distinguish “real” from “fake” in the long term. His conclusion: “2026. Images are real. Or not. I guess we’ll never know.” He has linked the examples from his talk on his blog.
AI-assisted fact-checking in SPIEGEL’s documentation department

Gerret von Nordheim, Deputy Head of SPIEGEL’s documentation department, explained how AI supports the verification of manuscripts. He presented an in-house fact-checking tool based on large language models and complex prompt chaining. The system breaks texts down into verifiable claims, automatically searches for suitable online sources, and flags potential errors – including rationales and source links. According to von Nordheim, the tool identified 74 percent of the errors that appeared in SPIEGEL’s “Correction Column” over the last three years – ranging from incorrect years and imprecise wording to legal and medical details. The tool is used as an additional layer of review: it replaces neither documentation staff nor authors, but is intended to catch trivial, overlooked errors and to safeguard texts that would otherwise not be checked at all for lack of time.
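The two-step pattern von Nordheim described – extract checkable claims, then flag mismatches against sources with a rationale – can be illustrated with a deliberately simplified sketch. The real system uses LLMs and prompt chaining; the regex heuristic and function names here are illustrative assumptions, not SPIEGEL’s implementation:

```python
import re

def extract_claims(text: str) -> list[str]:
    """Split a manuscript into sentences and keep those making a checkable
    assertion -- crudely approximated here as 'contains a number'.
    The production tool uses an LLM for this step."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if re.search(r"\d", s)]

def check_claim(claim: str, reference: dict[str, str]) -> dict:
    """Compare a claim against a reference lookup and return a verdict with
    a rationale, mirroring the tool's 'flag + rationale + source' output."""
    for topic, fact in reference.items():
        if topic in claim and fact not in claim:
            return {"claim": claim, "flag": True,
                    "rationale": f"reference gives '{fact}' for '{topic}'"}
    return {"claim": claim, "flag": False, "rationale": ""}

text = "The Berlin Wall fell in 1990. The city has about 3.8 million residents."
reference = {"Wall fell": "1989"}  # hypothetical source lookup
verdicts = [check_claim(c, reference) for c in extract_claims(text)]
```

In this toy run the first sentence is flagged (the reference says 1989) while the second passes; the real tool additionally returns links to the sources it consulted.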
Large-scale data investigations: Using AI to search thousands of TikTok videos

Susmita Arp, Deputy Head of AI Projects in SPIEGEL’s documentation department, showed how the newsroom used AI in a large-scale investigation into high-reach Islamic influencers on TikTok. For twelve major accounts, the audio tracks and captions of more than 6,000 videos were transcribed and indexed in an AI-supported research system built on SPIEGEL’s press database DIGAS. Journalists can query all transcripts with their own prompts, receive specific quote passages plus links to the original videos for each answer, and jump to the corresponding point in the video with a single click. “We used AI to get an overview of the content, to find commonalities and contradictions between accounts, but also within individual accounts,” Arp said. AI also helped identify relevant clips – but the journalistic work still consisted of interpreting the content, finding affected individuals, and researching real-world consequences.
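The “quote passage plus deep link into the video” behaviour Arp described can be sketched as a search over transcript segments. This is a minimal illustration only – the actual system is built on the DIGAS press database and answers free-form prompts; the data model and URL scheme below are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One transcribed passage of a video (hypothetical schema)."""
    account: str
    video_url: str
    start_sec: int
    text: str

def search(segments: list[Segment], query: str) -> list[dict]:
    """Return matching quotes plus a link that jumps to the right
    moment in the source video."""
    query = query.lower()
    return [
        {"account": s.account, "quote": s.text,
         "link": f"{s.video_url}#t={s.start_sec}"}
        for s in segments if query in s.text.lower()
    ]

index = [
    Segment("account_a", "https://example.com/v/1", 42, "Women should not travel alone."),
    Segment("account_b", "https://example.com/v/2", 7, "Travel broadens the mind."),
]
hits = search(index, "travel alone")
```

Grounding every answer in a retrievable segment is what lets journalists jump straight to the original clip instead of trusting a summary.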
Survey in the AI for Media Network: Who works with AI, and how?

Dr. Michael Graßl (Magdeburg-Stendal University of Applied Sciences) and Prof. Jonas Schützeneder (Bundeswehr University Munich) presented an ongoing research project conducted in cooperation with the AI for Media Network. All members of the AI for Media Network in the German-speaking world are being surveyed: what roles they have in their organizations, how they assess their own AI skills, where their organization uses or needs AI, and which network offerings they find most useful. The aim is to provide a robust situational picture of the state of AI in German-language journalism and to generate concrete recommendations for how the network should evolve. The survey was launched on 13 February; results will be presented at the next AI for Media Meetup on 12 May.
Capabilities of generative AI: a look into the “forger’s toolbox”

Jan Eggers, data journalist at WDR, demonstrated live just how powerful current image, video, and voice generators have become. Models such as Flux, Google Gemini, Sora, and Grok can produce realistic photos, video footage, and deceptively similar voices from simple text prompts – including automatically generated soundtracks. Eggers showed how photos or news clips can be manipulated within seconds and how open-source models bring these capabilities onto personal machines. At the same time, he emphasized: many fakes still fail in the details – in the background or in the context. When debunking them, journalists should not rely solely on AI detection tools, but also apply core verification methods: source criticism, contextual research, and systematic cross-checking.
Eggers has documented the examples from the meetup on his blog.
In the tools segment, three providers presented their solutions for verifying visual content.
Lumid: Fingerprints to reveal provenance and editing history of images and videos

Hans Brorsen and Daniel Larena Baumann from the startup Valid Tech showed how their tool Lumid labels content at the point of creation – using metadata, watermarks, and an externally stored “fingerprint.” From technical features (colors, frequencies, pixels) and semantic properties (“car in front of building,” “sky”), Lumid generates an abstract representation of an image. This signature is intended to make it possible to uniquely identify images even after metadata has been removed, retrieve provenance and editing history, or flag deepfakes. If media organizations integrate their systems with Lumid, they could offer their audiences quick source checks directly within their own channels.
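The core idea – a signature computed from image content itself, so it survives metadata stripping – can be illustrated with a classic perceptual average hash. Lumid’s actual fingerprint combines technical and semantic features and is far richer; this sketch shows only why pixel-derived signatures are robust where EXIF data is not:

```python
def average_hash(pixels: list[list[int]]) -> int:
    """64-bit fingerprint of an 8x8 grayscale grid: each bit records
    whether a pixel is brighter than the image mean. Computed from pixel
    values, so stripping metadata leaves it unchanged."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    h = 0
    for p in flat:
        h = (h << 1) | (p > mean)
    return h

def hamming(a: int, b: int) -> int:
    """Bits in which two fingerprints differ; near zero means 'same image'."""
    return bin(a ^ b).count("1")

grid = [[r * 8 + c for c in range(8)] for r in range(8)]   # toy image
edited = [row[:] for row in grid]
edited[0][0] = 200                                         # small local edit
```

A small edit changes only a few bits, so the edited image can still be matched to its stored original – the basis for retrieving provenance and editing history.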
Gretchen AI combines deepfake and context analysis

Jakob Tesch demonstrated how Gretchen AI combines forensic deepfake analysis with automated context research. Images or videos are analyzed at pixel level for traces of manipulation. In parallel, a comprehensive reverse image search is triggered. The tool evaluates “clusters” of similar images from different sources. For each image, it generates a description, highlights conflicting contexts, and specially flags fact-checking sources (for example, outlets belonging to the International Fact-Checking Network). The goal is to consolidate verification steps and make them more transparent.
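The “clusters of similar images, with fact-checking sources flagged” output can be sketched as a grouping step over reverse-image-search hits. The domain list and data shapes below are illustrative assumptions, not Gretchen AI’s internals:

```python
# Illustrative stand-in for an IFCN membership registry.
FACT_CHECK_DOMAINS = {"correctiv.org", "factcheckni.org"}

def cluster_hits(hits: list[tuple[str, str]]) -> dict[str, dict]:
    """Group (domain, claimed_context) hits into context clusters and
    mark whether a fact-checking source appears in each cluster.
    Conflicting cluster keys signal that the image circulates with
    contradictory context claims."""
    clusters: dict[str, dict] = {}
    for domain, context in hits:
        c = clusters.setdefault(context, {"domains": [], "fact_check": False})
        c["domains"].append(domain)
        if domain in FACT_CHECK_DOMAINS:
            c["fact_check"] = True
    return clusters

hits = [
    ("example.com", "flood 2024"),
    ("correctiv.org", "flood 2013"),
    ("other.net", "flood 2013"),
]
clusters = cluster_hits(hits)
```

Two clusters with conflicting dates, one backed by a fact-checker, is exactly the kind of signal the tool surfaces for a human to weigh.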
Neuramancer: forensic deepfake detection

Annika Gruner presented Neuramancer’s approach, which originates from forensic attribution. Every camera lens leaves individual, physically determined patterns in image noise, akin to a fingerprint – and AI image generators produce characteristic patterns as well. Neuramancer trains proprietary models to distinguish these patterns and highlights image areas that are likely AI-generated or subsequently manipulated in heatmaps. This works without metadata and is robust to compression, for instance in forwarded WhatsApp images. For clients – currently mainly insurers – the system can reveal minimal post-processing or “enhanced” damage photos. In addition to the heatmaps, the tool provides an assessment of the probability that a file is real or fake, including an uncertainty score. The higher the uncertainty, the more the tool refrains from a definitive judgment.
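The underlying intuition – natural images carry characteristic noise, and manipulated regions disturb it – can be illustrated with a crude noise-residual heatmap. Neuramancer trains proprietary models on far subtler patterns; this sketch only shows how per-block residual statistics can localize an anomalously smooth (e.g., inpainted) region:

```python
def residual(pix: list[list[int]], r: int, c: int) -> float:
    """Pixel minus the mean of its 3x3 neighbourhood (clamped at borders):
    a simple high-pass filter that isolates noise-like texture."""
    h, w = len(pix), len(pix[0])
    nb = [pix[i][j] for i in range(max(0, r - 1), min(h, r + 2))
                    for j in range(max(0, c - 1), min(w, c + 2))]
    return pix[r][c] - sum(nb) / len(nb)

def heatmap(pix: list[list[int]], block: int = 4) -> dict:
    """Mean absolute residual per block; blocks with far less texture
    than the rest of the image are manipulation candidates."""
    h, w = len(pix), len(pix[0])
    return {
        (br, bc): sum(abs(residual(pix, r, c))
                      for r in range(br, br + block)
                      for c in range(bc, bc + block)) / block**2
        for br in range(0, h, block)
        for bc in range(0, w, block)
    }

# Toy image: noisy checkerboard with one suspiciously flat 4x4 corner.
pix = [[255 if (r + c) % 2 else 0 for c in range(8)] for r in range(8)]
for r in range(4):
    for c in range(4):
        pix[r][c] = 128
scores = heatmap(pix)
```

Because the statistic is computed from pixel values alone, it needs no metadata – consistent with the robustness to forwarded, recompressed images described above.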
Panel: In the end, a journalist has to decide whether an image is real or fake

In the subsequent panel discussion, moderated by Isabel Lerch (NDR) with Jakob Tesch (Gretchen AI), Annika Gruner (Neuramancer), Jana Heigl (Team Lead “Faktenfuchs” at Bayerischer Rundfunk) and Stefan Voß (Head of Verification at dpa), the group explored what AI tools can contribute to verification workflows and where journalists remain indispensable. The flood of manipulated content has to be countered with tools that can analyze content at scale, argued Gruner. But the outputs of AI tools are often difficult for journalists to interpret, Heigl and Voß agreed. Fact-checkers need transparent, explainable indicators. They cannot work with a statement from a tool that an image is “78 percent AI-generated,” Voß added, calling such scores a “black box.”
There was consensus that no AI system can provide a “100 percent guarantee” – especially not for complex, newsworthy events in public spaces. Tools can offer clues, pre-sort large volumes of images, and surface scattered information more quickly. Ultimately, journalists must decide on the truthfulness of contested information, images, and videos – and, where necessary, be transparent when a clear-cut answer is not possible.
Problem pitch: How can we reduce false positives in AI text checks?

In the “Problem Pitch” segment, Riccardo Longo (Product Manager at BILD) described how already published texts at Springer are automatically checked by AI for spelling, grammar, and factual accuracy – and where the process still fails. False positives are particularly disruptive: the AI flags correct content as erroneous, for example brand-specific language (“Transferhammer”) or specific location references. The consequences: frustration among authors, declining trust in the tool, and an increased risk of overlooking genuine issues. Longo asked for ideas on how to reduce the false positive rate. Participants suggested several approaches: using current reasoning models in “thinking” mode, always passing the current date in API calls to give the AI temporal context, or strictly constraining the AI via prompt design to text comparison only, rather than world knowledge.
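Two of the suggested mitigations – passing the current date for temporal context, and constraining the model to text comparison rather than world knowledge – amount to prompt construction. A minimal sketch, with illustrative wording that is an assumption, not Springer’s actual prompt:

```python
from datetime import date

def build_check_prompt(article: str, reference: str, today: date) -> str:
    """Assemble a correction-check prompt that (a) gives the model temporal
    context and (b) restricts it to comparing the article against supplied
    reference text, so correct but unfamiliar wording (brand language,
    local place names) is not flagged from world knowledge."""
    return (
        f"Today's date is {today.isoformat()}.\n"
        "Compare the ARTICLE only against the REFERENCE below. "
        "Do not use outside knowledge; if the reference does not cover a "
        "statement, answer 'cannot verify' instead of flagging it.\n\n"
        f"REFERENCE:\n{reference}\n\nARTICLE:\n{article}\n"
    )

prompt = build_check_prompt(
    article="Der Transferhammer wurde gestern verkündet.",
    reference="Club X announced the transfer yesterday.",
    today=date(2026, 2, 12),
)
```

The explicit “cannot verify” escape hatch is the key design choice: it converts would-be false positives into abstentions instead of error flags.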
AI lightning talk: dpa develops tool for live fact-checking of videos

In the AI lightning talk, Arne Beckmann, Software Engineer at dpa, presented “Checkmate,” a prototype for AI-supported fact-checking of videos in live situations. The tool transcribes ongoing video streams, identifies verifiable statements (“claims”), and outputs them as a list. These claims can then be checked against a large, configurable base of sources, including dpa archives, fact checks, and the Google Fact Check Explorer. The tool is intended to help newsrooms find relevant evidence more quickly during speeches, debates, or breaking-news situations, and to see in a single interface which claims can be supported or refuted.
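The live aspect – claims must be emitted while the transcript is still arriving – can be sketched as a streaming sentence buffer. Checkmate uses an LLM to detect claims; the “contains a number” heuristic here is purely illustrative:

```python
import re

def stream_claims(chunks):
    """Accumulate live transcript chunks, emit each completed sentence,
    and keep the checkable ones (crudely: sentences containing a number).
    The trailing buffer holds the not-yet-finished sentence."""
    buf = ""
    for chunk in chunks:
        buf += chunk
        parts = re.split(r"(?<=[.!?])\s+", buf)
        buf = parts.pop()                    # possibly incomplete sentence
        for sentence in parts:
            if re.search(r"\d", sentence):
                yield sentence
    # flush a final sentence that ended exactly at the stream's end
    if re.search(r"[.!?]$", buf.strip()) and re.search(r"\d", buf):
        yield buf.strip()

chunks = ["Unemployment fell to 5 perc",
          "ent last year. That is good. GDP grew by 2",
          " percent."]
claims = list(stream_claims(chunks))
```

Buffering until a sentence boundary is what prevents half-spoken statements from being checked against sources prematurely.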
Full documentation of the meetup, including recordings and slide decks, is available in this (password-protected) article.
The next meetup will take place on 12 May 2026 at Bayerischer Rundfunk in Munich. Registration opens on 23 March.