At a symposium in Tutzing, the AI for Media Network explored how journalism can thrive in an AI-driven information ecosystem. Sixty experts collaborated to develop strategies for how media outlets can prepare their data for transformation by generative AI, ensuring it meets user needs and fits diverse consumption contexts. A personal conference report by David Caswell.
David Caswell is an innovation consultant specializing in AI in journalism. He was previously Executive Product Manager of BBC News Labs and held senior roles leading AI initiatives at Yahoo!, Tribune Publishing and The Los Angeles Times. He also publishes peer-reviewed research on computational and automated forms of journalism and was recently the co-author of the “AI in Journalism Futures” report.
Key Takeaways
- News media are increasingly seen as data providers, distributing content as data to AI platforms and for personalized experiences.
- Respecting and listening to the audience is crucial, as AI can enhance access to high-quality information.
- Re-versioning content is a key strategy to attract news audiences and reach previously ignored groups.
- Applying AI in news media faces challenges, with efficiency gains from AI workflows often being disappointing.
- Despite practical limitations, AI’s long-term transformative potential in news media is recognized.
The two years since the launch of ChatGPT have seen a dramatic expansion of the scale and scope of AI-driven innovation in news media around the world. Industry observers often point to the Nordic countries as perhaps the most advanced in applying AI to news, aided by early investment at media houses like Schibsted and JP/Politiken and a collaborative culture fostered by communities like the Nordic AI Journalism network. It is increasingly clear, however, that a similar dynamic is at work in the German-speaking countries, and that some of the most thoughtful and practical initiatives in AI-mediated news are happening in Germany, Austria and Switzerland.
AI for Media Think Tank
This trend is apparent in the formation of the AI for Media Network community, inspired by its Nordic counterpart and supported by BR. Under the leadership of Uli Koppen and Bernd Oswald, the network has quickly grown to nearly 600 members and connects them via online presentations and in-person meetups and hackathons in Munich. The network is particularly focused on the practical application of AI to news media, but also on the strategic challenges facing journalism as a result of the growing influence of AI on the information ecosystem. This combination of practicality and strategic awareness was evident in the most ambitious event hosted by the AI for Media Network so far – a ‘think tank’ conference about the AI-mediated media environment. The conference took place at the Akademie für Politische Bildung in Tutzing on November 13th and 14th, 2024 and was attended by more than 60 media leaders and AI practitioners from across Germany, Austria and Switzerland. The conference was held under the Chatham House Rule.
Six speakers, eight working groups
The purpose of the conference was simple: to explore ways to boost quality news media within a digital information ecosystem increasingly driven by AI. The structure of the workshop reflected the same combination of strategic thinking and practicality that is valued by the AI for Media Network community. A series of speakers provided diverse overviews of AI in media, ranging from hands-on examples of AI products (Martin Schori of Aftonbladet) and data infrastructure for AI (Pirita Pyykkönen-Klauck from ZDF Sparks and consultant Christian Vogg) to models for licensing news content to AI platforms (Peter Archer of the BBC) and scenarios for the potential long-term future of AI in news (myself). Practical exploration was conducted in eight working groups, organized via a ‘bar camp’ model and reporting back to the larger gathering at the end of each day. The results from the working group sessions on the first day were succinctly summarized by Alessandro Alviani of Süddeutsche Zeitung, using both AI transcription and summarization and his own insightful observations.
These eight working groups were the heart of the conference, and their diversity reflected the collective concerns of the assembled community, expressed via ‘dot voting’ on a larger number of suggestions. Some addressed the practical needs of media organizations, such as a group developing a framework for fast ‘build-vs-buy’ decisions in a very dynamic and fast-changing technology environment, and a group exploring the requirements for a shared archive across German-language Public Service Media providers. Others focused on underserved audiences, including a group that described an AI-enabled, hyper-personalized local news platform (an ‘AI Neighbour’) and another that explored ways to include marginalized audiences in public discourse by empowering them to actively contribute and become ‘part of the story’. Several examined aspects of media distribution: one group explored the future of search in an AI-mediated information ecosystem, including interactions driven more by conversation than by queries, and another looked at new and better ways of bringing high-value science stories (‘science diamonds’) to audiences in relevant ways using AI.
Finally, several teams focused on the changing fundamentals of news in an AI-mediated information ecosystem, including a team that reviewed the potential options for new ‘units of news’ that might be suited to an AI ecosystem, and a team that explored what user interfaces for news might look like in an environment in which AI tools were ubiquitously available to consumers – a concept they termed ‘Fluid AI’.
News Media as Data Providers
A persistent theme throughout the entire conference was the idea of news media as data. It showed up in many different ways, in both the speaker presentations and in most of the working groups. In some contexts, it appeared as familiar media content distributed as data – to AI platforms via licensing deals, to audiences as raw material for experiences personalized by AI, or among media producers as shared archives or a shared ‘data mesh’. In other contexts, it appeared as new structures for news, including grounding data for AI ‘RAG’ systems and chatbots, new formats such as Q&A pairs, or even entirely new units of news, such as knowledge graphs.
The details of these discussions were also revealing, because they were often about concepts that are familiar to anyone who works with data but interpreted in terms of the sometimes-radical capabilities of AI. Discussion of data schemas and metadata reflected the new ease with which AI could structure data or use structured data. Discussion of IP and data licensing reflected the growing power of AI model companies and the desire of media providers to retain independence. Discussions about data governance reflected the increasingly central role of ‘grounding’ in AI systems, and of trust as an essential requirement. These conversations clearly reflected the ways in which AI and Large Language Models have blurred the distinction between structured and unstructured data.
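As a concrete illustration of this ‘news as grounding data’ idea, here is a minimal sketch of how articles might be represented as structured records and assembled into a grounded prompt for a RAG-style system. The field names, the toy keyword-overlap retrieval and the prompt wording are illustrative assumptions rather than any publisher's actual schema or pipeline; a production system would retrieve via embeddings or a search index.

```python
# Minimal sketch: news articles as structured "grounding data" for a
# retrieval-augmented generation (RAG) style workflow.
# All field names and the toy scoring are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ArticleRecord:
    headline: str
    body: str
    topics: list[str]   # editorial metadata, e.g. taxonomy terms
    published: str      # ISO date string

ARCHIVE = [
    ArticleRecord(
        headline="City council approves new cycling lanes",
        body="The council voted on Tuesday to fund protected cycling lanes in the city centre...",
        topics=["local politics", "transport"],
        published="2024-11-05",
    ),
    ArticleRecord(
        headline="Regional hospital expands emergency care",
        body="A new wing opening next spring will add forty beds to the emergency department...",
        topics=["health", "local services"],
        published="2024-10-28",
    ),
]

def retrieve(question: str, archive: list[ArticleRecord], k: int = 1) -> list[ArticleRecord]:
    """Toy keyword-overlap retrieval; a real system would use embeddings or a search index."""
    q_terms = set(question.lower().split())
    scored = sorted(
        archive,
        key=lambda a: len(q_terms & set((a.headline + " " + a.body).lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str, archive: list[ArticleRecord]) -> str:
    """Assemble a prompt that grounds an LLM answer in the retrieved articles."""
    context = "\n\n".join(
        f"HEADLINE: {a.headline}\nDATE: {a.published}\nTEXT: {a.body}"
        for a in retrieve(question, archive)
    )
    return (
        "Answer the question using ONLY the articles below. "
        "If the answer is not in them, say so.\n\n"
        f"{context}\n\nQUESTION: {question}"
    )

print(build_grounded_prompt("What did the council decide about cycling lanes?", ARCHIVE))
```

The point of the sketch is simply that once articles carry even light structure and metadata, they can serve as the grounding layer that the conference discussions kept returning to.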
Respecting and Listening to Audiences
A second persistent theme of the conference was the increasing need to respect and listen to media audiences. This need is not new in the news industry, but it has become both more feasible and more necessary as the capabilities of AI have improved and as those capabilities are increasingly used by others competing for the attention of audiences. The new feasibility of improving broad access to high-quality information was part of several of the speakers' presentations, the subject of several of the working groups, and a frequent topic of conversation at meals and coffee breaks. The ability of current AI tools to successfully ‘re-version’ content in different ways, between formats or even between media, seemed to be well understood by the conference participants and was demonstrated in some of the presentations. Re-versioning as a strategy for competing for news audiences in commercial media, or for serving more of the public by Public Service Media, was frequently mentioned.
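To make the re-versioning idea concrete, here is a minimal sketch of turning one piece of reporting into several formats with an LLM. The format specifications and the call_llm placeholder are illustrative assumptions, not any newsroom's actual pipeline; the placeholder would be wired to whichever model API an organization actually uses.

```python
# Minimal sketch of 're-versioning': one article, several output formats.
# FORMATS and call_llm are illustrative assumptions, not a real newsroom setup.

FORMATS = {
    "push_alert": "Rewrite as a one-sentence push notification, at most 110 characters.",
    "bullet_summary": "Rewrite as 3-5 plain-language bullet points for a briefing page.",
    "easy_language": "Rewrite in simple language for readers with low reading proficiency.",
}

def call_llm(prompt: str) -> str:
    # Placeholder: swap in the model API the newsroom actually uses.
    raise NotImplementedError

def reversion(article_text: str, target: str) -> str:
    """Build a re-versioning prompt for one target format and send it to the model."""
    prompt = (
        f"{FORMATS[target]}\n"
        "Keep all facts exactly as stated; do not add information.\n\n"
        f"ARTICLE:\n{article_text}"
    )
    return call_llm(prompt)

# Example usage, once call_llm is connected to a real model:
# for fmt in FORMATS:
#     print(fmt, "->", reversion(original_article, fmt))
```

The same pattern extends across media, for example from a text article to an audio script, which is what makes re-versioning attractive as a strategy for reaching audiences in the formats they actually use.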
The increasing need to understand and serve more people, including people who might previously have been ignored by traditional media organizations, was also a common topic. This need seemed partly driven by an ‘if-we-don't-serve-them-someone-else-will’ concern among some conference participants, but also by a long-held commitment to serve the public that was perhaps previously thwarted by the lack of resources available to publishers and broadcasters. The prospect of using AI to genuinely serve more people in valued and accessible ways seemed to inspire many participants.
Challenges in AI Application
A third consistent and very practical message that arose throughout the conference was that the successful application of AI to news media may not be easy. Several practitioners with substantial experience in applying AI in newsrooms told us that the efficiency savings achieved from investments in AI workflows so far had been disappointing, with one suggesting that an efficiency revolution “isn’t going to happen”. We also heard about divisions in some newsrooms between news workers and management around AI, with diverging ideas about the practicality of widespread AI adoption and the opportunities available from it.
Several speakers and many participants across the conference reminded us that while it was relatively easy to demonstrate and even launch exciting stand-alone AI news products like summaries and chatbots, the more substantial and impactful changes required much deeper and less exciting investments in infrastructure and culture – investments that might require considerable time to become successful. It was notable that some of the participants who were the most vocal about the practical limitations of using AI in newsrooms, and who often had the most experience in deploying AI, also seemed to be the most convinced by its long-term transformative potential.
Engaged and Knowledgeable Participants
Part of the value of a conference like this is the opportunity to interact with so many experienced and engaged people, including at meals and during breaks. This opportunity was enhanced by the fact that the participants were staying at the Akademie für Politische Bildung facility, beautifully situated on the shores of Lake Starnberg. The conversations therefore had a college-like honesty and sincerity that is sometimes missing from more formal or corporate settings. It was clear that most participants were deeply engaged with AI, not only professionally but also personally. Many used language models in their day-to-day lives, as assistants, advisors, researchers, authors, coders and even companions. Many had thought deeply about the implications of AI for the future of human societies and even human identity. Many were intensely aware of the potential ethical and practical harms associated with AI and struggled in good faith to resolve those concerns in their work implementing it in newsrooms. Many were also intensely curious about AI, and about how its impact might play out in the coming years.
I was consistently impressed by how knowledgeable many of the participants were about the models, capabilities and tools, despite having begun their AI journeys relatively recently. It seemed to me, after engaging with this community over several days, that the German-speaking countries are well placed to become world-class centers of AI in media technologies and applications.
Fresh Concepts
Conferences are also places where new terms and ideas are exchanged, and this conference was no exception. Here are some of the fresh concepts that stuck with me, usually because they packed a lot of meaning into a few words:
- “Expert in the loop” as an interpretation of editing LLMs,
- “Simplify the stack” as a necessity for AI infrastructure,
- “Fluid UI” as a description for on-the-fly production of consumption experiences by AI,
- “listening systems” to describe the use of AI to understand deep audience needs at scale and
- “search as a conversation” to convey how interaction might replace queries as semantic concepts are replacing keywords.
I was fascinated by the revelation by an experienced participant that LLMs could not yet successfully summarize football matches, and by the reports that efficiency gains from AI were harder to obtain than expected. Most of all I was heartened by the collaboration and open discussion that was apparent over the two days, often around complex and nuanced topics. It was clear that the AI for Media Network was already facilitating a growing conversation that is becoming more than the sum of its parts.
Conclusion
As I have described above, some big themes emerged from this conference: content is now data, and data is now content; audiences are gaining control of media, and therefore their needs will be respected; getting value from AI in news media might not come easily. Some fascinating and detailed projects also emerged from the working groups, some of which might well (and should) continue to be developed in hackathons and beyond. For me, all of this emphasizes that applying AI in media is necessarily and simultaneously strategic and pragmatic, requiring both a vision for where it is going and tangible outcomes and products used by journalists and audiences. That combination isn’t always easy to achieve, but the AI for Media Network’s Think Tank conference showed that it is sometimes possible.