DataTopics: All Things Data, AI & Tech

#92 AI in the Newsroom: Building GenAI tools for De Standaard, Nieuwsblad, Telegraaf, NRC & more

We go inside Mediahuis to see how a small GenAI team is transforming newsroom workflows without losing editorial judgment. From RAG search to headline suggestions and text‑to‑video assists, this episode shares what works, what doesn’t, and how adoption spreads across brands.

You’ll hear about:

  • Ten priority use cases shipped across the group
  • Headline and summary suggestions that boost clarity and speed
  • RAG‑powered search turning archives into instant context
  • Text‑to‑video tools that free up local video teams
  • The hurdles of adoption, quality, and scaling prototypes into production

Their playbook blends engineering discipline with editorial empathy: use rules where you can, prompt carefully when you must, and always keep journalists in the loop. We also cover policies, guardrails, AI literacy, and how to survive model churn with reusable templates and grounded tests.

The result: a practical path to AI in media — protecting judgment, raising quality, and scaling tools without losing each brand’s voice.

🎧 If this sparks ideas for your newsroom or product team, follow the show, share with a colleague, and leave a quick review with your favorite takeaway.

SPEAKER_00:

Good evening, both. Today I'm joined by Lisa Shedgen and Nick Schouten. Lisa, I'll start with an introduction from ChatGPT in five sentences. In this episode, I'm joined by Lisa Shedgen, data scientist and GenAI solution owner at Mediahuis. With a PhD in data science from Ghent University, Lisa brings a unique blend of academic expertise and real-world innovation. At Mediahuis, she's leading the charge in applying generative AI to transform editorial workflows and newsroom efficiency. Lisa is passionate about using AI responsibly to enhance, not replace, journalism. We'll dive into her journey, the challenges of AI adoption in media, and what the future holds for data-driven newsrooms. What do you think about that introduction, Lisa?

SPEAKER_03:

It's quite good. A lot of compliments, but quite good.

SPEAKER_01:

Have you ever listened to like the Welcome to the A podcast? No. Like this is what I imagined the intro would be then, but it's a bit less impressive than the intros usually, but still cool. I still think it's impressive.

SPEAKER_03:

I'm impressed by myself.

SPEAKER_00:

Good to start with. We're also joined by Nick. Could you also introduce yourself?

SPEAKER_01:

Certainly. So, Nick Schouten, as Ben already mentioned. I've been a data engineer at Dataroots for about six years now, three and a half of which I've been at Mediahuis. And recently, when the GenAI team was created at Mediahuis, Lisa stole me and made me her private data engineer and problem solver.

SPEAKER_03:

So it was a long-time dream of mine to have a private data engineer.

SPEAKER_00:

Awesome. That's also a nice introduction, though. Lisa, you hold a PhD in data science as mentioned. What did that include? What did you do?

SPEAKER_03:

So when I started, I didn't have a specific topic, but I mainly focused on NLP and predictive modeling, mostly on social media data. So already text, mainly comments, on a wide variety of topics. It was quite interesting.

SPEAKER_00:

So quite close to Gen AI already with LLMs.

SPEAKER_03:

Yeah, yeah, the ancestors of LLMs.

SPEAKER_00:

And then you moved to MediaHus, which is a company. Yeah.

SPEAKER_03:

Why did I make the switch from research? What I really liked about research was the deep diving, really going into a specific topic. But what was more difficult for me, I think, was that it was a bit lonely. Half of my PhD was also during COVID, which made it even worse. So I really missed working with a team on something. And sometimes I thought, okay, I'm deep diving into something, but who is going to read this paper? So I missed the team and I missed some real-world value.

SPEAKER_00:

And then you moved to Mediahuis. Could you briefly introduce Mediahuis for the listeners?

SPEAKER_03:

So Mediahuis is a publishing company, one of the biggest in Europe. We have brands in Belgium like Nieuwsblad and De Standaard, but also in the Netherlands, like De Telegraaf, and in Ireland, Luxembourg and Germany. I hope I'm not forgetting a country, because otherwise they won't be happy. But it's a big media group.

SPEAKER_00:

And so publishing, but also radio?

SPEAKER_03:

Our main focus is journalism, but we do have a bit of radio as well; Play Nostalgie, for example, is also present at the Antwerp office, and some local TV too. But our biggest source of revenue, and what we mainly do, is journalism.

SPEAKER_00:

Cool. And that's the media industry, as you mentioned. Why did that industry attract you? Is there a specific reason?

SPEAKER_03:

So I was looking for something that would mean something to me personally. And specifically with journalism, I thought, okay, this is something important, and I would like to contribute to it in some way.

SPEAKER_00:

Okay, interesting. Yeah. And so you started as a data scientist, right?

SPEAKER_03:

Yeah, that's right.

SPEAKER_00:

What did you do as a data scientist?

SPEAKER_03:

So at first, when I started, I mainly worked on CRM: propensity-to-buy modeling, churn modeling, so predicting who would buy a subscription and who would cancel theirs. I did that for about three to six months when I initially started. Then I quickly moved to more newsroom projects, because some colleagues were like, hey, we have nice ideas, and they were looking for someone who wanted to do these crazy ideas. That's how it started.

SPEAKER_00:

Were you already working together back then?

SPEAKER_01:

Well, the team I was in, the data platform team, was already supporting Lisa's team. And Lisa really is a go-getter: the newsrooms came with a lot of ideas and Lisa had a lot of enthusiasm, but there wasn't always that much time for it. Lisa, as I think she'll say herself as well, was really good at knocking on people's doors until things got done, and I was one of the doors that got knocked on a lot. So we were working together, but not in a structural way like we are now.

SPEAKER_00:

And what were you doing then? Making sure that the ideas suggested by journalists could actually happen, that they were technically feasible? How did you tackle that?

SPEAKER_03:

Both. So we have a colleague who is a business partner for the newsrooms; his job is really to identify issues, take ideas from the newsrooms, and bring them to us. Before I joined, most of the use cases were CRM: a lot of recommender systems and not many newsroom tools. So we really started simple. Even before ChatGPT came out, we started doing summaries to assist journalists, which they thought were really crap, by the way. We really liked them, but they thought they were really crap. But I did a bit of everything, and like Nick said, I was knocking on all possible doors to get things done. So even though my role was 100% data scientist, I wasn't doing 100% data science.

SPEAKER_01:

But you had to as well, because otherwise it wouldn't get done.

SPEAKER_03:

So yeah, now I have Nick doing the work.

SPEAKER_00:

And so that's what you did for about two years. Yeah.

SPEAKER_03:

Yeah.

SPEAKER_00:

Really focused on smaller, experimental use cases for journalists.

SPEAKER_03:

Yeah, but it was also difficult, because I was mainly doing it on my own, so I didn't have a lot of capacity and couldn't do much. At the end, I felt there were so many use cases and I couldn't advance on them just because we didn't have enough capacity at the time.

SPEAKER_00:

So there was a constraint there, and did you then try to prove to management that it's indeed useful to tackle these use cases, or how did that go?

SPEAKER_03:

I think the GenAI hype helped me a bit there, because, whether by luck or not, many of the newsroom cases were of course influenced by GenAI. So when the GenAI hype came, they also wanted to invest more in GenAI, and those were of course newsroom cases, so it all correlated a bit. We got lucky that they wanted to invest in GenAI, and automatically also in those use cases.

SPEAKER_01:

And I think, like you said, even before the GenAI hype train, management had already decided there was a lot of value in these traditional predictive things; all companies do this, right? But it was hard to prove the true value, because to prove it you need to save time or effort, or make the work more pleasant for the journalists, and that's hard without big funding. So it was a bit of a chicken-or-egg problem, and now we have an egg.

SPEAKER_00:

Because of the big shift in attention towards GenAI, with all companies in the media industry doing it, it's impossible for media companies not to do it. Next to that, I assume the smaller use cases gained attention too, or was there really low adoption?

SPEAKER_01:

No, no, I think they definitely already helped put it on the map, right?

SPEAKER_03:

Yeah, absolutely. Because, like I said, in the beginning when we did summaries, they didn't want them at all. Comparing to now, the quality was also worse than what we have today, but still, they didn't really want it. Then ChatGPT launched, and of course they wrote about ChatGPT, they read about ChatGPT, so they were like, oh, we want this too now. It was like new toys to play with, both for us and for the journalists. And I think the quality we have now is way better.

SPEAKER_01:

Yeah, but we already had that framework from the beginning. We weren't doing that work alone 100% of the time, a lot of other stuff came in between, but we did already have that basic framework. And when the tools got better, it was really easy to show that we could move quickly and be ahead of the curve. That makes the investment a lot easier if you're in management and you see: okay, we have this already, and we have somebody who can implement it; we just need more of it. So how do we do that? There was a framework and it needed to be scaled, mostly in terms of people, and not necessarily in the sense of code, I think.

SPEAKER_00:

Okay, and so knocking on doors resulted in forming a new team?

SPEAKER_03:

Uh yeah, yeah, kind of.

SPEAKER_00:

What is the team called?

SPEAKER_03:

The GenAI team, but they also call us Team Lisa, so I'm not sure if that's a compliment or not, but I'll take it as one.

SPEAKER_01:

I don't know if you're allowed to technically say you're Team Lisa, but I don't know.

SPEAKER_03:

I think I'm just going to take it as a compliment. I don't know.

SPEAKER_00:

Sure, sure. And so, for Team Lisa, how did that team start? How did you form the team, and what were the first steps? Was there a GenAI strategy in place for Mediahuis, or is it something working bottom-up?

SPEAKER_03:

So at the beginning of 2025, there was an additional investment from Mediahuis in GenAI, and it was decided there should be a GenAI team. They created a role, I applied for it, and I got it. Because it's a one-time investment for now, we do have to prove by the end of the year that we've added value. So together with the newsrooms, and with people who work on AI strategy as well, we defined ten use cases that we really want to deliver for all Mediahuis newsrooms by the end of the year. It's quite a challenge: first because it's ten use cases, and also because it's a lot of newsrooms. For example, even if we have a headline generator, we don't generate headlines for one newsroom; we generate them for all newsrooms, and each has its own specific style, so we do it in their brand style. Jules, who is working on it now, even adapted the quote signs to the specific quotation marks each newsroom uses. So even though it's a group solution, there's real tailoring for every brand, every newsroom.
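
To make that concrete: a group-wide tool with per-brand tailoring can be sketched as one shared prompt template filled from a per-newsroom style table. Everything below is a hypothetical illustration; the tone descriptions, quotation marks, and the `build_headline_prompt` helper are assumptions, not Mediahuis's actual configuration.

```python
# Hypothetical per-brand style table for a shared headline tool.
# The tone descriptions and quotation marks are invented examples.
BRAND_STYLES = {
    "De Standaard": {"quote_open": "‘", "quote_close": "’", "tone": "sober and analytical"},
    "Nieuwsblad": {"quote_open": "“", "quote_close": "”", "tone": "accessible and direct"},
}

def build_headline_prompt(brand: str, article_text: str, n: int = 5) -> str:
    """Fill one shared template with a single newsroom's style rules."""
    style = BRAND_STYLES[brand]
    return (
        f"Suggest {n} headlines for the article below.\n"
        f"House style: {style['tone']}.\n"
        f"Wrap any quotation in {style['quote_open']}...{style['quote_close']}.\n\n"
        f"Article:\n{article_text}"
    )

prompt = build_headline_prompt("De Standaard", "Example article body.")
```

The point is that one code path serves every brand, while the tailoring lives in data that each newsroom can review.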

SPEAKER_01:

And the code is only one part of that. You can write as much code as you want, but it needs to be used in the end; the code doesn't matter if it isn't used. So we can make the tools, but if we get to December with ten tools and none of them are being used, then our team was still useless, right? Then we throw all ten tools away and start from scratch, I think. So we really need to focus on adoption and on the connection with the business, which, by the way, is a big reason why I think they chose Lisa: because of that connection with the business. It started from just talking with the journalists: what could we do for you? What could be of use?

SPEAKER_00:

Yeah, and so you have that bigger group, Mediahuis Group, but different brands, different people, different cultures, different tooling and systems, I can imagine. Are you a central team, or do the other brands also have their own centralized or decentralized data and AI teams?

SPEAKER_03:

So we are a central team, and when we make something, we want to make it for everyone. If only one newsroom has a really specific request for a specific tool, we won't build it. So even though, like I said, there's always tailoring for every newsroom, the idea for each of the ten use cases is that it's adopted by all of the newsrooms.

SPEAKER_00:

And do you have one big central team, or do you also have some people decentralized that have a bigger connection with the people within that brand?

SPEAKER_03:

It's difficult, because we're still figuring out how to make that work. We have the central team, and we have the central business partner, as he's called, the bridge between us and the newsrooms. And now we've also created an AI group where every newsroom sends one person to represent his or her newsroom; we share info there, and that person shares it back with their newsroom. But it sometimes remains difficult to get adoption, or communication, through all the newsrooms.

SPEAKER_01:

Yeah, and there's also a big difference in that some newsrooms are not very inclined to believe in or want to use GenAI, and some are really enthusiastic about it.

SPEAKER_03:

It often also really depends on one enthusiast in a specific newsroom. If we have one fan who helps us in co-creation, because we get a lot of feedback on making the tool what it should be, and that one fan then promotes it, that really makes a huge difference.

SPEAKER_00:

So it would be interesting to explore having, for example, AI champions: at least one person per team who could be the champion of AI. Which, informally, is the case now, but the AI champion list is just in Lisa's head.

SPEAKER_01:

Okay, you know which doors to knock on.

SPEAKER_03:

Yeah, from knocking on all of them and getting denied a lot. But I also think that's the journalistic context: they just use their phones. We don't have lists or SharePoint pages or whatever; they hate all of that. So most things just come through my WhatsApp. They text me: oh, I want this, and oh, I want this. So it's difficult to formalize it.

SPEAKER_00:

Uh yeah, so you really have to adapt your style and collaboration to the audience you're working with.

SPEAKER_03:

Yeah, but I like it. I love the journalists; I think they're a good group to start with. Sorry, go on. No, no, they're tough. It's obviously also their job, so they're very critical of everything we do, which I think they should be, and it's super good that they are, but it also makes them a tough audience.

SPEAKER_00:

Um challenge accepted.

SPEAKER_03:

Challenge accepted, because indeed, once they adopt our tools, it's super satisfying, I think.

SPEAKER_01:

One of the best things.

SPEAKER_03:

And also, I don't know if this is good or bad, but when they text me that my tool is broken, it actually makes me happy, because it means they use it. Yeah, indeed.

SPEAKER_01:

As long as they don't text you too often. It's a good thing as long as it's once in a while; if it breaks too often, then okay, I won't be so happy anymore.

SPEAKER_00:

But that's the difference between someone who focuses on the business and someone who's actually solving the technical problems: Nick, it's broken again!

SPEAKER_01:

Help! But for me, isn't it sometimes also difficult? You get fifty or however many informal inputs from all these different people; they're not gathered in any way, it's not structural. For me, it would be too much to keep all of that in my head.

SPEAKER_03:

For me and my head, you mean? That's actually quite okay. I quite like it, and you often quickly spot the similarities. Because you thrive in the chaos? Yeah, yeah.

SPEAKER_00:

I maybe want to put an emphasis on the role of the business partner, because that's really the bridge between your team and all these different editorial teams. What's his or her role, and how do you collaborate?

SPEAKER_03:

The business partner and I, and I think even the whole team, collaborate very closely. I would say we almost work as a tandem. Almost, because of course we have separate roles: I'm more focused on the team and the technical aspect, and he's more focused on the business, but we go hand in hand for sure.

SPEAKER_00:

So what you did before, knocking on all the doors and making sure you have a backlog of use cases, is now something the business partner does?

SPEAKER_03:

We already did that together.

SPEAKER_01:

Yeah, the business partner was there before uh Lisa.

SPEAKER_03:

Yeah, the business partner was there before, and he was the one who dragged me along to gather all the use cases.

SPEAKER_00:

And this person also focuses on adoption of the solutions, then?

SPEAKER_03:

Yeah, absolutely. So identifying needs, and sometimes also managing expectations for us. When he already knows that something is technically difficult or not feasible, he manages expectations and kind of protects us as a team, so we're not expected to solve everything. It goes both ways.

SPEAKER_01:

And he's just a great sounding board now as well. Yeah, very valuable.

SPEAKER_00:

Yeah. And you just mentioned there's an actual backlog of ten use cases, or do you still have to define them?

SPEAKER_03:

No, we have ten of them. I think two are almost done: we have delivery of two of them on the 15th of July, so then we'll have two completely finished.

SPEAKER_00:

Can you talk about these use cases or is it confidential?

SPEAKER_03:

No, I think so. When we talk with other people working in the media industry, most companies are doing similar things. We have generation of headline suggestions, which we put quite some time into to get right. We have generation of summaries and bullet points. Also, maybe important to mention: it's always suggestions. It's always there to help the journalists, not to go automatically to the reader.

SPEAKER_01:

Always human in the loop.

SPEAKER_03:

Always human in the loop, yeah, absolutely. And for example, the headline use case maybe isn't even an efficiency use case. We have our homepage, where of course a lot of time goes into the headlines, but for the rest of our content, a headline is often something that happens really quickly: the piece needs to be published, it needs a headline. So I think the headline use case is about getting the overall quality of our headlines to another level, and also the variability.

SPEAKER_01:

Yeah. Because we don't want a homepage where everything is a question, or where everything is a quote, because then it also gets really static.

SPEAKER_03:

Yeah, absolutely. What uh what other use cases do we have?

SPEAKER_01:

What other use cases can you discuss?

SPEAKER_03:

Yeah, that's right, I'm thinking. It's also sometimes embarrassing, because I don't always remember all ten of them. But we have an internal RAG search as well, for journalists to use as a research tool.

SPEAKER_00:

Can you explain what you mean by RAG?

SPEAKER_03:

Yeah, yeah, maybe I should. Um so it's uh retriever augmented generation.

SPEAKER_01:

Retrieval augmented generation.

SPEAKER_03:

So basically, it runs on our own archive, our own articles. When a journalist searches for something, we first do a search, but then we also get a generated answer based on that search. And they use it not only for searching, but sometimes also to get new angles on stories.

SPEAKER_00:

Ideation, yeah. So it's a huge efficiency gain: instead of having to go through search terms, find the articles, and try to understand what they were about, it now does that automatically and generates a summary of everything you can find in that very rich archive, I can imagine.

SPEAKER_03:

And sometimes it's also a creativity thing. There was a journalist who told me: we've written so much about migration, so I asked it, what did we write about migration, and give me some new angles for a story. It really nicely summarized what was already written, but also suggested angles they maybe hadn't thought of before. So it's both. I also saw social media journalists ask it to write social media posts based on articles they had already written. So it's broader than just a search tool. That's a nice one. What other ones do we have?
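
The search-then-generate pattern Lisa describes can be sketched roughly as follows. This is a toy illustration: the two-article archive, the naive keyword retriever, and the stand-in `generate` function are all assumptions; the real tool runs on the full Mediahuis archive with a proper search index and an actual LLM.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, archive: list[dict], k: int = 3) -> list[dict]:
    """Naive retriever: rank archive articles by word overlap with the query."""
    scored = sorted(
        ((len(tokens(query) & tokens(a["text"])), a) for a in archive),
        key=lambda pair: pair[0],
        reverse=True,
    )
    return [a for score, a in scored[:k] if score > 0]

def answer(query: str, archive: list[dict], generate) -> str:
    """RAG: fetch grounding context first, then ask the model to answer from it."""
    context = "\n\n".join(a["text"] for a in retrieve(query, archive))
    prompt = (
        "Answer using only the archive excerpts below.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)

archive = [
    {"id": 1, "text": "Migration policy debate in parliament continues."},
    {"id": 2, "text": "Local highway renovation enters its second year."},
]
echo = lambda prompt: prompt  # stand-in for a real LLM call
result = answer("What did we write about migration?", archive, echo)
```

The key property is that the generated answer is constrained to retrieved articles, which is what makes it usable on a newsroom's own archive.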

SPEAKER_01:

Um but can we say all 10 of them?

SPEAKER_03:

I think so, yeah. Nick is not so sure.

SPEAKER_01:

I don't know. Well, personally, I think that efficiency-wise, the text-to-video one will really help a lot. Basically, a lot of small local news is recorded by freelance journalists and then has to be cut together into small clips that say something like: still renovating the local highway. And then it's three wide aerial-view shots with some text on top, and that text is maybe spoken as well. It's really repetitive, and there's not that much AI in it: mostly just the text generation and then the text-to-speech, I guess.

SPEAKER_03:

But this will, I think, help a lot. And maybe one important point: we internally call it text-to-video, but we don't generate video, so it's more like editing. Or will we generate video? I don't think we will.

SPEAKER_01:

We won't, I guess.

SPEAKER_03:

It's more about helping with the editing, and like Nick explained, these are often car crashes or fires. So it's really an efficiency gain: the video teams, who are often not many people, can put their work into high-value videos, about the war or other big topics, rather than smaller ones like an accident on the highway.

SPEAKER_01:

And that's why I think it's a really good, representative use case of what I personally think GenAI can help with: repetitive tasks that don't require the most creativity. It can also help with bringing in new ideas, like we said about the RAG earlier. But it will still be human in the loop, and I still think that when a human helps out, the result will always be better than purely letting GenAI run loose on the homepage. That's definitely not the goal, and it won't be for a long time.

SPEAKER_03:

No. And I think it will just give the journalists more time to put into work that really makes our content, our journalism, better, rather than boring, repetitive tasks.

SPEAKER_00:

And so you mentioned having a focus on these two use cases first, the ones that are being shipped right now. Why those two use cases and not some of the others?

SPEAKER_03:

I think that was just the low-hanging fruit, because those were the ones we started with two years ago, and then we bumped into quite some technical issues to really roll them out to all newsrooms.

SPEAKER_01:

And they're mostly prompt engineering, so they're also quite related to each other.

SPEAKER_03:

I think we had done most of the work there, so they were just quick wins to roll out to everyone as quickly as possible.

SPEAKER_00:

You both have experience shipping use cases for traditional AI. You mentioned the propensity models for cross-sell, upsell, and so on. Nick, you also worked on that from a more technical perspective. Now you're working on these two GenAI use cases. What are the main differences, what extra challenges are there, and which challenges don't you have anymore?

SPEAKER_01:

Well, I don't want to go too technical, but from the technical side, we're trying to keep it as close as possible to what we used to do, very much the same technology stack. And the difference now, not for the text-to-video, but for the prompt-engineering use cases, is that instead of calling an expensive model and doing a lot of compute, you're making an API call. Before, resource management was a big part of the work, and now that's basically gone. Except that now it's a black box, and the difficulties there are: how do you test that? How do you keep quality high? How do you know what is good and what is not? Like with the title generation: what's a good title? It's really vague. Nobody can really tell you.

SPEAKER_00:

And do you install feedback loops together with these journalists, or how do you tackle that?

SPEAKER_01:

We definitely do, but it's not like we're A/B testing on 10 million users, right? We're getting feedback on maybe a thousand or two thousand calls a month, because we're not at full adoption yet. So what do you really do with that? Can you really compare three models with a thousand pieces of feedback?

SPEAKER_00:

I doubt it. But you also have feedback loops with the journalists themselves, because they have the experience; they know what is a good title and what is not.

SPEAKER_01:

Yeah, we definitely ask them: here are the titles, and here's the new version. But it's on a one-time basis, asking them to review the new version with some questions. That's something we definitely do, but it's not something the journalists necessarily have a lot of time for. We can't ask them to review 30 outputs every time we make a change to the prompt, which is probably three times a day, because they'd just start ignoring us, and rightfully so; it's not their main job. And for the automated loop you mentioned, we just don't feel we have enough data for it yet. It's so subtle.

SPEAKER_03:

Every time we ask them for feedback, there's always this one kind of exception, yeah.

SPEAKER_01:

Like: but this one is an opinion piece, and this one is like that. And there's also a bias: it's been said that when you use a model a lot, you sort of start liking its personality, and then a new model has a slightly different personality and maybe you're thrown off, even though objectively it might be better. But then, what is "objectively"? So we really see a big need for more tests in the traditional sense, so that we have some kind of metric, some kind of groundedness, some kind of guardrails, pre- and post-processing: something we actually still have some control over. That's a big part of it. Even though it sounds like it goes against Team GenAI, I think it's a big part of Team GenAI.
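
One concrete example of the kind of deterministic post-processing guardrail Nick is describing: a groundedness check that flags any number in a generated draft that never appears in the source article. The check below is a minimal sketch under assumptions; it is not Mediahuis's actual test suite.

```python
import re

# Matches integers and simple decimals like "62" or "3,5".
NUMBER = re.compile(r"\d+(?:[.,]\d+)?")

def ungrounded_numbers(source: str, generated: str) -> set[str]:
    """Numbers the model produced that the source article never mentions."""
    return set(NUMBER.findall(generated)) - set(NUMBER.findall(source))

def passes_guardrail(source: str, generated: str) -> bool:
    """Reject any draft that invents a figure: cheap, deterministic, testable."""
    return not ungrounded_numbers(source, generated)

# A draft that invents "75" where the source says "62" is rejected.
ok = passes_guardrail("Turnout was 62 percent.", "Turnout was 62 percent, officials said.")
bad = passes_guardrail("Turnout was 62 percent.", "Turnout hit 75 percent.")
```

Unlike asking journalists to re-rate outputs after every prompt change, a check like this runs on every call and gives a metric the team controls.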

SPEAKER_03:

I know what you're gonna say and I agree.

SPEAKER_01:

What am I gonna say? Because I don't know.

SPEAKER_03:

That some traditional coding and machine learning is still very valuable for some tasks.

SPEAKER_01:

I mean, personally, I think that everything we can do without GenAI, we should do without GenAI. Which is weird to say, being in the GenAI team, but it will always be a lot easier than making an API call to a black box, which is what an LLM will always be.

SPEAKER_03:

Yeah. And maybe, for me, what has changed the most compared with traditional machine learning is that people are now able to experiment or build proofs of concept in a few hours. They build their own ChatGPTs, their own experiments, and they're like, oh, look what I've done in two hours, now can you roll this out company-wide for I don't know how many users? Higher expectations, yeah, because it's so easy to experiment with. But the gap between an experiment and rolling it out in a scalable way for everyone is much bigger than before, when those models were quite complex and not everyone could build a machine learning model.

SPEAKER_01:

We even underestimated it ourselves recently. This one is live on the websites of Nieuwsblad and Het Belang van Limburg: we made a customer-facing chatbot for questions about how to fill in your taxes. We made it ourselves in about half a day or a day, and we promoted it to the newsroom. They were so enthusiastic about it; we were happy, they were happy, we were all on this pink cloud. And then it took, I think, another five weeks to get it through all the necessary productionizing and legal steps, because we need to make sure the bot doesn't say: hey, you can get free money by clicking on this link. And we were the ones doing all of that, and we work with this day to day, and we still underestimated how much time it would take.

SPEAKER_03:

That's where at Mediahuis I've also seen a change in our management: at first they expected us to roll things out fast, and now they also see that doing it in a scalable way takes time. I'm happy with that.

SPEAKER_00:

Yeah, you mentioned the constraints, the higher expectations. But on the other hand, thanks to Gen AI there is at least a conversation, right? Because everyone is using these tools, the doors might open more easily than before. Back then, with traditional AI, people didn't really know what it was about or how it worked, so you had to put a lot of effort into AI literacy. I still think that's true today, but with everyone playing around with these tools, they see the value from their own perspective and are more open to collaborating with your team.

SPEAKER_03:

Yeah, and sometimes we also need to tell them: for this problem, Gen AI is not the answer, let's switch back. That's really the literacy part.

SPEAKER_00:

What's the difference and when to use what?

SPEAKER_03:

And now I get so many emails like: I have this issue, can you solve it with AI? Well, we can just do this rule-based, you know, it's very easy. And they're like, oh wow.

SPEAKER_00:

Is that also something you're focused on — AI literacy, Gen AI literacy?

SPEAKER_03:

I do. It's not officially in my role, but it's something I really enjoy doing. Since the tools mainly focus on the newsrooms, I also mainly focus on AI literacy in the newsrooms, together with a business partner. We did a lot of workshops. And the approach we took is that we don't want to say, oh, these Gen AI tools are amazing, adopt them. It's really: this is what they can do, this is what they can't do, this is what you can use them for, this is why you shouldn't use them for certain things. I think people really appreciated the honesty.

SPEAKER_00:

I'm very curious: is there a Gen AI policy in place at Mediahuis?

SPEAKER_03:

We do, but it's still very limited; they're still working on it. We do have the aspects Nick mentioned, like human in the loop.

SPEAKER_01:

Don't send user data to America.

SPEAKER_03:

Don't send user data — but we already had that before Gen AI, too. It's tempting, though. So yeah: don't put in sensitive data, keep a human in the loop.

SPEAKER_00:

Are there any guidelines for journalists on how they should use AI?

SPEAKER_03:

That's decided by each newsroom individually. The editor-in-chief organizes, together with the newsroom, what they can and can't do. When I look across newsrooms, most things like helping with translation, idea generation, things like that, are accepted. Generating new content, like generating images, is a no-go. So it's really an assistant, and the journalist is always, in the end, responsible for what's put online.

SPEAKER_00:

And then for the broader workforce, do you have a recommended tool to work with?

SPEAKER_03:

Like many companies that are on the Microsoft stack, we have Copilot and Copilot Studio.

SPEAKER_01:

Yeah, and Copilot 365.

SPEAKER_03:

Yeah, we're testing that one.

SPEAKER_01:

I hate that Microsoft always calls everything the same.

SPEAKER_03:

Yeah, because they said, no, we changed the name. So indeed, 365 Copilot. But I think we're still in the test phase, so we have about 250 people in the company trying that one out, and then the normal Copilot that we have for everyone. But I also see many colleagues still going to ChatGPT or other tools because they prefer them. And they can, as long as they don't put in any sensitive data.

SPEAKER_00:

So that's the policy for today, then. Interesting. Nick, you just touched on the fact that there are new models every week, so the Gen AI landscape is moving rapidly. How do you handle that?

SPEAKER_01:

Part of it I already talked about: the models keep changing, so we need some way to make things measurable, so that when a model changes, we're not just rewriting the prompts for another three months and testing manually. That's not what you would do in traditional software engineering, and even though it's a lot more difficult here, it's not what we should do now either. So that's one part; I won't go much more in depth there. But a big strength — and it's also how the Gen AI team started — is that even though some tools are maybe not ready yet, we experiment and test with them a lot. For example, text-to-speech in Dutch is okay, but not as good as in English. But I think it's fair to assume that at some point somebody will make a good Dutch text-to-speech. That could be you, no?
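The "measurable" testing Nick describes can start as a small suite of grounded, model-agnostic assertions that reruns whenever a model or prompt changes. A minimal sketch — the `summarize` function here is a deterministic stand-in of our own invention for the real LLM call, so the checks are runnable:

```python
import re

def summarize(article: str) -> str:
    """Stand-in for the real LLM call; in production this would hit the model API."""
    first = article.split(".")[0]
    return first.strip() + "."

def run_grounded_checks(article: str, summary: str) -> list[str]:
    """Cheap assertions you can rerun after every model or prompt swap."""
    failures = []
    if len(summary) > 280:
        failures.append("summary too long")
    if not summary.rstrip().endswith((".", "!", "?")):
        failures.append("summary is not a full sentence")
    # Grounding check: every number in the summary must appear in the source.
    for num in re.findall(r"\d+", summary):
        if num not in article:
            failures.append(f"hallucinated number: {num}")
    return failures

article = "Mediahuis shipped 10 GenAI use cases this year. Adoption varied by brand."
print(run_grounded_checks(article, summarize(article)))  # [] means the model still passes
```

None of these checks judge style, but they catch regressions (truncation, hallucinated figures) automatically instead of via three months of manual prompt rework.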

SPEAKER_03:

Do we really want Dutch with a Kempens accent?

SPEAKER_01:

For example, ElevenLabs is really good, but it's still not at the level of the English models. Even so, every experiment we do there, all the knowledge we gain, will be useful at some point. For me that's a big part of it. I don't know if there's something you want to add.

SPEAKER_03:

Yeah — besides proving that we deliver value by the end of the year, we should also prove that we have learned. So even for use cases that weren't successful, if we learned something, if we know that next time we should do it differently, that's already a step forward. And I think management wants us to work that way too, which gives us a bit of freedom.

SPEAKER_01:

I think everybody is going to make mistakes in the Gen AI landscape, because it's just going to change so much. Maybe a bad example, but say we spent the next six months building our own TTS. I can guarantee you it would be six months wasted, because somebody else will make one. If NotebookLM's Dutch version becomes public — they have it, they just don't want to commercialize it yet — it's already better than anything we could probably ever make.

SPEAKER_00:

Lisa, tell us more about your team. What does it look like? Which skills do you think are necessary in a Gen AI team?

SPEAKER_03:

What I really liked when we created the team is that we have different profiles: data scientists, data engineers, developers. That really helps us move quickly. Like Nick said before, when I was in the data science team and Nick was in the data engineering team, I always had to go: Nick, please, can you help me with this? And then officially Nick had to say no, but he helped me anyway. Now, having all those different people together, we can experiment and move way faster. I think that was a huge improvement, and really necessary for what we want to do today.

SPEAKER_00:

And so instead of having these specialized silos, you now have a multidisciplinary team working on use cases together.

SPEAKER_03:

Yes, I think that was absolutely necessary.

SPEAKER_01:

Yeah. With the summaries in the beginning, you would maybe have some Python code that could already do everything, but it wasn't reachable by anybody else. Then some data engineers helped out and it became an API — but a journalist still doesn't want to run a curl command or even go to a Swagger UI, absolutely not. So now, with front-end skills, you can put a UI on it. Even if it's maybe not the UI forever, we can test whether it gets used. And if it gets used, we can sort out the rest of the integration with Mediahuis, with the people who can put a lot more time into it and maintain it for longer. But we need to find out quickly whether things get used.

SPEAKER_03:

Yeah, and that's indeed what we do. We sometimes don't focus on making things robust the first time, but on being able to experiment and evaluate fast, even if that means cutting some corners. Then, if it's successful, we make it better — but we have to make sure we actually do make it better.

SPEAKER_00:

Yeah, always a balancing act. How do you do that? You want room for experimentation — Nick also mentioned the things coming in the next months, and you want to stay at the forefront — but you also need to deliver real value, because there's pressure to have ten use cases by the end of the year. How do you balance that?

SPEAKER_03:

It's definitely a balancing act. Together with the team, we often look at the roadmap. Among the ten use cases there are ones with proven value that we put into production, and there are others where we have no idea — let's just try and see, and if it doesn't work out, that's fine too. But we really discuss that together.

SPEAKER_01:

Yeah: fail fast, iterate quickly. But at the same time, a lot of the use cases are very similar, so we can find a template. We know that if we invest in it properly the first time, then even if the two or three use cases built on it all turn out badly, there will still be more that can reuse that template.

SPEAKER_03:

Maybe I'm a bit too pragmatic sometimes. I always tell them: I don't care, just make it happen.

SPEAKER_01:

You shouldn't say that.

SPEAKER_03:

Oh yeah.

SPEAKER_01:

No, no — but yeah, it's a balancing act, and it comes from both sides, right? Sometimes you're the person who wants to go quicker, sometimes the person who wants to slow down.

SPEAKER_03:

Sometimes I'm also: guys, this is in production!

SPEAKER_01:

We need to slow down. So it's two sides of the same coin. But use case by use case, it's really difficult.

SPEAKER_00:

Looking back at the first few months with your new team, besides onboarding Nick, of course. What are some moments you're proud of?

SPEAKER_01:

A small correction there, Ben: I onboarded Lisa.

SPEAKER_03:

Okay, but apart from that — that's very true, because I remember being in my first month at Mediahuis with some data issues, standing next to Nick's desk in the evening, going: please help me, I don't get what this is about.

SPEAKER_01:

And now we still don't know — but now we both don't know, so it's fine.

SPEAKER_03:

For now, what I'm most proud of is still the team. Nick is of course part of it, but I mean the whole team. When I started as Gen AI solution owner I was still on my own, and even after a week people were asking me, oh, have you moved forward now? And I was like, guys, I'm still on my own — it's not because you gave me the title or the role that things will suddenly move faster. So I really had to put a team together quite quickly. And a month or so ago we had our team building. After dinner I was sitting at the table, watching everyone having drinks, and I thought: okay, the team is good, we're there now. We can go.

SPEAKER_01:

As you said, a good example — okay, some context: we did an escape room with eight people, and normally you'd expect some people to stay in the background, but everybody really went for it. It's been a bit the same these last two months: everybody is really pushing for it, there's a lot of enthusiasm, which is really cool to see. Normally you have some people who, I don't know how to put it, hold back a bit — but here everybody's going for it, everybody's pushing. It's really nice to see. And it's because you have that shared goal and a new ambition.

SPEAKER_00:

Sometimes I think so.

SPEAKER_01:

I think everybody's excited about it. Yeah.

SPEAKER_00:

I'm very much looking forward to talking to both of you in like six months from now. Yeah, me too. Maybe one year.

SPEAKER_03:

I really don't know where we'll be. I started just about six months ago, and even then I thought, I want to see where I am by the summer, and I hope I'll have a team by then. Now I have the team, and I'm like: okay, by the end of the year, will we be able to deliver that value? So even I really don't know.

SPEAKER_00:

But once I go to Nieuwsblad and can listen to an article in a Kempens accent, I'll know. You will think of us.

SPEAKER_01:

I will know that everything worked out well. But actually, side note: if you want to make a voice, you really need to give away a lot of rights to your voice, because then they can use it to do a lot of things. So I don't know if you want to. We're still looking for somebody.

SPEAKER_03:

You could do it for Belang.

SPEAKER_01:

I'll consider it. Big legal document; we'll have to talk about the details. But sorry, to come back to talking to us in six months from now: for the rest of the year, for me it's pretty clear. I'm really excited to talk to myself three months into next year, though, because what we're doing now with the ten use cases is really cool, but it's also not maintainable to keep going at that speed.

SPEAKER_00:

Do you have an agentic AI use case?

SPEAKER_03:

It depends on the definition.

SPEAKER_01:

Yeah, it depends on the definition. You give us a definition.

SPEAKER_03:

Give us a definition and I will tell you. Now we put you on the spot.

SPEAKER_01:

Turn around.

SPEAKER_00:

Let's say it's Gen AI, but working autonomously with tools.

SPEAKER_01:

Yeah, and is it an AI that's coordinating, choosing what else to call? Is it a cyclic loop or not?

SPEAKER_00:

I think it can be sequential too, if that's your question.

SPEAKER_01:

Yeah — if it doesn't need to call itself multiple times, then technically speaking, we have an agentic setup.

SPEAKER_03:

Yeah, but also, I feel like whenever we don't have the answer now, it's: oh, the agents will solve it. And I'm like, okay, one step at a time, let's see. Let's use agents when we think agents will have value, and not just use agents to use agents.

SPEAKER_00:

And I understand you're cautious, because "agents" carries this feeling that it's about replacing humans, and that's really not something you're looking into — it's focused on autonomous decision making. Of course there would still be a human in the loop, but on the other hand — yes, go ahead.

SPEAKER_03:

No — the journalists. Because we did have the idea of agents: we have one use case called news discovery that looks into different potential news sources — other news media feeds, but also social media feeds, podcasts — to detect potentially newsworthy events. Even there, we said: oh, we can make it easier if we put an agent on top of all the sources that just suggests what could be interesting. And the journalists said, no, no, I really want to see the whole list of potentially interesting items. They're very used to skimming through a lot of content in their job; it's really part of what they do. Sometimes they detect a little thing that's off, or that matters only because of the context, and that's the thing the agents won't be able to do. So I also get it.

SPEAKER_00:

Yeah, and because they've had those moments in the past, it's very difficult to let go of that habit.

SPEAKER_03:

Yeah, and they're good at it. So yeah, yeah.

SPEAKER_01:

From my side, it's more a technical thing.

SPEAKER_03:

Yeah, you do the technical thing, yeah.

SPEAKER_01:

I'm more like: it's not because an agent can do it that we should do it that way, right? Okay, you can make an agent with a tool that can do addition, but can I not just do the addition beforehand? Then at least I have some semblance of control. There will definitely be use cases where I don't know the path I'm going to take. But if at any point I do know the path I want to take, why wouldn't I just do it in Python and only call the LLM for the part I cannot write in if-else? Anything I can write in if-else, I will write in if-else.
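Nick's rules-first principle in a minimal sketch: deterministic if-else handles everything it can express, and only the genuinely unstructured remainder would go to a model. The routing categories are illustrative, and `classify_with_llm` is our own placeholder, stubbed here so the sketch runs:

```python
def route_reader_email(subject: str, body: str) -> str:
    """Rules first: handle every case we can express deterministically."""
    s = subject.lower()
    if "unsubscribe" in s or "opzeggen" in s:
        return "subscriptions"
    if "factuur" in s or "invoice" in s:
        return "billing"
    if "paywall" in s or "login" in s:
        return "tech-support"
    # Only the genuinely ambiguous remainder goes to the black box.
    return classify_with_llm(body)

def classify_with_llm(body: str) -> str:
    """Placeholder for the one LLM call; stubbed so the example is runnable."""
    return "editorial"

print(route_reader_email("Invoice question", "..."))  # billing
```

The if-else branches are testable, debuggable, and free, which is the "semblance of control" Nick is after; the LLM call is reserved for the cases the rules cannot cover.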

SPEAKER_00:

So don't add complexity when it's not needed.

SPEAKER_01:

Yeah, don't add complexity just to make a cool PowerPoint.

SPEAKER_03:

But let me remind you that you said I wrote too many if-elses in my code before.

SPEAKER_01:

Yeah, well — because I didn't like your code.

SPEAKER_03:

He said too many if-else.

SPEAKER_01:

True. I will not deny that.

SPEAKER_00:

So I think a lot of lessons learned. Um, thank you both for being here tonight.

SPEAKER_03:

Thank you.

SPEAKER_00:

Thank you, Ben. Bye bye. You have taste in a way that's meaningful to somebody.

SPEAKER_01:

Hello. I'm Bill Gates.

SPEAKER_02:

I would I would recommend uh typescript yeah, it writes.