DataTopics Unplugged: All Things Data, AI & Tech

#87 How to Successfully Integrate AI into Your Business, with Tim Leers (Global Generative & Agentic AI Lead)

DataTopics


What happens when AI hype collides with enterprise reality? Tim Leers, Global Generative & Agentic AI Lead at Dataroots, pulls back the curtain on what's actually working—and what's not—in enterprise AI deployment today.

We begin by examining why companies like Klarna publicly announced replacing customer service teams with AI, only to quietly backtrack months later when quality suffered. This pattern of inflated expectations followed by reality checks has become common, creating what Tim calls "AI theater" – impressive demos with minimal business impact.

The conversation tackles the often misunderstood concept of "agentic AI." Rather than viewing it as a specific technology, Tim frames agency as fundamentally about delegated authority – the ability to trust AI systems with meaningful responsibilities. However, this delegation requires contextual intelligence—providing the right data at the right time—which most organizations struggle to implement effectively.

"Models are commodities, data is your moat," Tim explains, arguing that proprietary business context will remain the key differentiator even as AI models continue advancing. This perspective challenges the conventional wisdom that focuses primarily on model capabilities rather than data infrastructure.

Perhaps most valuably, Tim outlines three pillars for successful enterprise AI: contextual intelligence, continuous improvement (designing systems that evolve with changing business contexts), and human-AI collaboration. This framework shifts focus from technology deployment to sustainable business value creation.

The discussion concludes with eight practical lessons for organizations implementing generative AI, from avoiding the temptation to build proprietary models to recognizing that teaching employees to prompt effectively isn't sufficient for enterprise-wide adoption. Each lesson reinforces a central theme: successful AI implementation requires designing for change rather than building rigid systems that quickly become obsolete.

Whether you're a technical leader evaluating vendor claims or a business executive trying to separate AI reality from fantasy, this episode provides the practical guidance needed to move beyond the hype cycle toward meaningful implementation.

Speaker 1:

Hello, welcome to Data Topics, your casual corner of the web where we discuss all things Gen AI. I'm sitting here today with a very special guest, Tim. Hey, Tim. Hi, Murilo. And we also have two not-so-special guests, maybe: Kim and Mel. There are two dogs in the studio today, so if you hear some noises, that's what it is. How are you doing, Tim?

Speaker 2:

I'm good, very good. Very nice to have these special guests here joining us.

Speaker 1:

Yes, hopefully they'll behave. How are you doing, Alex? Give me a thumbs up. She's good. Tim, for people that haven't heard you yet, the first time they get to know you, how would you introduce yourself?

Speaker 2:

Sure. So I'm Tim. I've been working at Dataroots for over five years. I joined as a research psychologist at the time, I studied brains before, so I came in more as a typical data scientist: good at modeling, maybe not so good on the engineering side. Cloud was something I had never worked with.

Speaker 2:

You know, the typical stuff coming out of school, right? And so over the last five years I worked at Dataroots, taking ownership over the entire end-to-end machine learning lifecycle. Then ChatGPT hit and I focused on generative AI full-time, and that's now been over two years, so I'm also helping to make sure that we can bring that expertise to customers.

Speaker 2:

So there's a lot of hype, as we'll talk about today, in generative AI, and a large part of my role today is helping our partners understand what's real, what's not, and how to actually deliver something meaningful with this technology.

Speaker 1:

Cool, very cool. So, Gen AI. Alex, have you ever heard of Gen AI? It's hard to stay away from it. So there is a lot to uncover, and one of the things you do as part of your role at Dataroots: you also gave a keynote at an event we did recently, right?

Speaker 2:

How was it, by the way?

Speaker 1:

I think it was good, people were positive. Yeah, Alex also enjoyed it. Very nice. So what did you talk about, maybe?

Speaker 2:

Well, you won't guess it. I had the attention of quite a lot of nice technical leaders from companies, mostly in Belgium, I think, and so I just wanted to give them something useful. They're sacrificing a night to be there to talk to us, so I wanted to share what our lessons learned are and what's the most important thing to focus on. That's what I tried to bring: how can you make enterprise AI work?

Speaker 1:

Okay, and how can you make enterprise AI work? Well, that's what we're going to talk about.

Speaker 2:

Let's stretch it out a bit. Yeah, let's get started.

Speaker 1:

Let's get started. So, yeah, Gen AI is not something that started with ChatGPT, right?

Speaker 2:

No, it was a thing way before that. I think the first time I worked with generative AI was actually at Dataroots, where we participated in the first AI Song Contest and used, I think, GPT-2 at the time, the precursor to what became ChatGPT, to generate Eurovision Song Contest lyrics, and then used another generative model to generate audio from that. So it was like a two-step thing.

Speaker 2:

More steps, actually. Many more steps, because there were a lot of things to, let's say, hack into something that resembled music. Maybe eventually, depends on who you ask. We've come a long way since then.

Speaker 1:

What is the state of AI today, as of Wednesday, July 9th, 2025?

Speaker 2:

So I think, to sum it up: a lot has changed in terms of what we can do with the technology but, at the same time, not much has changed in industry. People are still figuring out how to make it work in practice. And so, if you look at where we're at, there's a lot of hype. There are tech executives talking about AGI being around the corner every other week, you know, making sure that hype flame for AI keeps going. We have top executives at companies announcing that they're laying off a lot of people to replace them with AI, supposedly, and sometimes also backtracking on that, for example in the case of Klarna. I'm not sure if you've ever heard of this.

Speaker 1:

No. So maybe, for the people... well, I think in Belgium it's very big, but if someone's listening from outside Belgium: what is Klarna?

Speaker 2:

I don't use it myself, but I think it's a buy now, pay later kind of solution, like micro-loans. Is that correct?

Speaker 1:

I've used it for online payments, so I'm not sure. They probably have more stuff than this, but it's always related to paying, buying stuff online, safe, secure, all these things.

Speaker 2:

Yeah, okay. So I didn't know they also offer a normal payment service.

Speaker 1:

Well, that's what I used. Maybe it's a very niche use case, but I don't do a lot of the buy now, pay later, fair enough.

Speaker 2:

So what happened with Klarna? Very early on in this entire roller coaster of generative AI becoming a thing, or becoming a mainstream thing, more importantly, Klarna was one of the first to come out and say: hey look, we've done it. We've managed to bring generative AI into the enterprise. We've replaced X percent of our customer service team, and we now have an AI that actually takes all of those calls, handles customer inquiries, et cetera. Then, I think about half a year ago, the CTO or CEO of Klarna came out on LinkedIn and said: actually, we were kind of wrong. I'm paraphrasing here, but essentially service quality dropped, customers noticed, and it had an impact.

Speaker 1:

But then, was Klarna actually using agentic AI to handle phone calls, everything?

Speaker 2:

I'm not sure if it was phone calls. I just know that it was, for sure, the customer chatbots.

Speaker 1:

Customer service yeah, so.

Speaker 2:

I think it's probably a wide range of customer service automation that they did, including talking to a customer rep through, I don't know, their website or their app, being replaced in part by AI. And at the time when they released this, it definitely increased their valuation. Their shareholders were very happy, they were all rejoicing.

Speaker 2:

Yes, we have agentic AI! And the guy quit his job, that's it, cash in. Look, I'm not here to, let's say, focus in on one particular company and what they did or didn't do. I also don't think they necessarily shared all of the nitty-gritty details, and maybe for them it was a good learning journey. But I think the point is that there is a lot of hype, both from tech executives leading large companies, as well as from enterprise companies in general that are trying to inflate their own career or their valuation. And then, finally, we also have a lot of hype from vendors. Vendors are coming to lots of companies, some of the partners we work with as well, and saying: hey look, stop what you're doing, stop the presses. We have agentic AI now. And then, a few months later, it's: we have agentic AI and we have MCP, which is the USB-C for agents.

Speaker 2:

Then we have... I don't know, you get the point. You have to constantly keep this hype flame going, because the truth is, a lot of companies haven't actually figured out how to make this create something valuable, and a lot of hyperscalers haven't figured out how to sell it aside from in their cloud services package. One interesting example I saw recently, which made me chuckle: a big consulting company announced that they have 50 agents live, working together with their consultants.

Speaker 2:

And it really made me think: what does that mean? What impact is there? It's like saying, back in the two thousands: we have 50 computers, guys. We're trying to capture metrics rather than focus on what the real business value is. And so that's sort of where we're at. That's also something that we see when we're just starting to talk with some of our prospective partners: they've already worked with vendors, with other companies, and they've been burnt.

Speaker 1:

Yeah, very much. I think also a bit of a challenge, quote unquote, is the word "agents". It's thrown everywhere.

Speaker 1:

Before it was Gen AI, and then it was AI, and then agents, and people use "agents" for everything. For me as a developer, I think of something very specific when I say agents, right? But then a lot of the time you see "agentic workflows", and it's like: okay, what does that mean? You have workflows, you have... And for me, again, as a developer, maybe for a business person it doesn't matter, it's like AI that does magic, right? But when people say "we deployed 50 agents working next to our consultants", are they using ChatGPT? Is that what it is? Is that enough? A good question, you know. So I don't know: is there a good definition of what an agent is?

Speaker 1:

Well, maybe I can take a first crack and say that, for me, an agent, from a developer's point of view, is something that has tool calling, something that can actually decide, quote unquote, to call functions, to call things, right? For people that are not as familiar: imagine ChatGPT, and you have a question about the conversion rate from euros to reais, which is the currency in Brazil. This changes over time; the model doesn't know it. So I can give it a tool that actually gets the conversion rate and converts it for you. Then, every time I ask how much 437 euros is in reais, the model will say: ah, you're trying to make a conversion, we have this nice tool here, let's use it to get the result.
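The tool-calling pattern Murilo describes can be sketched in a few lines of Python. This is a toy illustration only: the "decision" step is a keyword check standing in for an LLM, and the exchange-rate function and its value are made up, not a real FX API.

```python
# Minimal sketch of tool calling: the model (faked here with a keyword
# check) decides whether the question needs a tool, then invokes it.

def get_eur_to_brl_rate() -> float:
    """Tool: fetch the current EUR -> BRL rate (hardcoded for the sketch)."""
    return 6.10  # a real agent would call an FX API here

TOOLS = {"get_eur_to_brl_rate": get_eur_to_brl_rate}

def agent(question: str) -> str:
    # Stand-in for the LLM's decision step: it "decides" a tool is needed
    # because the question mentions a euro-to-reais conversion.
    if "euros" in question and "reais" in question:
        amount = float(next(w for w in question.split() if w.replace(".", "").isdigit()))
        rate = TOOLS["get_eur_to_brl_rate"]()  # the tool call
        return f"{amount} euros is about {amount * rate:.2f} reais"
    return "I can answer that without a tool."

print(agent("how much is 437 euros in reais"))
```

In a real system the routing is done by the model itself: the tool's name and signature are sent along with the prompt, and the model replies with a structured request to call it.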

Speaker 1:

For me, that's what I'm thinking of primarily when people say agents. What do you think? Do you agree with that developer point of view, first of all? And from a business point of view, how would you encapsulate agents? Or, if someone says "I want to use agents for X", is it clear to you what they're talking about?

Speaker 2:

Never. I think function calling is just one part of it. It's an ingredient in an agentic recipe, let's call it that. You could throw it in the mix and it would maybe make it more likely, this analogy is failing as I go, that you'd have a successful agent at the end of the baking process, if you include it as an ingredient. But it's maybe not sufficient, or even required, to have something we'd call an agent.

Speaker 2:

And actually, what people perceive as agents, or call agents, will indeed differ depending on who you're talking to, not just within a role, but across companies. We've talked with partners that have engineering teams that go all in on agentic workflows, treating AI as a sort of glorified HR exercise where you're building virtual employees. They have their own organizational structure, reporting pipelines, probably evaluation cycles and so forth. Actually, yes, you also get Game of Life kind of vibes, where you kind of cull the less successful agents. Oh really? Yeah, it's also a bit... yeah.

Speaker 1:

Thank you. That's very hyper-capitalistic. Yeah, but I think most of them are kidding, you know. There are people doing that as well.

Speaker 2:

But yeah, to take a bit of a cop-out argument here, I think agents are about agency: being able to give some task away and for something else to take responsibility over it, some type of ownership. That's, I think, the fundamental part of it. And then, from a business perspective, it's more of an ambition. It's not about the technology; it's about the ambition to be able to automate things, to essentially forget about them, to make something else do it for you to some extent. But even there, as you can hear, the definition is still a bit fluffy. That was my dog, by the way.

Speaker 1:

When you say responsibility, I also think of roles. A lot of times you say "roles and responsibilities", so the two things go together. And actually, that's one thing I also notice a lot with agentic. I think as a developer, let's say, developer these days, I'm not sure if I'm still a developer. Anyways, imagine I'm a developer.

Speaker 1:

When I'm thinking of projects, I'm thinking of how to do stuff, the workflows, right? And I think a lot of times, when you're talking about agentic projects, it's really more about roles and responsibilities.

Speaker 1:

How can I say it? Imagine we have this podcast and we want to publish it on our social media, right? The developer will come and say: ah, maybe we need a workflow where, every time something gets published, we create some text and publish it in these channels. But if you think more agentically, you think more of a marketing manager, someone that has many different tasks they have to prioritize and can do in different ways, which is a different way of thinking about these problems. Now, another thing that I heard about agentic workflows, and I wanted to hear your opinion on this, is that some people are opinionated that you should never start with agentic; you should always start with more of the workflows, what I mentioned earlier: this happens, then this, then that. Is this a feeling you share?

Speaker 2:

I'm not against agentic workflows. I am against not defining what we're trying to solve or how we're trying to do that. So I think what is a bit of a problem is that abstractions, things that have a certain responsibility or scope of behavior they're supposed to support, are sort of deteriorating with the agentic concept. Rather than taking the time to define, okay, this marketing agent does X, Y and Z, this is the behavior it supports, people just tend to slap a very wide range of behaviors onto it, expecting it to support that, just by, for example, coupling tools to it using function calling. Which means that in practice you get really nice tech demos, but that's sort of where it usually ends.
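The "workflow first" alternative Murilo raised above, an explicit pipeline with a defined scope rather than an open-ended agent, can be sketched like this. The podcast-publishing steps and channel names are hypothetical placeholders, not an actual DataTopics pipeline.

```python
# Sketch of an explicit, deterministic workflow: each step and its
# order are defined up front, as opposed to delegating the whole task
# to an open-ended agent with a wide, unspecified scope of behavior.

def generate_summary(episode_title: str) -> str:
    # In practice this step might call an LLM; hardcoded for the sketch.
    return f"New episode out: {episode_title}!"

def publish(text: str, channel: str) -> str:
    # Stand-in for a real posting API call.
    return f"posted to {channel}: {text}"

def publish_episode_workflow(episode_title: str) -> list[str]:
    """Fixed workflow: summary first, then fan out to known channels."""
    text = generate_summary(episode_title)
    return [publish(text, ch) for ch in ("linkedin", "x", "newsletter")]

print(publish_episode_workflow("How to Successfully Integrate AI"))
```

The point of the contrast: here the supported behavior is exactly what the code spells out, so it is easy to test and to reason about, at the cost of flexibility.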

Speaker 2:

I'm not against the concept of agentic workflows. I'm sure there are places for them, but when it comes to enterprise deployment, it's very difficult to make it work consistently. That's sort of the main challenge there. And, I'm looking for it right now, but I think Latent Space organizes this AI engineering conference, and they also had a pretty interesting way to define what agentic means for them. Maybe it's actually relevant to just quickly show that.

Speaker 1:

I believe it was Latent Space. It's a conference, you said, or what is it? Yes, there we go, we're getting there. Okay, so Latent Space.

Speaker 2:

By the way, pretty good podcast as well.

Speaker 1:

Very much the second best data podcast in the world. Exactly, yes, exactly. I'm not going to comment further on that.

Speaker 2:

Um, so if you scroll down, there's a nice little image that encapsulates their view of defining agents. So if you go up again, you just scroll past it. This one, no, the one above that, yes, exactly so people listening?

Speaker 1:

For the people listening, can you describe this image?

Speaker 2:

So, essentially, these are what I believe are the ingredients that you would typically see in something people call agentic.

Speaker 2:

But again, you don't need everything for people to have an agentic experience per se. Typically, agentic means you have some type of intent, something that works towards a goal. Delegated authority, which for me is the most important element: actually being able to say, this thing is going to do that thing for me, and I can trust it to do that. Memory: I would argue that you don't always need memory, but being agentic typically means having some type of state, some ability to interact more long-term, so one of the ingredients is typically going to be memory. Control: being able to decide what to do next, which is also related to planning. Planning meaning: I need to do step one, step two, step three, and to do that I need to control the flow of the application, for example interacting with a search engine, interacting with a third-party API, et cetera. And together, this is what they would consider agentic.
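The ingredients Tim lists (intent, delegated authority, memory, planning, control flow) can be wired together in a minimal sketch. This is illustrative only: the planner is a hardcoded stand-in for an LLM, and the two tools are invented placeholders, not part of any real agent framework.

```python
# Minimal sketch of the agentic "ingredients": a goal (intent), a plan,
# tools the agent is trusted to invoke (delegated authority), memory of
# observations, and a loop that controls what runs next.

def search_web(query: str) -> str:
    # Stand-in for a real search engine call.
    return f"results for '{query}'"

def call_api(endpoint: str) -> str:
    # Stand-in for a third-party API call.
    return f"response from {endpoint}"

def run_agent(goal: str) -> list[str]:
    memory: list[str] = []                      # memory: state across steps
    # Planning: a real agent would ask an LLM for this plan; hardcoded here.
    plan = [("search_web", goal), ("call_api", "summarizer")]
    tools = {"search_web": search_web, "call_api": call_api}
    for tool_name, arg in plan:                 # control: deciding what runs next
        observation = tools[tool_name](arg)     # delegated action via a tool
        memory.append(observation)              # remember what happened
    return memory

print(run_agent("state of enterprise AI"))
```

As Tim notes, not every ingredient is required; dropping the `memory` list or fixing the plan to a single step still yields something people would call agentic in practice.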

Speaker 1:

I see. And you said these are pieces of the puzzle; they're not a checklist where you need all of these things, but typically, when you think of agents, you think of at least a subset of these. Yep, yes.

Speaker 1:

Again, it's not about one problem or set of problems, because there are many ways you can tackle the same problems. I think the delegated authority one is the one that strikes me the most: the one where you say, I'll trust you to do X and Y, go and do it, I don't care how you do it, just get it done, right? Exactly. Very cool, very interesting. Anything else in this article that you want to mention before I take this down?

Speaker 2:

Oh, I don't have a photographic memory of what's in here, but yeah, it's definitely worth checking out. They talk about how "agentic" as a word has evolved, because it's one of the top questions you also get from executives: what is agentic, what does it mean, what do we need to invest in? And yeah, it means just investing in what you already did before. It's just a way of designing things. Or, for me, it's mainly still an ambition: people want to make things more adaptable, more reliable. They want to think less about the task that they want to automate, and so they make it agentic, and there are different ways to do that.

Speaker 2:

With these ingredients, it's potentially just an LLM. It's potentially an LLM with a data pipeline running in the background. It could be a lot of things.

Speaker 1:

Yeah. And now, going back to how it's actually used today: you said there's a lot of hype, there's a lot of interest, but from what I understand, and correct me if I'm wrong, people are not successfully doing it yet?

Speaker 2:

Well, to be clear, there are definitely successful deployments of this technology. It's just that it's the exception rather than the norm, and it's also important to... hold on.

Speaker 1:

So, what you were saying earlier, if I understood correctly, is that there's a lot of interest, there's a lot of attention being paid to these things, but people are not yet really leveraging them, or people are still figuring it out. Is that what you mean?

Speaker 2:

I think where we're at is a stage of AI theater, where there are a lot of nice stories coming out from every angle, but there's very little impact being shown. We're not always focusing as much as we should on what the impact is going to be, sometimes a bit too much on the technology side. And so at the event that we had, but also when talking to a lot of our partners, we tried to refocus back to what matters: what is the ambition? What are the most important values for the company? Where do they want to go, and how can this technology help with that? Not so much: how can we create some cool technology that maybe might help with our ambitions?

Speaker 2:

Yeah, I see, sort of flipping the script there a bit. And so maybe, to synthesize what we're talking about: there's indeed a lot of ambiguity about what it all means, and that's partially intentional from these large companies, to make sure the hype is there and companies are constantly reinvesting. Once they've been disillusioned with generative AI, they can move on to the next nice term. But there is some real value; there are real things that you can do with this, and there is a way to go from... I don't know why it did that. Thomas, is somebody watching and thinking, this is shit? Was it you? You can tell me. Okay, I don't know how to turn that feature off.

Speaker 1:

No, I think... it's the timing of it. I think it's me. Okay, turn off reactions. Okay, that should be okay. Now I'm starting again.

Speaker 2:

Yeah. So I guess one of the key points that I try to bring again and again is that there is very real value, but only if you move beyond just the technological deployment side. There are indeed technological challenges you have to solve. There's a lot of smoke and mirrors here, and it's hard for a company that isn't working full-time on generative and agentic AI to keep up with that. Even for us it's hard to keep up, with the amount of changes. And so what I try to bring in general to our partners is: what do you have to do? What do you have to focus on? What are the important things?

Speaker 1:

Yeah, and before we double-click there, one thing you said also resonates with me. There was a lot of hype, there's a lot of promise, and I'm thinking now of agentic coding and all these things, right? Because I think that's where it meets, not my day-to-day necessarily, but the things I'm paying a lot of attention to.

Speaker 1:

Right, there's a lot of promise, and then some people try it once or twice and they're like: yeah, it doesn't work, right? And I think, yes, some things don't work because the bar was set super high, and a lot of what people were promising was, for example, like you said: we're replacing engineers with AI, we're replacing this. The bar is set really high on the expectations, right? And sometimes, even if you don't meet that, it's not a failure; there's still a lot of value there, there's still a lot of stuff, right? And I think you also need to put the optics in perspective. There's a lot of hype, a lot of promises, all these things, but if you take all that away, there's still a lot of value, and if you're not doing these things, you're also going to be falling behind.

Speaker 2:

Yes. To be clear, experimentation is extremely valuable, and you actually need to be experimenting in your context, to make sure that you understand what works and what doesn't work. But yeah, I don't envy the position of leaders that have to make a buy-or-build decision and have to talk to all of these vendors to figure out what is real here. It's very hard to figure out. And actually, I have a wonderful slide on page eight, Murilo, if you could show it.

Speaker 1:

Okay, there we go.

Speaker 2:

This is one of the things that I talked about. I think it's a very good idea to look at history to understand what's coming next, to understand what you should and shouldn't be doing. And I think where we're at is sort of this awkward stage in the 1980s, when computing had just come from big mainframes and banks to your desk. People would buy computers and ask to take them to work because they were so much more productive with them.

Speaker 2:

Yeah. And at the same time, you had these magazines, like the one on the left. You maybe can't see it here, but it says: all you need to do this. And it shows different activities, for example spreadsheets, art, making music; for all of these things, you just need this, the Commodore 64 or something. And that's exactly the same marketing message that you see today for, let's say, the B2C segment of generative AI. We're in this awkward stage, coming from very big, very select application of AI in companies. Basically, over the last 15 years, in the Metas of the world, we had AI being applied at very large scale for specific problems, mirroring the mainframe part. And now we're in that awkward stage where we're bringing it to consumers, but it's still in that platform phase where we don't actually have proper products, products that you can just use and make music with.

Speaker 2:

Actually, that's a lie, but I'll come back to that later. There's still a lot of prompting that you have to do to make it work. And so then you have magazines telling us you have to become a prompt engineer; all these companies are investing in prompt engineering and AI literacy. In the past it was: you need to buy database software so you can make your own financial planning software, you need to learn how to program, et cetera, et cetera. So it's the same messaging. It's the same: how do we make money out of this? How is it going to create value? And that's where we're at today. It's very much that deployment phase, figuring out what works.

Speaker 1:

I do think, and maybe we can go back to the music generation, there are some tools that just paste AI on top of their thing. There are also some tools, like Notion, which is something that we use, that have AI built in. Now you also start to see a lot of these different tools that have a bit of AI here and there; you can add a bit of AI here. And maybe the reflection that I have as a developer now is sometimes: how do you...

Speaker 1:

Zapier is another good example; it's just the glue, you know. How much do you want to leverage these things, and how much do you want to build? Are we just trying to use the AI from Notion, the AI from Zapier, the AI from here, to create a nice, coherent workflow? Do you want to have something completely separate as well? But that's probably going to be more work. How do we navigate these things? Because I feel like each of these tools has, let's say, 40% of what we need, but never a hundred percent, right? And then it's like: okay, if you take the six tools together, then you cover 90%, but then how do you put these things together?

Speaker 2:

Well, this is a fantastic segue to the main points that I wanted to make in my talk. There you go. So let's go.

Speaker 2:

So I think what you're actually referring to is the fact that we as people manage a lot of state. We have, I don't know, ten tabs open, this program, that program, and we have it all sort of mapped in our brains where things are, and we can bring it together and make a decision or generate an output or whatever. And that's what agents, or AI, typically miss. They don't have that. They're just running in a nice little box, a container, and maybe they have access to a search engine with which they can do terrible searches and maybe sometimes get something useful out of it, but fundamentally they don't have the same context. That's the key word, and that's also why, right now, the new hype word replacing MCP is "context engineering", which to me feels like prompt engineering plus-plus.

Speaker 1:

Actually, it kind of is.

Speaker 2:

We've been approaching it a bit differently for a while, because Dataroots comes from the MLOps side, the data platform side, and so for the longest time we've known we can't do AI without having data in place. What is now called context engineering is something that we've already been doing for years with customers: focusing on the data, doing it data-driven, making sure you have the context that you need to make these things work in practice. That's also been the key message that I bring to our partners.

Speaker 2:

It's about the data again. It's again about making sure that you have that shared state, so it's not 40% living here and 40% living there. I mean, you're not going to change your ERP systems or your other operational systems overnight. Enterprise deployment is slow; it's going to take time. But you need to invest in some kind of layer that can bring all of this context together, so that you don't just have agentic intelligence, something that does something independently from you, but you also have contextual intelligence. Because without that context, you can't give it that responsibility; it's not going to be able to do what you want. And that's basically one of the key takeaways.
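The "right data at the right time" idea can be sketched as a toy retrieval step that assembles relevant internal documents into the model's context. This is an illustration only: the document corpus and the word-overlap scoring are made-up assumptions, and a real stack would use embeddings and an enterprise search layer, not this.

```python
# Toy sketch of "context engineering": retrieve the internal documents
# most relevant to a question and assemble them into a prompt, so the
# model gets the business context it needs before answering.

CORPUS = {
    "refund-policy": "Refunds are processed within 14 days of a return.",
    "onboarding": "New employees get laptop access on day one.",
    "pricing": "Enterprise pricing is negotiated per contract.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank documents by crude word overlap with the question.
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    # The retrieved context is injected ahead of the question.
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("how fast are refunds processed"))
```

The design point mirrors Tim's argument: the retrieval layer, not the model, is what carries the proprietary business context, so that is where the differentiating investment goes.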

Speaker 1:

They need to take the next step. Is it just focusing on the data and making sure that the data is accessible, that you have 100% of the context, or are there other things that people need to be thinking about when they think about agentic AI?

Speaker 2:

So contextual intelligence is the first step. I'm sorry for creating another new buzzword, but it's essentially just the idea that you have to make sure you can find the right data at the right time, for your agents, and for your people as well. Sometimes the best first step is just to build a better search engine for your org. Nice. And the second step is to make sure that whatever code and prompts you've written, the connections to your operational systems you've created, remain relevant. So today you've written a fantastic, I don't know, business development agent that can help you identify trends and generate proposals. But your business context changes. What counts as a good proposal changes. You don't keep using the same technologies. Your customers change; they have different expectations. And so your data, your prompts, your code, it all has to change over time. It has to keep changing alongside the world, and with your operational data as well.
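The "build a better search engine for your org" first step can be made concrete even with a toy ranking function. A minimal sketch (the documents and queries here are invented for illustration; a real system would use embeddings or a dedicated search engine rather than this bare TF-IDF scoring):

```python
import math
from collections import Counter

def build_index(docs):
    """Tokenize each document and count how many documents contain each term."""
    tokenized = {doc_id: text.lower().split() for doc_id, text in docs.items()}
    df = Counter()
    for tokens in tokenized.values():
        for term in set(tokens):
            df[term] += 1
    return tokenized, df

def search(query, docs, top_k=3):
    """Rank documents by a simple TF-IDF score for the query terms."""
    tokenized, df = build_index(docs)
    n_docs = len(docs)
    scores = {}
    for doc_id, tokens in tokenized.items():
        tf = Counter(tokens)
        score = 0.0
        for term in query.lower().split():
            if term in tf:
                # Terms that are rare across the corpus weigh more (IDF).
                idf = math.log((1 + n_docs) / (1 + df[term])) + 1
                score += (tf[term] / len(tokens)) * idf
        if score > 0:
            scores[doc_id] = score
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Hypothetical internal documents standing in for real org content:
docs = {
    "hr-policy": "vacation days and leave policy for employees",
    "sales-playbook": "proposal templates and pricing for customers",
    "onboarding": "laptop setup and account access for new employees",
}
print(search("proposal pricing", docs))  # the sales playbook ranks first
```

The point of the sketch is the shape, not the scoring: the organization's proprietary documents are the input, and "the right data at the right time" is a retrieval problem before it is a model problem.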

Speaker 2:

And so the second thing that we emphasize is continuous improvement, and that's nothing new, but it is new outside the context of just model improvement. You know, historically, MLOps is typically about: you create a model artifact and it's, I don't know, good at detecting fraud or whatever. You detect that it's no longer working so well on some new fraud cases, so you retrain it and it becomes good again. But with the systems that we're building nowadays, it's not just the model artifact. You're working with different parts of a big agentic stack. You have the infrastructure layer, the model layer, the data layer, the application layer, and then the agentic AI layer, you could call it, I guess. And so you have to make sure that all of these are somewhat aligned with each other and able to change and continuously improve, and that's not trivial, of course. I see.

Speaker 1:

But then in this scenario, you're not really touching the models, because the models are closed, proprietary, all these things. So it's more about the tooling around it and making sure that the setup is flexible enough so that when the context changes, you can still, how can I say, adapt the current solution to be effective in this new reality?

Speaker 2:

Exactly. That's part of the challenge. Of course, it's not just the business context that changes; that's always been the case, and it's also why these tools often fail today. Maybe day one they work: you have a nice demo, it works most of the time. Day 90, people stop using it. They abandon it because it's no longer being updated. And the product is mainly going to be about the updating, not so much the day-one deployment. But indeed, what you mentioned is true: models also change. The technology changes, the stack changes.

Speaker 2:

What is a good model today will perhaps be terrible in two years, and so also there, you can't be rigid. You have to be flexible and build around that. Make sure that once a new model drops, we don't just deploy it on vibes: it feels good, it seems to be doing better, hey, let's deploy it. No. We use benchmarks. Over time, we collect data that shows whether we're doing better or worse, and we use that to prevent regressions, so that when we make changes, things actually improve. These are all traditional things that we know we have to do, but somehow we, and I mean we collectively as an industry, obviously not Dataroots, are skipping past that a bit.
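That benchmark-gated approach to swapping models can be sketched in a few lines (a toy illustration; the `old` and `new` lambdas stand in for real model API calls, and the benchmark pairs are made up):

```python
def evaluate(model_fn, benchmark):
    """Score a model callable against (input, expected) benchmark pairs."""
    correct = sum(1 for prompt, expected in benchmark if model_fn(prompt) == expected)
    return correct / len(benchmark)

def safe_to_swap(current_model, candidate_model, benchmark, margin=0.0):
    """Only approve the candidate if it does at least as well on the benchmark."""
    return evaluate(candidate_model, benchmark) >= evaluate(current_model, benchmark) + margin

# Hypothetical stand-ins for real model calls and evaluation data:
benchmark = [("2+2", "4"), ("capital of France", "Paris"), ("3*3", "9")]
old = lambda p: {"2+2": "4", "capital of France": "Paris"}.get(p, "?")
new = lambda p: {"2+2": "4", "capital of France": "Paris", "3*3": "9"}.get(p, "?")
print(safe_to_swap(old, new, benchmark))  # True: no regression, one improvement
```

In practice the benchmark would be accumulated from real usage and human review, but the gate itself is this simple: the swap happens on measured results, not on vibes.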

Speaker 1:

I see, I see. And by skipping that, you mean people are really focused on the agentic AI stuff, but they're just kind of trying to get a cool demo. I mean, a cool demo is fine, but they're focusing on the wow factor too much and not on the rest.

Speaker 2:

There's a couple of things, right. So, traditionally speaking, IT deployment, technology deployment, is like: you buy something, a product, you train your people on it, and it works and it keeps working. You just have to figure out how to integrate it. AI is a bit different, because what most people are trying to deploy is actually a form of mainframe, something that you can replace a lot of people with, or reorganize people to work around to be more effective, for example processing a whole bunch of data, a customer chatbot. And that's not a deploy-once, keep-the-benefits-forever situation. It's a constantly updating type of scenario, because you have changing technology and changing business contexts. So, in that sense, it's very different from conventional product deployment. I'm not sure if I'm answering your question.

Speaker 1:

No, no, you're doing fine, you're doing fine. What else do we have? I have here something on Silicon Valley and Karpathy. Tell me more about it, Tim.

Speaker 2:

Sure. So maybe before we dive into Karpathy, if that's okay. You're the boss. One of the things we were just discussing is a lot of change, a lot of disruption, so it's very hard to understand what you should focus on, where you should throw your money. Just throw it at us if you're in doubt.

Speaker 1:

Yeah, exactly, alright, next topic.

Speaker 2:

That's all I wanted to say Thank you. I just wanted to make sure we like sub in that sales message a bit right.

Speaker 2:

No. So I think one of the, let's say, personal marketing messages I'm trying to push here is anti-fragility. I think what people expect when they talk about agentic AI is actually just a system that learns. They don't want to constantly tell an LLM what it should be doing; they don't want it to be a constant friction process. And for me, that is anti-fragility. It's about not just having a system that is fragile, which breaks when things change, and not merely robust either, where it just keeps doing the same thing as before in a new business context. You want it to improve over time. And again, that's not a new concept in MLOps, but it is a new concept for people that are used to technological deployment, buying the product once. It's new for people that maybe only work at the model layer and now suddenly have to think about the whole system, the software around it, the prompts, et cetera.

Speaker 1:

All of the data integrations. And so that's sort of what I'm trying to advocate for: that we should be organizing around change and designing the systems that we build so that they are flexible to that change. And when you say change, you mean manual, human change, right? Because there's also an argument that you could ask the AI to fix itself as it goes. You know, I give feedback as it goes, like, okay, that's not what I'm trying to say, and then I don't have to actually call a developer to change the prompts and the tools and all these things, but also ask it to have this memory. And I guess that also goes back to the question before, to say, ah, that's not what we're trying to do. But that's not what you mean necessarily? And that's one example.

Speaker 1:

It could be, but it's broader than this. It's more like, when we're developing or architecting solutions, to really think of something that is flexible enough to change over time. Flexibility is a good way to put it: something that isn't so tied to the current state that it breaks when the context changes.

Speaker 2:

Yes, sure, but mainly that you're also able to change some meaningful parts of the system and see what breaks, what gets better; being able to evaluate that and move on from there. That's part of the challenge and what we should be focusing on as an industry and as engineers. Yeah, indeed, indeed.

Speaker 1:

I still feel like it's early in a way. Like you said, you hear a lot about this, and for sure there are success stories, and for sure there's a lot of value there. But I think, even in the way that a lot of companies work with budgeting cycles, actually building these things and actually getting the results.

Speaker 1:

It's still a bit early. I think the thing that's a bit crazy nowadays is that the amount of signal you get, compared to what I actually see happening, the scale of things, still seems very disproportionate. You still get a lot of people saying that agentic AI will solve all your problems, that it will cut your headcount and all these things. But on the other hand, when you actually look at the things that people are doing, it still seems very experimental, very low risk, right? Which I think is usually how things start. I just think the difference between what I hear and what I see is still quite big. I think it's because people are talking about different things, like the capabilities of models.

Speaker 2:

The capabilities of the technology have indeed increased a lot, but better models don't necessarily mean better enterprise AI. There's a big difference, because even if your model gets better, even if tomorrow you have quote-unquote AGI, that AGI still won't really know what success is for you as an engineer or for a company, without you giving it the context, saying: this is success, this is an example of what is good and bad. Unless you also assume it is omnipotent, all-knowing and so forth. Not there yet. Not there yet, I agree. Thank you, and again a thumbs down.

Speaker 2:

Okay, thank you, omnipotent god. The AI doesn't agree. Yeah, exactly. And so, maybe to come back to what we talked about earlier, the thing I think we're missing today, and it's the final pillar, the final thing that I think is really important, next to contextual intelligence, next to continuous improvement: bringing humans and AI together. I don't think most tasks today can be done by an agent.

Speaker 1:

Okay, ai is not good enough so you just think just because, yeah, it's not good enough, because I also have another, I agree. But I also think is because we work in society, we need to have accountability, and agents don't have accountability. Yeah, that's a good point we always need to have someone to say why is this not happening? We always need a head to roll when things go south. Wow, yeah, you're not wrong, I guess, but I feel like that's why it's a bit.

Speaker 2:

And that's why you have HR engineering. We've come full circle.

Speaker 1:

There we go.

Speaker 2:

No, I get what you're saying and I agree with you. I think, indeed, part of it is ownership. We don't trust these agents to do these things; we can't actually guarantee that they're going to do well. And it's partially capability-related, to be honest.

Speaker 1:

But I think there's always going to be, okay, it's not necessarily about going well or not well. If you have a question, maybe you can ask the agent, but if that's wrong, you know, I think you're always going to need someone to say: I sign off on this. Yeah, that's more or less true.

Speaker 2:

Yes, and that's what you're seeing now in a lot of traditional industries being transformed, like news. People are using AI there, but the journalist, hopefully, typically, is still verifying what the output is, whether it didn't make up some stuff, a quote that didn't exist. That's true, but in some cases you can generate at scale but you can't verify at scale, and there are a lot of use cases like that that people want to support today, and that's where that sort of approach can break down. Also, it's not the most transformative thing, let's say, and people don't like verifying things most of the time. But I do agree it's true in principle: we still need some type of ownership.

Speaker 2:

I think, in practice, the way that we're going to get to things that work in enterprise AI, and have that type of ownership, trust and adoption, is by taking, let's say, how do you say it, the road in between. Yeah, okay, a compromise. We're going to have to compromise, in the sense that we're going to be building systems that do things automatically. Things will fail, and we need ways to surface those failure modes and let people look at them, verify them, improve them, give that back to the system, learn from that, and do those things better. Essentially continuous improvement, but with people in the loop. Human in the loop, as people call it.
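One common shape for that human-in-the-loop compromise is confidence-based triage: let the system act automatically when it is confident, and queue everything else for a person to review. A toy sketch (the threshold and the confidence scores are invented for illustration):

```python
def triage(outputs, threshold=0.8):
    """Auto-approve confident outputs; queue the rest for human review."""
    approved, review_queue = [], []
    for item in outputs:
        if item["confidence"] >= threshold:
            approved.append(item)       # goes out automatically
        else:
            review_queue.append(item)   # surfaced as a possible failure mode
    return approved, review_queue

# Hypothetical AI outputs with self-reported confidence scores:
outputs = [
    {"id": 1, "answer": "Invoice total is 120 EUR", "confidence": 0.95},
    {"id": 2, "answer": "Contract renews in March", "confidence": 0.55},
]
approved, queue = triage(outputs)
print(len(approved), len(queue))  # 1 1
```

The verdicts people give on the review queue are exactly the data that feeds the continuous-improvement loop described above.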

Speaker 1:

Yeah.

Speaker 2:

But I also think the reverse is true, what you mentioned about doing a task and it maybe taking you many more days using LLMs sometimes. In other cases, you will use an LLM to generate some code and you're not sure if it's actually doing what it's supposed to be doing. I think we're supposed to be working, as people, towards a point where we learn from the AI. We should come to a point, for most tasks, where we don't need it to do a bunch of things like coding. We should all become better coders by working with LLMs, not worse. We should be learning from it. We can't give it all of the responsibility, because indeed we still need to be able to diagnose in depth what it's doing for a large part of the system. Yeah, I agree. And I think the need for diagnosis as well—

Speaker 1:

Again, to me at least, I see it a bit as accountability, right. I think even if you have a whole bunch of agents doing a lot of things, you still need someone to make sure things are going in the right direction, to make sure that if something goes wrong, you can steer. I agree with you. I think it's never going to be about replacing. I think it's a bit of a pity that a lot of times what we see in the news is replacement.

Speaker 1:

I really think it's more about like enhancement.

Speaker 2:

Right.

Speaker 1:

And again, maybe the two things are connected, right? If you're as productive now as ten Tims were a year ago, then maybe you can cut off nine Tims. Now, that's not true, no, no.

Speaker 2:

But you know, maybe the two things go together. I think people are really focusing on the replacement, but I don't really see it as replacement, and I don't know if we as a society should work to really replace with these things. I think we, as practitioners and as people that can help technical leaders, senior leaders and companies move in the right direction, have a responsibility to steer them, to support them in that decision-making, so that indeed they get value from AI, but in a responsible way, not where you just cut an entire team and then a few years later you rehire them. And I think that path is actually going to be at least as valuable as going for full automation in most tasks.

Speaker 2:

There are some tasks, indeed, where today, sorry, AI is just better or just too cost-effective. But for the most part, I think the future will be AI and people working together, augmenting each other. And I also think that, with the things AI is automating today, the world changes, like you said, everything is in constant change, and there will be new problems tomorrow. Maybe created with AI, maybe with something else. But everyone has a bit of a gloomy vibe of: they're going to take away jobs and then people are going to be unemployed.

Speaker 1:

But there will be stuff, you know. I don't even know what a good example is. Maybe you have, I don't know, robot grass cutters, right? So you say, ah, but then we don't need mowers anymore. But no, you need other things, right? It's fine, new things come, people invent things and new jobs. Because when AI came, I know there was another class of jobs that appeared, about labeling, human labelers, right?

Speaker 1:

Yeah, maybe not the best example, and that was before gen AI, but I remember hearing stories about AI, even articles saying: actually, AI (not gen AI, to be clear) is not taking away jobs, it's actually creating jobs, because now we're paying people to label stuff. Not saying it's the best job ever, but my point is more: yes, some jobs will be automated, but new jobs will arise. Yes, that's hopefully the case, but we don't know. And I also think that AI, agentic AI, gen AI, the change has been very big, yes, in a short time, but in a lot of ways, in a lot of respects, I don't think it's very different from other changes that we've gone through as a society.

Speaker 1:

Like you mentioned the mainframe; we can talk about the internet. Yes, to some extent, I agree with you.

Speaker 2:

Okay, I do think it is a bit different. We're going to go into, let's say, the policy side of things here, because I think it's not so much that. Indeed, there's always enough work, there's always enough opportunity for people to at least have an ability to contribute to society, to make money, et cetera. But that's maybe not the ideology of everyone that is pushing for these changes. And it's also why, with the work we do, it's important to talk to enterprises, to make sure that they're the first adopters of this new technology, and not just the hyperscalers. Let me explain why I'm saying that. Hyperscalers, the big companies that train these models, don't have to follow social contracts. They're breaking social contracts. Companies, for the most part, follow social contracts we've fought for. Sure, can you give an example? What do you mean?

Speaker 2:

For example, labeling. Data labeling, yeah.

Speaker 2:

We're transitioning more and more towards a gig economy where, instead of having a permanent contract with certain benefits, working nine to five, et cetera, it becomes: here is this data, label it for me and then come back to me. That's it; you just get some money for that, but there's no social contract. These companies typically don't even have a presence in those areas. And so that's where you actually see a lot of problems today with data labeling companies, people coming out saying: hey, we actually helped train famous model X, famous model Y, and we can't put it on our resume.

Speaker 2:

One. Two, we saw a lot of explicit images that we were traumatized by, and we didn't get any support. Three, we didn't receive, I don't know, any benefits; we just got laid off as soon as we were no longer useful, et cetera. You know, it happens first to them, but it's also already happening to us. GitHub code is being used to train models. Our knowledge is being extracted and then rented back to us. And that's sort of the future model that some companies want to go for, that some visionary leaders want to go for: a new form of feudalism, where we bring our expertise, put it into a model once, and then they give it back to us and we pay for it through an API.

Speaker 1:

Yeah, it is true in a lot of respects. And I mean, it's not very different from open source as well, right? A lot of times, that's the thing: you host some stuff, people make profit out of it, and they're making money off the stuff that you built.

Speaker 2:

In a lot of ways, yes and no. There's still a bit of an unspoken contract and some incentives to give back, even if people don't always do that. But here there's really almost no incentive. Meaning, OpenAI took the internet and just trained on it. We now have all these companies training on all of our data, that we maybe gave to them, maybe not. We have Google training on YouTube, on God knows what else, and we didn't necessarily sign up for that. It's data that we gave in the past, and it's now being used to build something new that they can then rent out, and maybe replace entire categories of the things we used to create, videos as well. So I'm not too confident that this is necessarily going to lead towards a better future. Which is why, again, it's important that companies can adopt this. Because right now, what you're seeing is that all these hyperscalers are also changing their business model a bit, towards forward deployment. So instead of just building better models, they're saying: actually, we realize now that to make money, or to make enterprise AI work, we need to work together with these companies. We need to bring consultants there, because in the end the models are not good enough on their own. They still need to be integrated; they need contextual intelligence. And companies own the customer relationships, they own the context. So together they still have a moat; they can still defend themselves against these AI companies coming in and taking the entire pie, the entire category. But to do that, they also need to be adopting this technology, being resilient to change, even thriving in change, and that's what I want to help push for.
So, in a very weird, convoluted way: help the companies, help the employees, so that we don't get screwed by a Black Mirror-esque scenario. That's my fear. All right, and that's the mainstream message? Yes, maybe.

Speaker 2:

Just on a positive note: I think a lot of companies are working with this new technology, trying to do good things with it, making sure that employees have more time to work on meaningful things, things that they're passionate about, moving away from routine tasks that nobody enjoys. That's the positive side here, I think. And also, of course, people learning from AI, being able to talk to it as a coach. I've become a much better developer by having AI around, being able to ask all the stupid questions all of the time. So I think for individuals this can still be a positive outcome, to be clear, and I think it's up to us as practitioners to help steer things in the right direction. And so, coming back to sort of the beginning of this podcast, and also the event that I talked about, trying to separate hype from reality: I just want to, let's say, synthesize the key lessons learned over the last few years.

Speaker 2:

And so, for me, the first lesson, let's say, and feel free to interrupt my monologue: for most companies today, it's way too early to build, or even fine-tune, your own large language models. Not too early to use them; you should use them. But building them is not really realistic, because with the amount of money in the space, you can't compete with OpenAI, you can't compete with Anthropic, and you definitely can't compete with Google. Even if you're fine-tuning Llama models.

Speaker 2:

Even then, except for very specific use cases and specific industries, I'm thinking.

Speaker 1:

In general, then, let's say: yeah, probably not your first bet.

Speaker 2:

No, not in the next few years. I think this will still change, you know. Models will get cheaper, building them yourself will become easier, there will be more companies operating next to the hyperscalers that give you a kit to build your own model, perhaps, who knows. There are a lot of ways this can evolve, but today it's too early. The business value is not going to come from building your own model; it's going to come from engineering everything else around it. Yes, yes, agree.

Speaker 2:

The second lesson learned is that you'll always still need your own data, your own context, to make AI, or even AGI, work for you. So, coming back to what we said before: models are getting better, and we can assume they're going to keep getting better in the future. Everything we do should be designed around that assumption. And that means we need to invest in benchmarks, evaluation and knowledge bases, so that if tomorrow you change the model (there's again a thumbs up there), you can just plug it in, see what's better, what's cheaper, and move on without being disrupted, so that the custom logic you wrote in code or in prompts isn't suddenly useless. You can just move on and get the benefits from the new model.

Three, building on that: don't build models; invest in your data and your context. The models today are commodities, and data is going to be your moat. Meaning: the proprietary data that you build up over the years, interacting with your customers, managing your products, is going to be the thing that makes your company special. If tomorrow there's AGI and the entire technology stack changes, you have your knowledge base with your products, your services and your practices, your operational data, your benchmarks, your enduring definitions of success. You can just plug that in and you'll have a system that's ready to go, even better, and that also differentiates you from other companies. It's essentially what makes your company special. Even in an age where everything becomes AI or AGI, that data is going to be your major differentiator. It defines your way of working and your success. So that's super important.
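The separation this lesson argues for, commodity model on one side, your prompts, context and definitions of success on the other, can be sketched roughly like this (an illustrative toy; `AssistantConfig` and the echo "model" are made-up stand-ins, not any real provider's API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AssistantConfig:
    """Everything that is *yours* and survives a model swap."""
    system_prompt: str               # your definition of success, in words
    knowledge: dict                  # your proprietary context
    model_fn: Callable[[str], str]   # the commodity part: swap freely

def answer(cfg: AssistantConfig, question: str) -> str:
    # Pull relevant proprietary context into the prompt (naive keyword lookup here).
    context = " ".join(v for k, v in cfg.knowledge.items() if k in question.lower())
    prompt = f"{cfg.system_prompt}\nContext: {context}\nQuestion: {question}"
    return cfg.model_fn(prompt)

# An echo "model" standing in for any provider's API; swapping providers
# would only change model_fn, not the prompt, knowledge or logic:
cfg = AssistantConfig(
    system_prompt="Answer using the company context only.",
    knowledge={"pricing": "Standard plan costs 49 euro/month."},
    model_fn=lambda prompt: prompt,
)
print(answer(cfg, "What is our pricing?"))
```

The design point is that `model_fn` is the only line that changes when a better or cheaper model drops; everything else is the moat.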

Speaker 2:

Okay, four. Yes, there's a lot of lessons, so good luck cutting them. Over a talk of one hour it makes a lot more sense. Don't design monuments; design for change. So, software ate the world before; now AI is eating software and data. When we work with engineering teams, some of them already do this: they design around the model, they make sure that the data is being pulled in dynamically, that the prompts are dynamic. But many are also still engineering a lot of business logic, and that's not necessarily good, because the closer you tie your business logic to traditional code, to frameworks like LangChain et cetera, the more you're going to have to rewrite, and the more it's going to become technical debt. Essentially, more and more of the business logic that we used to write, and customer chatbots are a very good example of that, is just being bundled inside the model. You know, intent recognition, named entity recognition: all of these different layers used to be additional software that you had to manage yourself.

Speaker 1:

All of that is being pulled into the model. So it's like a one-stop shop. Exactly, and that's going to continue.

Speaker 2:

We've also seen it with reasoning being embedded inside the model. Yeah, and that's going to be the trend moving forward: everything's going to be embedded inside the models. So we should again prepare for that eventuality and design our systems around it. Cool.

Speaker 1:

I can also disagree, by the way. No, I mean, actually I agree that this is the case, but years ago I thought it was going to be the opposite. I thought we were going to have more specialized models for X and Y, and that's true to a certain extent, but I do think the big winners are the ones that kind of do everything.

Speaker 2:

I mean that could change Again, I think in the future.

Speaker 1:

I think today, that's... Well, a few years ago I made a prediction about now, let's say. And I thought we were going to walk more towards specialized everything. I feel like there are some niches, but the big winners, I think, are more agnostic.

Speaker 2:

Let's say it's like a Swiss Army knife kind of thing. Yeah. When it comes to the research frontier, scale trumps everything else. Just throwing as much data in there and making the model bigger is what wins. But for companies that's not the case. For companies, it's about using that model and making it work for you with your data, your business logic. So, indeed, for now it's about designing around very general models, but in the future it could become more specialized, once we hit some type of ceiling in performance improvements.

Speaker 1:

That's very likely, actually, because then you can start to cost-optimize, et cetera. Yeah, that is true. And maybe you can even go from the generic foundation models and try to trim them down to specific things if you have a more specific need. That is also true, indeed. It's just moving too fast.

Speaker 2:

Yes, lesson five. Five gets me slowly losing hope. No, no. Lesson five, we talked about this: data-driven continuous improvement is the product. The world and your company change faster than you can retrain a model. So you have to make sure that you're constantly integrating, continuously updating your data, your knowledge and your benchmarks. And that means having people in the loop, making sure that it's easy to set up that feedback loop, so that your retrieval system, your routing system, your different agents, or whatever you have deployed, also change: connected, and using data to change over time.
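A minimal sketch of what such a feedback loop could look like in code (purely illustrative; the class and the data here are invented for the example):

```python
class FeedbackLoop:
    """Collect human verdicts on AI outputs and fold them back into the system."""
    def __init__(self):
        self.knowledge = {}   # corrected answers promoted to the knowledge base
        self.benchmark = []   # every reviewed case becomes a regression test

    def record(self, question, ai_answer, human_verdict, correction=None):
        if human_verdict == "wrong" and correction:
            # The human fix becomes both new knowledge and a new benchmark case.
            self.knowledge[question] = correction
            self.benchmark.append((question, correction))
        elif human_verdict == "right":
            self.benchmark.append((question, ai_answer))

    def answer(self, question, model_fn):
        # Prefer human-verified knowledge over a fresh model call.
        return self.knowledge.get(question) or model_fn(question)

loop = FeedbackLoop()
loop.record("return policy?", "30 days", "wrong", correction="14 days")
print(loop.answer("return policy?", model_fn=lambda q: "30 days"))  # "14 days"
```

The growing `benchmark` list is also what makes the earlier lesson work: it is the data you use to check a new model for regressions before swapping it in.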

Speaker 1:

Yeah, so data, like the context and all the prompts and all these things. Exactly. Okay, cool. Is there a six?

Speaker 2:

There's more.

Speaker 2:

Go for it. So, lesson six: it's never just technology. You need to consider also the people, the organization and the process. So if you want to build the next generation of agentic AI or generative AI inside your company, or at least benefit from those transformative use cases, that means you're going to have to organize yourself differently. Not just your business process, but also your engineering teams. If you want to build, for example, a great internal chatbot, a customer chatbot, a RAG system, whatever, it's going to be very difficult to do that if you don't connect your data, your AI and your product teams, as well as your business, more intricately.

Speaker 2:

Because it's no longer an AI team building a model and throwing it over the wall. It's a product experience, a user experience. The model is the experience people have when they're interacting with it. It defines how good or how bad the product is, and so you can't be disconnected from that anymore.

Speaker 1:

So you're saying the teams themselves need to be a bit more end-to-end.

Speaker 2:

Yeah. If you have the slides still open and you're willing to share them; I believe it's page 34 for you.

Speaker 1:

34.

Speaker 2:

There we go. So this is an example of a blueprint. It definitely doesn't work for every organization, but the idea is that if you want to achieve agency for these big tasks, these categories of things that you do, like customer service, finance, whatever, then it's not going to be enough to just buy an agentic SaaS, an agent for X, Y or Z. It's going to be about also building capabilities inside your company. For example, taking your existing data team, but also making sure that they're connected to how you build these RAG chatbots, these generative experiences; making sure that they are integrated with the knowledge management and the content management in your company.

Speaker 2:

Let's say you have a big product portfolio. You need to make sure those products are also integrated in a standardized way, so that you can search them when you're interacting with a chatbot. Same for retrieval: even if you have good data, it's already been cleaned, it's consumable as a data product, it doesn't matter if you can't find it at the right time. A lot of the time, the experiences we're trying to build are search-based, so you also need to make sure there's a very strong connection between data and search, because every use case is going to need different ways of retrieving that information.

Speaker 2:

And so you first solve the problem of contextual intelligence: enable getting the right data at the right time. But then you also have to make sure that what you build around that, the actual use case, the value, is going to use that data, the search experience, in a proper way, making sure you have a team focused on building the best possible user experience. So engagement, as I call it here: improving the personalization, the UX, et cetera. And then finally, to complete the circle, all of these teams have to work together to improve these over time, setting up evaluation, benchmarks, human-in-the-loop. None of these things can be completely separate; for ambitious use cases, you still need to bring these capabilities together. That's the essence of what I'm trying to bring across. I see, okay, many different things coming together.
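[Editor's note: the loop described here, data products feeding a search layer, a use case consuming retrieved context, and human feedback closing the circle, can be sketched as a minimal, purely illustrative pipeline. All names and the toy keyword-overlap ranking are assumptions, not anything from Dataroots' actual stack.]

```python
# Hypothetical sketch of the data -> search -> engagement -> evaluation loop.
from collections import Counter

# "Data" layer: cleaned, consumable data products (toy documents here).
DOCS = {
    "doc1": "refund policy for premium customers",
    "doc2": "shipping times for international orders",
    "doc3": "premium subscription pricing and discounts",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """'Search' layer: rank documents by word overlap with the query."""
    q = Counter(query.lower().split())
    scores = {
        doc_id: sum((q & Counter(text.split())).values())
        for doc_id, text in DOCS.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# "Evaluation" layer: human-in-the-loop signals collected for later review.
FEEDBACK: list[dict] = []

def answer(query: str) -> str:
    """'Engagement' layer: build a response from the retrieved context."""
    context = [DOCS[d] for d in retrieve(query)]
    return f"Answering '{query}' using: {context}"

def record_feedback(query: str, helpful: bool) -> None:
    FEEDBACK.append({"query": query, "helpful": helpful})

print(answer("premium refund"))
record_feedback("premium refund", helpful=True)
```

The point of the sketch is the wiring, not the ranking: each layer is owned by a different team, but they only create value when connected in one loop.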

Speaker 2:

Yes many, many things. Yes.

Speaker 1:

Well, yeah, as you should. Lesson eight or seven.

Speaker 2:

Lesson seven, yes. So lesson seven: teaching people how to prompt, or context engineering, whatever you call it, is not enough. You can't just come into an organization, buy a Copilot or buy an AI, and say, look, become more productive. It doesn't work. Yes, some people will be champions; they'll experiment and they'll be more productive. Maybe they'll even share it with their colleagues. But you will still have to either buy products or build your own interface that takes the institutionalized knowledge that you have and turns it into something simpler. There's a reason why Excel is organized the way it is, why PowerPoint is organized the way it is: it's crystallized knowledge over generations about what you need to build a slide or a spreadsheet, and the same is true for problems in an organization. You don't want people to have to figure out every time which prompt to use first, which prompt to use afterwards. It's a bit like a terminal experience, and that's not what most people will be effective with.

Speaker 1:

So then you're saying the prompt management solutions are not enough?

Speaker 2:

I think it's a good start, and for some problems it's enough, but not for everything, and not to make it truly scalable. Basically, if you have a consistent task across the organization that is big enough to justify building something custom for it, then do it: build an interface. A simple interface could be enough to make people actually adopt it and be more effective with it. That's what we've noticed across a lot of different use cases: it's still valuable to build interfaces beyond just chatbots. I see. And then finally, sometimes you can also make sure it personalizes over time. Make it learn for the user, which comes back to what we talked about with memory: making sure that the chat experience does get personalized to you as an employee, the files that you use, the style you like to write in. Those are also valid ways to meet the needs of users, but it's typically not going to be enough in many cases.

Speaker 2:

And so, yeah, within that lesson seven, sometimes you just have to redesign the whole process. Sometimes that's just the reality: no matter how much you try to make people adopt it, you're going to have to change how you do things. Sometimes there's a better way.

Speaker 1:

Sometimes you should change, exactly. The world's not the same, right? It's true, yeah, indeed.

Speaker 2:

And finally, what we talked about before, because we're talking a lot about production, maturity, deployment: the final thing is that it's definitely valid to continue to experiment, and you should be experimenting ruthlessly with all of these changes. It's really important, also within companies, that you let people experiment. Maybe don't take every experiment into the pilot phase, but it's important to understand what these things can do for your company, for your customers, with your data.

Speaker 2:

And that's not something you can learn from your vendors or from the benchmarks, because they're always gamed.

Speaker 1:

So yeah, and I think maybe if you experiment and it doesn't become a pilot, that doesn't mean it's a failure, right? Exactly. Yeah, I think there's also a lot of evangelism needed for people to really try things, and, like I said, there's a bit of behavior change that people need to undergo, and I think sometimes the only way to get through it is to experiment.

Speaker 1:

This is not going to work, try again, try again. And then you start to learn: okay, this is good at these things, it's not good at those things. And maybe you need to get in the habit of, every time you face this type of problem, reaching for that tool, or not reaching for it, right? Exactly.

Speaker 2:

In any case, I fully agree. So those are all the lessons learned. And overarching it all: when you're doing these things, when you're investing in contextual intelligence, continuous improvement, and enabling human-AI interactions, you have to design for change. Things are going to change, so build around that. Things are going to be radically different a few years from now, and we don't even know how exactly.

Speaker 1:

If the past few years teach us anything about GenAI, it's that it's going to change a lot. Yeah. Tim, is there anything else you want to say before we call it a pod?

Speaker 2:

Thank you for listening to my rambling.

Speaker 1:

Thanks for joining. Thanks for all the insights as well. Thank you, Alex, as well, for saying everything. Thank you, Alex. Cool, then bye everyone.

