DataTopics Unplugged

#53 Can AI Replace Human Creativity? & Latest Tech Updates (Meta, Klarna GenAI, Apple-OpenAI Partnership & More)

June 06, 2024 DataTopics

Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society.

Dive into conversations that should flow as smoothly as your morning coffee (but don't), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style!


In this episode we are joined by Christophe Beke to discuss the following:



Speaker 2:

Hello, I'm Bill Gates. I would recommend TypeScript.

Speaker 3:

Yeah, it writes a lot of code for me and usually it's slightly wrong. I'm reminded, incidentally, of Rust here.

Speaker 2:

This almost makes me happy that I didn't become a supermodel.

Speaker 3:

Kubernetes.

Speaker 2:

Well, I'm sorry, guys, I don't know what's going on. Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here.

Speaker 1:

Rust Data Topics. Welcome to the Data Topics Podcast. We're live streaming on YouTube, LinkedIn, X, Twitch, you name it, we're there. Check us out there. Feel free to leave a comment or a question and we'll do our best to address it during the recording. Hang out with us. Today is the 4th of June of 2024. My name is Murilo and I'll be hosting you today, joined by the one and only Bart. All right, Alex is back. Hello, hey, Alex. And we have a very special guest with us. We have Christophe. Christophe, yes, is that how you say it? Yeah, okay. Christophe is a team lead at the Data and Cloud business unit. We already discussed what kind of actor he looks like, uh, Bart.

Speaker 3:

Well, you said it, what's the name, the guy, David. I think he looks a bit like this David Hasselhoff. Yes, so for the people following?

Speaker 1:

just on the audio: he looks exactly like a spitting image of David Hasselhoff. Plus the six-pack, plus more six-packs. These are his words, yeah, he says exactly the same. So more of a reason to check out the live stream afterwards. And I already apologize in advance: I am recovering from illness, so if you notice something weird in my voice, it's because it is weird. But I don't think I'm contagious. I did a COVID test, it was negative, so we're good, guys.

Speaker 3:

It's not just COVID that is contagious.

Speaker 1:

You realize this, right? Yeah, but like, isn't there also... maybe a fun fact here? All right, I'll start the episode today with a fun fact. Bart, you have a dark past, don't you?

Speaker 1:

No. You know where I was going. I do, yeah. I like how I didn't even say Bart and he was already like, no, no. Um, you mentioned that you had this very insightful point that not only COVID is contagious, right? True. You must know a lot about this. Do you have a background or anything? I studied healthcare economics. Oh really? Did you ever do anything else, like more hands-on, or no?

Speaker 3:

Maybe a bit, but it is true, not just COVID is contagious. Maybe I have a... I have a question for you.

Speaker 1:

Like, maybe... so this is a legitimate question. There is a period in which you're contagious, but after a while... I guess I don't know, maybe this is a myth, but I heard that even if you are exhibiting some symptoms, how contagious you are decreases a lot. Is that true? I'm asking you because of your background.

Speaker 1:

I have no, uh, no comment. Okay, all right, refrain from comment. If next week everyone is sick here, you know, it's my fault. I'm just too professional, you know. I was like, nah man, I'll be there. The listeners, you know, our fan base is waiting, all our followers here. So I said, no, I'll be there. So here I am for you guys.

Speaker 2:

So, you know, but Christophe, enough about me, tell us more about yourself. Yeah, you already did my intro. So basically I'm the new team lead in the Data and Cloud unit responsible for energy and manufacturing. I am an engineer in IT by background. 28 tomorrow, so that's a fun fact. Oh wow, can we?

Speaker 1:

get an applause for the birthday? 28, yeah, 28.

Speaker 2:

And yesterday I celebrated my first year of marriage. Oh wow, congrats.

Speaker 1:

Another round of applause. Party week, I know, wow. Did you do this on purpose, so you have one party for both and save a bit of money and stuff, or is

Speaker 2:

this strategic? It was strategic.

Speaker 1:

Waiting for the sun a bit, yeah. You're like, okay, yeah, maybe, okay, let's do it then. Yeah, very cool, congrats. I planned it 28 years ago. Wow. Okay, congrats, congrats, congrats. Any big plans for tomorrow?

Speaker 2:

No, it's Wednesday.

Speaker 1:

Yeah, that's true, I have to work. Actually, I heard from someone from Dataroots who said, oh, I don't know the last time I worked on my birthday. I was like, oh yeah, today's my birthday. And he said, oh, you work on your birthday? I was like, yeah. Oh, I don't know the last time I worked on my birthday. Okay, bro. He was very strongly opinionated about this. But lucky to have you with us, so I'll try to be at the office. You know, I know there's a bit of a tradition in Belgium to bring koffiekoeken. I'm not coming to the office, I won't be here then. Smart enough.

Speaker 3:

Something just came up.

Speaker 1:

I can't be here. Cool, cool, cool. And what do we have here today already? I see here: Meta identifies networks pushing deceptive content likely generated by AI. What is this about?

Speaker 2:

Yeah, so it's basically the first time since the release in late 2022 that there is a report of deceptive content generated by AI. It was basically used for praising Israel's handling of the war in Gaza. It's used for political disinformation campaigns, but it can also be used for fake news in general. They are a bit scared, especially with the elections coming up in Belgium right now, that it's going to be used to sway political elections.

Speaker 2:

Security executives at Meta also said that the generation is not good enough yet to go undetected, but I believe there's going to be a time when it will be very, very difficult to detect, and that will bring some significant issues with deceptive content.

Speaker 2:

Yeah, deceptive content. Because Meta is a huge part of everybody's life, social networks in general. For example, imagine a fake picture of you, where it's fake and it's not you, but it shows you did something wrong, and it's put out on a social network. You can't stop it. People will take actions, maybe against it. Okay, we're not super important, but imagine that for political persons, like the video of Obama saying a lot of stuff when it wasn't really him, or Donald Trump. It's all fake, generated.

Speaker 1:

Maybe a bit of a side question: would you consider Slack a bit of a social network?

Speaker 2:

Uh, no, as long as it's internal, not really, no. And I don't think you can do public stuff in Slack.

Speaker 1:

No, because if you consider that, then I would say I was a victim of the deepfakes generated by Bart. So I'll put myself in the victim category. But if not, we can move on. I can, but I get a notification that apparently Murilo and Christophe are hard to hear.

Speaker 3:

I'm not sure. Yeah, maybe turn them all up, all of them.

Speaker 1:

The number three, yeah, like this. Maybe you can also move your mic, I think that will be better. Yeah, I should also speak a little softer.

Speaker 3:

No, but maybe on this: do you want to hear the deepfake for us, a little in action? Let's do it.

Speaker 1:

I feel like it wasn't an invitation, but I feel like what do you want to say?

Speaker 3:

What do I want to say yeah, or do you want to put music under it?

Speaker 1:

I'll let you take your imagination, let it go wild.

Speaker 3:

I'm just thinking.

Speaker 1:

Because I feel like if I give you guidance, then I cannot complain about what you did, you know. So I'll let you do it and then I'll criticize you after. Okay, I think the problem with the music on it is that it's maybe not the most SFW content, if you see what I mean. I see, I know what you're saying. I'm gonna quickly play it. Okay. But like, take your kids out of the room. I'm gonna just look towards Alex to see if something is moving on our equalizer.

Speaker 3:

That means people will hear it. You know, no, I know the song, but how can I be homophobic?

Speaker 1:

my bitch is gay. Hit a man in the top I see him in the top floor.

Speaker 3:

See him in the snake.

Speaker 1:

He's gay hug him, my brothers and say that I love them, but I don't swing that way. What can?

Speaker 2:

I say the man wow see, yeah, I would.

Speaker 3:

I would think it's real, to be honest. But to illustrate a bit the danger of deepfakes: this was trained on literally half a minute of speaking, and not even good quality, and you get this, where you can still tell, yeah, it's not really Murilo, but it's alike. Just imagine this two years from now; this is going to be a big problem. Arguably, if you had put a bit more rapping over everything... Can't wait.

Speaker 1:

Can't wait for that day. But arguably as well, if you had put more effort into it, right? Like, find nice samples.

Speaker 1:

If I had used like half an hour of good-quality audio, it would already be very good today. And I also think that if you are trying to do a song, it's going to be more noticeable, right? If it's just speech, just talking, true? So I also think it's a harder target when you're looking at lyrics of music and all these things. And, yeah, generating new music with the voices of real artists.

Speaker 1:

It's also a big topic. Oh yeah, that's Bart's second favorite topic, no? Generating AI-generated music from rap artists like Drake and all these people.

Speaker 2:

Murilo.

Speaker 3:

But it's a big problem. You say Facebook discovered these networks, but what kind of networks are we talking about? Like a political network? Oh no.

Speaker 2:

Just general. So this was based on social networks, but, yeah, a lot of.

Speaker 3:

I like that. So yeah, if you look at Facebook or Instagram, you get a lot of marketing about specific politics, for example in Belgium right now. But the question is a bit: if it's deceptive content... I think the problem is not necessarily deceptive content itself, fake news will always be there, but GenAI accelerates this. The amount of content you can generate that very much looks authentic, written by someone, is at a different scale.

Speaker 2:

If it's visual and it looks real, it's often taken to be real.

Speaker 3:

Yeah, that's a good point. I was thinking about text messages, but indeed.

Speaker 2:

So I was just wondering what the role of social networks is in this, because they get this responsibility. With great power comes great responsibility, kind of thing. But yeah, I think they need to do something about it.

Speaker 3:

And they're already trying to do stuff about it, but still, it's a very hard problem to solve.

Speaker 1:

Um, yeah. But I think, again, I remember the US election, and they were saying there were Russian bots and stuff. And this was before GenAI, right? But there was already an opportunity for people to leverage these networks. And, I mean, I'm not super into it, but I have heard in interviews and podcasts and whatnot that they would create fake profiles and sometimes use those profiles for other things.

Speaker 1:

Or they would organize events from two extreme opposite groups just to create friction, you know, and create conflict. So I think social networks were already a venue for misinformation and all these things, right? But I also feel like with GenAI, indeed, you just magnify this. Yeah, they say it's their first report since 2022, but still. And Meta is saying they identified this, but are they mitigating this somehow?

Speaker 1:

Are they investing heavily in this?

Speaker 2:

So they did mitigate several risks. They are working on it, but I don't know how far it is completely automated, to be honest.

Speaker 1:

So probably not. But also, even if it's not automated, how can you, even yourself as a human, without knowledge... yeah, how can you discern these things? Do you feel like we're going to get to a place where this is completely indistinguishable?

Speaker 2:

Yeah, completely is hard to say, but close to completely.

Speaker 1:

And what do you think we can do once we get to that?

Speaker 2:

To be honest, that's the... game over. Enough of social networks? No, I don't know. And Facebook, for example, or Meta in general, I think they really do their best to fix these issues, but that doesn't solve other issues, like if you were to send out propaganda in another country that's not monitored by Meta, for example. Yeah, I'm not going to go into detail about politics too much, but, for example, in a country where you want to create hate towards a certain group, you just release several videos of that group doing things that will target your group.

Speaker 2:

And visuals are really, really hard. Like I said, with text you can still say, okay, maybe they really said that or not. If governments start using it, then it's super scary. Yeah, indeed, I think.

Speaker 1:

Well, and do you think for video... are we already there, that we have videos so convincing that people will really...? No, not yet. You see the difference, but it's getting harder. Yeah, I think the tendency is just to keep getting harder and harder, right?

Speaker 2:

It's up to us to create good AI models that can detect it.

Speaker 1:

Yeah, and I think right now the solution is a bit, okay, we're leveraging what's in the hands of Meta and whatnot. But should we just rely on them, right?

Speaker 2:

Is it just that? Who else is going to do it?

Speaker 1:

Yeah, but should we just depend on them to really make things work, you know? Because I feel like right now we're saying, okay, Meta is the only one that can kind of save us, which is not a very good prospect, right?

Speaker 2:

No, because who controls Meta Indeed yeah.

Speaker 1:

So I feel like these things are very centralized. All right, one second, we're changing the infrastructure. We're debugging. All right, I got it, I got it. Alrighty.

Speaker 2:

Maybe moving on to another topic, maybe a more positive use of GenAI.

Speaker 1:

Depends how you look at it. That's right: Klarna using GenAI to cut marketing costs by 10 million annually. Alex, don't worry about it, we're not there yet.

Speaker 2:

I'm kidding, of course. But what's it about? Basically, Klarna is using GenAI, for example Midjourney, DALL-E, Firefly, to generate content that replaces the need for production teams, photographers, models, sets in general. So basically, weekly they update all of their images for occasions, like special events, Mother's Day, Valentine's Day, using the technology. They already generated more than a thousand pictures in the first quarter of 2024, and they reduced the development cycle for their marketing from six weeks to seven days, which is insanely fast.

Speaker 3:

What is Klarna, for the people that don't know? Payments. Online payments, yes.

Speaker 1:

And this is purely GenAI, but it's images?

Speaker 2:

Yeah, mostly images. They also use GenAI for other stuff. For example, they are now partnering up with OpenAI for their AI assistant for customer service, which will do the work equivalent of 700 full-time agents. Meaning, okay, what will they do? Will they fire those 700 people and replace them completely?

Speaker 1:

So I guess in both cases, in a way, we are cutting jobs, right? Because we're shifting jobs.

Speaker 3:

But this also sounds to me like a very, very big number, 10 million, for marketing. What I can really follow is coming up with a first concept, like a mood board: this is the direction we want to take. There, something like Midjourney or DALL-E very much helps. I'm thinking about this, give me some first sketches. And then you will still probably have someone who manually brings it together into a marketing campaign.

Speaker 2:

Yeah, To date and the complete workflow.

Speaker 3:

Yeah.

Speaker 2:

Somebody will still need to intervene, and you still need the creative thought, because GenAI is limited to what it knows, so it still needs input from a human to be creative, I believe. But I think, from what I understood, a lot of this 10 million is...

Speaker 3:

It's photographers and models and costumes and all these things. But that's a lot, I mean. And indeed, this is the forecast; this is also a bit of hype. 10 million is a lot. Yeah, that's what they say. But aside from the number, I do agree: the preparatory stages, the first phases of coming to a first concept for something visual, accelerate very quickly.

Speaker 3:

Yeah, with the technology that we already have today. But I would argue that the technology we have today is not ready for the end result.

Speaker 2:

No, I would agree.

Speaker 3:

I think what we've seen on all the social platforms is a lot of GenAI-generated images, and today there's very much a fatigue and cringe factor when anyone posts GenAI-generated images. I don't think it's good for a brand today to post GenAI-generated images.

Speaker 1:

Why do you think that is? Do you think it's because it doesn't look good? Like, why is it cringe today? Because it is clearly... there is no authenticity here?

Speaker 3:

I think with what we have today, you can very, very visibly see whether this is something that was done by an artist. And I agree, if you have something that is an end result, the preparation towards that end result is something GenAI can very much accelerate and make much more efficient.

Speaker 1:

I think for branding around visual concepts, for the end result, you need that human touch today. Yeah, and to me it's analogous to text: you need a tone, right? You need to be coherent; you don't want to do just random stuff.

Speaker 2:

I think there's a good parallel indeed. But like they mentioned, it's not that the development cycle is completely gone; it's not a half-an-hour job. It's still seven days, so it does require some thought and validation and all that kind of stuff. But still, six weeks to seven... what did I say? Six weeks to seven days. But that I can understand.

Speaker 1:

Yeah, yeah. Because I'm thinking: instead of posing people and taking a lot of pictures, where you have to hire models, get the costumes, book the venue, maybe you have some GenAI images, I mean hypothetically, I'm not sure if we're there, and then you still do the editing.

Speaker 1:

You still do this, you do that, to get what you want, right? And I still think that even using a system like this would be way cheaper, because the amount of people you don't have to pay, and renting the venue and all these things, is probably going to cut down a lot, and it's going to be much faster as well. And then I agree with you: if you have an idea and you want to sketch it out and see, you can also cut down the time quite a lot.

Speaker 3:

Yeah, and the same when it comes to text, like putting an interesting text on socials. And not only that, they say they also save money on translations. I can imagine they don't need to hire a team that...

Speaker 1:

Good point, sir. But then translations for Klarna is like... the chatbot kind of thing? Or what do you mean by that? Or like a message to put out on your socials, on all different channels, languages, locales. But then they don't spend on translation, because before, when it was like... what's the difference?

Speaker 3:

You have to pay a translator versus just running it through OpenAI. But you could Google Translate stuff, right, or DeepL or whatever. It definitely wasn't up to the same level.

Speaker 1:

It was not ready to be read by someone. But then to me it's like GenAI is just a translator, a very good translator, right? If Google Translate or DeepL or whatever was really good, then there would be no difference, right?

Speaker 3:

no, but there is a huge difference, because it's definitely not as good.

Speaker 1:

Yeah, yeah, yeah okay, yeah, cool, cool, cool.

Speaker 3:

Anything else on this one before we move on? What do you think the future of the creative space looks like?

Speaker 1:

Hot, hot, hot take, Bart. No, um, what do I think? So, for music, that's the one I thought about the most. Because of your aspiring to be a rapper, right? I thought you were gonna mention Taylor Swift. No, but actually I did talk to Alex a bit about this outside the podcast as well: as GenAI becomes better and better, and there is GenAI music, right.

Speaker 1:

I still feel like there will always be a space for human artists, because a big part of music is that you relate to what the artist is saying and feeling, and if it's just GenAI, even if it's the right words, you know that it wasn't someone that really went through it. I think it's not as relatable. It's kind of like when you watch a sad movie, okay, but then you watch a movie that says "based on a true story", and it has a different touch to it because it's more relatable, it's real. And I think that will always be there, the human need to connect.

Speaker 1:

You know, so like, oh, I know what you like, oh, you went through a heartbreak, I know what you've been through, and this and that. I feel like that will always be there. So maybe they will coexist, but I think for music there will always be a space for human-generated songs.

Speaker 2:

I think it's the same with how music has changed over the years, and it's the same with the creativity of creating marketing campaigns. You need to be creative. And again, GenAI is a tool, but it's not creative; it's limited to what it knows.

Speaker 1:

Music changes, but then I have a counterpoint there. We're saying creativity, right? But if I have a model that takes all the songs from Mozart and then creates a new one based on all these songs... okay, do that 10 times. Could it do it? I think, yeah, it will.

Speaker 2:

It will do it 10 times, but 10 times we'll probably get mostly the same answers.

Speaker 1:

Maybe, maybe not. But I guess my point is more: if I take five things that I know and I mix them in a unique way, am I being creative?

Speaker 3:

Which is basically what every artist is doing.

Speaker 3:

You know, and that's the thing: without GenAI, this is what you're doing. You like a certain genre, you like certain methods, and typically you didn't just crawl out of a cave and discover it all by yourself, right? You're inspired by all those around you. Yeah, indeed. But even if it's very constrained, okay, you have these hundred songs and you can only mix them in a way to create a new one, even if it's really that constrained, would you say that that's creativity?

Speaker 1:

I would say yes. And I feel like, in the same way, maybe that's what a model is doing, kind of, right?

Speaker 2:

A model is fixed. Like, when you train it, you create a fixed model.

Speaker 1:

It's just, if the input changes, the output will change. Yeah, but if you add some randomness to it, right, like some random noise, then the outputs are going to be different.

Speaker 2:

Yeah.

Speaker 1:

Right, and I feel like that random noise, combined with the patterns that you have before, will come up with something new. That will be a combination of things.

Speaker 2:

But then it will still need the creativity of the person. Well, no.

Speaker 3:

But the thing is.

Speaker 2:

Randomness, a little, but you still need to have the creativity and people who validate it.

Speaker 1:

Yeah, I think being creative and being good, I think they're different things.

Speaker 3:

Actually, I even heard... and I think that will be a difference: being good as an artist versus being in the creative space.

Speaker 3:

I think what will happen with all this is that we will value true talent more, like, featured by a real human, 0%. I think we will value authenticity more, but at the same time true talent, and for them there's maybe even a higher willingness to pay for their output than there is today. Kind of like with chess. But for all the other ones... if today you were involved in the preparation of creating a mood board for Klarna's next marketing campaign, it's a very uncertain future. Yeah, true, but you touched a really good point.

Speaker 2:

Like, if you would generate all of your music using GenAI tools, for example, you will have more budget remaining, because you don't need the entire production, and production time costs a lot of money. So you will have more money available for your marketing campaigns.

Speaker 3:

And real humans can't compete with that, because they just have less budget, or they need to be extremely good to stand out. Or be extremely good, yeah. And I think this is the case, but at the same time, how you describe it, you will use GenAI as a tool, and everybody will have to learn to use it as a tool to some extent.

Speaker 2:

That's what they always say: will AI make you lose your job? No, only the people who don't use it. That's been the saying for ten years.

Speaker 1:

We're talking about creativity and marketing, right, but if you said the same thing about programming, it's the same, right?

Speaker 2:

Yeah.

Speaker 1:

Like, it gets 90% of your code done. For the last 10% you need to really be there, but it still gets 90% done. Yeah, it's still a tool.

Speaker 2:

What's it called, that software tool? Devin, yeah. In the beginning I was like, oh snap. But yeah, again, there was a huge discussion regarding this topic.

Speaker 1:

Yeah, we discussed this a few times here as well. It's an interesting discussion, but at the end of the day, we still have our place, right? It's not fully replacing us. I hope so, for now. Okay, to move on to the next topic? Sure. Maybe we can just go down the list here. More on AI. I think last week we speculated that OpenAI and Apple were going to do a deal.

Speaker 3:

I think the week before, but yeah, it's been in the rumor mill for a while now. Yes, and I think now all the rumors have become more concrete.

Speaker 1:

Yes: Apple and OpenAI have signed a deal to partner on AI. What is this about, Bart? What is the deal?

Speaker 3:

Well, we don't know the exact contents. We know that there was a deal signed between Apple and OpenAI which basically allows Apple to integrate OpenAI's generative AI technology, and even they're not very specific, is it text, is it images, into their services.

Speaker 1:

Just services Into their services.

Speaker 3:

There's nothing very specific. We know there are some rumors that there was a lot of internal opposition within Apple, from John Giannandrea, I hope I'm pronouncing it correctly.

Speaker 1:

I don't think you are.

Speaker 3:

Who very much opposes using large language model-driven chatbots in Apple's products, has been vocal on it for a while.

Speaker 1:

Did he say why? This is a hard take, no? You don't want to use LLMs for chatbots?

Speaker 3:

Yeah, to be honest, I don't know his exact rationale. I can imagine, so I'm trying to put myself in his shoes: I think Apple tries to have very strong control over the user experience, to make sure it's a very good user experience, which is by default a bit difficult when you work with something that is probabilistic.

Speaker 1:

You have less control.

Speaker 3:

You more or less think it's going to be good enough, but it's hard to test all edge cases.

Speaker 2:

Yeah, they already use Siri. Well, they own Siri. I hope they use it to improve Siri, because, yeah, you said customer experience, but Siri is not the best customer experience at the moment.

Speaker 3:

When we were discussing this, when it was in the rumor mill, Siri is indeed the one that we brought up a lot. I think Siri can improve.

Speaker 2:

Yeah, I also saw in the article that they may use it for their search engines as well, maybe even like your internal search engines, for like, on your iPhone, when you're searching for stuff. Cross-document, all of that.

Speaker 3:

Yeah, yeah. I think everybody also hopes a little bit that if they do this to really build significant features, there's still some independence. We know that there are some rumors that Apple is building internal LLM capabilities, and that this is a bit of an accelerator to get them started on these things, and that at some point they'd switch to their own capabilities.

Speaker 2:

Doing what they do well: release stuff a few years later, and the new jackets.

Speaker 1:

But there is also the question of what this means for the Siri team at Apple, right? If they are looking externally for these things, do they still have a Siri team?

Speaker 3:

Well, yeah, I think Siri is really a product, and this is a tool in that project, right?

Speaker 1:

But then should they be working on how to integrate this tool better, so it becomes part of their product? Because today's Siri is a separate thing, a separate track, completely independent from the OpenAI models and all those things, right? So are they going to continue two separate tracks? If you work on Siri, you're a Siri engineer, and then you hear about this deal, how do you feel about it, right?

Speaker 3:

You can look at that optimistically and pessimistically. I think the optimistic view is: really cool,

Speaker 1:

I can now also use these tools to improve Siri. Yeah, I guess what I have a bit of a hard time with is using these tools to improve Siri, because to me the tool is Siri. It's the same thing, it's replacing it. I don't see why you would use one just to improve the other, right?

Speaker 3:

No, but Siri is much more than that. There are also all the actions that happen, because when you give an instruction, you're just thinking about the voice-to-text and interpreting that.

Speaker 1:

There's much more happening than that. That's true, that's true.

Speaker 3:

And I think one of the questions that Microsoft probably has is: what does this mean for us? They've very much leveraged their OpenAI partnership, and Apple is, to some extent, a competitor for Microsoft, so this raises some questions, of course. They used to have an exclusive partnership. Apple is also apparently exploring similar deals with partners like Google, so from Apple's side it's not an exclusive partnership with OpenAI either. Let's see where it gets us.

Speaker 1:

Indeed.

Speaker 3:

To me, the most exciting thing would be if next month I use Siri and it actually understands me on the first go.

Speaker 1:

True. In other words, I'll be really excited if I use Siri and it works.

Speaker 2:

Yeah, exactly.

Speaker 1:

If it does what it's supposed to do.

Speaker 3:

But what I sometimes wonder with Siri: for people in the US, native speakers, is it less of a problem than for me in Dutch?

Speaker 2:

Yeah, I think so. I haven't trained any model in Dutch for production use cases, but I heard that's really hard, because there's way less material available, especially for Flemish, because that's not quite Dutch either. Yeah, we actually had a researcher here whose team released...

Speaker 1:

Fine-tuned models for Dutch, and it was quite a lot of work, even the tokenizers: how are you going to split words, and all these things.

Speaker 3:

And then there are a lot of dialects. We have Christophe, for example; he doesn't know how to pronounce the G in Dutch.

Speaker 1:

He doesn't know. Okay, noted, he's from Ghent.

Speaker 2:

I'm limited by my capabilities. The H is becoming difficult.

Speaker 3:

I live in West Flanders as well, so I'm losing some letters of the alphabet. Good thing this podcast is in English. But when you talk to Siri, do you talk in Dutch? Typically I do, yeah. But do you actually try? I think when I speak to it in English it is even worse.

Speaker 1:

You think because of your accent? Yeah, I think so. Interesting. I don't know; in my opinion it should still be good. I know there is a difference, but it should still be good enough.

Speaker 3:

But what's super annoying with Siri is non-Dutch names. It doesn't know how to pronounce them, so you need to read them out literally, as if they were Dutch words. Dutch Siri doesn't know how to pronounce non-Dutch names.

Speaker 1:

Yeah, exactly. But that I expected. My name is Murilo; I mean, that's not how you would say it in Portuguese, right, in Brazil?

Speaker 3:

So, for example, Gauthier, a French name. If I want to call him via Siri, I need to say "please call..." with the Dutch pronunciation, otherwise it doesn't know who I'm referring to. Yeah, but how would you...?

Speaker 1:

what would you?

Speaker 3:

it's tricky, huh?

Speaker 2:

No, not really. When I use OpenAI's voice mode, it understands it fine. I think it's different. I'm not sure, it's just speculation, but probably the first thing Siri does is detect which language we're using, and based on that language it does the interpretation. On the other side, OpenAI goes word after word and analyzes each word separately. Well, not separately; it's a multilingual model.

Speaker 1:

Like the speech to text is on the word level and on the dutch is like it first determines the language and then it does text to speech. Uh, speech to text. You lost me sorry. Uh, yeah, but I feel like I don't know me personally. I've heard my name being my name being pronounced in many different ways and every time, if I have to I mean even if I'm talking to someone like oh, what's your name I never say like muriel right, like no one understands no, no, muriel, what's, what's so?

Speaker 1:

I don't say it correctly either. Well, "correctly"; I think I've been away long enough that I've given up on the idea of correctness. You've never corrected me, that's the thing. I remember when I was in the US, in one of the first classes, the professors were trying to be super nice, you know, and they're calling out the names to see who's there, and they're like "Martha Murillo", and I was like, oh.

Speaker 1:

I'm here. "No, how do you pronounce it?" "No, oh no, it's fine." "No, no, I want to say it how you say it." "Okay, it's Murilo." "All right, next, who is it?" And I was like, it's fine, but my mother would say Murilo, Murilo.

Speaker 3:

Murilo no.

Speaker 1:

Murilo, how much time do we have? That's okay, but that's the thing. We'll take this offline. But I think it's tricky. There are some names in Brazil as well, like João, which is very hard for people who don't speak Portuguese. It's like John in Portuguese.

Speaker 1:

Okay: João. Kind of, but the "ão", people have a really hard time with it, right? And I can imagine that it's going to be super difficult for these models to pick it up, so I'm not that surprised about that. But maybe one thing, I think it's a bit of a hot topic. It's from one of our colleagues. I'm not going to call them out, I'm not going to put them on the spot, I'm not going to throw them under the bus, but they weren't very positive about this partnership. They said, "oh yeah, I don't want to be hearing everyone talking to Siri on their phone all the time." He wasn't very optimistic, he wasn't excited about this partnership, let's say, because in his eyes the possibility of talking to your phone isn't exciting. That's a change, yeah.

Speaker 3:

But isn't that everybody's own thing, right?

Speaker 1:

Yeah, I think that's the thing. Again, it was a very brief discussion, right? And my position on this was: it's not that we're trying to create this new feature. The feature is there, Siri exists. It's just that it's not good.

Speaker 3:

But I think some people like it. I spend a lot of time in my car, saying, "hey Siri, please do this." But at the same time, if the capabilities were there that I could say, while I'm in the office, "hey Siri, please find a slot for me and this person to meet next week on that day," and it just works, I would immediately do it.

Speaker 2:

I would be very excited about this. Yeah, that's true. I would use it a lot more if it created normal sentences, with exclamation marks or question marks and stuff like that, where you just speak with a certain intonation, it captures it, and it makes a real message, like when you send a text message. Because, like you said, in the car, if you say something, you have to say "exclamation mark", "question mark".

Speaker 1:

"Question mark", "funny face", "wink emoji with tongue out". You know. True, true. But I'm still waiting for the day that everyone at the office is going "hi Siri", "hi Siri", "hi Siri". But one thing, actually, and I was thinking of your example of booking a slot: the thing I'm waiting for LLMs to get better at is asking follow-up questions, because most of the time they just do stuff. But I feel like in a realistic scenario you'd say, "hey, can you book a meeting with me and Bart for Thursday, whenever we're both available?"

Speaker 3:

And that is where the Siri team comes in.

Speaker 1:

Yeah, that's what they need to build, maybe. That's true, but I still feel like there needs to be a: "okay, you don't have any matching slots; shall I book over something? Maybe this meeting is the longest, should I book over that? Or do you prefer the morning or the afternoon?" I feel like that, and then...

Speaker 3:

I would like Siri to work like this. Then I can say, "don't bother me with these questions, just get the shit done." And it goes back to the point where it's going to be like emotionally abusing the AI. Does it have feelings?

Speaker 1:

You're going to be like, "Siri...", but it's going to be driving you: "hey, seriously." Yeah, you know what?

Speaker 2:

now just tell me.

Speaker 1:

Okay, everyone's getting on AI these days, even Raspberry Pi. Maybe before making the switch...

Speaker 3:

I think, what I've also heard about this, in this context of people being negative about it: I think it's also the public image around OpenAI changing. It used to be this new research frontier, good for humankind, and we've seen a lot of examples in recent months where the commercial side is prioritized over the R&D and safety side. I think this changes the public image a little bit, especially in the tech sector, about OpenAI.

Speaker 1:

Maybe, do you think this is inevitable, kind of the natural progression of things in a corporation? Or do you think they got greedy or something? What changed there? Because OpenAI stood for ideals, openness, and fairness.

Speaker 3:

It's hard to say; this is the million-dollar question. But I think: they state that the ethics around this, the safety around this, is very important, but in real life, what are they? They're a commercial company today, and that means you need to make your shareholders happy, which means maximizing profits, and that quickly becomes a focus. Then, if you're very passionate, you're probably still going to spend some evening hours on ethics and safety, but it very quickly devolves into this. Yeah, and that opens up the discussion: who decided it? Because, as far as I know, Sam Altman didn't want to do it.

Speaker 2:

He got, like, fired for it, and then suddenly... You didn't want to do what? Just disregard all the ethics and really focus on the business and profitability? He didn't want to double down on the profits, yeah, indeed. So yeah, the question again: who asked for it? Who...

Speaker 1:

Made the business decision. And I think the counter-argument is always: yeah, okay, but it's not like they were profitable, right, and they had to double down. So I think it's also: every company will value profits, right? It's the nature of the game. If you don't have profits, then you don't have a company, right?

Speaker 3:

I don't know, actually, if OpenAI is profitable. Yeah, really? I mean, they're burning cash. Yeah, that's true. Not sure. That's a good question.

Speaker 2:

I don't think so. They just raised like a whole lot of money to burn. So interesting.

Speaker 1:

But yeah, I think that's the question: what do they need to do to stay afloat, right? Do they need to double down that much? There's always a line to be drawn, right? I guess that's the...

Speaker 3:

That's the discussion here. And I think, what a lot of people also... because it started as a non-profit and then added a commercial branch. I think there is also the counter-argument to make: if it had remained a non-profit, they would never have been able to raise the amount of money they needed to get this rolling.

Speaker 1:

Yeah, true, true. It's usually like: we have this helicopter view where it's all nice and pretty, right, but when you zoom in, it's never that clear-cut. So, politics. Yeah, indeed. But what we can all agree on is that AI is hot. Everyone wants a piece of AI, even Raspberry Pis.

Speaker 3:

Is that right, Bart? Raspberry Pi wants a piece of the pie, or the AI pie? Indeed. I don't have a lot of information about this, but I thought it was an interesting headline. Raspberry Pi, for the people who don't know it, is a very small microcomputer. When you buy it, it comes without a case; it's basically an integrated CPU board, with HDMI, USB-C. It costs, depending on the type, around 50 to 60 euros.

Speaker 2:

Did the price drop again Because it was super high for a long time due to the chip shortage.

Speaker 3:

Yeah, there was a chip shortage. I think it dropped again; I'm not 100% sure. And now they have partnered with Hailo, a company I didn't know, actually, to introduce what they call an AI add-on, which I understand is a specific chipset to do matrix calculations on. It's still very affordable, I think the aim is around $70, but it will allow the Raspberry Pi to do local inference of AI models much more efficiently than it currently does.
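As a side note, the "matrix calculations" such an accelerator speeds up are the core of model inference: each layer of a neural network is essentially a matrix-vector product. A minimal sketch in plain Python (the layer sizes and numbers are made up for illustration):

```python
# One dense neural-network layer: y = W @ x + b, written as explicit
# loops to show the raw multiply-accumulate work that an AI
# accelerator (or GPU/NPU) parallelizes in hardware.

def dense_layer(weights, bias, x):
    return [
        sum(w * xi for w, xi in zip(row, x)) + b
        for row, b in zip(weights, bias)
    ]

# Toy 3-input, 2-output layer with made-up values.
W = [[0.5, -1.0, 2.0],
     [1.0, 0.0, -0.5]]
b = [0.1, -0.2]
x = [1.0, 2.0, 3.0]

y = dense_layer(W, b, x)
print([round(v, 6) for v in y])  # [4.6, -0.7]
```

A real model stacks many such layers with millions of weights, which is why dedicated matrix hardware is what makes local inference feasible on a small board.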

Speaker 1:

Which is interesting, right? Because I think today there's a lot of gen AI hype, but usually you still cannot do this on the edge. So there's still a big way to go. Just because we can do these things doesn't mean you can do them on your phone, right? If you go on airplane mode and you talk to GPT-4o, it's not going to work. So it's also interesting in that way.

Speaker 3:

It's also a little bit of a trend where you see hardware producers adding more capabilities around AI, which basically comes down to matrix calculations, on local hardware. Microsoft now announced their AI-powered laptops, for example, and you see these things coming to really local hardware. For me it's still a bit hard to see what my use case would be for AI on a Raspberry Pi. You need to start thinking about things like AI on the edge, image recognition; it could be cool for a home project, connected to your home camera.

Speaker 3:

Yeah, and Raspberry Pi is maybe also very much an educational tool, to play around with, tinker with, and learn how to build models, maybe. You mentioned the Microsoft AI-powered computers.

Speaker 1:

Have you seen the "perfect recall" AI feature? Yeah, have you seen that? I didn't put this here, but I did come across it. Basically, it's an AI feature, they call it perfect recall. The idea is that anything you can think of, you can just ask, and the AI will fetch it. Basically, I guess it takes screenshots.

Speaker 2:

Oh yeah, and it would be safe because it's on your own computer.

Speaker 1:

Yeah, but it's super sketchy. It's super Big Brother, looking over your shoulder, right? And there was a lot of negative backlash, but I'm surprised that they still announce these things. Yeah, I think people are just a bit scared: even if it would be truly private, like truly, truly private...

Speaker 2:

How do you know? Yeah, that's the issue. People just don't trust them.

Speaker 3:

If there were no privacy issues, I would very much welcome it. I often have this: oh shit, I ended up on that website, and I still remember a little bit, but not really, what I was doing. And then I could just ask, with a few words: it was something with a book in the background, and it was about this topic; what website was this?

Speaker 1:

Something like that, yeah. But then I guess we start to go into a Black Mirror episode again. Have you seen that episode of Black Mirror where people record their eyesight? They also show how, before you go to a dinner party, you can rewind to the last time you saw a person, and you can be like, oh yeah, how's your husband?

Speaker 3:

oh, yeah, how's this, you know.

Speaker 1:

But with people they don't know at all. Because they can always go back and refresh their memory, they always have this expectation of being really up to date with what the person is doing. And, I mean, it's a Black Mirror episode, so you can imagine what kind of turns it takes. But I think the practical issue with this is: even if they say, "I guarantee that it's 100% safe, it doesn't leave your laptop", that's cool... No? You don't think so?

Speaker 3:

I think it's a bit freaky. Yeah, it's freaky.

Speaker 2:

But like a lot of stuff is freaky in the beginning.

Speaker 3:

How I understand this works (again, my understanding, not sure if I'm 100% correct) is that it takes a snapshot of your screen every X minutes. It makes an encoding out of it, so it becomes searchable. So in the encoding there will be a representation of: ah, I'm looking at this window, it looks like that, it's about this topic. And then afterwards you can search all these encoded vectors for something that you're interested in, and that you can link back to a snapshot to understand what you were doing, and even rewind to the snapshot.

Speaker 3:

I think, I mean, there could be... and actually, you don't need to wait for it; there are tools out there for it already. You have, for example, Windrecorder for Windows, and you have Rewind for macOS, which basically already do this. I think they're both open source, but I'm not 100% sure. So it's interesting to see this happening, but indeed, from the moment it's a hosted service, it becomes very privacy-sensitive.
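The snapshot-encode-search loop described here can be illustrated with a toy sketch. To be clear, this is not how Recall, Rewind, or Windrecorder actually work internally; the bag-of-words "encoder" below merely stands in for a real embedding model, and the snapshot descriptions are invented:

```python
import math
from collections import Counter

# Toy stand-in for an embedding model: bag-of-words word counts.
def encode(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Snapshots": in a real system these would be OCR'd or captioned
# screen captures taken every few minutes; here, made-up descriptions.
snapshots = [
    ("09:00", "browser article about rust async runtimes"),
    ("09:05", "spreadsheet with quarterly budget numbers"),
    ("09:10", "website with a book in the background about gardening"),
]
index = [(ts, text, encode(text)) for ts, text in snapshots]

# Search: encode the query the same way, return the closest snapshot.
def search(query):
    q = encode(query)
    return max(index, key=lambda item: cosine(q, item[2]))

ts, text, _ = search("something with a book about gardening")
print(ts, "->", text)  # 09:10 -> website with a book in the background about gardening
```

A real implementation would OCR the screenshot, embed it with a neural encoder, and use a proper vector index, but the shape of the pipeline is the same: encode, store, search, link back to the snapshot.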

Speaker 1:

Yeah, I mean, I think so. We've just had the...

Speaker 3:

Ticketmaster data breach last week, millions of users' data exposed. And you can say, of course we're going to treat your data safely, but something just has to happen, and then you have a lot of detailed information out there about what you were doing on your laptop at 10 o'clock in the evening. Yeah, indeed. And I think it's also: what if the regulations change, and what if their policy changes?

Speaker 1:

And data and this and that. I think it's a bit freaky. But getting back to the Raspberry Pis and AI: NVIDIA has also done some work on AI hardware. Is that correct, Bart? "NVIDIA jumps ahead of itself and reveals next-gen Rubin AI chips" in a keynote release.

Speaker 3:

Yes, so there was a keynote by NVIDIA, I want to say yesterday; I'm not 100% sure. Where the CEO of NVIDIA (what's his name again? I forgot) a bit hesitantly already introduced the next generation of AI chips, following up a generation that has yet to come out. So two, maybe three months ago, they announced Blackwell, which is their next generation of AI chips, which I think will come out somewhere next year. And the next-next generation, which they gave some information about in yesterday's keynote, is Rubin AI. I like how the quote is...

Speaker 1:

I'm not sure yet whether I'm going to regret this.

Speaker 3:

Yeah.

Speaker 1:

That's a great keynote.

Speaker 3:

That's a great keynote. Yeah, so there's a lot happening there.

Speaker 1:

Maybe: did they explain, or did people speculate, why they were jumping ahead of themselves in this case?

Speaker 3:

I don't know. I can only speculate. I think NVIDIA has been the front-runner for a very, very, very long time. Do you think there is anyone catching up?

Speaker 1:

really?

Speaker 3:

I don't think so. In practice it always looks a bit like that, and then there's a new release from NVIDIA, and everybody's behind again. And this is maybe already a way to proactively say: okay, it's not just the Blackwell release that we're riding on; we actually already have very concrete plans for the next generation.

Speaker 1:

That is interesting, but to me it's very puzzling. For me this creates more questions than it answers, to be honest. You're already the leader, you already announced one thing; why would you, in a keynote... I don't know, in the way that it's put as well. But on to the tech corner, Bart, moving a bit away from AI, finally. Well, maybe it'll make a comeback. While we're on the tech corner here, Bart, I see Marker. What is Marker?

Speaker 3:

One second. So we are doing "a library a week, to keep the mind at peak". You like that?

Speaker 1:

Christophe, I like it. Bart came up with this on the spot, live. You can go back.

Speaker 3:

He's a part-time poet. Murilo told me that he forgot to prepare and asked me if I had an interesting library. So I went to GitHub trending, and this was trending. And it is actually something that I've frequently run into recently, especially with all the gen AI stuff that we're doing: you have a PDF document and you want to turn it into readable text.

Speaker 2:

Yeah.

Speaker 3:

What I used to do, and I'm talking two, three years ago, is turn this into plain text; we have quite a few libraries available that very easily do that for you. What I do today is turn it into a JSON file, which retains much more metadata, like: this text is at that position on that page. So you have a bit more context to give to an LLM, typically. And this is a library called Marker that turns your PDF very quickly into a Markdown file, and then you have all the context. Well, the structure; you have more structure.

Speaker 3:

Yeah, you retain tables and stuff like that. But I think for a lot of use cases, this is as easy to do as going from PDF to plain text. Here it goes from PDF to Markdown, but Markdown is a richer context than just plain text, and for a lot of use cases it's rich enough, I think. So it's interesting.

Speaker 1:

I didn't know this existed. I have some examples here, so maybe I'll just show it very quickly. This is the PDF, Think Python: How to Think Like a Computer Scientist, and this is the converted version.

Speaker 3:

So there you have it. Alright, looks pretty cool. We're already going to say thank you, Christophe, for joining us. He's going to wiggle out. Indeed, very fun to be here.

Speaker 1:

Thanks for being here. Hope to see you back some other time. Bye guys, good luck, cheers, thank you. So, interesting.

Speaker 3:

So yeah, here you see the markup: headers, bullet points, stuff like this.

Speaker 1:

This markup looks very nice. I would like to see a small snippet with a table or something. And also, how do they deal with images? There's another example here. It's a bit different.

Speaker 3:

But this already gives a lot of context to the LLM: what is this, how should I interpret it? This extra Markdown, like bullet points, URLs, is valuable context for an LLM. I sometimes do this as well when I scrape a web page, so an HTML page, which is basically XML, which is a lot, right, and expensive on your tokens. So I sometimes turn HTML into Markdown before passing it to the LLM.
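As an illustration of that HTML-to-Markdown step: a minimal stdlib-only sketch is below. Real pages would need a proper library (html2text or markdownify, for example); this toy only handles headers, paragraph text, and list items:

```python
from html.parser import HTMLParser

class ToMarkdown(HTMLParser):
    """Tiny HTML -> Markdown converter: keeps headers and list items,
    drops the tag cruft. A sketch, not a general-purpose tool."""

    def __init__(self):
        super().__init__()
        self.out = []
        self.prefix = ""

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.prefix = "#" * int(tag[1]) + " "  # '#', '##', '###'
        elif tag == "li":
            self.prefix = "- "

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.out.append(self.prefix + text)
            self.prefix = ""

    def markdown(self):
        return "\n".join(self.out)

page = "<h1>Title</h1><p>Some intro.</p><ul><li>one</li><li>two</li></ul>"
p = ToMarkdown()
p.feed(page)
print(p.markdown())
# # Title
# Some intro.
# - one
# - two
```

The Markdown output keeps the structural signal (what's a header, what's a bullet) in far fewer characters than the original tags, which is exactly the token saving being described.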

Speaker 1:

Ah, yeah.

Speaker 3:

And you still have the important context: what are headers, what are bullet points. That stuff it retains, but it removes all the other cruft.

Speaker 1:

Also, do you think you get better performance if it's Markdown instead of HTML?

Speaker 3:

I can't really judge, to be honest. There are two arguments. If you have HTML, which is XML, you have way more context to understand where something is on a page, for example. On the other side, there are arguments that some models perform worse with a bigger input prompt.

Speaker 1:

Yeah, I'm thinking very rationally here, right like. But uh, if you, if you don't know what the html tags mean, it's going to be very hard to really understand what. How the actual layout is right, like it just kind of adds noise in a way.

Speaker 3:

I don't know. With bigger models, I would assume that they understand what the tags mean.

Speaker 1:

Yeah, it depends a lot on the model as well, I'd say. Cool, switching gears a bit: the Signal EU market exit. What is Signal? So this is more of a...

Speaker 3:

Signal is a text messaging application, like WhatsApp, like iMessage. Yes, you open up Signal here... The way I think of it, very simplistically, is like WhatsApp, but privacy-focused. Yes, I think that's maybe a way to describe it, though I think WhatsApp would say that they're privacy-focused as well. Yeah, but is this even open source, or no?

Speaker 1:

Um, I'm not sure, to be honest. I think Telegram is open source. I'm not sure, to be honest. Have you...?

Speaker 3:

I think Telegram is open source. I'm not sure. But why are we putting this in the show notes? There was a statement by the president of the Signal app, Meredith Whittaker, on X. She says: "Let there be no doubt: we will leave the EU market rather than undermine our privacy guarantees. This proposal" (and she means the chat control proposal by the EU) "if passed and enforced against us, would require us to make this choice. It's surveillance wine in safety bottles."

Speaker 1:

"Surveillance wine in safety bottles." I don't understand what that last sentence means.

Speaker 3:

I mean... I think this is interesting, and I'm talking about this with not enough background knowledge. What I understand is that the chat control proposal enforces a framework where peer-to-peer texting applications, like Signal, like WhatsApp, whatever, should allow a backdoor for law enforcement from the moment there are actually good arguments to do so.

Speaker 1:

So basically, everything could be looked into as soon as the authorities have reasonable arguments to do so. Exactly. It's very iffy.

Speaker 3:

I think... Well, the counter-argument to that is that it's also super iffy to have such a major network where you can have all the criminal activity you want and you can impose zero control. Yeah, I think it's the fear.

Speaker 1:

I guess it's: once you open the door, right, once you turn on the tap, how are you going to control how much comes out of it, how people are not going to abuse it? In the US they had, I mean, with Edward Snowden, right, he was a big-time whistleblower, and they were showing how they...

Speaker 1:

They also had something similar, where under "reasonable arguments" the CIA could actually get access to your data. But what they saw in practice is that this was heavily abused. Everything was "reasonable": oh yeah, maybe, okay, let's take a look. It was not very clear what is enough reason for someone to go and look into your private information.

Speaker 3:

What is reasonable for people to bring up as an argument to say: okay, we need to look into this person's private data. Yeah, so, I'm a bit biased here: we've done some collaborations with federal agencies in multiple countries in the EU, and I know that this is not easy to do. It's a very big process to get access to someone's data; it's not taken lightly. I don't think we see today the same cases that we hear rumors about in the US.

Speaker 1:

Yeah, I do think that the EU is set up differently, right? Probably; I can't really judge that. But what do you think about this move from Signal? Do you think it's a bad move? I mean, I also see their side: if they really want to be the privacy chat app, they have to make these claims, right? Whether they're going to go through with it or not. Well, I think the argument against it...

Speaker 3:

Let's say it's fully peer-to-peer encrypted and there's no way for a government to have any control over criminals abusing this as their default communication method. Because you need to think about it: from the moment we know that someone is part of a very, very big drug or human trafficking organization, do you want to allow police, with a good procedure, good governance, to tap into their communication, to basically expose this whole network?

Speaker 1:

Because that is the upside. Yeah, I think that is the upside. And what we're saying, what Signal...

Speaker 3:

Signal is saying: we don't allow for this, we find privacy more important than that. And I think that is where we're at. You can also make the argument: if Signal should open up to this, if this gets enforced, they'll find another way. Sure, they'll find another way, but it is hard. Today there is no barrier: I install Signal from the App Store and I just message whoever, and we will never be able to trace it. From the moment that I need to start using other networks and these types of things, you need some actual expertise in how to send private messages. There are ways around it, but it's way, way less accessible. Yeah, no, I see your point, and I think, for that case... But those are the cases, I think, why it is important.

Speaker 1:

True. I mean, I think no one's going to disagree with you for that particular case, though I think there are a lot of people, like Signal, that value privacy more than that ability. But to me, when I look at this statement, I don't think they're talking about the case where you're in a crime organization and there is an opportunity to really do good.

Speaker 3:

Very thoughtfully speaking, right. But what you're saying is that at that point you have arguments to say that the thing that is doing good is more valuable than privacy, and I think that is the discussion that is going on. But yeah, again, maybe I'm wrong, but I'll be very surprised if someone still says: no, privacy all the way.

Speaker 3:

Nothing beats privacy, even if it can do this much good. Well, I think people also have this fear where they don't just think about that case; it's a slippery slope. If you open it up for this case, what is the future?

Speaker 1:

Exactly. For that case, I think it's one end of a spectrum, let's say. But then what if, I don't know, you do some petty crime? Is this enough?

Speaker 3:

What are you planning to do?

Speaker 1:

No comment. But you know, there's still a spectrum: okay, is this enough, and is that enough? And who's going to apply this? I think it's not so much the what as the how that some people have been more hesitant about.

Speaker 1:

And in this case, again, I'm not sure if this is Signal's stance, but that's what I would imagine most people would have a question mark, a hesitance or a problem with. And again, I'm not super involved in this, but if done properly, I think most people agree that this is something to go forward with. And this is the EU as well, eh? So also: how is it going to be applied in the different countries in the EU?

Speaker 3:

But I for one prefer this way, the EU way, where we say: let's create a framework around this via a democratic process, and then companies need to adhere to that framework. Whereas we have other examples in the US where they suddenly say: yeah, we don't really like TikTok, TikTok is banned. That sounds too ad hoc.

Speaker 1:

Yeah, yeah. I do think in the US it's a bit more...

Speaker 3:

Like, why don't you maybe create a framework of things that are okay, that are not okay, or that are an edge case? What do you need to actually comply with to be a good actor?

Speaker 1:

Yeah, but they're angry, they're angry. Okay. And regardless of the outcome, I do think it's important to have these discussions and keep having them, right, because things also change over time. One last topic here: tech debt. I have personally been thinking a bit about tech debt on my projects. I'm not contributing individually as much, but I am overseeing a bit more.

Speaker 3:

You're just creating debt.

Speaker 1:

I'm just creating debt, yeah, and just complaining there's too much debt. No, but I read this article as well, called "Reframing Tech Debt". It's a very short article, not really groundbreaking or anything, but two things in it kind of struck me. One is that they say tech debt is a tool, so it's not inherently bad or good. It's like when you're applying for a loan, right: it's a tool that you need to know how to manage, that can be very destructive if you don't use it well, but there are uses for it.

Speaker 3:

And the other thing they mentioned... Just to understand: what are the uses of tech debt?

Speaker 1:

I think it's, for example, when you don't know the whole context yet, right? So you knowingly take a shortcut so you can keep moving, creating technical debt.

Speaker 3:

Okay, I see what you mean.

Speaker 1:

The other thing that struck me a bit is that they mentioned that managing it well is, a lot of the time, more of a cultural challenge than a technical one.

Speaker 1:

Right. It's more about prioritizing the right thing, saying we value this, or where do we draw the line, and also dealing with a code base that has a lot of technical debt, however you define it. Just for the people that don't know what technical debt is: most people think of it as code that you don't want to touch, right? Things that, if you really want to fix them and make everything the way it "should be", quote unquote, will take a lot of work and won't deliver value directly. But they do mention that if you go into a code base where you feel there's a lot of "tech debt", a lot of air quotes here, you should assume good intentions: the context that you have today is not the same context people had when they first wrote the code. Also understand that organizations change and requirements change; something that is a quick fix today may be different tomorrow, right?

Speaker 1:

So there's a bit of a cultural aspect here, and what I wanted to ask you, Bart, is: when you talk about tech debt, how do you approach it? Do you draw lines? Do you have guidelines? Do you try to measure it somehow? Do you think that as long as it works, it's fine? Or, if you were leading a team, is this something that would be at the forefront of your mind?

Speaker 3:

So technical debt is difficult because it's a little bit of a vague concept, right? You could argue: let's say my team is working on a TypeScript-based application, and everything is up to date. A week later, 50% of your libraries are outdated.

Speaker 1:

You could argue that's technical debt, yeah, but I think that's inevitable technical debt in a way, right? Because it's inevitable; things can always be updated.

Speaker 3:

Things can always get updated, yeah. I like to think about it a bit in terms of lean principles: the activities that you're doing as a team, are they value-adding? And that means value-adding to the end user. If you're building an application, does this bring something extra to the...

Speaker 1:

end user.

Speaker 3:

If it does, that is value-adding. Then you also have non-value-adding activities, and you can categorize them a bit. Some of it is waste: something you probably should not do, that we're doing just because it became a thing to do somewhere, but that actually doesn't bring value to anyone or anything. And you have necessary non-value-adding stuff. And I think technical debt, as long as it doesn't impact the user... Maybe just, you mentioned the things that don't add value, and I think I know what you mean.

Speaker 1:

Yeah, but for someone that doesn't know what you mean, that can be very confusing: why do you do stuff that doesn't add value? Let's take a simple example.

Speaker 3:

I have a TypeScript-based application and my libraries are out of date, so I'm going to do the work to make sure that everything is up to date again and everything still works. For the end user, I didn't create any value: the application doesn't do anything extra or better, the user experience isn't better, nothing changed. So I didn't add value to the application, but I did something that was necessary, you could argue.

Speaker 1:

Yeah. So the value-adding is really towards the user, and the user is not going to see a new feature or an improvement in speed, maybe. Although maybe, now that you have the latest versions, you can build more features. And I think that is an important realization.

Speaker 3:

That there are these two types of things, value-adding and non-value-adding, and that you already make that distinction for yourself when you're planning work. And then, for the non-value-adding stuff, that you also challenge yourself: is this something that is actually necessary, or is this just switching to a new technology because that is the hype and the cool thing to do?

Speaker 2:

Yeah.

Speaker 3:

Because then maybe that is the last category: it's non-value-adding and it's actually waste, because it doesn't bring anything. Or you have the argument that we need to switch to this technology because the community for the other one is declining, the support will be less, and we really need to move off it.
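The three buckets described above, value-adding, necessary non-value-adding, and waste, can be written down as a toy triage. This is just a sketch of the idea; the backlog items and function name are made-up examples, not anything from the episode.

```python
# Sketch of the lean triage discussed above. Each planned task is either
# value-adding (the end user notices), necessary non-value-adding
# (maintenance you must schedule anyway), or waste (drop it).
def triage(user_visible: bool, necessary: bool) -> str:
    """Classify a backlog item along the lean principle."""
    if user_visible:
        return "value-adding"
    return "necessary non-value-adding" if necessary else "waste"

# Hypothetical backlog items, purely for illustration.
backlog = [
    ("add export-to-CSV feature", True, False),
    ("update outdated TypeScript libraries", False, True),
    ("rewrite in this year's hype framework", False, False),
]

for task, visible, needed in backlog:
    print(f"{task}: {triage(visible, needed)}")
```

The point of the sketch is only that the distinction is made explicitly at planning time, before the work is prioritized.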

Speaker 1:

Yeah, I see. The value-adding thing I also came across when people were discussing productivity, how to measure productivity. They said: well, the first option is measuring ROI, but it's a bit tricky to really measure ROI, so then you have approximate ROI. And if you don't have that either, then you actually measure value-adding tasks versus non-value-adding tasks, and the idea is: if you're spending most of your time on value-adding tasks, then you're very productive. When I was reading this, what I was thinking is: sometimes you have deadlines and you take some shortcuts, and you make some decisions that are very much value-adding, but you know that that decision will create work later that is not value-adding.

Speaker 3:

Because you create a shortcut. Exactly, and it creates value for the end user.

Speaker 1:

Yeah. So, for example, in your TypeScript project, or maybe in your Python project, you don't pin your dependencies, right? And you're like: yeah, I'm not going to worry about all this dependency management, whatever, just so I can ship faster. And you do ship faster. So, in a way, if you decrease the time and you ship a new feature, it's better. But then you know that you're probably going to have issues with it later; you know that you still need to do this at some point, you still need to pin your dependencies, you still need to figure it out, right? So to me that's a bit... and we always have deadlines, right, we always need to make progress.
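The unpinned-dependencies shortcut mentioned above is the kind of debt that is easy to detect later. A minimal sketch, assuming a pip-style requirements file; the `unpinned` helper and the requirement lines are hypothetical examples:

```python
def unpinned(requirements: str) -> list[str]:
    """Return requirement lines that lack an exact '==' version pin."""
    flagged = []
    for line in requirements.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line:
            flagged.append(line)
    return flagged

# Hypothetical requirements file: two pinned, one left loose to ship faster.
reqs = """
requests==2.31.0
pandas>=2.0
numpy==1.26.4
"""

print(unpinned(reqs))
```

Here the loose `pandas>=2.0` line is exactly the debt taken on to ship faster: it works today, but some future resolver run can pull in a breaking version.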

Speaker 1:

But to me it's: how much am I willing to, quote unquote, sacrifice to keep moving at that speed, or should I slow things down, you know? Or, for example, maybe you have one developer working alone and no one's even looking at the code, but if that person leaves the company, then out of nowhere you have a huge gap, right? Now someone needs to take over the code and the person's not there. Maybe you want to put some reviews in place, but then that also slows you down, right? So this is maybe not tech debt by the definition, but it is decisions you make that reduce the amount of value you add over time, while at the same time you are safeguarding for the future.

Speaker 3:

Let's say. And probably there's not one answer, right? I think what you're saying, more or less, is that you should not only do the value-adding stuff, but also the non-value-adding that is necessary. Yeah, and you need to prioritize that in a healthy way.

Speaker 1:

Yeah, and maybe it's okay to take a few shortcuts, right? Maybe you don't have to have everything 100%, you know, but there should be a line.

Speaker 3:

But what is also not that easy, when it comes to prioritizing these things while working with a large engineering team, is that, and maybe I'm exaggerating here...

Speaker 3:

Your typical engineer maybe sometimes focuses a bit too much on the lines of code versus what it actually brings to the user. Yeah, I agree. And the definition you should keep in mind is: value-adding is what is value-adding for the end user. There are other activities that may be very important, that you need to prioritize somewhere in between as well, but they might not be value-adding to the end user, and that categorization, I think, is an important one. Yeah, that's true.

Speaker 1:

And, like you said, you can have something that's value-adding for the other developers, right, the developer experience is better, but that's not really value-adding to the end user.

Speaker 3:

Take a simple example: I find it important that there is a linting tool on the application that we're building with this hypothetical team. This is not value-adding; it doesn't do anything for the end user. Exactly. So it's not value-adding, but we do find it necessary, so we need to schedule it into our priorities somewhere. Indeed, and things like that...

Speaker 1:

And this is a concrete example, but I think we can also come up with other examples, right? Is this something you want to enforce? Because some people are going to say: well, this linting tool is slowing me down, because it's asking me to add type hints everywhere, right? And what I'm always kind of debating is: okay, is this something I want to pay down now, rather than have it become a mess, so that when someone needs to change it later it's like: whoa, what is this, what is that? And I feel like those are the trade-offs. When I read this and the productivity article, those are the kinds of questions that come to my head, and I'm just curious how you see this as well. I don't know if that was also an answer. I think different contexts will also require a different amount of focus.
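The type-hint friction mentioned above is small in practice. A hedged before/after sketch, with a made-up function, of what such a lint rule typically asks for:

```python
# Before: the untyped quick version. A strict linter or type checker
# would flag the missing annotations; the next reader must guess the contract.
def total(prices, discount):
    return sum(prices) * (1 - discount)

# After: the same logic with the type hints the lint rule asks for.
# The extra keystrokes now are the readability debt being paid down early.
def total_typed(prices: list[float], discount: float) -> float:
    return sum(prices) * (1 - discount)

print(total_typed([10.0, 20.0], 0.1))
```

Behavior is identical; the annotations only make the contract explicit for whoever touches the code next.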

Speaker 3:

But there you have it.

Speaker 1:

Last point, last point. Shoot. We're working on a newsletter. Oh yeah, we are working on a newsletter, that is true.

Speaker 3:

A bit of context there. This is very much an experiment, so set your expectations. It might never happen. No, you mentioned it on the podcast, now it has to happen. No, I'm explicitly saying it might never happen, and everything's fine.

Speaker 1:

No, but you know it's a one-way door. Once you create expectations, all our thousands of listeners are going to be eagerly waiting.

Speaker 3:

But what happens is: in preparation for the podcast, we scroll a bit through the webs, the worldwide webs, yes, for interesting articles. We put them in the draft of our show notes before we go live, and we go over these things, most of the time beforehand.

Speaker 3:

What we did in the last week is make a bit of a crawler that crawls all these news sources and already prepares a draft for us, using some LLM technology with a lot of specific prompting. It creates a first draft that we still need to fine-tune a little bit. While we were working on it, we were thinking: maybe it's actually cool to just turn this into a newsletter, where a subset of these topics we will touch upon during the podcast, and when we release the audio version of the podcast, we release it together with the newsletter.
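The pipeline described above could end in a draft-assembly step roughly like this sketch. Everything here is assumed: the `build_draft` function, the category names and URLs are made up, and the actual crawling and LLM summarization are deliberately left out.

```python
# Toy version of the final step of such a pipeline: take already-crawled
# headlines, group them per category, and render a markdown newsletter draft.
# A real version would fetch the sources and call an LLM for summaries;
# both of those stages are stubbed out here.
def build_draft(items: list[tuple[str, str, str]]) -> str:
    """items: (category, title, url) tuples, hypothetical data."""
    sections: dict[str, list[str]] = {}
    for category, title, url in items:
        sections.setdefault(category, []).append(f"- [{title}]({url})")
    lines = ["# Newsletter draft", ""]
    for category, bullets in sections.items():
        lines.append(f"## {category}")
        lines.extend(bullets)
        lines.append("")
    return "\n".join(lines)

draft = build_draft([
    ("GenAI", "Apple-OpenAI partnership", "https://example.com/a"),
    ("Privacy", "Signal pushes back", "https://example.com/b"),
])
print(draft)
```

The per-category grouping is what makes the resulting draft skimmable, which is the property praised later in the conversation.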

Speaker 1:

Yes, so I guess we can think of it as us discussing a bit more of the highlights, a TLDR. But if you're curious and want to know what else happened in the world of data and AI...

Speaker 3:

Then you also have the companion newsletter there. And I think, having had a look at the draft, it's a very good newsletter to scroll through the headlines and go: ah, this happened last week. Because it's a field where there's so much news coming out every week, and this allows you to skim all the headlines per category and know: okay, this is interesting, this I need to read in more detail. So it's a tool in your toolbox to keep up to speed.

Speaker 1:

Cool. So indeed, if you want to be a newsletter friend of the show, stay tuned. I think it's time to call it.

Speaker 3:

Yes, someone is hungry. My stomach is acting up. Okay, alrighty, thanks everybody for listening. Thanks, Bart's stomach, for waiting.

Speaker 1:

All your sound vitamins. You have taste in a way that's meaningful to software people. Hello, I'm Bill Gates.

Speaker 3:

I would recommend TypeScript. Yeah, it writes a lot of code for me and usually it's slightly wrong. I'm reminded, incidentally, of Rust here. Rust. Rust.

Speaker 2:

This almost makes me happy that I didn't become a supermodel.

Speaker 3:

Huber and Netties.

Speaker 2:

Well, I'm sorry, guys, I don't know what's going on. Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here.

Speaker 3:

Rust.

Speaker 1:

Rust. Data topics. Welcome to the Data Topics podcast.

Data and Cloud Business Unit Conversation
AI Impact on Marketing and Society
Human vs Gen AI Creativity Future
Apple and Open AI Partnership Discussion
Pronunciation, AI Ethics, and Technology Trends
Privacy Concerns With Future AI Technology
Privacy Concerns and Tech Debt
Navigating Technical Debt and Value-Adding Tasks