DataTopics Unplugged: All Things Data, AI & Tech

#75 Developer Productivity in 2025: AI Replaces Engineers, Biden’s AI Chip Regulations, UV’s Killer Feature, and Doom in a PDF

DataTopics


Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society.

In this episode, we delve into the big topics shaping our digital landscape:

Speaker 1:

This is how we do it. You have taste, in a way that's meaningful to software people.

Speaker 2:

Hello, I'm Bill Gates. I would recommend TypeScript. Yeah, it writes a lot of code for me, and usually it's slightly wrong. I'm reminded, incidentally, of Rust here. Rust.

Speaker 1:

This almost makes me happy that I didn't become a supermodel.

Speaker 2:

Kubernetes. Well, I'm sorry guys, I don't know what's going on.

Speaker 1:

Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here.

Speaker 2:

Rust, rust, rust. Data Topics. Welcome to the Data Topics podcast.

Speaker 1:

Welcome to the Data Topics podcast. Doom. Hello and welcome to Data Topics Unplugged, your casual corner of the web where we discuss what's new in data every week. From Doom to sins, everything goes. A very dark episode. Check us out on YouTube, feel free to leave a comment, LinkedIn, all the works, or talk to us via email at datatopics@dataroots.io. Today is the 14th of January of 2025. My name is Murilo, and I'll be hosting you today, together with Bart, hi, and Alex behind the scenes making everything happen. She's waving. No, she's not really waving today, but she's waving now. There we go. One day, Alex will join us on the pod. One day. How are you doing, Bart? Good, yeah. How was your weekend?

Speaker 2:

Um, quiet. Quiet. Quiet is good. Quiet is good, yeah. Played in the snow. Oh yeah, we don't often have snow here. But it's not a lot.

Speaker 1:

I feel like, um... because when I went to bed it was snowing, and when I woke up it was snowing. There was a good layer of snow, which is not very common. But actually last year the same thing happened, around the same time. I know because I went to Tenerife or Gran Canaria, something like that, for better weather.

Speaker 1:

Yeah, but I remember I put the Christmas tree out the day before, and I think it snowed so much that the tree fell. There was a pile of snow or whatever, and they didn't collect my tree, the Christmas tree.

Speaker 2:

Oh, like that, yeah, yeah, you put it in the garbage outside.

Speaker 1:

Yeah, you put it outside for them to collect. In Belgium, for people that are not aware, they come and collect your Christmas tree, and this was actually yesterday in my neighborhood. So that's why I kind of know the timing for the snow; it was a bit the same, because I remember there wasn't a lot when we were leaving. But yeah, I also noticed the days are getting longer, the sunlight time.

Speaker 1:

It's already getting longer, and still very cold, though. It's good that the daylight is getting longer, because I wake up in the morning and it's like, oh, it's light. It's the little things in life, you know? It's like, okay, it's going to be a good day. Oh yeah, cool. Last weekend, and maybe already segueing to the first topic, there was the Brussels Motor Show. It started last Friday and went over the weekend. In my project, well, the Car Expo Brussels. The car expo, yeah. So it's an auto show, so different brands come, they...

Speaker 1:

They show their latest models, their prototypes, their proofs of concept and all these things. There are some games as well, and apparently, I didn't know this, but it's quite big these days in the European car market. Yeah, I heard, because there are not a lot of places where they have all the brands coming together. It used to be Geneva.

Speaker 2:

There was also a car exposition there, which was really a bit the networking event where all the CEOs of the car brands went, etc. But it didn't survive COVID. Ah, really? Yeah. And the Brussels car expo did, and so it's becoming a bit the de facto event of the year for car manufacturers in Europe.

Speaker 1:

Yeah, yeah. So there are some interesting things. I went there yesterday, since I'm working on a project for an AI configurator. I'll get into that in a bit. I learned some things. Some people actually buy the cars on the spot: they see the car, they go in and they're like, okay, I want this. Because they also have better deals there, you get a lot of discounts, right?

Speaker 2:

Yeah.

Speaker 1:

And sometimes people say like, oh, this model, because a lot of people went in, opened it, touched the stuff, so you can also get another discount on top of that. So there are these things that happen. Tesla was there, with the Cybertruck and the Cybercab. Oh yeah, it was there? Yeah, it was there. It's not street-legal here, right?

Speaker 1:

No. So they didn't even... For some cars they actually had an iPad where you can click and interact. For the Cybertruck it was just a paper with some information, some specifics. Have you ever seen a Cybertruck in real life? Not in real life, no. And you've seen it in images and stuff online? I've seen one explode next to a Trump hotel. Not really? No, I didn't see that. The video? No, no, no, I didn't see it. Filled with fireworks.

Speaker 1:

Yeah. Oh well, no, I didn't know that. Segue, segue, yeah. Did you like it, by the way? Did you like the design of the Cybertruck? I'm neutral about it. But would you buy one for yourself? No. Yeah? If it's the same price as the car you have now, would you buy it? The same price? So same everything, just the design?

Speaker 2:

If it's just the design... I'm not that sensitive to car design, so I'm probably the wrong person to ask.

Speaker 1:

I feel like I'm not either. I'm very functional when it comes to cars. But to me it felt like when you're playing a video game and that person's internet is very slow, so it's very pixelated. You see all the edges of the blocks and all these things. It's like a Nintendo 64, right? Yeah, something like that. It's a bit strange. They also had a Cybercab, so basically it's like a car, but there's no steering wheel, so it's just two passenger seats. Okay, cool. Yeah, so I also thought it was interesting.

Speaker 2:

I also like... only two seats?

Speaker 1:

Two seats and then the screen. So it's a small car. There's no backseat?

Speaker 1:

There's no backseat, but it was just like normal size. Okay, so you can extend your legs, I guess. Yeah, but it's, I guess, supposed to be just a cab, like a Waymo, but a car just for this. So yeah, there are some games and stuff, but there's also the AI Configurator Hub. That's the project that I helped with. Nice. The idea is to do this with AI, so the first prototype, the MVP, was also displayed there, and it's text.

Speaker 2:

Or, I mean, is it text or is it voice? It's text?

Speaker 1:

It's text. The voice could be an easy iteration, exactly. But I mean, adding voice is just saying, now we have speech-to-text, right, and then you kind of go with the same flow once you're there. So it was...

Speaker 2:

So it's like: I'm a guy with a family, three kids, I have only one car. What kind of car would you recommend? Is it like that?

Speaker 1:

Yeah, exactly. And then you can say like, oh, I just want the cheapest, or I want something sporty, or I want something X or Y, I want hybrid. I think that's actually very important, you know? Like, do you want diesel, petrol...

Speaker 2:

And could it also be like: I hear there's this type of van or that type of van, and then you say, okay, let's go for that one, and you can also configure it, like, I want that color? Yes. So, a lot of the time, if it's too broad...

Speaker 1:

So the way we set it up: if it's too broad, it would ask some more guiding questions so you can narrow it a bit. And actually, in the back we have the possibilities, right? It doesn't come up with stuff from its head; the possibilities are already pre-configured. If it's too broad, it will say, okay, maybe select one of these things. Cool. And then, if it's narrow enough, you have tiles that you can click on, and after that you can still customize it: add the sunroof, or add this, or okay, I like this, but can you change this for that? So that is also possible.
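
The narrowing flow described here can be sketched roughly like this. It is a hypothetical toy version: the catalog, the field names, and the two-option threshold are all invented for illustration, and the real configurator presumably sits on the brand's actual option data.

```python
# Hypothetical sketch of the configurator's narrowing flow described above:
# the assistant only ever selects from pre-configured options, never invents cars.

CATALOG = [
    {"model": "Family Van",   "fuel": "hybrid", "seats": 7, "price": 42000},
    {"model": "City Hatch",   "fuel": "petrol", "seats": 5, "price": 19000},
    {"model": "Sporty Coupe", "fuel": "petrol", "seats": 2, "price": 55000},
    {"model": "Eco Compact",  "fuel": "hybrid", "seats": 5, "price": 24000},
]

def recommend(**criteria):
    """Filter the pre-configured catalog; if the result is still too broad,
    return a guiding question instead of a recommendation."""
    matches = [car for car in CATALOG
               if all(car.get(k) == v for k, v in criteria.items())]
    if len(matches) > 2:
        return {"ask": "Do you prefer hybrid or petrol?", "options": matches}
    return {"ask": None, "options": matches}

# Too broad: every car matches, so the assistant asks a guiding question.
step1 = recommend()
# Narrow enough: show clickable tiles for the remaining matches.
step2 = recommend(fuel="hybrid", seats=5)
```

The key property, as described in the episode, is that the model's output space is the pre-configured catalog, so it cannot recommend a car that does not exist.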

Speaker 1:

So it was a very interesting project. This happened... well, it's still happening, technically, because the Brussels Motor Show is still going on. And I was talking to Jonas, actually, and we were discussing how it took me back to my very first project, which was for another car manufacturer. Back then it wasn't called Gen AI, but it was kind of Gen AI: it was QnA Maker. So that's the first thing that I wanted to reflect upon. So a Q&A chatbot, or really a Q&A generator.

Speaker 2:

Text generator.

Speaker 1:

It was a chatbot, but it wasn't really a chat, right? You ask something, it gives an answer. Then you ask something, it gives an answer. It's almost like you restart the conversation after every question and answer. Okay, I see. Right. I think nowadays, with ChatGPT, the context and the cohesion of the conversation are way better. It's more of a natural dialogue. Exactly. So the algorithms back then... there was probably a lot of rule-based stuff, right? A lot of deterministic tree traversal.

Speaker 1:

Exactly. So that's the first thing I wanted to bring up. Maybe I'll share, put this on the screen. Yeah, for people that are not familiar, this is the auto show that is happening now, as Bart mentioned. I should have put this up before, but what...

Speaker 1:

I wanted to show is QnA Maker, which still exists, apparently. Oh, it's actually called... yeah, QnA Maker, and now it's part of the AI services within Azure, but before it was called LUIS, which stands for Language Understanding Intelligent Service. So it was called LUIS. They had a pretty UI, basically. The engine was kind of there: you just feed it questions and answers, you have metadata tags and all these things, and then you had the nice UI that Azure provided, right? So basically you have a whole bunch of questions and a whole bunch of documents that you provide, and the idea is that it would match stuff for you, and you had the formats and all these things. Very interesting.

Speaker 1:

But I still remember, even back then, there were a lot of challenges, right? If you asked, what's the best company in the world, it would probably say the company I was working for. But if you asked, what's the worst company in the world, it also said the same company, right? Because it's just matching keywords. A few rules that are not set up correctly, exactly, right. So you need a lot of examples in the Q&A stuff. And actually, looking back, is this RAG? Retrieval-augmented generation. Well, there was no generation, but there was the retrieval part. Retrieval-augmented, yeah. Yeah, something like that.
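
The "best company" versus "worst company" failure described here is easy to reproduce with a naive keyword-overlap retriever. This is a toy sketch with made-up data, not the actual QnA Maker implementation: it scores each stored question by shared keywords, so flipping "best" to "worst" barely moves the score.

```python
# Naive keyword-overlap retrieval, as in early Q&A systems: score each
# stored question by how many non-stopword words it shares with the query.
# Toy data invented for illustration.

qa_pairs = {
    "what is the best company in the world": "Acme Corp, of course!",
    "what cars does the company sell": "We sell vans, hatchbacks and coupes.",
}

STOPWORDS = {"what", "is", "the", "in", "a", "does"}

def retrieve(query):
    q_words = set(query.lower().split()) - STOPWORDS
    def overlap(stored):
        return len(q_words & (set(stored.split()) - STOPWORDS))
    best_question = max(qa_pairs, key=overlap)
    return qa_pairs[best_question]

# "best" vs "worst" barely changes the keyword overlap ("company" and
# "world" still match), so both queries retrieve the same answer,
# which is exactly the failure mode described in the episode.
```

Modern retrieval pipelines mitigate this with embeddings and a generation step on top, but the sensitivity to phrasing has not fully gone away.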

Speaker 1:

But yeah, that was it back then, right? And fast forward now to 2025, Gen AI... I think it's much more advanced, but it was an interesting reflection, you know? Like, this is where we were. Have you been to the motor show? Not now, a few years ago? Yeah, I think even pre-COVID, to be honest.

Speaker 2:

Yeah, it was when I was in the market for a new car.

Speaker 1:

Ah, wow. And since then you chose a car? You're happy with it now? So you went to the motor show to look at cars, and to not have to visit 101 dealerships. Yeah, brands. And that's the nice thing, right? Like, you see what's out there.

Speaker 1:

Yeah, indeed. I think if you really like cars, though, if you're very fanatical, it's cool, but at the same time I feel like maybe it's too busy for you. Maybe you want to go to a dealership, because you have one-on-one attention there. Although, probably, if you're very enthusiastic about cars, you don't want to go to dealerships, take a chair, put it in front of the Renault Clio and really stare at it.

Speaker 2:

Yeah, no, but maybe like, go in, take it for an hour, stare at it in silence, appreciate it.

Speaker 1:

Yeah, no, but maybe go in, take it for a test drive, you know. Talk to someone that can give you all the information about this car without splitting their attention, you know. So... Now maybe on to the more timely news, the bizdev thingies. I see here that the Biden administration proposes new rules on exporting AI chips, provoking industry pushback.

Speaker 2:

What is this about, Bart? Let me open the link, because I want to say it's two days ago. Is that correct? One day ago, the 13th. So it's yesterday that the Biden administration proposed a new set of rules for exporting AI chips. Okay. Um, there's a lot of...

Speaker 2:

I think the discussion is mainly based on national security. Okay. And it's mostly aimed against China, from the US. Okay. And it's a regulation that basically limits the amount of quote-unquote AI chips that can be exported, and it impacts the big US companies like AMD, like NVIDIA. Yeah. They basically have to limit their sales to non-allied countries. And the Biden administration...

Speaker 1:

But Trump is the president-elect, and he hasn't taken over office yet.

Speaker 2:

I don't know how easy it is for them to reverse it.

Speaker 2:

To be honest. And like the title says, it's a proposal, so I doubt this version is going to be final. But it would mean that, for example, for non-allied countries, it would limit what they can order to 50,000 to 320,000 chips, and if you want to go beyond that, you need to have some sort of license. There are also key allies, they mention the UK, Japan, there are a number of others, and they get unrestricted access. Okay. I think there was a big backlash from the US landscape, especially from NVIDIA. I think NVIDIA was very explicit about it, yeah, that it would be bad for innovation, that it would hamper the competitiveness of the US chip landscape. And the other point, I guess... yeah, I think so.

Speaker 2:

I think, from the moment that you say NVIDIA is not allowed to export this anymore, or only in such a limited way that it basically handicaps anyone that wants to buy it, it creates a temporary pause for these countries like China. Yeah. But it basically signals: get your shit in order, you need to do this yourself. Maybe it will take them two years, maybe five, maybe ten, but that means that after that time, they don't need the US anymore. True, that's true.

Speaker 1:

I mean, it's very short-term, actually. Yeah, I feel like it's a bit short-sighted, right? It doesn't fix the... yeah, I see. And what do you think of the...

Speaker 2:

Because you said... you speculated that it's a security thing? Well, I think that's more or less how I would describe it.

Speaker 1:

Yeah. And the security thing, is it because... wasn't there something on the supply chain, that people were tampering with the chips? There was a story or something like that. Is the security in that sense, or what were the security concerns of accepting chips?

Speaker 2:

I think it's mainly about building up AI abilities, and potentially being able to use them in adversarial settings, like in warfare and cyberattacks, etc. I see, okay. I think that is the more explicit point that is being discussed in the community.

Speaker 1:

I think the less explicit point is also just competitiveness on AI, right? Yeah, yeah. And you do see that. I think last week we talked about DeepSeek V3, which is a Chinese model.

Speaker 2:

Which is a Chinese model, indeed. And DeepSeek is a good example: they use much, much less resources, but in the end they're training on NVIDIA chips, right?

Speaker 1:

Yeah, yeah. It would hamper them: if they can't just order chips like that, you can't really invest, right, you can't really build the capabilities. Yeah, indeed, indeed. And now maybe to play a bit on the words, because you mentioned AI development, and I saw here a topic, AI development, but instead of developing AI, it's AI development teams. I guess that's what you meant, because I put a link here.

Speaker 2:

AI development teams, yeah. There were two notable things this week, I think. First, Zuckerberg. It was in the news a lot, most people will have noticed, but he announced that Meta plans to replace mid-level engineers with AIs this year. Yeah, but...

Speaker 1:

So what I'm reading when I see this is: he basically wants to replace people with AIs. Well, I guess, technically, in practice it's like: I'm going to give you coding assistants and agents and all these things, and I'm going to fire three people, because now you can develop as much as four people.

Speaker 2:

I think that is a step up. I think what he's also alluding to, with agentic AI, is that at some point he'll try to really, completely eliminate the need for some developers. Not all, of course. But yeah, I'm also surprised that it's not just to supercharge existing developers; what he's really hinting at is to replace them. Yeah, and why do you think mid-level here?

Speaker 1:

Why not juniors?

Speaker 2:

I think what he just means here is that it's more than just junior skills. Interesting. It is mid-level skills.

Speaker 1:

Interesting. What do you think of that? Do you think that's possible?

Speaker 2:

Well, we discussed Bolt.new a bit last week, which is this LLM tool that allows you to very easily build front-end and back-end applications. Focused on front-end, but you can do back-end. I think what this shows is that, with the right setup, agentic AI on code development will get you very far.

Speaker 1:

Yes.

Speaker 2:

That is it, a bit. And of course there's a lot of discussion on how maintainable it is, et cetera, et cetera. But we are today much closer to having agentic AI that can do code development based on just describing the feature to add to this code base. We're much closer to that, way closer than we were two years ago, for sure. So to me, that approach, where you have an existing code base and you point an agentic AI at it to build a feature on top of that, be it front-end or back-end or whatever, it will come. Yeah, yeah, that's true.

Speaker 1:

I think so. I think... well, there's another article that I wanted to link with this. But I agree, though I don't know if you can go as far. We also talked a bit last week about how the expectations for people to be productive will increase. And maybe if you read Mark Zuckerberg's announcement, it's a bit like that as well, right?

Speaker 1:

Like, if you're going to replace mid-level engineers with AI... basically, another, less clickbaity way to think of it is: you're expecting the people you have to be more productive, so you don't need as many people. Well, yeah, you could phrase it like that, right? I feel like, in a way, we're kind of saying the same thing, but one is more catchy.

Speaker 2:

The other thing that I linked there is a very similar one. It's from Microsoft, where Microsoft basically announced that they will form an internal development-focused AI organization. It will be aimed at building AI solutions, but also at AI development within Microsoft, to basically fast-forward AI-supported code development. So they're doing the same thing. I think the way that they report on it is much smarter: more formal, a bit more thought through. Yeah. Zuckerberg is basically saying: all the people that are building these skills for me, I'm going to fire them.

Speaker 2:

Because it will be cheaper. But they're more or less working towards the same goal.

Speaker 1:

Yeah, yeah, indeed. I feel like, in a way, they're different ways to tell the same story. Yeah, right. One is more responsible, maybe: we're forming a dev-focused organization, let's adapt more to this. And the other one is saying: adapting means letting people go. Yeah, yeah. Do you think this is the future?

Speaker 1:

Do you think this is going to stick, I guess? Or do you think this is feasible for any organization? Can you reflect a bit on that?

Speaker 2:

Well, we're here talking really about code development. Yes. Well, like I was saying, I think we are much closer to that today, especially if you start from: I have an existing code base and I want to start building on that with new features. If you have tools that do code generation, that can also execute that code, that can see tracebacks, that can build tests for you, et cetera, then you're very close to that. And I think with stuff like Bolt, and stuff like Create.xyz, we see that today, but it's only the first generation of those tools.

Speaker 1:

Yeah, that's true.
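
The loop described above, where a tool generates code, executes it, reads the traceback, and tries again, can be sketched minimally like this. The `propose_fix` function stands in for an LLM call and is purely illustrative: it only knows how to patch one pre-planted bug, so the loop is demonstrable without any model.

```python
# Minimal sketch of the generate -> execute -> read-traceback -> fix loop
# behind tools like Bolt. `propose_fix` is a stand-in for an LLM call.
import traceback

def run_candidate(source, namespace):
    """Execute candidate code; return the traceback string on failure, else None."""
    try:
        exec(source, namespace)
        return None
    except Exception:
        return traceback.format_exc()

def propose_fix(source, tb):
    # Stand-in for "send source + traceback to the model, get fixed source".
    # Here it just patches one known bug so the loop terminates.
    return source.replace("total = total + item", "total = total + len(item)")

def repair_loop(source, max_rounds=3):
    for _ in range(max_rounds):
        namespace = {}
        tb = run_candidate(source, namespace)
        if tb is None:                      # code ran cleanly: done
            return source, namespace
        source = propose_fix(source, tb)    # otherwise, feed traceback back
    raise RuntimeError("could not repair candidate code")

# Planted bug: adding a string to an int raises a TypeError on first run.
buggy = "total = 0\nfor item in ['ab', 'cde']:\n    total = total + item\n"
fixed_source, ns = repair_loop(buggy)
```

In a real agentic tool, the fix step is the model, the execution step is a sandbox, and the loop is usually bounded by cost rather than a fixed round count.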

Speaker 2:

So to me it's that. I think up to six months ago I was very skeptical on this, but Bolt really changed my viewpoint a bit.

Speaker 2:

Um, and I think if we just see this as the first iteration, and we're three, four generations further... yeah, we're very close to this. And I think the big difference for an engineer is a bit this: what we typically do is, we tend to be very opinionated, like, this is how you build good code, and this is what we do. And I think, if you adopt these tools, the focus needs to shift towards very clearly expressing: these are the features that I want built, this is the functionality I want them tested on, and really expressing that very clearly. Yeah, and leaving the code generation a bit up to the tool.

Speaker 1:

Yeah, yeah, indeed. No, I agree. I was also thinking, linked a bit to what you were saying: if I'm reviewing someone else's code these days, and I guess I'm thinking about this because I review AI-generated code, in a way, right... I've always wondered, what are the things I should pay attention to? Do I have to understand every single line, every dependency? Do I need to understand everything, you know? And I think today it's more: I look at something, and can I kind of get what it does? If you have a function, do I get what the function is doing? Yes? Okay, then I don't need to understand all the insides of it, because I know that if I need to change something, I know it's going to be there. And I feel like, in a way, you start looking more at how the pieces connect: are things entangled or not entangled? You start thinking, if I need to make a change, is this a change that I'm just going to have to make here, or do I have to make it in different...

Speaker 2:

Places, right. And you want to test the functionality.

Speaker 1:

I know, to test the functionality. But, I guess, if you have a function where you know what it's doing and you have tests for it, the insides matter less, in a way. Like, every function should do one thing and only one thing, right? So if it's a nice function that has a very good scope of what it's doing and good test coverage, even if the code inside is a bit shitty, you can always go back and change it if needed, right? You can refactor it, right?
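
This review-by-contract idea can be illustrated with a toy example: the tests pin down the behavior of a well-scoped function, so its internals can be rewritten later without touching any caller or any test. Both implementations and the tests here are invented for illustration.

```python
# Toy illustration: two implementations of the same well-scoped function.
# The tests assert *behavior*, so the clumsy internals can be refactored
# without changing any test.

def dedupe_v1(items):
    # First, clumsy version: quadratic membership checks.
    result = []
    for item in items:
        if item not in result:
            result.append(item)
    return result

def dedupe_v2(items):
    # Refactored version: dict preserves insertion order (Python 3.7+).
    return list(dict.fromkeys(items))

def check_behavior(dedupe):
    # The contract: remove duplicates, keep first-seen order.
    assert dedupe([3, 1, 3, 2, 1]) == [3, 1, 2]
    assert dedupe([]) == []
    assert dedupe(["a", "a"]) == ["a"]

# The same behavioral tests pass for both implementations.
check_behavior(dedupe_v1)
check_behavior(dedupe_v2)
```

This is why a reviewer can sign off on the contract and the coverage without auditing every line inside the function.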

Speaker 2:

Yeah. And then typically, well, often people express it like: let's say the outcome is good, but then it comes down to performance.

Speaker 1:

Indeed.

Speaker 2:

It really depends on the use case whether or not performance is a thing, right? And if it becomes a bottleneck, you fix it.

Speaker 1:

Yeah, indeed, indeed. No, I fully agree. I fully, fully agree. Do you think there are any caveats to this? Meta and Microsoft are doing this; are there any exceptions, maybe, like startups? Or do you think it makes a difference, the type of product people are building, or the size of the organization, or the skill sets of the developers? I think...

Speaker 2:

Maybe. I think the tool chain that you need for this is not there yet, honestly. Like I said, Bolt created the first generation; I think we probably need to be at the fourth generation for wide-scale adoption. But, for example, what I think a lot of developers are playing with is GitHub Copilot or Cursor. Cursor AI? I don't know.

Speaker 1:

Cursor.

Speaker 2:

The AI-focused IDE. And that is still miles away from what Bolt is doing, for example. Yeah. Because with Cursor or GitHub Copilot, you need to specify: I want these files from my code base in the context when I ask a question, or when I want it to edit the code. It has very limited ability to actually execute the code and fix based on that. It has very limited ability, if any, to integrate with a database and to apply migrations, stuff like this. So it's very limited. And then, if you look at things like Bolt, which are much further advanced, they are still very much a proof of concept, more or less. It's narrow, right?

Speaker 2:

So it's very limited um. And then, if you look at things like bolt that are much further advanced, they are still very much a proof of concept, more or less like it's narrow right it's narrow and also also in the sense that like it doesn't integrate nicely with kit, for example, with version code, like you can build your proof concept, but it's very, very hacky, tacky to get it into a versioned repo and stuff like that so it's not like it's.

Speaker 2:

It shows the direction that we're going, but it's not ready for wide-scale adoption. While, at the same time, these companies, like Microsoft, like Meta, have trained huge LLMs in-house. I mean, skills-wise they're probably miles ahead of most other companies, and so they can probably more easily build these tools in-house as well, and make them very specific to their own tool chain. So I think there is a competitive advantage there, which hopefully will shrink over time as we iterate on these tools.

Speaker 1:

Yeah. Also, as you mentioned, I feel like Cursor and Copilot are really coming from the developer side, and Bolt is really coming from the functional, application side, right? And ideally they meet at one point, right? Like, you can have a bit of both worlds: you can get started quickly with something like Bolt, but at the same time you can still be a bit opinionated. Because I think you mentioned Bolt uses Supabase, yeah, the integration with Supabase, right? Yeah. So it's like, if you can still be opinionated, it's not going to be as narrow.

Speaker 1:

You can say, I want to do this and I want to do that, and then you can maybe shift a bit more towards the developer side and change it a bit more to your liking, instead of having to prompt over and over, you know? And it kind of shifts back and forth. But, interesting. We'll link to this. Before we cover the other business-dev topics: developer productivity in 2025, more AI, but mixed results. And I need to share the other tab. Yeah, I read through this article, and it's from January 2nd, so not that old. Basically, it's looking at developer productivity in 2025, and, no big surprise, AI and AI-assisted things are there, right? So maybe, just to... I'm just going to skim through the subtitles.

Speaker 1:

So, for example, they mention that new security risks emerge with AI, and I wanted to ask your opinion a bit. I do think this is relevant, but I don't know how relevant it is today, to be honest. Because when I read this, I think: AI will pull in a dependency that has some security vulnerabilities, right? And because you're not really vetting the code, now your code is less secure. But to be honest, I'm not sure how big of an issue it is if you have a developer that is reviewing this stuff, or if you have a developer that says: write this using this framework, because that's the framework that I know, that I like, and the one that I'm using on this project. Do you think this is a relevant concern?

Speaker 2:

So the concern here is: AI is going to generate a lot of code for me, specify dependencies, and this code or the dependencies might have security vulnerabilities, right?

Speaker 1:

Yeah, that's what I'm thinking. Um, I agree to some extent.

Speaker 2:

I agree when you use something for code generation really to build an MVP, to build a minimal application from A to Z. When you compare this to having a single, very experienced developer building it... yeah, that's interesting. If the outcome needs to be the most secure application, I would put my money on the very experienced developer versus AI.

Speaker 2:

If it's: we're going to build this thing over the course of the next year with a team of 25 people, and no one knows the full code base, and maybe, if we're in the JavaScript world, all these things that we pull in now are going to be fully outdated at the end of the year... I'm not sure if it makes a big difference security-wise. Maybe, right? To be honest. Because you can do a bit of patching of security vulnerabilities by including some scans in your CI, et cetera, stuff like this, which you can do whether it's AI-generated or person-generated, whatever.

Speaker 2:

You can have these safeguards in place. So I'm not sure, because from the moment that you're working as a team on a big code base, there are very few people that have a full view on it, right? I think so too. It depends very much on the context, of course.

Speaker 1:

Yeah, I was thinking a bit, as you were discussing, that security could also mean SQL injections or something, not just dependencies, right? But I guess people say security a lot because you have something generated from another place, and security is always the big risk; no one is going to say no to security. To be honest, though, in practice I don't see how using AI makes it a much bigger risk.

Speaker 2:

Well, I think the example of dependencies is a very good one. If you use AI today, by default it will probably pull in outdated dependencies; that will improve in the future, but today that's what you see. So that is something very clearly not okay, and it should get better. But again, you can have safeguards in place in your CI that check whether there are any high-risk, out-of-date dependencies.
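As a sketch of the kind of CI safeguard being described, here is a minimal dependency check in Python. The minimum-version table is hypothetical; a real pipeline would pull advisories from a vulnerability database (the way tools like pip-audit do) rather than hardcode them.

```python
from importlib import metadata

# Hypothetical minimum safe versions; in a real CI step these would come
# from an advisory database rather than a hardcoded table.
MIN_SAFE = {"pip": (21, 0)}

def parse_version(v: str) -> tuple:
    """Parse a dotted version string into a tuple of ints (best effort)."""
    parts = []
    for p in v.split("."):
        digits = "".join(ch for ch in p if ch.isdigit())
        if digits:
            parts.append(int(digits))
    return tuple(parts)

def audit_installed() -> list:
    """Return names of installed distributions below their minimum safe version."""
    flagged = []
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name in MIN_SAFE and parse_version(dist.version) < MIN_SAFE[name]:
            flagged.append(name)
    return flagged

# In CI you would fail the build when audit_installed() is non-empty.
```

Run as a CI step, a non-empty result would fail the pipeline, regardless of whether the dependency was added by a person or by an AI suggestion.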

Speaker 1:

But I also feel like, if I ask ChatGPT to write a function for me and it brings in an outdated dependency, it's still up to me. The accountability is still on the developer that accepts the code, right? Well, but there's a very wide spectrum when you discuss the security risk of using AI.

Speaker 2:

You have people that go all-in on AI and only prompt, with Bolt.new, say; or you use GitHub Copilot a bit as a fancy autocomplete. Those are two completely different things when discussing this, of course.

Speaker 1:

But in this context I'm looking at developer productivity, right? So I'm thinking of a developer being more productive because of AI. You're still expected to code; the idea is that it assists you and makes you more productive rather than replacing you. That's how I'm looking at these things. I fully agree with you that maybe there are people that just want to prototype something quickly, an MVP, or maybe they don't have the technical skills to do something in JavaScript, and maybe that's fine. But when I look at this and think of security in terms of developer productivity, I feel like the accountability is still with the developers. The AI is supposed to make you more productive.

Speaker 2:

But that's what I would say today as well.

Speaker 1:

Yeah, that's what I would say too, and for this year at least, because again we're thinking about 2025.

Speaker 2:

But I think the simple fact is, and you can't really get around it: if you write every line yourself, you're conscious of every line, even though it takes much more time. Versus now, these 300 lines get auto-generated and you read through them, you skim through them, but you're less conscious of each individual line. That's true. So objectively there is probably a bigger risk versus a very experienced developer, but that assumes the developer is very experienced.

Speaker 1:

I agree, it's a good point you're making. But then for me the question is how many security vulnerabilities actually slip through from skimming the code, because it's not just you scanning, right? There are also linters, there's CI, there's this and that. So the risk is maybe not negligible, but I don't think it's something that takes a lot of mental space for developers using GenAI these days. If you use it as a fancy autocomplete, no. Okay. But then, moving on, the next thing it says is observability.

Speaker 1:

"We need to shift further left." To be honest, I didn't fully understand this, but they do mention that AI-generated code becomes a bit of a black box, so you need to increase observability. I'm not sure if there's anything you want to comment here. But what is the statement?

Speaker 2:

How would you interpret this?

Speaker 1:

They just say the code becomes a bit of a black box, because people don't fully understand what the code is doing; it's just a box you plug in. So you have to increase observability across the developer toolchain. But I'm not sure exactly what this means, to be honest.

Speaker 2:

I'm not sure if this is what they're hinting at, but I do think it's a bit of what I was saying. From the moment you're not writing every line anymore, you can be less opinionated about how you want your code to be structured, but you need to be much more conscious about making sure that what is generated actually works, and works well. That means having the right testing in place, and having the right observability in place. Say you're building a web application and you see some features are very slow: with application performance monitoring in place, you can see where the bottleneck is, quickly drill down to it, and then improve that function. Because you didn't write that function; whereas if you had written it yourself, you would have known, functionality-wise, that it needs to be fast.

Speaker 1:

Yeah, I see. So it's almost like we now understand the code more through the observability metrics, because we are less familiar with the code itself.

Speaker 2:

With writing the code itself, exactly. And because of that, logging, monitoring, and early validation need to come much earlier in the process. Whereas normally, maybe not best practice, but what often happens is: you develop a proof of concept, it works, okay, let's go with this, and only then you start thinking about these things. I'll give an example of something I ran into myself: you get an error working with Bolt, it doesn't function, but it's not clear what is happening. So you prompt it to build in logging at specific functions, so you get a traceback and can reason with the model about what is going wrong. This logging becomes much more important early on, versus when I'm writing the code myself.
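A minimal sketch of the kind of early observability being described: a logging decorator that records each call's duration, so a slow generated function surfaces in the logs instead of staying a black box. The threshold and the example function are made up for illustration.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("app")

def traced(threshold_ms: float = 100.0):
    """Log every call with its duration; warn when it exceeds a threshold.

    A cheap way to keep sight of generated code you didn't write line by line.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                level = logging.WARNING if elapsed_ms > threshold_ms else logging.INFO
                log.log(level, "%s took %.1f ms", fn.__name__, elapsed_ms)
        return wrapper
    return decorator

@traced(threshold_ms=50.0)
def slow_feature():
    time.sleep(0.06)  # stand-in for a generated function that turned out slow
    return "done"
```

Wrapping the generated functions this way means the bottleneck shows up in the logs the first time the feature runs, which is the "shift left" the article seems to be asking for.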

Speaker 2:

Just to understand, like what is going right or what is not going right?

Speaker 1:

So basically, because you didn't write it, you need to pay closer attention to the metrics too.

Speaker 2:

Exactly. You didn't write it.

Speaker 1:

But you do want an overview of how it's going. It's almost like signal probes throughout the code, which you need now because you understand the internals less, or you're not as close to them. Yeah, it's a good point. I kind of agree, and I think it's worth saying again.

Speaker 2:

I think there are still a lot of people that are highly, highly skeptical about this. Yeah, highly skeptical. Like the thread on Hacker News, you see a lot of discussion on this, and I agree with a lot of the points. But to me, if you see the evolution we've made so far and how quickly it is going, you should not ignore this. That is a reality.

Speaker 1:

It's better to adopt this way of thinking, even if you don't use it yourself yet. Maybe I'll jump a bit ahead, because there's something in what you just said: one of the points is that everyone will need to upskill, which I think is kind of what you're saying. These GenAI tools are here to stay, and if you're not adopting them, you're falling behind. So I do think teams need to upskill; they also mention not only the GenAI part but also organizationally. All in all, like you said, a lot of people are still skeptical, but I think this has become commonplace: you need to adopt these things. You're not going to keep coding the old way in plain text editors forever, right? They're tools and they're here to help you.

Speaker 1:

So, bouncing back up a bit, the next thing they mention is that building at scale will become more complicated. We touched upon that: maybe you can move very fast, and it's so easy to add code, but if you have something big, maybe it's not going to be maintainable. Maybe you were too quick to accept all the Copilot suggestions, and now you need to make a change, Copilot cannot help you anymore, and you don't know what to do. That's also something we discussed a bit internally on our Slack, right?

Speaker 2:

Yeah, there are a lot of components to building at scale. But what we've discussed so far is mostly building MVPs, right? Typically, in a large corporation you have this code base with 20 years of legacy, which will probably be a large code base, and it's still challenging for most IDEs to hold a very, very large code base in the context. That's true.

Speaker 2:

With Cursor and with GitHub Copilot, you still need to specify the scope of the code base you want in the context. Again, we will see improvements there: based on your query, there will be an intermediary step to determine what to keep in the context to answer it. So that's the size challenge. I think another challenge is that the requirements of a specific organization, which probably has tons of regulation and compliance measures, are very specific, and it typically involves a very long QA process to get a feature to production. Maybe it's not a challenge of the tool per se, because the nice thing with GenAI is that you can inject all these requirements into the context.

Speaker 2:

Maybe that's even easier than training a junior on all these requirements. But this non-deterministic approach will probably feel very risky to the people that manage it and are responsible when it comes to these regulations and compliance matters. So I think it requires a bit of a different way of looking at development.

Speaker 1:

I think so too. They also mention that reviews will become a bigger part of the development cycle. There are also AI reviews; maybe you can actually use those, but everything needs to be taken with a grain of salt. I mean, humans should be in the driver's seat, but I do think the tools are going to be all around, even in the reviews.

Speaker 2:

To me, the AI review mentioned there is also a very interesting one. You and I have both worked in very corporate environments, and we know that, for example, PR reviews are very often a formality. In those scenarios where PR reviews are not taken seriously, an AI review can add a lot of value, because you can specify all these requirements in the AI review.

Speaker 1:

I think sometimes reviews become a formality because it almost gets personal, right? But if an AI is saying your code is bad, then it's not me; don't be mad at me. So there's a bit of that too. I think even CI tests and linting help with that: it's more neutral, so it's easier to say, ah, it's not what I want, it's what the best practices are. I think it could help.

Speaker 1:

One thing I also saw on GitHub: whenever there are issues, there was a bot that would search, and I don't know if it uses the GenAI part for sure, but it would go through similar existing issues and link them, almost like a RAG kind of thing, and say: these are two links that may be the same issue, so maybe consider closing this one so we don't have duplicates. So not only the AI review; they were adding things like this too.

Speaker 2:

So I thought it was very interesting. And maybe another thing that's interesting is how to tackle this, though it will probably improve over time: AI code generation is a bit of an unguided missile. Normally, if you say "I want to build this feature", it interprets it a bit, not exactly like that. Take Bolt: I had an example where I said I want to add Google SSO authentication next to my username-password authentication mechanism, and Bolt did it flawlessly, but with a side effect. It added the Google SSO authentication but removed all the other forms of authentication, because I had only asked for that one. So sometimes you get more than just the thing you wanted to build. Yeah, and that maybe links back to the text.

Speaker 1:

What's in the text here is that the review cycle will become longer, even though the development cycle will be much shorter. True. And I do think that will be the case sooner or later. I even wonder, when you're hiring, whether you should focus more on people that can read and understand code rather than write perfect code, because I think that's where the job is going to shift. The next point here is that teams will be organized differently, which I think alludes to the previous points from Meta and Microsoft; that's why I wanted to link this article here.

Speaker 1:

I don't know if we need to discuss it more, because I think we covered it in the previous points, but I agree there will be changes. Linked a bit to the Meta point: junior developers will be most vulnerable. We mentioned this a bit last week: Bolt is a very powerful tool, but it's especially powerful for you, Bart, because you know where the pitfalls are, you know what you want to do, and you can reason about things. If you're more junior, it's like: oh, this works, I wanted to go from A to B and it goes from A to B, and the rest doesn't matter. They also mention here that a computer science curriculum includes a Python class or two, but someone who is a junior Python developer will probably not be as knowledgeable as Claude. So there's that too. Do you have anything you want to comment on this?

Speaker 2:

No, I think this is something we've touched on a few times, and it's a very fair point. I think there's also a challenge for the education system, right? This needs to become part of the curriculum.

Speaker 1:

Yeah, I think that's it. Even: what is cheating? If someone is using ChatGPT to do their coding assignments, is it cheating?

Speaker 2:

Well, that's a whole discussion, of course.

Speaker 1:

But yeah, I do think the tools are there, people are going to use them, and you need to adapt. Okay. Everyone will need to upskill, we already covered that. Burnout will still threaten developers: I think this is about people being expected to be more productive, so burnout will still loom, let's say.

Speaker 1:

I'm not sure if there's anything you want to add there. Next: pressure to automate everything will increase. I think that's actually true. With AI, and people seeing the possibilities of automating things with AI, people are going to start questioning more: why are we doing all these things manually? So I do think there will be more attention on automating stuff.

Speaker 2:

There will be more attention on automating stuff, and I think, let's be honest, we're still a little bit early, because a lot of tools do not have easy access to integrations with GenAI.

Speaker 2:

Just a stupid example from our payroll: we had an ad hoc task to fill in a very manual Excel with information that came from various sources. From the moment you have access to those sources in the context, and you can easily update your Excel with an LLM and the performance is good enough, these by default become GenAI-supported tasks, and instead of two hours it takes five minutes.

Speaker 1:

Yeah, I remember when Nico was here, how he was, not complaining, but saying people push LLMs too much; even for simple things, people use LLMs. But I also think that's the power of it, right? You have this tool that can automate Excel. You could do this before by scripting.

Speaker 2:

But now with the LLM it's super easy.

Speaker 1:

Yeah, it's super easy, you just say: do it. Which I also think is very powerful, because you're not necessarily trying to replace people; but if people can just go okay, okay, okay, that's much faster than having to type everything out. So, I agree. Next: the AI wish list for 2025, just going through this quickly. Documentation and code analysis, so they want AI to help more with documentation and code analysis; technical debt cleanup; code testing; and easier provisioning of cloud infrastructure. That's it. Do any of these four things speak to you?

Speaker 2:

Anything that GenAI doesn't tackle yet that you wish for? To me, as a wish list for 2025, all of these things are valuable: documentation, code analysis, tech debt cleanup. What were the others? Code testing and easier provisioning of cloud infrastructure. I think you could have said this for the last 10 years.

Speaker 1:

Yeah, but the cloud infrastructure one doesn't really resonate with me; the other stuff does. Though I also feel like technical debt cleanup is not going to really happen.

Speaker 2:

No, but I mean, these are very generic things. Of course they're important, and maybe even more important with GenAI, but to me these are feel-good, very logical things to say in a wish list.

Speaker 1:

I agree with them, but yeah, that's true. Okay, then we can move on.

Speaker 2:

Maybe the easier cloud infrastructure, you said you didn't agree. I do agree with that one, easier provisioning of cloud infrastructure. Why don't you agree? You think it's easy enough?

Speaker 1:

I just feel like if it's too easy, you're going to miss other stuff. If you make it too easy, there are going to be side effects underneath that you'll miss. It's a bit like: it is hard because it is hard, right? You need to know what policies people have, you need to know how this policy will impact that. You cannot simplify it away; someone needs to know these things.

Speaker 2:

Yeah, I think you cannot simplify it when you want a very generic cloud environment where you want to be able to do everything.

Speaker 1:

Like a big AWS or something you mean.

Speaker 2:

Yeah, that's why people go for the big cloud providers. But for a lot of smaller, more specific solutions, you go to Fly.io, you go to Render, these types of things, and you say: this is good enough, and it actually takes care of everything I do. Typically, for large corporations that have their own data lake, for example, this wouldn't be enough; you don't have enough controls. But for smaller-scale applications it is often definitely good enough.

Speaker 1:

That is often definitely good enough yeah, but then I think jenny, it's not gonna play a role right like fly that I know it's not specific to jenny yeah, but that's what I'm saying jenny I like it's not gonna help with these things but uh, the other, the other three, yeah, maybe. Well, code testing I think is already here technical, that cleanup. I don't think it's happening. I don't think it will happen with any eye it will uh create a lot of technical depth.

Speaker 1:

I feel like you would add more, right, because you don't know exactly what's happening. And the documentation point, I agree with, but I also think it needs to be guided. And that is it for this one. Do you want to change gears a bit, Bart?

Speaker 2:

part uh, depends on to what gear you want to shift. You have a preferred. I'm typically all for changing gears with you, but you're always surprising me with what gear you're gonna go to.

Speaker 1:

Um do you want to talk about tech a bit?

Speaker 2:

uh, I thought we were talking about tech no, like uh, stuff from the tech corner.

Speaker 1:

Okay, yeah, go. All right, so what do you got? Maybe we'll start with our very own Valatka. You mean Lucas? Yes, Lucas Valatka. Mr. Valatka.

Speaker 2:

That's the guy. Sir Valatka? Yeah, it takes a bit more than that; you need to be knighted to be a sir. Right, that's true. But you are a lord, right? I don't think officially, but I did get a...

Speaker 1:

No, you got a certificate. Yeah, as a birthday gift. What's the backstory? Maybe just quickly.

Speaker 2:

Oh, it was for a birthday. You can gift one of these, I don't know what it is exactly, like a small square of land in Scotland, and then, because it's part of some heritage land, you become a lord. I don't think it has much actual legal value, but it's a nice story. So everyone who runs into Bart needs to say Lord Bart.

Speaker 2:

Let's go to the blog of soon-to-be-Sir Valatka. Okay: "UV has a killer feature you should know about." Yeah, and I didn't know about it.

Speaker 1:

You didn't know about it? I thought we had talked about it as well, specifically about specifying the Python environment. Maybe not the Python environment part, but maybe explain.

Speaker 1:

UV is a Python package manager, let's say; UV can be compared with Poetry, PDM, Hatch, et cetera. One thing UV has, and it's also a PEP, is that if you have a script, you can add dependencies to that script. So you don't have a whole Python project, you just have a script; you define the dependencies in a comment at the top, and you can run it with UV.

Speaker 2:

But that's already one step further than this, right? What you're describing is: there is a PEP out there that says, if you have this .py file, this script, you can add some comments at the top where you specify that this script needs these dependencies and this Python version. And when you run it with UV, which implemented this PEP, it will actually set up that environment for you on the fly and run the script with the dependencies.

Speaker 1:

Yes, exactly. And not only the dependencies, but also the Python version. That's also what Lucas is alluding to here: say you're doing some ad hoc scripting, you want Python 3.12, and you want to pull in a dependency.

Speaker 1:

Typically what you would have to do is pip install pandas, say. Ideally you create a virtual environment, activate it, install pandas, and then run Python. And if you have to install a new Python version, it's another step: you pyenv install 3.12, make sure you're using 3.12, create a virtual environment with that Python, activate it, install pandas, and then run Python. But with UV, and I think this goes along the same lines as what we said before, UV can create that virtual environment with the Python version and dependencies on the fly for you. This is a bit different from before, though: you're not necessarily running a script.
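A sketch of what that looks like: the inline metadata block below follows PEP 723, the inline-script-metadata standard that UV implements. The `summarize.py` name and the guarded import are our own additions so the sketch also runs outside UV; with UV installed, `uv run summarize.py` would resolve the Python version and pandas on the fly.

```python
# Save as summarize.py and run with `uv run summarize.py`; the one-liner
# REPL variant discussed below is `uv run --python 3.12 --with pandas python`.
# /// script
# requires-python = ">=3.9"
# dependencies = [
#     "pandas",
# ]
# ///
# The import is guarded only so this sketch also runs in a plain
# environment where UV has not installed pandas yet.
try:
    import pandas as pd
    HAVE_PANDAS = True
except ImportError:
    HAVE_PANDAS = False

def summarize(values):
    """Sum a list of numbers, via pandas when available, else plain Python."""
    if HAVE_PANDAS:
        return int(pd.Series(values).sum())
    return sum(values)

print(summarize([1, 2, 3]))
```

Because the metadata travels with the file, anyone with UV can run the script without manually creating a virtual environment first.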

Speaker 2:

Yeah, so, because we're showing the screen: what Lucas shows here is that you can type uv run, then dash-dash python to specify the Python version you want, he says 3.12, and dash-dash with to specify dependencies, with pandas, and then python. And then the REPL starts. So with one line in the CLI, you basically get the environment you specified.

Speaker 1:

Yeah, it used to be like six commands that you had to run, and two tools. Now, well, one tool, probably most importantly.

Speaker 2:

What he's doing now is starting a REPL with a specific environment, which is probably most relevant when you're doing some ad hoc analysis of something. Like, I just want to quickly look into this CSV, in this example with pandas. Because if this is something you would repeat, you would probably add it as a comment to the script, like we were discussing, or maybe even start a project. Start a project, maybe, because I know how you feel about scripting.

Speaker 1:

Indeed. And also, UV will use 3.12 if you have it installed; if you don't, it will install it for you. Same thing with pandas. So it takes care of all these things. As he mentions here: easier to remember, and no trace left behind; happy scripting. So, shout-out to Lucas, one of our colleagues here at Dataroots, and if you want to have a look at his other posts, we'll include this blog post too. And his blog post is trending on Hacker News. Yes. That's fancy, right?

Speaker 2:

Yeah, it is, right? He's probably going around telling his friends: I'm trending on Hacker News. And they're like: yeah, what do you do? I'm a...

Speaker 1:

I'm a blogger.

Speaker 2:

Part-time machine learning engineer, you know, when I'm not trending on Hacker News. We need to check his LinkedIn after; maybe he already changed it. Trending Hacker News...

Speaker 1:

Contributor, exactly. Yeah, and when I'm not trending, I help companies. Cool. So, maybe along the same lines as scripting: I think I talked to you about this at some point, Marimo?

Speaker 1:

I mean, it's comparable-ish to Jupyter notebooks. So in a way you can also say it's for scripting, for exploration. I tried it a while ago; it's interesting. In a nutshell, the TLDR is: they try to reimagine what notebooks could be. We mentioned the REPL before, where you basically have one line, you write Python code, and then you have the output.

Speaker 1:

Jupyter notebooks are very similar, but you can save them, which is basically just a JSON file. Marimo is a bit like that; it's way newer than Jupyter notebooks, but it's reactive by nature. Well, you can turn that off, but, for example, in the GIF I have here, if you have x equal to 2 and then you change the value of x, then wherever x appears, regardless of the order of the cells...

Speaker 1:

...it keeps track of the references and updates those values as well. So it basically makes sure, quote-unquote, that there's no variable shadowing kind of thing, right? Whatever you see in the notebook is actually what is there. It's almost like an application now, because if you change that value, everything else trickles down.

Speaker 2:

And if you compare this to traditional Jupyter notebooks: you run a cell and you see the output, and the output stays. So you have the danger, if you don't execute all your cells in a linear way, that you create side effects by not respecting the order. Exactly. And here you're saying: okay, I don't necessarily need to do this in a linear way, because if I change a variable somewhere, everything that depends on it will update. Yes. Which is very similar to working with frontend frameworks like Svelte or React.

Speaker 1:

Exactly, that's kind of what the premise is. The notebook is really an app: if you look at the file underneath, it's just a Python script where every cell is actually a function.
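To make that concrete, here is a toy sketch of the reactive idea, not Marimo's actual API: each cell is a function whose parameters name the variables it reads, and changing a value re-runs every cell that depends on it, regardless of cell order. All names here are our own invention for illustration.

```python
import inspect

class Reactive:
    """Toy reactive notebook: cells declare dependencies via parameter names."""

    def __init__(self):
        self.values = {}   # variable name -> current value
        self.cells = []    # (output names, function) in definition order

    def cell(self, *outputs):
        def register(fn):
            self.cells.append((outputs, fn))
            self._run((outputs, fn))  # run once on definition
            return fn
        return register

    def _run(self, cell):
        outputs, fn = cell
        deps = inspect.signature(fn).parameters  # names the cell reads
        result = fn(**{d: self.values[d] for d in deps})
        if len(outputs) == 1:
            result = (result,)
        self.values.update(zip(outputs, result))

    def set(self, name, value):
        """Change a value and re-run every cell that (transitively) reads it."""
        self.values[name] = value
        dirty = {name}
        for cell in self.cells:  # assumes definition order is topological
            outputs, fn = cell
            if dirty & set(inspect.signature(fn).parameters):
                self._run(cell)
                dirty |= set(outputs)

nb = Reactive()

@nb.cell("x")
def _():
    return 2

@nb.cell("y")
def _(x):
    return x * 10
```

After `nb.set("x", 5)`, the dependent cell re-runs and `nb.values["y"]` becomes 50, which mimics the GIF described above where changing x updates everything that references it.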

Speaker 2:

Okay interesting.

Speaker 1:

And it returns the outputs; all the variables are there, so it keeps track of everything. So if you're committing it and opening a pull request, in the end it's just Python scripts, right? The reactivity doesn't need to be automatic either; you can also just mark cells as stale. The downside is that if you're doing compute-intensive things and you change one thing, you now have to wait for all the other things to rerun.

Speaker 1:

They also come with widgets, so you can have sliders if you want to change stuff; they come with graphs and all these things. You have the classic, quote-unquote, notebook view, but you can also turn it into a slide presentation, or an app, where every cell becomes a bit of a tile you can add to your application. So it's interesting. Still, there are some things where, in the end, I just went back to Jupyter notebooks, to be honest. I tried it and thought: this is interesting, and if you're doing a report or a little app, this could work well. But one thing, let me remember: async doesn't work, and I was trying to write some stuff async.

Speaker 1:

So I was doing stuff in the notebook and I always had to go back to a script to run the last part. I also think rerunning all the cells sometimes gets a bit in the way, because I was really just looking at some results, right?

Speaker 2:

If I was building an app or a report where I really wanted to show something interactive, I think this would be very, very nice. Well, I would be a bit cautious about adopting something like this, because Jupyter is very much established, right? You see it everywhere, everybody knows it. If it were compatible, if what it generates were .ipynb notebook files, then you could test it out and still be able to migrate back to Jupyter. But it doesn't, right? No. So here that is harder, right?

Speaker 1:

Yeah, you kind of lock yourself in. Ah, another thing too: I was just looking here and I remembered they do have a VS Code extension, but it didn't work really well. The one thing I really missed is, yeah, now I have the IDE with the AI assistant, but if you're in the web browser, you kind of lose all that. That was a big hit as well. So there are some things they're still working on. But I thought the premise, the exercise of reimagining notebooks, was very valid, right? One other fun side bit: if you open a notebook in the browser, everything comes with a token, so it's a bit more secure, let's say, but you can also expose it, and you can actually run it with WebAssembly.

Speaker 2:

Ah, yeah, okay.

Speaker 1:

So people could just, with a link, go and run it in their browser. I thought that was a nice, fun touch. Alrighty. I see we spent a lot of time on the previous thing, my bad. Do you want to change gears again, Bart? You want to doom this?

Speaker 2:

Big leap. Yes, let's doom this. I'm just looking here.

Speaker 1:

What else can we cover before we call it a pod? What is this Doom thing? I saw it before, but I'll let you explain. I thought it was pretty cool. Do you know Doom? I know Doom. Have you played it? I've played a bit. But also, last week we talked about the gallery.

Speaker 2:

Ah, yeah, that's true, that was fancy, right? Very fancy. What was it called again? I forgot the name. It was basically Doom, but instead of shooting monsters, you were walking around a gallery, drinking wine, collecting cheese, looking at art. Very nice. But I think you're too young to have actually played Doom when it came out, right?

Speaker 1:

Well, I think I played it already as a, let's say, retro thing.

Speaker 2:

Okay, okay. Yeah, that's a way of calling me old. But what's out there now?

Speaker 2:

And I think it's actually a reaction to something that came out a bit earlier: we saw Tetris in a PDF. But what I'm going to put on the screen now is Doom running in a PDF, which is mind-blowing. It's bananas, it's very cool. So you just open a PDF link. I think you need a Chromium-based browser to do it, because it uses the PDFium engine, and apparently the PDF engines of most browsers, not necessarily all PDF readers, but the PDF readers built into most browsers, do support a very, very limited set of JavaScript.

Speaker 2:

Wow. And he leveraged that to basically get a text-based Doom running in a PDF engine, which is crazy. He renders the images, the world, basically by converting the graphics line by line into ASCII characters. It's crazy. And I think, but I'm not 100% sure, that security-wise it would not be able to function without user interaction. I think that's also the reason why there are explicit buttons: you need to interact for it to be able to do something. I see. The JavaScript engine inside a PDF is very limited in what it can do. You need user interaction, a bit like in a browser, where audio doesn't just start playing without user interaction. There are a lot of safeguards in place, and with PDF it's probably much, much trickier. So it's really cool to see that it can even run. Doom has run in a lot of places, even on a pregnancy test, but apparently it can also run in a PDF.

Speaker 1:

This is cool, right? I think some people were asking why, but I think he's just trying to push things a bit, you know? Why not, right? And maybe someone will look at it and think, okay, maybe this is useless by itself, but it gives me an idea, which is actually nice. So yeah, I thought it was pretty cool. Have you actually played it, on the PDF? Or did you play it on the pregnancy test?

Speaker 2:

No, I didn't have the pregnancy test with me, but the PDF is very easy to play. You just push the buttons.

Speaker 1:

Yeah, it's nice, it's cool. It makes you think as well about how compute power advances, right? Before, you needed dedicated hardware; now it just runs in a PDF, in the browser, it's just there.

Speaker 1:

Okay, maybe one last thing to close it off, if that's okay, unless there's something else you want to cover. Yeah, let's go. This is a bit of a food-for-thought corner. It's a picture of a news article. Do you want to describe what you see there, Bart, for the people who are just listening?

Speaker 2:

It's a picture of a news article, like in a physical newspaper, with a picture of a woman, and a quote: "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes."

Speaker 1:

Yes. So there is a tweet, and she says the issue with AI is direction, because AI is shifting more towards the creative arts, but that's the thing that, quote-unquote, gives people more pleasure. AI should be optimized to allow people to do what they want and enjoy life: automate the boring stuff, let me do the stuff that I like. But that's not what we are seeing, according to her. You don't really agree, maybe?

Speaker 2:

So I think it's true that generative AI, specifically, is very active in the art scene. Even calling it that is maybe already a sensitive statement, because a lot of people say that's not art, so let's put it like this: in the creative scene. Yeah. But it doesn't stop you from painting, right? True. You choose whether or not you want to use it as a tool.

Speaker 1:

It doesn't inhibit you from doing anything yourself. Yeah, I see what you're saying, in the creative space. That's true.

Speaker 2:

Maybe as a professional it does, maybe as a professional you're forced to pick up these tools. But as an individual, as a hobby, which is how she's describing it, you choose what you do. And when it comes to doing the laundry and dishes, I think we're getting closer to that as well.

Speaker 1:

Yeah.

Speaker 2:

And actually, maybe something we should discuss some time: there has been a lot of, well, hype is maybe overstating it, but a lot more news on robots and AI. And I think the interesting thing about Gen AI, when you look at the field of more autonomous things that can also act in the real world, is that the way you need to program these things becomes much less deterministic, much less rule-based. It's very hard to do things rule-based, because if I develop a robot that does the dishes in your house, I don't know what your house is going to look like. So I would have to try to imagine every situation and build tons of rules.

Speaker 2:

But if you have something with an LLM layer in between, you can be more descriptive about what you want to get done, versus specifying all the rules up front. So I do think we will see advances there as well.
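The "describe the goal instead of coding the rules" idea can be sketched like this. Everything here is hypothetical: the primitive names, the prompt shape, and especially `call_llm`, which is a stub standing in for a real model API, not any actual robotics system:

```python
# Sketch of an "LLM layer" for a robot: instead of hand-coding rules for
# every possible kitchen layout, expose a small set of primitive actions
# and ask a language model to map scene + goal to a sequence of them.
PRIMITIVES = ["pick_up(item)", "move_to(place)", "put_down(item)", "rinse(item)"]

def call_llm(prompt: str) -> str:
    # Stub: a real system would call a model API here and would need to
    # validate that the reply only uses known primitives.
    return "pick_up(plate); move_to(sink); rinse(plate); put_down(plate)"

def plan(scene: str, goal: str) -> list:
    """Build a descriptive prompt and parse the model's reply into steps."""
    prompt = (
        f"Robot primitives: {', '.join(PRIMITIVES)}\n"
        f"Scene: {scene}\n"
        f"Goal: {goal}\n"
        "Reply with a semicolon-separated plan using only those primitives."
    )
    return [step.strip() for step in call_llm(prompt).split(";")]

print(plan("a dirty plate on the counter, sink to the left", "do the dishes"))
```

The point of the sketch is that the house-specific knowledge lives in the free-text `scene` description rather than in a rule engine, which is the contrast being drawn with the deterministic approach.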

Speaker 1:

Which I think is interesting, because before the Gen AI boom, I feel like people would look at reinforcement learning for these things, right? I think you even mentioned there was a little reinforcement learning project where you replaced that part with an LLM, and yeah, it worked. It was easier, right, to get to a good point.

Speaker 2:

And it's probably not one or the other, right? I think it's going to be a combination. I think so. But what I'm trying to say is that the whole evolution we're seeing also brings us closer to getting it to do our dishes. Yeah, true, I agree.

Speaker 1:

One other thing I was thinking about when I was looking at that, again a bit of food for thought: The Jetsons, you know, the animated series?

Speaker 2:

Yes, yeah, of course.

Speaker 1:

So basically, do you know the Jetsons, Alex? Okay, never mind. It's basically a family from the future, right? And one thing I always thought was interesting is that only the husband worked. Well, it's an old show. But his work week was an hour a day, two days a week, and the premise was that in the future, machines would automate everything and everything would be so efficient that people wouldn't need to work as much, so people could actually spend more time doing the things they like. That's why I thought it linked a bit to the previous one.

Speaker 2:

But yeah, that's not necessarily how we see things evolving. I also don't even know if that's something people really expect. I think it's an optimistic outlook.

Speaker 1:

Let's stay optimistic, right? Yeah, I think so, too much doom and gloom already. But I also think, you know ikigai? You know what ikigai is, the Japanese way of looking at work? Yeah. Well, it's not necessarily about work, but...

Speaker 1:

The idea is that one of the keys to a happy, long life is to find purpose, and a big part of that is also in your work, right? That's why they say that finding work you find purposeful leads to a better quality of life and a longer life. So I do think that, ideally, work would shift to something more enjoyable, but I also feel like not working at all is not the answer, because we need a purpose, we need to feel like we're contributing somehow and not just existing.

Speaker 2:

Yeah, I see what you mean, and I fully agree with that.

Speaker 1:

Yeah, I think so. That's what I was thinking when I looked at the Jetsons.

Speaker 2:

One and a half workdays a week, maybe that should be the ultimate goal. Let's say the optimistic way of looking at all this is that for people in general, and not just a chosen few, wealth will increase, and people will become more wealthy and more at ease in life, right? Yeah, I think that's the optimistic view, for sure. And even if you cannot work, for whatever reason, you're not poor.

Speaker 1:

Yeah, you have a good life.

Speaker 2:

That's the optimistic way of looking at it: machines make everything efficient and automated, and less manual labor is required. Do you think AI, in three years, will take a step in that direction? I think within three years we will see the effects of what is going on now in the job market, that some things will become automated, and how the world reacts to that is the big question mark.

Speaker 1:

Indeed, we'll see. And with that, I think we can call it a pod, unless there's something you want to plug, Bart? No. I think that's it for today. Thanks y'all. Thank you. You have taste in a way that's meaningful to software people.

Speaker 2:

Hello, I'm Bill Gates. I would recommend TypeScript. Yeah, it writes a lot of code for me and usually it's slightly wrong. I'm reminded, incidentally, of Rust here Rust, rust.

Speaker 1:

This almost makes me happy that I didn't become a supermodel.

Speaker 2:

Kubernetes. Well, I'm sorry, guys, I don't know what's going on.

Speaker 1:

Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here.

Speaker 2:

Rust, rust, rust, rust. Data Topics. Welcome to the Data Topics. Welcome to the Data Topics Podcast.

Speaker 1:

Are you Alex?
