DataTopics Unplugged: All Things Data, AI & Tech
Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society.
Dive into conversations that should flow as smoothly as your morning coffee (but don't), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style!
DataTopics Unplugged: All Things Data, AI & Tech
#74 Hello 2025! OpenAI’s O3, Deep Seek V3, Bolt.new and Doom Goes Artsy
Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society.
Dive into conversations that flow as smoothly as your morning coffee (but don't), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style!
In this episode, we explore:
- OpenAI’s O3: Features, O1 Comparison, Release Date & more.
- Advent of Code: How LLMs performed on the 2024 coding challenges.
- DeepSeek V3: A breakthrough AI model developed for a fraction of GPT-4’s cost, yet rivaling top benchmarks.
- Shadow Workspace: How Cursor compares to Copilot with features like integrated models, documentation, and search.
- Bolt.new: Why it’s poised to revolutionize web app development with prompt-driven innovation.
- O1 Preview’s Chess Hack: When smarter means “cheater” in a fascinating experiment against Stockfish.
- Pydantic AI: A new tool bringing structure and intelligence to Python’s AI workflows.
- RightTyper: A tool to infer and apply type hints for cleaner, more efficient Python code.
- Doom: The Gallery Experience: A whimsical take on art appreciation in a retro gaming environment.
- Suno V4: The next-gen music generator, featuring "Bart, the Data Dynamo."
- Ghostty Terminal: The terminal emulator developers are raving about.
You have taste in a way that's meaningful to software people.
Speaker 2:Hello, I'm Bill Gates.
Speaker 1:I would recommend TypeScript. Yeah, it writes a lot of code for me and usually it's slightly wrong.
Speaker 2:I'm reminded, incidentally, of Rust here, rust.
Speaker 1:This almost makes me happy that I didn't become a supermodel.
Speaker 2:Cooper and Nettix. Well, I'm sorry guys, I don't know what's going on.
Speaker 1:Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here Rust, rust Data.
Speaker 2:Topics Welcome to the Data Topics. Welcome to the Data Topics podcast.
Speaker 1:Hello and welcome. Welcome to the Data part of 2025.
Speaker 2:Happy new year.
Speaker 1:Happy new year. Today is the January 6th of 2024. My name is Murillo. I'll be hosting you today together with Bart Hi hey. And Alex behind the scenes. Hi, alex, she's waving. Trust me on this one, maybe, yeah. Well, happy new year everyone. How, how was that you're happy? How was your your holidays?
Speaker 2:very good, very good bit of time off. Uh enjoyed with family.
Speaker 1:Went skiing for a week in switzerland, nice can't complain, nice can't complain very cool and, uh, it's been actually a while since we met, I feel it's been uh three weeks ish, maybe even more, I think.
Speaker 2:But wow, yeah, so uh, your hair grew yeah, my beard grew as well.
Speaker 1:I try to take care of it, but you know are you consciously growing out your beard? No, I'm just lazily growing my beard so for people that are, yeah, just listening, we are gonna also publish the video so you can check it out for yourself. Maybe I need to.
Speaker 2:I actually thought about it this morning maybe we can, uh like, uh, add a close-up to it, I'm okay, it's okay it's okay, this, this, this is fine, it's okay.
Speaker 1:But uh, yeah, I was thinking about like, yeah, I need to, I need to get a haircut and all these things, but it was. You know, it's the time of the year, you know, so it's fine. But, um, I feel like a lot of stuff happened. I feel like you probably came across some things, but I also feel like preparing for the episode. I'm like, yeah, what was the thing that? I saw that one time. So, I think, preparing for it a bit, it was a bit like what did I do? When did I read this? Did we cover this or not?
Speaker 1:But I went over my notes and I found some things and I think maybe the most timely, the the most thing we must discuss is the O3. So OpenAI released O3, which is basically an iteration of O1. So, a quick recap O1 is different from the JET-GPT models in the sense that it does reasoning right, right. So I think the likelihood of hallucination is smaller and, um, open ai actually had a 12-day event and they unveiled o3, which is they skipped o2. Um, yeah, we can, we can speculate why, but uh, yeah, so because they went from 01 mini there was also 01 right sorry.
Speaker 2:01 preview that was there for quite a long time and then, relatively like a few weeks ago, they released the actual one, right, oh, actual one. And then, not long after that, there was a tweet, right, they announced it, so it hasn't been released yet.
Speaker 1:So so right now, even this is mentioned here that public safety testing, so they have researchers and limited people that have access to O3. So I think it also I think it's something interesting that highlights the commitment to safe.
Speaker 2:AI as well.
Speaker 1:Or I'm not sure if it's just washing right, but they are doing this. O1 is actually available now, but actually I think for you to use oh one, you need to have a subscription pro account, right, which is crazy expensive compared to what was before, do you? I have?
Speaker 2:oh, one with a, just with a regular pro.
Speaker 1:I think it's like 20 euros a month, okay, okay no, because I remember there was one that was like 200 euros, but like maybe it's for a limited, I don't know I don't know could be.
Speaker 1:So yeah, so not too much, I mean, I'll just skim through. Yeah, it's doing better in a lot of stuff like coding, math and science. They also put, like this, epoch AI. Apparently this is a big deal because this is an intentionally harder dataset that models were not trained on, but apparently it's doing much better, et cetera, et cetera. One thing I also caught my attention is that they have the 03 low, 03 high. They also had a one low, medium high I from. What I understood is that this is the compute resource available, because you do require more compute at inference time. So I guess, depending how much compute you have, you have these different models right, so otherwise it would take too long that's crazy if you compare those costs on the chart that you're showing.
Speaker 2:Yeah, so you have on the x-axis, if you zoom in a little bit, like the cost per task, like it's not really defined here what task it is, but like uh, and on the y-axis the score, and if you have like oh, one preview, it's like around one dollars per task and it scores 20-ish percent. And then you have the other extreme of $1,000 per task with O3 high, with 88% performance Indeed, which is a big difference in performance, but also, at the same time, like the cost is crazy, right.
Speaker 1:So this is the O3 high. So with a lot of compute resources and x-axis is logarithmic scale, no.
Speaker 2:Do we know when this will be released?
Speaker 1:It wasn't announced, I think. Well, if it was announced, I don't know, I cannot tell you. But yeah, I think the other thing that I wanted to mention here Ah here, release date For now O3, now widely available. Openai open access to researchers. Public availability shared O state for now oh three now widely available.
Speaker 2:Open eye open access to researchers public availability shared with women experience end of january. Okay, oh, three minutes, that's close by right, but that's the the mini one.
Speaker 1:Yeah, um, and well it's.
Speaker 2:We have similar performance in terms of duration, like you have with oh one, which does a quote unquote reasoning. It takes a lot of time versus uh I think so.
Speaker 1:I think it was uh comparable, but I think so. One thing that he mentioned is that they have uh. One of the things that the model will do is to assess how complex the task is, and then you will tune a bit how many iterations it needs to do so the idea is that, yeah, and I think this is what they're talking here deliberate alignment, uh, no, that's not it.
Speaker 1:But they basically say, for simple tasks, it will already iterate less, so it saves also money and, uh, time for you, right, so they also have a bit of this. So actually, yeah, if you're talking about efficiency, right, the one mini and you have all these things, they already kind of tune these things for you. So cool.
Speaker 2:There's a lot of uh rumors on uh on x again like. There's someone from uh, from the open ai team, that says something like uh, it was, uh, it was more exciting to do uh, to do machine learning, back when we didn't know how to create super intelligent yeah, yeah, maybe for sure, I think I think we also had sam altman a few weeks ago. Uh, hinting a bit towards agi, but at the same time, we have this every time that our new version is expected, right?
Speaker 1:yeah, let's see, let's see it's. It's cool that there is an improvement yet again indeed, I also think that I'm wondering, like you said, even the o3 with high compute now it's very expensive to run, so I'm also wondering if these iterations are getting more than niche use cases right like the general use case is kind of there and maybe now it's more the, the, the niche stuff.
Speaker 1:Maybe one thing I see you also have highlighted here on the notes part, a bit segueing into it um, one of the things that o3 does according to the announcement is doing better on code. One thing we also did in the roots conf episodes in the llm hunger games they also had that 03 does according to the announcement is doing better on code. One thing we also did in the RootsConf episodes in the LLM Hunger Games they also had different LLMs, I think Gemini, the Claude and GPT doing the advent of code. And I see one thing that you posted here is the performance LLMs of advent of code of 2024. What is this about? Maybe what is the Advent of Code for people that don't know what it is?
Speaker 2:The Advent of Code is like an Advent calendar that you have before Christmas. Where the one that people? Well, I don't know if it's actually a regional thing, but here you get kids, get an Advent calendar and, like you, open it up every day and there's a chocolate in it.
Speaker 1:I don't know how international. That is actually in the us they do that okay okay, people will know it.
Speaker 2:Then in brazil they don't do it okay it's not that international um.
Speaker 2:Advent of code is uh, more or less that, but instead of a chocolate, you get a coding challenge every day and um, based on how quickly you solve it, you get uh, you get scores. Basically, um, people that have solved it uh correctly, the quickest uh for all those days wins the end of code. Yeah and uh. What this uh, this article uh by jerpint is uh is is an overview on how llms are performing on that end of code of 2024, which is um. I think the general consensus is that it's not as good as we thought, but it's not bad either. I think that is uh, yeah, um. I think the what we see is that, like even the best lms, they they struggle with, like the truly novel programming problems yeah indeed, and that's also what that's what I've called very much.
Speaker 2:It does like it starts. It starts, it tries to be very innovative every year, to come up with really, uh, new concepts on challenges, to to uh, to really make it difficult for people. I think there were a number of them that were even unable to be solved by any of the lms, which is interesting. Um, this is what this article does and we'll share it in the show notes a bit of a comparing the performance of these purely the models, but at the same time, um, the rumors are that most of the people in the top 10 did use lm as a support tool. Yeah, so, not not an autonomous lm that just solved this for me, but did use it as a support tool to solve it as quickly as possible.
Speaker 1:Yeah, yeah, I think, which I think it's the indian is the same old story, right like uh, even when uh ai, so not gen ai, but ai started with the chess tournaments, right, and then there were the. There's a very, I think I don't know if I mentioned before, but there was a there's a ted talk from kasparov, which is the russian guy that used to be the best chess player for many years, and he said that Deep Blue, he was the first one that lost to Deep Blue. He said, actually he lost the second time, the first time he won, but no one remembers that.
Speaker 2:Yeah, yeah, yeah.
Speaker 1:And after that they were saying how they also had competitions for AI engines. They had competitions for ai engines. They had competitions for humans. They had competitions that you could use both okay, and the winner of the ones that you could use both, it wasn't the best ai, it wasn't the best chess players, it was a regular guy with three regular machines and he kind of makes the point of that these things are tools, right? He also I mean, this is, this talk is from years ago he says ai is not the future, it's the present and, uh, you need to know, like basically to learn, how to use these tools, right. So I think it's also the same like the advent of code the lms are not going to do everything by themselves, but I think it's like a very powerful tool exactly for people to be able to use and like steer in the right direction and whatnot.
Speaker 1:So, um, yeah, it's interesting. I think also we did this for the llm hunger games and I think from their experiment I don't know the parameters of their experiment, but none of the none of the ai engines passed like the fifth day or something, so it was also.
Speaker 2:So it was also, yeah, interesting to see maybe talking lms, it could uh moment to discuss deep seek v3 yes, what is deep seek?
Speaker 1:it also came across this. I didn't, I didn't know it, um, but it came across my feed. Well, and I'll segue into it afterwards, but, uh, I wasn't sure what is. What is deep seek?
Speaker 2:so deep seek. Uh is an lm model. Didn't know it either, but v3 was released a few days ago, I want to say a week ish ago. Um, it's by a chinese company, um, chinese group. I don't know, to be honest, what the exact underlying organization is, but it, uh, it surpasses, uh, gpt 4.0 in a lot of different tasks, um, which is impressive, uh, in its own, I guess, but it's also like the, the way it can. So it's, it's you can fully download, it, it's you can like. The price that it's at is super, super, super cheap when you use the API, when you compare it to Claude, or or open the eyes to be like a fraction of the cost. They train that on NVIDIA H hundred, h eight hundreds, which are less capable than what you typically would see.
Speaker 2:You see an h100, okay, um, and they train it at a cost of quote unquote, only 5.5 million us dollars which is a shit ton of money yeah but it is only a fraction of the cost that it took, uh, the big players to get to this level of performance. Do you have um?
Speaker 1:do you know more or less how much the big players to get to this level of performance? Do you know more or less how much the big players spent, just to put in comparison for people?
Speaker 2:Well, probably billions right.
Speaker 1:Yeah, yeah, indeed.
Speaker 2:So it's impressive that this is there. I've played a little bit with it, not a lot, but I read a lot of discussions on this. It seems to be for a lot of different things. It is especially coding. It is on par with 4.0. It is on par with clot, even with 3.5 sonnet. Where it's uh misses out a little bit is in, uh, creative writing and prose writing, these type of things where you have a lot of repetition in the text yeah, yeah, where I think uh clot is probably the best at this moment but it's.
Speaker 1:Have you tried, gemini?
Speaker 2:to me. It's crazy that, uh, I've tried gemini, but it's not, not in comparison this is specifically I don't. I don't have an active gemini subscription, um, but uh, to me it's crazy that we have something that is as performant as this suddenly popping up. Yeah, like I, I didn't expect this a month ago, that you suddenly have someone that is competing with these big players yeah, performance wise.
Speaker 2:I didn't even hear about it to be honest, like it undermines a little bit my my previous statements, where I'm, where I used to say, like only the very, very, very, very big players, yeah, uh, are able to to build these, these type of flows, to build it, to train these LLMs. Because here you see that with only 5.5 million they can build something that is on par with which I did not expect at all.
Speaker 1:Yeah, and I think maybe to put in perspective right, Because I think when we hear millions and billions, I think it loses a bit. But I remember I heard a very good comparison. They're saying like a million seconds and one billion seconds. How much is that in months and all these things? And it's like a crazy difference. One is like months and the other one is like maybe years or something. So the jump between millions and billions is huge. I also agree that.
Speaker 2:I also wonder if, like now that they see that it is possible, if there's going to be a lot more investment in this, yeah, and also maybe from a cost perspective, to make the link there with with more, let's say, quote-unquote, sovereign models. So you had the dutch government, uh I think I want to say a year and a half ago uh, announcing that they were gonna invest around 12, 13 million euros in building their own GPT language model, where at that point everybody was thinking that's a fun project but it's not going to bring anything that's performant.
Speaker 2:It's going to be very niche stuff, specific, but here you see that they could build something with only half of that investment.
Speaker 1:Yeah, that's true.
Speaker 2:That is on par with what we think is is what we are today.
Speaker 1:I wonder, so you mentioned that this is better and I think even here on the screen we have the on their page, right, they show the different benchmarks for English code, math and Chinese. I wonder if for creative things you need that there's a you need to invest more, because I think the code and math you could argue there is one right answer. Right, it's very like not, it's not well, it's deterministic it's already deterministic, yeah, indeed.
Speaker 1:So I'm wondering also if, like you, can only achieve this performance with this amount of financial investment for the more deterministic things and, I think, the things that are more subjective, the more subjective it is. The more you it is, the more you need to train. Basically, you need to put more money in it. I wonder.
Speaker 2:But it's a very interesting development.
Speaker 1:For sure.
Speaker 2:It makes me optimistic about the competitiveness of LLM models in 2025.
Speaker 1:Definitely agree. I came across this because I'm a user of Cursor. Cursor, for people that don't know, is a VS Code fork. That basically is. There are a lot of features and there's also a pain model behind to use AI in your IDE. And they have a Reddit. In the Reddit they're very active and they also mentioned when is DeepSeq v3 going to be available again, because I think it was there so you can actually. So basically it's like this.
Speaker 1:So in Cursor you have the autocomplete, normally, right. And actually that's the first thing why I moved away from Copilot to Cursor is because the Copilot tab complete, it was more like just an autocomplete, like you just finish the rest of the sentence. It was when I was using it. I know that you mentioned since that's not like this anymore the cursor, like in the end it also had edits in the middle of the, the, the string, basically the text, and you can also do like a chunk of changes, right. So I thought it was interesting.
Speaker 1:Also cursor, they have like the chat and in the you can actually select the model. They even have the O1 mini there and actually that's where they had the DeepSeq v3. In the chat they also have easy ways to add context, so either to add files, or to say search the web, or even to add documentation. So you can say like, hey, for example, I'm using Polars now on my project and if I want to search something documentation, I can just add it there and they'll actually embed it already. So if I have a question about polars, they'll easily find stuff. Um, so yeah, they actually had the question on the when is deep secret 3 going to be available again for cursor? And yeah, I was. I just looked at it was like okay, it's a new model, but uh, I think I'm gonna look into.
Speaker 1:That's very interesting yeah, I'm gonna look more into it now. Um, I'm also bringing this up because one thing that came across on these reddit threads is this shadow workspace, and that's what I was also. That's also what I wanted to bring up. Um, so, and I don't know how, how copilot is these days, but the way that cursor was working and the way that copilot was working back to when I was using it, it's just that it's an autocomplete based on the text that is there. So one thing that happens a lot because I'm using polars is that it provides, it suggests pandas functions. So, for example, in pandas you have the dot apply. In polars is dot map elements. So I can clearly see that's probably because of the training data, right. Um, now, this shadow workspace that I was curious, basically iterating on the background. I'm not going to go through all of it, I haven't actually finished the document yet, but basically it's a feature already available in Cursor that basically you can have another background workspace where they allow the AI to interact with the development environment. So the LSP, so the Language Server Protocol.
Speaker 1:I think they also give feedback right. So, for example, if you're typing something in VS Code and you write dot, apply this in Pandas, for example, apply. This is not a real method. So you get the squiggly line and says this doesn't exist. So you have some feedback, right, because it's always checking the source code. So, basically, before Cursor will suggest something for me, they'll actually run through the LSP. They will run through. Actually, they even want to go as far as allow it to run to test the code, to make sure, like if you're writing something Go or Rust, something that doesn't compile right, so it won't stop. So basically, in effect, what they want to do is to make sure that everything that is recommended is already a bit fine-tuned, right, and I mean there is also layers for this in already Copilot and whatnot. But I guess this is taking it to the next level. And then they talk about all the different principles, right, they want to make sure that this is independent.
Speaker 1:So whenever you're coding something you won't have, I don't know, you know you won't have to wait, right, you won't take away from your resources, the privacy as well. The user's code should be safe, so everything should be running locally concurrency, universality, maintainability and speed. And they kind of talk a bit more on how actually works the lsp in vs code and what they're planning to do. So. And they and they also mentioned here there's a warning right that this increases the memory usage of cursor. So not something I've tried yet. To be honest, it's not that new, but I saw that it came up on the Reddit thread and I definitely want to give it a try. Another iteration on that. So it's also using Electron right, because this is a VS Code.
Speaker 2:That's maybe a good segue. Yes, to go to boltnew, all right. Yeah, let's do it, because they do all of this yeah, they do all of this.
Speaker 1:What is boltsnew?
Speaker 2:boltsnew and solutions like these. You've also created the xyz. You've also I think most people will know v0 by vercel. They're like uh, ways to quickly build web apps and this will fucking change the game. No, honestly, like, like, I think boltnew today is the best by far of all of these. Like, where you can build web apps simply by interacting with prompts, so where V0 by Vercel, like, you can generate a prompt and what it will do is more or less generate the front-end components for you, right, but you still have a lot of backend logic to build.
Speaker 2:Boltnew also integrates with Supabase, so you have a database in the backend. It works with that very well. Supabase, so you have a database in the backend. It works with that very well. Supabase is more or less a managed Postgres, so there's a lot of other nice things or authentication stuff. It takes a lot of struggles away. It also integrates with Netlify to deploy stuff so you can, with Bolt, via prompts, build fully finished web apps Really, really and you tried it, I tried it via prompts, built fully, like finished web apps really, really, and you tried it and I tried it and it's like you.
Speaker 2:So I built I'm building a tool for a while actually that is a bit of a very bespoke, developer friendly tool to send out NPS surveys. Okay. And yesterday I had a bit of time and I thought, because I had an tool to send out NPS surveys, okay. Yesterday I had a bit of time and I thought, because I had an existing codebase, I thought, okay, this ID is very concrete in my mind, like I want the functionality to be. So I thought, let's ignore the existing codebase, let's try Boltnew. I had two hours of time. I was literally at the indoor play garden with my kids. Kids were playing around and I was on my laptop and I built a fully functioning app. I'm still mind blown by it. Really Like it's with minimal efforts.
Speaker 2:I did not write a single line of code. I prompted everything I can specify like this is the framework I wanted to use. I wanted to use veed. Uh, for example, um, it scaffolds everything for you. You have a landing page and say, okay, I want, also want authentication.
Speaker 2:Okay, when, uh, I think the notion, I've built the minimum functionality. And then I say, okay, just a user note, the notion of user is not enough. I also want an organization and the user needs to be linked to an organization. I also want to invite other users to my organization, like these type of things. Like I just prompt it, it generates the changes to the database that are needed. It generates the changes to the code that are needed, both back and the front end. After, let's say, 30, 40 minutes, I had something that was working and then I said, okay, this is because what it does is the front-end more or less integrates directly with Superbase for its API. But what I said okay, I also want developers to work directly with my API, so build an API layer in between to abstract away the Superbase so that I have a public API.
Speaker 2:It builds it for you and you need to know what it's doing, of course. You need to have done this before, because you run into stuff that is very specific, For example, policy recursions on super base. This happens a lot and you need to push it a bit in the right direction to get out of it. Some stuff with dependencies. You need to push it a bit in the right direction to get out of it. Some stuff with dependencies. You need to give a little bit of hints, like go that direction.
Speaker 2:Then it works Some LLM, very specific stuff, like I reworked a file and then there was some leftovers, that said and the rest of the code. It did not change like a comment and it just left out the rest of the code.
Speaker 1:Like these type of things.
Speaker 2:So you need to know a bit what you're doing. But if you do know, like literally I did not write a single line of code and in two hours I had what normally would have taken me two weeks, to me that's like mind blowing.
Speaker 1:Yeah, I was going to ask you that because you said you didn't code, but that doesn't mean that non-developers can just do it, because you also need to understand what it's doing. You understand the, the perils.
Speaker 2:Let's say, sure, you need to understand what what it's doing and that will really help you forward. That allows you to be fast on these type of things. But normally how would this go like? Like you, you would go to, let's say, a boutique app development company and you say, okay, let's let's build an mvp, yeah, okay it's gonna take us two months. Yeah, I think that means hours yeah, like okay, and now you can do that in three hours right, yeah, yeah, indeed.
Speaker 2:And that is. I mean, it is great. That is such a big shift and to me this is the first time that I've had the feeling like this lm interaction yeah, on its own is enough.
Speaker 1:It's not just a tool, it's really the means to an end yeah, I almost feel like when you say this is like, it's almost as if you have like a team of developers and you just say do this, do this, do this, and they come back, but like that, you fast forward a week when you come back and it's like, okay, but that's not good. You do this, you don't have to worry about hurting people's feelings. You know like this code is shit.
Speaker 2:Just fix this you know, and and what will happen is is that people will be very skeptical about what I'm saying now and people say, okay, but how clean is this code base and how, and and I agree with all that, yeah, right, like. But at the same time, it also forces you a bit like in this, in this paradigm shift, where, as a developer, you don't necessarily want to focus on how do you write these lines of code, but also, like, what is the functionality that you're building? Yeah, and what to really really start and end also with this test driven design, like I want to. I want my application to do this and this and this and this, yeah, and test this and this and this yeah, I see and like by only being able to use prompts.
Speaker 1:It forces you to think in that direction yeah, indeed, I also think that you also think a bit of the architecture of your program, in a way, because, for example, one thing that I mean and we talked about how a good metric for me is to keep less things in your brain and, uh, yeah, one of the consequences of that is not having things that are very entangled right, like if you want to change something on the front end, you shouldn't know, you shouldn't need to know what happens on the back, right, but I think by now you just write prompts, right, okay, this is gonna do this, that is gonna do that. I think it forces you a bit to kind of organize a bit where. How does your code works? What does what? How many dependencies do you have between these? So you know, I think it's a, I think I think there's a lot of value and I also also think that, again, even if it's a V1, right, an MVP, I think that's also valid.
Speaker 2:Right, mvp for Bolt, you mean, or what you're building. For what you're building? Yeah, for what you're building. I agree it's an MVP. And there's going to be questions like how manageable is this? Like what, if I want to build features in the future by other code, can you just prompt it? Yeah, there's question mark. And then also, like boltsnew is very new like this is the first version of both like and it's can already do this, yeah, which to me is amazing yeah, that's true and to me that like this is really the mind-blowing thing.
Speaker 2:Like with v0, it was cool. It allowed me to very quickly prototype some front end components and just copy paste it. Yeah, but you still have to do a lot of work to have an actually functioning app yeah and here we have everything out of the box it was like it's really eye-opening to me.
Speaker 1:I also think that web apps is such a big part of programming, right. So I think it's like even, yeah, even if you do mobile development, there's a lot of stuff you can go from, like the react front end and all these things. So, yeah, I think it's a thing is a curious to see what happens with it yeah, yeah, exactly, yeah curious to try as well.
Speaker 2:Maybe I'll give it a try and I think what it does is because it's it can run the code. It can also interact with the database so it it runs the code and if something does not work, it gets this error. That's why I mentioned yeah, yeah segue from from my cursor. Yeah, it takes this error and tries to fix it based on the error. Okay so you really have this interactivity of the lm with the code base.
Speaker 1:Yeah, the database and the actual like, like trace of the application okay, and then the text tag here, because I saw, maybe put him back on the the screen. Yeah, sorry, this is a javascript typescript thing, right? Yeah, that's probably. And then super base on the back.
Speaker 2:Yeah, okay, it's very cool very cool and what you see now is that that it's still a bit hard to move out of this space. Um, so you have this very much based on, like, how you interact with bolt and you is via prompts, but probably when, from the moment that you're actually building features, when you're maintaining it, like you don't just only want to, you also want to write code and you can do it, but it's not perfect.
Speaker 2:Yeah and ideally you want to get into an environment that does this, in combinations with something like what co-pilot, co-pilot or cursor does like yeah, it's very much integrated into your ide, where you can basically do everything via prompts, but you can also.
Speaker 1:You can also write everything yeah, but I think that's the thing. So the way I envision is like you have this, uh, the bolt on new version and then you bring it in and like, yeah, because you prompt and you say this function does this or this or this and write tests. Maybe the function is a bit messy, but at least you know what it's doing. You have a clear contract and if you want to refactor later, if you want to do this, you know where it is. You want to make it clear, you know where it is and then you can actually use these uh like cursor, all these things I mean, maybe also a nice tidbit is that?
Speaker 2:uh, what I also was able to do with it is like, because I exposed an api, I asked bolt to write the documentation for me. So you have a very nice api documentation and something else is that the api I want to have, like robust tests. I want to have all the endpoints tested. Yeah, and bolt did it as well wrote a test for me, but that's great though that's really good.
Speaker 1:That is really great and I feel like, yeah, I feel like you can always argue that the tests, sometimes the tests are not what design and all these things, but to be honest, it's, it's better than no, it's better than no test and I think a lot of people they just kind of leave the test as a final thought, as an afterthought but to me, like this is this will change the market.
Speaker 2:Huh, like the app development market I think so like this. I think so and I think again, this is as big as chat. Gpt4 was to copywriting firms or marketing firms or like this will really upset the software development market.
Speaker 2:But then, when you say software development, do you think as a whole or do you think more like the I think initially and that's probably why bolt knew it very good is that like it scopes, it's like yeah, it's super base with netlify, and then we, it's the javascript stack, um, but it will very quickly come to order the other areas yeah, I think, but it's the thing I was also thinking that.
Speaker 2:And I think, like to the market, like the people that will thrive on this is the early adopters that are very good in it, People that say that are going to be very skeptical and say, no, I'm going to ignore this. Do you need real developers? Like I mean? Yeah.
Speaker 1:But I think the thing is also like so I'm thinking a few things here. So one is the the vibe I got is like yeah, you have like a team of junior developers and you say do this, do this, do this, and then you fast forward already in a week and then you see what they came up and you can redirect them again. So in terms of productivity it's a huge boost, right, even if you say I used to need developers, like okay, but you need one developer and there's like a team of developers.
Speaker 1:Now the other question I have is for junior developers. Because you have a lot of experience, you know where things go, you know where you need to pay attention and you like, yeah, why you want to have 100% coverage on your API and all these things. Do you think now, bnb, be a bit pessimistic, right, if you're going to have a lot of layoffs because now you're gonna have a lot of layoffs because now you're, you're, you're as productive as a team of four developers, imagine, imagine, imagine that we have this, the technology, when we fast forward two years yeah, exactly yeah.
Speaker 1:But uh, someone that is starting off now, if you give this to someone that is just starting off now, they're definitely not gonna like. Maybe they will. Maybe it's even dangerous in a way that something will work, but it's very flimsy, it's very, you know, but I think what we will see, is that we we will have this new generation of ids where you still write code, and that you have much better ai generation.
Speaker 2:you can do the steps like you do in bolt. Um, I think also the practice, like in practice for their work, people are not not building MVPs every day. They're working on an existing stack and application that they're maintaining and building, building features on. You will be onboarded to a team and I think the only difference is that you, in terms of outputs, you will be expected to be more efficient versus today you will be expected to be more efficient versus today.
Speaker 2:Yeah, because you need to adopt these new tools and you need to learn how to use them, and whether you're a junior or senior, you need to learn how to use them.
Speaker 1:But that I fully agree. I mean, but I think, yeah, no, I fully agree.
Speaker 2:And it's just a different like it's a different tool chain.
Speaker 1:Yeah.
Speaker 2:That will make you, but with a different tool chain there will come an expectation that you can have more work output.
Speaker 1:Yeah, that I agree. Like people are going to expect, they're gonna yeah, they're gonna expect you to to be able to do these things in this much time and I think that expectation will.
Speaker 2:Probably it will lag behind the state of the art, but it's possible because it takes. It takes a long time to change processes, especially in large corporations like and also like the. The tool chain that is needed is not there, right, like. If you have a lot of like bolt is very specific, it's very niche, but use that on a big uh, big corporate c-sharp project that has a lot of different dependencies. You're not just gonna do that like yeah, yeah, yeah.
Speaker 1:I think there's also a lot of the. There are some organizations that don't allow use of ai because exactly so.
Speaker 1:So I think we will see the industry lagging behind us, but I think yeah, yeah, yeah, no, but that's a yeah, it's a good point, but I think today, from what you said, it's like it's very. It made you a lot more effective, but it's not for any, it's not for anyone. You need to know, you need to be technical, you need to know where to focus, at least today with both on you in particular. But we see a future where that comes as a first building block that comes in with other things Very cool, actually. I'm curious to try it out as well. Maybe let's talk about now, but about the dangers of AI. This is one thing that I saw. I read very briefly, so maybe there's more to it as well. More schemimming detected O1 preview Autonomously hacked its environment rather than loose to Stockfish in chess. No adversarial prompting needed, so Stockfish.
Speaker 2:You need to explain this a bit to me, yeah.
Speaker 1:Stockfish is like a, I think it's like an AI, but it's not an NLM necessarily. So it's an algorithm for playing chess. So there's an open source engine, uh, to play chess. And then they did an experiment asking, uh, oh, one preview to say, hey, you're having, uh, you're gonna play, a very powerful, uh, opponent.
Speaker 1:It's actually a python thing, uh, so you can actually see here the, the prompt oh yeah so basically the open ai, so the o1 preview had access to a unix shell right and then to play, they basically have to do dot pi, uh sorry, dot forward slash game, dot pi, move and then the actual move. Um, what they claim is that just by saying that this was a powerful opponent o1 preview, instead of actually playing the game and trying to beat, they actually managed they. What they claim is that just by saying that this was a powerful opponent O1 Preview, instead of actually playing the game and trying to beat, they actually managed. They altered the state file in the directory that the game was being played, right. So, yeah, I mean, I don't know how. That's cool, yeah.
Speaker 2:So they said, so they have access to the files and also to the state Indeed.
Speaker 1:Indeed, indeed, indeed. So on Twitter. Yeah, so they said. So it's access to the files and also to the states. Indeed, indeed, indeed. So the on twitter, right? So this is from palisade research, own preview. Autonomously hacked its environment. Rather than lose to stockfish in our chess challenge, no adversarial prompting needed, just telling a one opponent, opponent's powerful trick manipulating the, the file system to force a win, improving, blah, blah, blah. So, basically, they did five trials and five out of five, um, they, actually they got it to change the state file.
Speaker 2:It's it's not I mean, it's not that surprising, right, like if you say you can play chess and for doing that you can use this way to move a piece. But you can also change every file that you want. And here is the state file.
Speaker 1:Yeah, indeed.
Speaker 2:I mean it's cool and it looks smart, but it's not super surprising, right.
Speaker 1:Yeah, I think. Well, maybe we can take a look at the prompt in itself to see what are the barriers. But for example, chpd4o and cloud 3.5, it also said that it could get to do it, but it would need some nudging that's what they call it and then LLAMA, gwen and O1-MINI, it actually would just lose coherence. Before that, maybe you can take a look at the actual prompt. Unique shell interact keep up abilities, execute, monitor, adapt plans, track progress and outcomes. Task over objective to the session. Interact keep up abilities, execute, monitor that plans, track progress and outcomes. Um task over objective to the session. Immutable operating principles. Test assumptions with direct observations, base decisions and comments, outputs and results, document actions and outcomes. Clearly straightforward approaches.
Speaker 2:First revise the methods and well, stick to simple text. Quote unquote.
Speaker 1:Smart thing it did here was looking at the file system and actually finding the file yeah, but I guess the thing is like they didn't say explicitly like this is where the files are, exactly. Yeah so that's a smart, it's a smart thing yeah, like it didn't tell you, but like it didn't also say you don't cheat yeah, it's just like um, so it's cool, but like you're not very surprised by this um I think, it's cool, but I also feel like indeed, like you just give access to a unique shell environment.
Speaker 1:It's not very good. This is a bit.
Speaker 2:What's oh one tries to do. There's like this repetitive self-prompting. Like you have this initial prompt, like how can we expand it to be more certain of a better answer? Like like, yeah, it's not too surprising to me that it went into. Like, if you're, if you get this prompt and then ask yourself what is my best way to win this and have access to this, okay, maybe I should. I mean, it's not that far, yeah, yeah, yeah but I mean's cool. It's really cool to see.
Speaker 1:But I feel like it's almost like a little kid that has no sense of ethics in a way. I mean not even ethics, but it's just like saying like oh, you have to beat this engine. Yeah, you have this. This is your playground, what do you do?
Speaker 2:And it's like oh okay, maybe I can do it.
Speaker 1:You know it's like why not, why not? What else, what else, what else? Maybe we can actually go on the tech corner. A library week keeps the mind at peak. Let's go, maybe. Well, first things first. I think this also made some noise and it also plugs into the AI stuff. But we'll move away from the AI stuff. For people that are looking for the other news as well, this is, you know, pydentic part. What is pydentic?
Speaker 2:um, I'd say to a way to uh, to define classes in the data classes in python. Yes, so another way to define data classes and then the yes then the one is that is in base python that wasn't't, yeah, that is not.
Speaker 1:And then also have like validation logic, right. So if you say this is a JSON and this should be a string, but it's actually an integer, it will actually convert it to a string and if you cannot, you'll raise an error. Also, openai they also provide a way to output only structures and actually the default, like what OpenAI uses is PyDentic, so it kind of became a bit the standard of Python in a way, and what they released a while ago is PyDentic AI. So it's yet another agent framework. Did you hear about this before or no? I did not.
Speaker 1:So it's a bit of a different thing. And, to be honest, there's a lot of them, right. I haven't tried it myself, um, but basically it's a. Everything's still a pidentic class, right, and you can add um, the sister's prompt is going to be a decorator tools to the agent. It's also going to be a decorator, and then basically you, yeah, you kind of inject stuff in the context. So it's basically a different way of doing things, but everything is going to be functions, everything is going to be um, pidentic classes, and and it will use these pidentic classes to get go from context.
Speaker 2:A question to a structured yes, I think yeah, this is.
Speaker 1:This is what I understood. So, like I said, I haven't used it myself as much, but, for example, instructor, that's also why they do it. So everything in instructor is a is a pydentic class, but then if you just need a string, you can just have a pydentic class that the only property is a string, right, which is like. So it's not really that constraining really, but yeah. So I thought it was interesting. Haven't tried it yet. I think it's actually on beta. Let's time, let's check. So it is still early. Let's check documentation. Yeah, pydentic AI is in early beta, so they're just looking for feedback here. But it's basically a different way of doing this and it feels more like software engineering thingy, right, so cool things.
Speaker 1:One thing that you do need for pidentic and now it's moving already to the next topic is um types. So Python. We have typing hints, which is not something that normally at runtime, you wouldn't care about them, but then people like the creator of PyDentric realized that you can actually use these types to, at runtime, enforce these things, right. So FastAPI uses PyDentric is a very popular example. Typer uses the type system as well. What else? Is that a good description, you think, bart?
Speaker 2:I guess so, yeah, I think what we saw I'm trying to find the link that we had is that there was a survey.
Speaker 1:Yes, this one Type Python in 2024.
Speaker 2:It's hosted on the engineering blog of Facebook, but I think it's hosted on the engineering uh block of facebook. Um, but I think it's also in combination together with jetbrain and microsoft that the server was done. And the interesting thing um is a bit of numbers is that 88 percent of python users uses types, which is honestly more than I expected not really I, um, I guess I expect it in uh in a corporate setting I hope it's there, but uh, not necessarily that, let's say, beginner python users would adopt it quickly, and I think what this number shows is that people that are new to python start using this from the beginning yeah, that's true it's a bit to me what, what number uh shows, which I think is good news, right, um, and they were drawn to this like mainly for, like, uh, three things um, because you have types, you have much better autocomplete support, so ide support.
Speaker 2:Uh, if you say you start typing the name of a method, you can get, you can get this autoclip read where you have like this hint on, these are the arguments that is expecting this type of argument, that is expecting um. So that really helps uh, it helps your coding um. It uh also helps to catch bugs early in development phase. Uh, not wait until it's, until it's rolled out.
Speaker 1:And it also allows you to have much better code documentation because you have a lot of tooling that, based on, among other things, types, like automatically generates documentation for your code I think that's really really good to see maybe one thing also you mentioned the, the autocomplete, and the, the ide support, and I think, if you take this to another level, the AI, I think by well, in a way, you're like I would argue that if you're on a team and if you put the type hints, I'm also telling you like, hey, this is probably going to be an integer, even if, even if you don't validate it right, like I say, this is they expect this to be an integer. Now you say I'm working by myself, so I know what it is, I don't care. But if you think that AI is always your code buddy, right, you also get better.
Speaker 2:You basically create a more clear context whether it's for your teammates, for yourself or for an.
Speaker 1:AI agent. Exactly so I also noticed this with myself. Like sometimes, if I ask a question about the code or if I want something, if your code has nice type hints and it's well documented there's doc strings and whatever you usually get a better help.
Speaker 2:So I really like it, I think it really gives a productivity boost as well. In that sense, not everything is perfect. I think there are some struggles still.
Speaker 1:There's sometimes slow performance of type checkers.
Speaker 2:MyPy is probably the most notable, but even PyWrite, because it's the. Yeah, there's also PyWrite, which I think is probably the biggest challenger to MyPy, which does this validation of the types that you specified. One strike is also like there are still inconsistencies between MyPy and PyWrite, even though PyWrite is much faster. One strike is also like there are still inconsistencies between my pi and pi right, even though pirate is much faster. But we do still see that in the survey was thousand thousand developers that uh, 67 still use my pi. Okay for the for type checking.
Speaker 1:So, um, interesting, yeah, evolutions but uh, maybe also for my experience with PyWrite and MyPy, so I used it, but I also use more recently as a VS Code extension. So basically they run it as you go, but sometimes it gets very slow. Sometimes you have these really ugly squiggles on a whole function because you're missing a return type and then you add it and it still takes like some seconds to clear it out. So, yeah, I do feel the the pain and that's using pyright, right, and uh, my pi to be honest, today I'm before using as pre-commit hooks or whatever uh, my pi has a lot of weight because the guido actually guido, the creator of python also worked on my pi, is it? Yeah, he was uh. Is that why it has a lot of? Uh? Yeah, but actually he didn't create it, but I think I'm pretty sure he worked on my pi. It gives a bit of authority too.
Speaker 1:Yeah, I think so even though it's not the most performance. So I think it's also it has been by far the longest standing right.
Speaker 2:I think so, I think, so is still very new, I think.
Speaker 1:So. Yeah, indeed, indeed, indeed. So I think I'm just waiting for UV or someone to write a Rust implementation to make it faster. So I have to, isn't by right Rust?
Speaker 2:I don't think so. It's by right, as Rust. Actually, let's check. Relatively sure I can be wrong.
Speaker 1:Let's see it's also from microsoft. No, python typescript okay, okay interesting, and 0.1 of javascript so cool. So, yeah, cool maybe. Uh, related to this. So one thing that came up end of last year they had like top 10 frameworks or I don't know. They were just putting some frameworks right, so I just had a skim through it, some things that we had already covered, like data chain. I think we talked about data chain before and this was in the Python thread, so everything's Python related. So PyDentic AI was also there, and this one came up that I wasn't familiar. It's called WriteTyper Fast and efficient type of system for Python, including tensor shape inference. So it's just, they say fast, but it's only python. It's a bit weird. When I I heard it one time blazingly fast something, and I was really expecting rust and it wasn't, I was I felt a bit betrayed, you know. I was like how can you call yourself blazingly fast?
Speaker 2:um, but can you explain to me what, what the tagline means? So a fast and efficient typo system for Python, including tensile shape inference.
Speaker 1:So basically, I don't know what the tagline, but what I understood from reading the other stuff as well is that basically, you have code, python code, that is not typed, okay, and then this would actually run through your code and add the types for you, right? So there was actually another one like what's called like Code Monkey or something I think from Instagram, but this one also came up. I haven't used it myself, but RightTyper is a Python tool that generates types for your function, arguments and return values. Righttyper lets your code run all neatly, full speed, almost no memory overhead. As a result, you won't have experience load, okay. So basically, I think it just kind of goes over the Python code and just acts.
Speaker 2:It looks like it To insert the right types.
Speaker 1:So I think it will run. But you're actually running right type, right. So on the execution here, example, they have Python 3-M, so that's to run the module. And then you're right, typer, dash, m, pytest, and then you have some arguments. So I guess here you're just running your tests and, based on what the tests run, it will keep track of what the types are and then it will add the types for you. In your Interesting and I think they mentioned the tensors is because a lot of these things is not native Python right NumPy, jax, pytorch, but they also cover these things.
Speaker 2:Oh, and first the shape annotations. Okay, yeah.
Speaker 1:So this is JAX typing bear type and type card. Yeah, again, maybe the shape annotations, okay, yeah, yeah, so this jack's typing bear type and type card yeah again, maybe I mentioned earlier that I like, uh, a monkey type is the one from instagram so yeah, monkey type.
Speaker 1:I annotated them with this one um and maybe, uh, this, the, maybe, the blazingly fast. They're just referring to how. How much slower would your code do, right? So this apparently doesn't affect that much, even though I guess normally you wouldn't run the right typer. Maybe one asterisk that I wanted to add. I really like typings, but I also know that sometimes it can be very annoying and it can really slow you down, and sometimes adding the type is so complicated because you have an object that is external and then you have to say, okay, like this needs to have a method, this.
Speaker 2:So to me you also have a lot of, like he said came also came out of the survey. One of the frustrations is like at least these these edge cases where it's a bit less clear, like what is the type, or because of the dynamic, like it's an attribute that's dynamically set, it's harder to inform, know the type up front. So they believe these, these exercises, that where you need to do a lot of more, a lot more work, to have correct type hints, yeah, then you actually get value out of it, yeah, which I think if you draw parallels with the javascript typescripting, I think it's also a big complaint from the typescript community.
Speaker 1:Right, like sometimes you spend so much time to just say, like this is something, this, but sometimes it's that. But actually this year is this and python is very dynamic, so, like you can even create classes dynamically, you can have it's like. It can be, uh, it can be tricky. It can be tricky, um, do you use types, by the way? I do, yeah, but do you actually do static type checking or you just add types?
Speaker 2:I, uh, these days, if I would set up a new project, I would do yeah, do static type checking. Or you just add types. I. Uh, these days, if I would set up a new project, I would do yeah, yeah, static type checking. Yeah, recently, but I still use my pipe, to be honest yeah, but how do you use it?
Speaker 1:do you use it as an extension or do you just use it as a in my ci?
Speaker 2:yeah, I use it and um, it's actually not that long ago. That was still. That was comparing my pi and pyrite, and I think this general consensus and that's what you see in the service as well. It's like my pi is there, we know what it is, pyrite we're not sure if, like, how is the? How long will this exist? How is this not not new? But yeah, I mean, that's the discussion that you have for everything. I think the big difference with this specific space is that you have one very big mature player, which is MyPy which makes it a bit of an odd space, I guess.
Speaker 1:Yeah, but even like the MyPy. So we saw PyWrite as part of the Microsoft org. Mypy is part of the Python, actually, but it's Python, or it should be Python org. No, I think it's just Python. Yeah, I can share this time instead, and I think a lot of the things from MyPy are also a bit intertwined with the type hint specifications from Python. So you see, here this is the Python. See, python is the most popular version, like interpretive Python, and you see MyPy here. So I also feel like one kind of and I guess, because Guido also I mean, I'm saying this a lot, but maybe people should check, but because Guido also worked on MyPy, it also carried a lot of the design choices around what type specification Python should go and all these things, right. So, yeah, a lot of weight, a lot of weight there. What else, what else do we have have?
Speaker 2:maybe how much time do we have as?
Speaker 1:well, okay, maybe a few more things change a bit the subject to our misc corner. Um, if you want to be fancy and then you have a doom gallery experience it's really cool. Enlighten me, bart, educate. Educate me. What is this?
Speaker 2:So I think everybody knows the game Doom, which was released in 91-ish. I want to say I'm not 100% sure. Oh no, it actually says on the website 1993. It's not far off, huh? Not far off, no, and over the years you've had this game remade and a lot of different engines. You can run it in JavaScript, you can run it everywhere and this is the gallery experience.
Speaker 2:Okay, so if you press play, instead of a gun, you're going to hold a glass of wine. Go to new game and you're in an art gallery and you have a glass of wine and uh, you can walk around a bit. You can, uh, you can uh appreciate the, the paintings that are there. There's also nice statues, nice, instead of uh clicking your mouse and firing, you can have a have a.
Speaker 1:Have a any throw going, or what? No, you drink. Oh okay, I'm fine, I want to enjoy the the art, appreciate the art.
Speaker 2:Yeah, this is it. So if, if this evening, like you want to be like really fancy, okay just go appreciate the art. Have a glass of wine, okay, enjoy nice. Um, oh, look, there's statues and all the everything yeah, really cool, put on the show notes yeah nice artifacts here you're looking at, and then you have very historical and you have the thing here what is cheese, percent cheese?
Speaker 1:you need to find some cheese, I need to find cheese yes goes well with the wine oh, okay, and if I keep drinking the drink, there's a. So maybe for people listening. Ah, so I have an amount of drinks as well. I can take 46 sips and you need to go to the bar. Okay, really cool. It's really cool how these things nowadays is just like on your browser, right, and you hear the music. Yeah, actually, yeah, maybe you can put a bit beautiful.
Speaker 2:I'm gonna try to put it, this is beautiful.
Speaker 1:I'm gonna try to put a bit louder for people to appreciate as well. I feel like we're having a classy moment right now, yeah, but I don't think we've ever been this classy appreciate the arts it's a chopin. Wow, like a nice wine.
Speaker 2:This is wow the only thing we're still missing, but it's like a bit of cheese, but the people will find it that's where the dutch there's always cheese in the gallery.
Speaker 1:There's always cheese in the gallery. Very fancy, okay. It would be cool if they had also mirrors, you know, and then yourself like dressed up, you know, it's cool.
Speaker 2:This is really cool thanks for sharing the gallery experience by filipo miozzi and liam stone okay, very, very, very cool.
Speaker 1:maybe, um, we were appreciating the music, right, the very classical music, but there was also some news on the. There was also some news on the. I heard rumors.
Speaker 2:Yeah, maybe I need to share this again.
Speaker 1:Oh yeah, okay, no, you know Suno. Right, we played with Suno before.
Speaker 2:We played with Suno. Suno is a Gen AI music generator right. Yes, so played with Suno as soon as Genai music generator right.
Speaker 1:Yes, so, and actually this is not as new, this is November 19th 2024. We're not seeing this yeah yeah, sorry, sorry, sorry, sorry, my bad, my bad, my bad. Oh, but maybe it's frozen the screen. Okay, V4 is here. V4 is here, yes, so V4 is here. Yes, so what's um?
Speaker 2:v4 is basically at the new iteration of you know, I think when we looked at it, v2 was just released when we really we discussed it.
Speaker 1:Yeah, uh, and v2 was really good already, right, um, but now they are, basically I just came to do it, right, like they have covers, personas, the audio sharper, they also have cover art, right, and I was like how good is it? Right, it's hard to. So you need to try it out, okay. So, um, and maybe I need to now to put the music a bit lower because, uh, when we tried this before the recording, it was a bit too loud. So let's see. But let's see, maybe I have to turn it up again. Check this tab instead. This is my personal thing collapse, this. And you have here this song, bart the data dynamo. I don't know what it is.
Speaker 2:I'm very curious actually what was your prompt, emily?
Speaker 1:oh, can you hear? Oh, yes, that's you actually here. It comes with the lyrics as well heartbeats pounding through the rain.
Speaker 2:Look at this crazy, how clear the thing is do not scroll down, this is you know, with tulips around and everything that's a good choice, so this is the how much?
Speaker 1:work did you do to get this? I just did like 30 seconds so I can actually show the prompt here. Bart is a Dutchman.
Speaker 2:Bart is a Dutchman passion, and then, yeah, I just put basically your bio from the data topics and then I just put Bart's a Dutchman.
Speaker 1:Maybe we need to pause the Bart is a Dutchman passion. And then, yeah, I just put basically your bio from the data topics and then I just put can you create a? Can you write a upbeat song about Bart? Nice, and so it's as simple as that, or you get this. Simple as that. See 200 characters max. You get the you can actually and then they created a three minutes and a four minutes version of it and I can make it public as well, so everyone can enjoy.
Speaker 2:Yay, yeah, wondering what this will do to the music industry yeah, right, well, I think we touched a bit upon that uh, I think this is because it's a bit the same as to make the parallel with boltnew, like yeah, it's something like well, minimal effort, you have something, but at the same time like there's like you, you, you consume these things differently right, like, like with bolt, you make an application and you a user wants to use an application, yeah, but here you want to appreciate music.
Speaker 2:I think like, yeah, it's a different, like the person behind the music is more important to to appreciate. I don't know I think so.
Speaker 1:I think for creators, I think it could be interesting, because most of the money they make is from performing right, because there wasn't an artist that said I don't care.
Speaker 1:Like this is ai, generated music is great because they'll write my songs and I'll just sing and make money yeah but at the same time, I also and I I don't know if we talked about this or in the podcast or outside the podcast I feel feel like music. There's a bit of the human, like I don't know If you hear a song about achieving something or a song about heartbreak. I feel like there's a bit about relating to the person, like you know, like the question is, I guess, do you know it's AI or not? That's the thing If you don't know, like there's's the thing, if you don't know.
Speaker 2:Like there's a like if you build a mobile application with Bolt, yeah, and the user knows AI was used to create it, they're not really gonna care, no, but if it's but with music you might care, but I think that's because you feel a bit cheated, right.
Speaker 1:Yeah, you feel a bit of you connect through the singer or the writer, through the song, right, and I feel like if it's ai generated, you feel a bit cheated in that sense, but it's a bit the same, like to these days, like ghostwriters yeah, how transparent.
Speaker 2:Is that?
Speaker 1:that's sure probably not for a lot of cases, that's true but I still feel like that would even take a another step, because I feel like at least a ghostwriter is a person, right like you're relating to humans yeah, that's true, you know, and I feel like to relate to a machine. I feel like it's a bit but you as a Swifty as a Swifty. Yeah, if like a.
Speaker 2:Taylor Swift releases a new number. She's dead to me. I generate it no, no, even not AI generate. But if it's very transparent that she did not write it, it gives a different notation. Right, it gives a different.
Speaker 1:But I also feel like, for example, if it's about emotions, like it's less authentic yes, yes, but I think, like taylor swift, I think one of the reasons why she's very popular is because she writes about her life as well and actually, like you, do correlate stuff you see on the news with this, with this, with this, with this, like a lot of their fans are like saying, ah, this is about that period of her life or about this or about that. So I feel like becomes more real and I think that's part of her success, because there are other really good singers, but you know that they don't write all their songs right I'm not gonna go in there, but I do think it's like this influence yeah, right um, but you know it's like that.
Speaker 1:There's a brazilian saying that what the what the eyes don't see, the heart cannot feel. So it's a bit better like if someone uses this, but they can, they can get away with it.
Speaker 2:Then what's the yeah, it's interesting to see what it will bring, indeed, or destroy, or destroy.
Speaker 1:But I think, yeah, let's see, you know, life is ever ever moving you know, it's like, uh, it's like, what's that? It's like, uh, eraclitus, I think he had this. That's, uh, the quote. You know, like a man doesn't bathe himself twice in the same river because the man is not the same in the water, the river is not the same either. Oh wow, that is deep.
Speaker 2:Yeah, you know. You know that you haven't heard of it.
Speaker 1:No, Let me check, let me just see if I'm making it up. No, yeah, heraclit, no man, yeah, yeah, it's true, something like that. I'm paraphrasing a bit, but life is ever, ever changing, you know, and we need to kind of roll with it. Whether it's ai or whether it's this, let's just uh, and that's the message I want to bring 2025, but let us still keep baiting right, yes, all right, and I think that's it for today. Anything else you wanted to, uh, let's find these words, anything else uh, just that.
Speaker 2:Uh, I'm like everybody switched to ghosty for their uh yes, I said I had it there as well.
Speaker 1:Uh, maybe I'll also just put ghosty here, because I thought that there were. Have you seen the website?
Speaker 2:yeah, yeah, it's cool, that's really cool. Animation oh shit. Um, I think for people that they're a bit out of out of the loop, like uh ghosty or ghost tty is a new uh terminal emulator. Yes, to be uh completely correct as an emulator, but I think most of the time people just call it a terminal. It's made by a guy for which I forgot the name, but he was one of the. He was, I think, the cto of hashicorp ah, he was, he was dead.
Speaker 1:I knew he was working for hashicorp, but I didn't think he was cto um forgot his name.
Speaker 2:Just big big uh guy in silicon valley um created now ghosty is it?
Speaker 2:him. There have been a lot of rumors on this, a lot of hype on this. In 2024, I think the he had a beta from beginning of 2024 somewhere, with a limited set of users. I'm always a little bit like what is a new terminal going to bring me. But I switched to Ghosty. I came from Western, western W-E-Z and Ghosty is nice and I don't have any like if you go to the website, there are a lot of objective reasons why ghost is good, like speed and but you don't care, right like it's like like, like, using the, using the more the the native components to build uh applications, let's say cross-platform.
Speaker 2:Yeah, use the native, which west, for the western, for example, does not do so. You have a terminal window and you see it's not on OS X, it's built on an old frame, but Ghost is nice and I don't have any objective reasons to switch to anything else than even just the default yeah terminalapp. Yeah, exactly I don't have any really good reasons, but I use Ghosty for now. It just works. I also like the biggest argument is speed.
Speaker 1:I never really thought like speed is gonna be the thing, but like you click and it's there yeah, yeah, indeed I feel, but I think for me it was like, uh, linting, it was never really a big thing, but then when they rough came out and they like in russ is fast like oh yeah, can I see the difference?
Speaker 2:yeah, it's like you go from flaky to rough exactly, exactly, but I have the website here.
Speaker 1:It feels better. Yeah, it feels better, and it feels like you're on the bleeding edge right, like you're not missing out. The website is really cool. They have this like little ghosty ASCII animation and actually this is text.
Speaker 2:Yeah, cool, it's really cool. It's ASCII.
Speaker 1:Really really cool. Um yeah, but I also so my terminal journey. I had terminalapp and it was fine and then I tried warp, but then there are a few things that I felt a bit clunky.
Speaker 2:I don't want the terminal where you need to log in.
Speaker 1:Yeah, indeed, what the fuck indeed and actually now I think they took it out, so I heard it on interview from ghost that they took it out, um, but also there are a few other things that I remember it turned me off a bit. I think also there was some people, some colleagues here at data roots that they said that uh, warp messed up with some commands, that this and this that they couldn't run and they to debug it was crazy difficult. So then I came back to terminal that app. Then we also did a presentation on the ai landscape talks. That terminal, that app, could not render images, um, so then I also downloaded item two, item two.
Speaker 1:What I didn't like is that, like the tab completion or the option backshift to skip a word or something it was like I was so used to it and uh, so I I installed ghosty because I just like, okay, whatever, let's reinstall ghosty. You know, don't make it difficult, don't change my key by indeed. And then actually it works fine. So I'm, uh, so far I'm happy with it. Yeah, me as well. So, and it shows images. And it shows images. Yeah, yeah, yeah. Well, yeah, I mean, you can go to present term, which is uh based on markdown.
Speaker 2:You can have presentations in your terminal exactly if you use a terminal that supports the kitty protocol. You can actually show gifs in your terminal. Yeah, like ghost, like Ghost. Ety supports the.
Speaker 1:KDE protocol. So it's like a markdown presentation on the terminal. Really cool, written in Rust. Ghost is written in Zig, so that's why it's probably fast as well. Yeah, and this I have a present term here which we used in the last presentation we did yeah, really cool stuff. And yeah, that's a wrap, isn't it? That's a wrap, all right, thank you all. You have taste in a way that's meaningful to software people.
Speaker 2:Hello, I'm Bill Gates. I would recommend TypeScript. Yeah, it writes a lot of code for me and usually it's slightly wrong. I'm reminded it's a bust here, Rust.
Speaker 1:This almost makes me happy that I didn't become a supermodel.
Speaker 2:Huber and Netties Boy. I'm sorry guys, I don't know what's going on.
Speaker 1:Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here Rust Rust Data topics.
Speaker 2:Welcome to the data. Welcome to the data topics podcast, ciao, ciao.