DataTopics Unplugged: All Things Data, AI & Tech
Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society.
Dive into conversations that should flow as smoothly as your morning coffee (but don't), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style!
#67 The AI Race: ChatGPT's New Web Search, Meta’s Llama AI Scaling Efforts & Python 3.13's Upgrades
In this episode, we cover:
- ChatGPT Search: Exploring OpenAI's new web-browsing capability, and how it transforms everything from everyday searches to complex problem-solving.
- ChatGPT is a Good Rubber Duck: Discover how ChatGPT makes for an excellent companion for debugging and brainstorming, offering more than a few laughs along the way.
- What’s New in Python 3.13: From the new free-threaded mode to the just-in-time (JIT) compiler, we break down the major (and some lesser-known) changes, with additional context from this breakdown and Reddit insights.
- UV is Fast on its Feet: How the development of new tools impacts the Python packaging ecosystem, with a side discussion on Poetry and the complexities of Python lockfiles.
- Meta’s Llama Training Takes Center Stage: Meta ramps up its AI game, pouring vast resources into training the Llama model. We ponder the long-term impact and their ambitions in the AI space.
- OpenAI’s Swarm: A new experimental framework for multi-agent orchestration, enabling AI agents to collaborate and complete tasks—what it means for the future of AI interactions.
- PGrag for Retrieval-Augmented Generation (RAG): We explore Neon's integration for building end-to-end RAG pipelines directly in Postgres, bridging vector databases, text embedding, and more.
- OSI’s Open Source AI License: The Open Source Initiative releases an AI-specific license to bring much-needed clarity and standards to open-source models.
We also venture into generative AI, the future of AR (including Apple Vision and potential contact lenses), and a brief look at V0 by Vercel, a tool that auto-generates web components with AI prompts.
Let's do it.
Speaker 2:You have taste in a way that's meaningful to software people.
Speaker 1:Hello, I'm Bill Gates. I would recommend TypeScript. Yeah, it writes a lot of code for me and usually it's slightly wrong. I'm reminded, incidentally, of Rust here Rust, rust.
Speaker 2:This almost makes me happy that I didn't become a supermodel.
Speaker 1:Cooper and Ness. Well, I'm sorry guys, I don't know what's going on.
Speaker 2:Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here.
Speaker 1:Rust Rust Data Topics Welcome to the Data Topics.
Speaker 2:Welcome to the Data Topics Podcast. Hello and welcome to Data Topics Unplugged, your casual corner of the web where we discuss what's new in data every week, from llamas to AI licenses, everything goes. Check us out on YouTube. I don't even know where else we are anymore, but we do have a video version of this, so feel free to go there and check us out. We also share the screen here and there during the episode, so feel free to have a look, or have a look at the show notes. Feel free to leave a comment or question, or send it to us via email.
Speaker 2:We'll try to get back to you in a timely manner, but no promises there. Today is the 4th of November of 2024. My name is Murilo, I'll be hosting you today, joined by the one and only Bart. Hi! Woohoo, he's back. Actually, yeah, for the people listening, it's been two episodes, right, that Bart hasn't been here? Two released episodes, and a recorded one, and one of the recordings still has to be released. Right, indeed.
Speaker 2:So, uh, your absence. You want to explain yourself, mister?
Speaker 1:I went to Crete, an island in Greece, to do a trail run, basically.
Speaker 2:Nice, how was it? It was fun, yeah? Beautiful island. And how do you say... your performance? No, not performance. How did you place in the run? I was eighth place. Oh okay, so the seventh loser, basically.
Speaker 1:Well, exactly. Maybe next time, Bart. Just kidding, just kidding.
Speaker 2:How many people were there? Eight? No, just nine. Just kidding. No, I'm just kidding, I'm just jealous because I couldn't do it.
Speaker 1:I couldn't do it, if I'm being honest. But it's really nice, like you can do a lot of elevation there. Crete goes from sea level to 2,500 meters. Wow, yeah, I didn't know it was that high, actually.
Speaker 2:Wow, but it's crazy, because, what did you say in the beginning? How did you describe it? I said it's really nice, it's a beautiful island. No, no, no. You said it's really nice because you have a lot of elevation, but to me that sounds like the opposite. You know, it's like, fuck.
Speaker 1:That is true?
Speaker 2:Yeah, they say that. No, never mind, I'm not going to go there. So cool. We also had Halloween last week.
Speaker 1:Did you do anything special? I went trick-or-treating with the kids.
Speaker 2:Nice, nice. What about you? We went... it was a long weekend, right? So me and my wife, we rented a sleeper van and went camping as well. Oh, I heard this. Yeah, you heard, we'll talk about it later. It was fun, it was a different experience, it was cool.
Speaker 1:Where did you?
Speaker 2:go In the German community of Belgium. So we didn't go far, because it was also the first time, we were like, yeah, let's see how it goes. So we rented a van with Blacksheep Van. They have a lot of different sleeper vans, they have a kitchen, and some of them have a shower and bathroom and stuff. So we just rented one and went to the German community. It was good weather, it didn't rain. It was good weather to camp. I thought it was a bit cold, but I'm from Brazil, so what do I know?
Speaker 2:But yeah, it didn't rain. It didn't rain. We were able to do some hikes For November in.
Speaker 1:Belgium, it was good weather.
Speaker 2:Yeah, that's true, that's true, but I think the best day was Sunday, when it was sunny, but that's the day we're coming back, so it was like okay. But, yeah, it was good it was good.
Speaker 1:What do we have for today, Bart? I see here ChatGPT Search. What is this about? Uh, ChatGPT released a new functionality, which is basically a search functionality, right, like how you would use Google. You can now use ChatGPT, and there's a small icon which you can click, and you're showing it on the screen now, it's like a globe with "Search" next to it. If you click that, it will actually search the interwebs instead of just relying on its training weights, exactly. Yeah, what I understand is that they leverage Bing quite a lot to do this.
Speaker 2:But it's all under Microsoft umbrella, because Bing does that too.
Speaker 1:No, Well, Bing is just a search engine.
Speaker 2:But it does also use LLMs behind and all these things.
Speaker 1:Right, yeah well they now also have an AI version to that yeah.
Speaker 2:Have you tried it?
Speaker 1:I tried it over the weekend and I guess it works quite kind of okay.
Speaker 2:Better than Google.
Speaker 1:Just to recap: before, I think, you had a lot of hallucinations if you had a very specific search, like, I want to know which shops are open on Sunday that also provide this service. Very specific, right? You'd get a huge amount of hallucinations. And now it just uses actual search results to come to an answer. Yeah, I actually used it, it was very random. When I tried it, it had just been released, and I was looking for climbing areas in the Ardennes that had a lot of routes suitable for kids. So, very specific, right.
Speaker 1:If I would have done this before, I'm sure there would be a lot of hallucination.
Speaker 2:And now it's quite okay. Oh, great that's good.
Speaker 1:When you go to the page that you're showing now, you also have different types of widgets.
Speaker 2:Sometimes they show a map and stuff like that, but I couldn't really reproduce that, to be honest. So yeah, for the people that are just listening: on the announcement page, which we'll put in the show notes, they have some tabs, kind of, where they show different examples. For weather; stocks, so you have the classic time-series plot thingy; sports, with the game schedules; news; and maps. But I couldn't get it to work with the example search I did.
Speaker 1:I asked, can you show this on a map? And it didn't work. But maybe you need a specific prompt to show results on a map, right?
Speaker 2:Yeah. I think one thing that I like using ChatGPT for is almost like brainstorming. So actually, when we were camping, we were like, okay, me and my wife, we have two dogs, it's a rainy day, can you give me 20 ideas of things that we can do around here? And then, yeah, a lot of times it gives some hallucinations.
Speaker 2:But if I say 20, even if only 10 are non-hallucinated, I think it's still good. But a lot of times it still says, you need to check with the local establishment to see if they accept dogs, and all these things, and I think maybe this search feature could help with that.
Speaker 2:So it's cool, really, really cool. Um, maybe one more thing I did use ChatGPT for, while we're on the ChatGPT topic. Last week I had a... not an issue, I guess, well, it was a bit of an issue. It was about networking stuff, and I'm not super comfortable with networking and all these things. I'm just putting the conversation here, I'm not going to read through it all, but basically, setting the scene: someone was trying to deploy two applications on one VM, just a VM on the cloud. A back end and a front end, so a FastAPI and a Streamlit app, right, using Docker Compose. And when they deployed, both addresses were working: you could access the documentation on the FastAPI side, you could access the Streamlit side. But whenever the two tried to talk to each other, something was going wrong, just "connection refused". Okay, and I had to kind of solve it, and I wasn't sure where to start, right. I was Googling stuff and I thought, you know what, I'm just going to ask ChatGPT, also as a learning tool, right. And this is the chat that I was showing before. The first thing, I just kind of said, hey, you're an expert, blah, blah, blah, and it gives me a blob about ports and addresses and all these things. There were some questions about the port, because sometimes the port was specified and sometimes it wasn't, so it explained some things like that. And then it explained exactly what I mentioned here with some dummy information, right. And then it gave me a whole bunch of examples of possible issues, right.
And the first thing it said is CORS, cross-origin resource sharing configuration, which I've heard of a few times but never quite understood. And then it said firewall and security group settings, protocol mismatch, and I was like, okay, these are probably not it, but let's try the first one, right.
Speaker 2:And then I tried some things, right. For example, I went into the VM and curled the endpoints, and I said, well, that's working, does that prove that it's CORS? And it said, oh, actually, no, right, because this only happens in the browser, blah, blah, blah. And it also mentioned looking in the developer tools. I tried that, I couldn't see anything.
Speaker 2:But then I also went into the browser and used the console, basically, right, to make the request from there, and then I got the CORS error. And I said, does this confirm the issue? And it was like, yes, this confirms the issue. So I was like, okay, now, how can I solve this? And it gave me instructions: for the FastAPI you can do this, you can add middleware, blah, blah.
Speaker 2:And it actually worked, right. And I just thought it was a bit of a mix, right: it did instruct me quite a bit, but I also had to use a bit of what I already knew to try to confirm the issue and then take steps from there.
Speaker 2:And I was actually super happy.
Speaker 2:I don't think I would have been able to do it without something like ChatGPT to just give me ideas, to bounce things off, to just rubber-duck my way through it. I mean, more than a rubber duck, really, you know. It felt very magical, it was just like, oh, this is super cool. And I think it also mixed a bit with what I knew already and trying some different things to confirm. And I also think ChatGPT is really good for tasks where you can verify the answer yourself, right. And I think coding is a very good example of this, because it gives you something, you can try it, and either it will work or it won't, right. Whereas for things like, I don't know, if you're sick and you ask ChatGPT, that's probably not a good example, because you cannot verify the things it's saying, right?
Speaker 2:So I think there's a big use case, like we talked about before, for brainstorming, giving ideas, but also for these verifiable questions, where it's okay if it hallucinates, and for things that are a bit low-stakes too. A doctor is something high-stakes; coding is maybe lower-stakes, though if ChatGPT gives me a command to drop a database, maybe I wouldn't just run it, I'd do some more research. But for low-stakes things that you can verify yourself, ChatGPT is a really good use case.
Speaker 1:Yeah, and especially like how you use it here, like very iteratively right, Like not just generate me the answer, because that is often wrong.
Speaker 2:Yeah, indeed, indeed. But I feel like whenever you hit a wall, sometimes it's good to just kind.
Speaker 1:Yeah, to get a bit of feedback. You would normally reach out to a colleague.
Speaker 2:Exactly you can with a lower threshold ask something like ChatGPT. Faster response as well. Right, If you send a message, sometimes you have to wait.
Speaker 1:But I think, like how you describe it, a bit rubber ducking approach, that really helps. Yeah, that's valuable.
Speaker 2:Indeed. And I think even if ChatGPT hallucinates and gives you something way off, that something that is way off can give you an idea of something that is more relevant.
Speaker 2:Maybe, maybe so. For example, when I was doing something with Rust, it suggested these two traits, and I was like, ah, that's actually not right, but maybe I should look more into the type system, and maybe this and maybe that, you know. It actually gave me a new perspective. And I think sometimes, at least for me, I get so focused on one thing, like, okay, I think that's the problem, so I try to solve it this way and this way and this way, and I can't get it. And then it's like, well, what if this is not the problem, and that is the problem? Having something that, like a parrot, just says something back helps me a lot. So yeah, quite happy with it. And I think there are other AIs, right, like Phind.com as well, that give more, because it also searches the web, right. So that may be better depending on what you're looking for, because I also think my issue was very generic, right, it was web standards and stuff like that.
Speaker 2:So it works when you use it consciously, to debug or something.
Speaker 1:I think where it doesn't work is when you use it as a bit of a shortcut, yeah.
Speaker 2:Yeah, yeah.
Speaker 1:Yeah. I had something like JavaScript and HTML intermixed, and I just asked GPT to add a feature to it, change it, and it didn't work at all.
Speaker 2:But you knew that it wouldn't work when you read it?
Speaker 1:I didn't read it. It was a big component, I just tried to copy-paste it, just as a test, to see what it gives.
Speaker 1:Yeah, it didn't work at all, it was way off. But then I tried it with Claude and it was actually better. Oh really? Yeah, I think the performance was better, but still not great. But I think, again, if you do this a bit more consciously, not trying to take a shortcut, it works, to very easily adjust something. Like, I never write Liquid, right, and this helps me to very quickly make adjustments by interactively asking stuff.
Speaker 2:Yeah, true. I also think, I don't know how specific Liquid components are.
Speaker 1:Well, the problem, I think, with Liquid is that there's probably not a huge amount of open-source Liquid out there, right? Exactly, whereas there is for Python.
Speaker 2:Yeah, and there is for Rust, but it's also about what is in the training set, right.
Speaker 2:Yeah, indeed. I also wonder about these things: if I'm asking something super specific, or something that I feel is very new, maybe I don't ask ChatGPT, because I know that I need the freshness, you know, the fresh information. But yeah, it's kind of about knowing what to prompt where, right. I think it's a valuable tool, but I wouldn't ask ChatGPT about something like Python 3.13, right, because it's very new. Well, with search you could do it, right? Ah, that's true, with search you could. But actually, that's what I was using Phind.com for, which is a bit the search thing, but I think Phind.com brands itself as being for developers.
Speaker 1:Let's see. Yeah, Phind.com to me is a bit like Perplexity, but then for developers.
Speaker 2:What do you mean, Perplexity for developers? Like Perplexity?
Speaker 2:Yeah, like it's just advanced search. Ah, yeah, yeah. So here, I'm just showing the UI for Phind.com, you just put your question here, and it says "from idea to product", but they also have a playground and code features, so it does feel more catered towards developers, right. So that's what I was using, but maybe ChatGPT Search will actually fulfill that need now. But yeah, as I mentioned: Python 3.13. Python 3.13 has been out since October 2024, so it's been a little while, but I don't think we got the opportunity to talk about it. It made some noise in the Python community for two main reasons, I would say: the free-threaded mode and the just-in-time (JIT) compiler. Those two things made quite a bit of noise, PEP 703 and PEP 744. Why is this a big deal? Should I throw you under the bus, or should I take a crack at it and then you can correct me?
Speaker 1:Um, it's a big deal because it should be a performance enhancer, both of them, probably. The free-threaded one is maybe a bit more specific, because Python has always had a global interpreter lock, which means that if you do multi-threading, you're basically limited to a single core.
Speaker 2:Yeah. So basically, the global interpreter lock locks objects, right. If you have a value, you can think of it like a shoebox: there's something in that shoebox, and if you have multiple threads trying to access or change what is in that box, you have the problem of what to do. So the lock basically says, I have that box, no one else can touch it. But that also means that only one thing can run at a time. So Python is known for having this issue. And yeah, there are workarounds, right, like, Python's written in C.
Speaker 1:Some people write stuff in other languages that don't have this limitation. But it's not really an issue, it's a design choice, which allows for very efficient garbage collection. But because of this, indeed, you could discuss whether that was true threading or not, or whether it was actually single-threaded. When you used multi-threading, it was scoped to a single core, and now, with free threading, you can basically thread across multiple cores. You had ways around this with multiprocessing and stuff like that, but this is the first time we can do this. Especially for multi-threaded stuff, it should speed things up. And the multiprocessing bit you mentioned?
Speaker 2:Basically, you start different Python processes, so each process has its own lock, but you run them in parallel. Basically, exactly.
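To make the discussion concrete: here is a small sketch of CPU-bound work split across threads. The code itself runs on any recent Python; the point of free-threaded 3.13 is that, on a free-threaded build (the `python3.13t` binary, if our understanding of the naming is right), these threads can actually run on multiple cores instead of taking turns behind the GIL. Function names here are ours.

```python
import threading

# CPU-bound work split across threads. The code is identical either way;
# only the wall-clock time differs between a GIL build and a
# free-threaded build, where threads can truly run in parallel.

def partial_sum(start: int, stop: int, out: list, idx: int) -> None:
    total = 0
    for n in range(start, stop):
        total += n * n
    out[idx] = total  # each thread writes to its own slot

def threaded_sum_of_squares(n: int, workers: int = 4) -> int:
    results = [0] * workers
    step = n // workers
    threads = []
    for i in range(workers):
        start = i * step
        stop = n if i == workers - 1 else (i + 1) * step
        t = threading.Thread(target=partial_sum, args=(start, stop, results, i))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    return sum(results)

# Same answer as the single-threaded version regardless of build.
print(threaded_sum_of_squares(10_000) == sum(n * n for n in range(10_000)))  # True
```

The correctness is the same on both builds; what changes is whether the four threads get real parallelism.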
Speaker 1:So this is today an option. It's not on by default, you need to enable it, and it's there, let's see what it gives. The big question is that there are no guarantees of backwards compatibility for libraries that were not set up for it, but I think the community today has a good view on what the impact will be. It's a bit of a "let's see what it gives", but the benefits are more than the downsides.
Speaker 2:Yeah, indeed. I did hear that just because free-threaded mode is available doesn't mean your code is compatible with it, so people need to change their code to keep these constraints in mind. And I also heard that this is not a final thing, it's still experimental, right. But Python has a yearly release cycle, kind of, so this is still half-baked a bit on purpose, because they want people to get their hands on it early, file bugs and all these things. So yeah, very cool. I haven't heard, let's say, a personal experience with free-threaded Python, but it looks cool. What about you, any first thoughts, experiments, disappointments, maybe?
Speaker 1:I haven't tried it myself yet.
Speaker 2:And do you know anyone that tried it? Have you heard any statements, I don't know if statement is the right word, any testimonials? No, not yet, aside from people online. And the just-in-time compiler, what is this?
Speaker 1:Uh, it does code optimization just in time, and it should make your code run faster in certain scenarios. Again, it's not on by default, and there might be backwards compatibility issues, let's see. To me these things, both of them, by the way, are a bit more suited for low-level libraries. You're probably not going to get much of a performance enhancement in your typical use case; your typical use case is not a huge optimization algorithm, right?
Speaker 2:Yeah, yeah, yeah.
Speaker 1:Where this is really key is these last-bit enhancements. Yeah, I also feel a bit like that: if you're building a data pipeline, you're probably not going to benefit from this directly, but maybe the underlying libraries that you use will. Indeed.
Speaker 2:I think that's a bit the "Python is a slow language, blah, blah, blah" thing. The things that need to be fast, people have already figured out a way to make fast, I feel. Like, machine learning is very compute-hungry, right, but most of the libraries underneath work with C++ or C. So, yeah, but I agree, I mean, it's not bad, right, there's no downside.
Speaker 2:This is experimental, it's an option, exactly, so faster is better, indeed. And I think it's a bit of an experiment, like, let's do this and see what it gives. I think type hints were a bit the same, right? They just kind of put them there, and then people found these very cool use cases for them. So I'm excited. And maybe, just talking about JIT, just-in-time compilers: I think the most famous one is PyPy, right, which basically takes your code, and right before it runs, it will compile it to something that is very specific.
Speaker 2:So an example is, if you have x is one, basically a number, then the computer thinks it needs to allocate a certain amount of memory, and then afterwards it realizes that it's 1.227, whatever, and then it has to allocate somewhere else. If you can actually scan your code ahead of time, you can allocate once, and that can make it faster, and all these things. Well, I'm not an expert on this, but that's how I understand it. So I think it's cool, let's see what it gives, because I also think even PyPy has a lot of trade-offs, right. That's what held people back from adopting it, but it's an alternative Python.
Speaker 2:Yeah, implementation, right, indeed. So yeah, I don't want to get too much into it, but Python, I guess, is the language syntax, right, the way you write code and how you understand it, and there are different implementations that try to comply with this. PyPy is one of them, but I'm not sure if it follows everything. I think the core is there, but, I don't know, something like the walrus operator, I'm not sure if it's supported. So there are different implementations in different languages.
Speaker 1:And I think the typical one that everybody uses, if you just get started with Python, is CPython. It's by far the most popular, and that's written in C.
Speaker 2:But then PyPy is written in RPython, I think, which is something that looks a lot like Python itself. There's also a Rust implementation of Python, there's a .NET implementation of Python. There are different ways, right? So basically, you write a program that reads a file that looks like a Python file, but the language underneath can be different things.
Speaker 2:Some other small things that I came across in Python 3.13, things that didn't make the headlines. A lot of people made a lot of noise about the big stuff, but, well, I'll get to it in a bit. Basically, there were some changes to pdb, which is the debugger thing. What does pdb stand for, actually? Completely forgot, sorry. I know it's for debugging. But basically there were some improvements in the REPL that made it nicer to work with pdb. shutil, which is something to work with your file system, there were also some fixes there, small consistency things. And what I wanted to bring up here:
Speaker 2:The new annotation syntax allows comprehensions and lambdas, so this is about type annotations, okay. Now, the annotation change that nobody asked for. If you go here, you see a class definition, and now in classes you can also add type hints, right, so not just in functions, and you add them with this bracket syntax. Here you have the star operator, and on the function you actually have a walrus operator thingy. Actually, I'm not sure if this part is the type hint or that part is. Anyway, for the people just listening: basically, you have two class definitions, nested, then a whole bunch of names and a whole bunch of stuff with if statements, lambdas, walrus operators and all that, and apparently this is valid Python code now in Python 3.13.
Speaker 1:So what is the hint that it gives me?
Speaker 2:I have no idea, because, again, the type hints have been relaxed to allow comprehensions and lambdas, basically. And actually the bug ticket is exactly this example. Someone said, oh, this doesn't work, and they were like, oh, this is a problem, we should fix it, and then they fixed it. So I thought it was a bit, yeah.
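For readers who want to see the underlying mechanism: annotations in Python are just expressions that get evaluated and stored in `__annotations__`; nothing checks that they are sensible "types". A sketch like the following runs on any recent Python; what 3.13 changed, as discussed above, is widening where comprehensions and lambdas may appear (in the newer annotation scopes). The class and field names here are made up.

```python
# Annotations are arbitrary expressions, stored but never type-checked.
# Names below (Config, retries, flags, rule) are illustrative only.

class Config:
    retries: int                            # a normal annotation
    flags: [f"flag_{i}" for i in range(3)]  # a comprehension as an annotation
    rule: lambda x: x > 0                   # a lambda as an annotation

print(Config.__annotations__["flags"])    # ['flag_0', 'flag_1', 'flag_2']
print(Config.__annotations__["rule"](5))  # True
```

Which also shows why this flexibility can be a headache: the "annotation" can be a live object that a reader has to mentally execute.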
Speaker 1:But, because you're just showing an example of a class: if I understand you correctly, you can have a function where the annotation on the output of the function is a lambda? I think so.
Speaker 2:So a generator, I guess? Or maybe just to say, if you give this, then I'll give you that. I'm not sure exactly what the use case is, or when I would use it, because that's not really a type, right, it's something that generates. Yeah, to be honest, I'm not sure either.
Speaker 2:Um, I also came across this because of a Reddit post, and I think they also mentioned, "I don't think I have any news for this except typefire, which is pretty sweet", blah blah. But I think it's also a good example of, okay, it starts to add more load to your brain, right. To understand this, you're probably going to have more of a headache than if you just didn't add types at all. But yeah, they're making it pretty flexible, so I hope it doesn't go that far. I don't expect to see this in any code base, but it's something that is there today. Alrighty. Um, maybe something quick as well, also talking about Python 3.13.
Speaker 2:We talked about uv a lot in the past. I saw this on LinkedIn: Sebastián Ramírez, the guy from FastAPI, mentioned that uv now supports dependency groups. But what I wanted to highlight is that the PEP was accepted on the 10th of October, and 16 days afterwards it was available in uv. So my first thought is, uv is really trying to be almost a synonym for the Python standards, right? Everything that gets accepted, they will push to add it there. Which I actually think is a nice way to go, in a way, you know: if you become the Python standard, then no one's going to accuse you of being too opinionated, because you're just following the community guidelines, really.
Speaker 1:And what are dependency groups?
Speaker 2:There's a definition. So I think it's like: normally you have the dependencies and the dev dependencies, but then you can also have something like test dependencies. It was already there, right? But in uv, I don't think so, and there was no standard. So, for example, Poetry implemented it, but there was no standard, nothing exactly finalized.
Speaker 1:So the PEP... Okay, because this already existed then. Like if this library has a Postgres backend, for example, and instead of the Postgres one I want to use the DB backend, exactly. And now there is a formal definition of that.
Speaker 2:So I think it existed already, indeed. Many tools implemented it because there was a need, but the community hadn't agreed on what to do, right? Maybe something along that line, and I feel like I'm hijacking the whole pod, we'll get back to your topics. For example, there are these lockfiles, right, requirements.txt and all these things. But it turns out that actually there's no convention about this. Even requirements.txt is something that someone just did and everyone just kind of went along with, but it's not a PEP. It's not something quote-unquote accepted by Python, right? One thing that was shared in our Slack, a while ago as well: there was another PEP, and actually, let's see if this is it.
Speaker 2:It was already rejected: a file format to list dependencies for reproducibility of an application. So basically a lockfile, right? Is this the one? Actually superseded by... no? Maybe this one. Yeah, it's a draft.
Speaker 2:Basically, there have been a lot of attempts to make a standard for lockfiles, and I think that's the biggest beef I have with uv, because they do have a lockfile, but if you use the uv lockfile, you're stuck with uv. If you use the Poetry one, then you're stuck with Poetry. Or you're stuck with Rye, because it's the requirements.txt standard, right?
Speaker 2:But I do feel like, if there is a standard lockfile format and uv adopts it and other tools start to adopt it too, then I would have no reason not to go for uv, for example.
Speaker 2:That's the only thing, but it's not even uv's fault, in a way. Um, so yeah, there have been a lot of tries at this, and one thing I thought was funny, in a way: I went through the Python discussions, and they actually reached out to Poetry to talk about this proposal, and Poetry said from the beginning, like, we're not going to support this, because the way we've set up our tool is too different and we cannot make changes now. So Poetry, I think, got really popular because it was one of the first ones, but at the same time I think that made it very hard to change, because they built on top of that. And I do feel like Poetry is being left behind more and more because they don't comply with the standards. Like the pyproject.toml from Poetry, it's not standard, and the lockfile, they wouldn't be able to adopt it, right?
Speaker 1:Well, it's a design choice, right? They want to remain somewhat backwards compatible. Yes. I mean, they could say fuck everything, from now on we do it a different way, right? That's true. But there's also some value in the maturity, in the stability, right? Yeah. But also, most likely, if you use uv today for a production project and you want to upgrade it next year, there's probably not going to be an easy upgrade path. Yeah, that's true. Poetry probably will have one.
Speaker 2:Yes, that's true, but at the same time. So I mean, I fully agree.
Speaker 1:And the thing is, none of these things matter when it's your personal pet project, right? Yeah, you don't care about these things. But from the moment you build something for a production environment, you want to have stability in terms of years, not weeks. Yeah, I think the issue for me is when you take that a bit too far.
Speaker 2:So I do think there should be some changes. I do think Poetry should have had a breaking change by now. The reason why I say this: Poetry used to have dependencies and dev dependencies, and then they also implemented group dependencies, and dev dependencies became a group. And now I see a lot of pyproject.toml files that have two Poetry sections: one for dev-dependencies and one for group.dev. Yeah, and that's because people updated the tool in the middle. I guess for me the thing is, they don't want to make a breaking change, so they just kind of keep adding stuff to it.
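The half-migrated pyproject.toml being described looks roughly like this, with the legacy section and the group-based section side by side (versions are placeholders):

```toml
# Legacy pre-1.2 Poetry style:
[tool.poetry.dev-dependencies]
pytest = "^7.0"

# Current group-based style; "dev" is now just a group:
[tool.poetry.group.dev.dependencies]
ruff = "^0.5"
```

Poetry still reads both, which is exactly the backwards-compatibility trade-off being debated here: nothing breaks, but files accumulate two ways of saying the same thing.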
Speaker 1:But I'm not even debating whether Poetry is good or bad, or yet another packaging tool. There are two sides to this coin, to being fast and agile. A PEP is released and 16 days later there's a new thing. You can question: how good were the reviews? How sure are we of this implementation? How clear was the specification? Is this something you can implement one-to-one, or do you need some discussion about how the standard is actually implemented? There are pros and cons to being very fast versus very stable.
Speaker 2:No, that's true. That's true, I guess. For me, my main point was: I think it's good to have backwards compatibility, but I also think there should be a limit to it, right? Like, if you want to change the API, there should be a breaking change; you shouldn't just support two different versions of the same API just to avoid breaking changes, right? But I agree, it's always a trade-off. What else? What else is new? Maybe back to the AI stuff, Bart?
Speaker 1:What AI stuff?
Speaker 2:Meta's Llama training to be bigger than ever. Is that what you said?
Speaker 1:Oh yeah, there was an article on Wired last week, yes, that said.
Speaker 1:Actually, Meta today has the biggest server park, not sure if it directly translates to a server park, but at least the largest amount of resources to do training of their Llama model versus their competitors, the most notable of course being OpenAI. So the resources Meta has available today are the largest of all the big players, which raises the question a bit of how the ecosystem will look. I think a year ago everybody thought Llama was very cool, but no one really took it seriously as a competitor.
Speaker 1:Today the performance has become really good. You see it being used in the industry and in the community a lot, because it's way more open, you can build on top of it. So I'm wondering what Meta's position will be, like, two years from now. I'm also wondering, what's the endgame here?
Speaker 2:Because Meta, are they profiting from these models, and how much? Because the models are open source. Well, maybe we can talk a bit about that in a bit. But the weights are available, and I think the only restriction is that, as long as you're not competing with Meta, you can use them for whatever you want, kind of. Well, there are some limitations there, in terms of number of users and stuff like that. Okay, yeah, but it's very permissive.
Speaker 2:It's very permissive, yeah. So I'm assuming they don't make a lot of money from these models, but if they have the largest GPU rack, there's probably a lot of investment, right? And is it just to train the models, or is it also to use them in the Meta products? Probably both, right? Yeah, probably both.
Speaker 1:For me it's hard to say. They have a lot of features. I think there is now a model, I don't think it's available in Europe actually, but in WhatsApp as well there is an AI model available. We don't know how it is being used behind the scenes, for example to optimize ads, stuff like that. Yeah, Meta is very much driven by ads as well. With all the resources they have available, I could see them at some point providing a service for their LLM.
Speaker 1:Um, let's see, they're still also very much on on the uh augmented reality side.
Speaker 2:I would see something like this generative AI also playing a role there. Yeah, true. But for me, this news puts me a bit on the edge of my seat, in a way, because I feel like there's something I'm missing. You know, maybe they will release some gen AI for augmented reality, or maybe they'll release a proprietary model soon, but I feel like something's in motion that I don't see yet.
Speaker 1:Let's see. Let's see. I think it's always a bit difficult with Meta. They did a lot of the VR stuff, the AR stuff, and they invested more than what the result is today. But I think everybody is thinking at the same time: if anyone is going to pull off AR in the near future, it's probably going to be Meta.
Speaker 2:Yeah, what do you think of that?
Speaker 1:Do you think there is a future for AR? I think so, yeah. I think so. I think the question is how it will look and what kind of devices, what kind of peripherals. But I think it's just waiting for it to become available.
Speaker 2:And who's the classical user of this? Do you think it's more for entertainment? Do you think there's a business case, for working people to use? Because I also saw the Apple Vision, you know, they had a huge screen on the wall and they had this and they had that, but where do you see that it fits? Maybe for you: what are the things where you'd say AR would be a good use case for this thing you're doing today? AR, you mean? Yeah.
Speaker 1:I don't see myself using it today. Especially not with the Apple Vision; I think the Apple Vision is way too, it's clunky, you know, it's too much in my way. Yeah. Um, I think it's very cool if you like these types of gadgets. It has a very high gadget factor. Um, I think with the, what's it called again, the glasses that were introduced very recently by Meta, the...
Speaker 1:It's like the Ray-Ban thing? No, yeah, but also another one, which is more or less comparable to Apple's Vision Pro. Orion, maybe? Yeah. But much more accessible. At the same time, you can debate that it's not very far off from the Google Glass that there was 10 years ago. Yeah, indeed.
Speaker 1:So maybe, for people just listening, I'm showing the Orion announcement here on the screen. I think what it wants you to do, and today the only way we really have to do this is glasses, is to add information to real life, right? Yeah. Like what you already do if you have an Alexa home, a smart home, these types of things: you can interact in a digital way with your environment. What we today do not really have yet is these augmented things: notifications, extra information popping up, overlays in your glasses. Do I like where it's going? Probably not; I think it's a privacy horror. But I do think we're going in that direction. I'm not sure how it will look. And I think if there's a company that today has a huge amount of knowledge and capital invested in that, it's Meta. Yeah, that's true.
Speaker 2:I feel like, if this becomes a big deal tomorrow, they are the leading one, right? Right.
Speaker 1:They even changed the company name to reflect this a bit. If tomorrow they can come up with contact lenses that look normal, that you don't see, and people wear them, I think you will see. Yeah.
Speaker 2:Yeah, yeah. But I also think it's interesting because, to me, I always associate this with gaming. When I think of augmented reality, I think, okay, some gaming stuff, for fun, right? That's also what we see today. I mean, it's not really augmented reality, but when you have the VR headsets, it's more for game-like environments, right? For VR, yeah. And they are advertising this very much for the everyday user, or for someone in business. So in the video here there's someone having a call, like, floating in the middle of the living room, right?
Speaker 1:But if you could have that with contact lenses.
Speaker 2:But I feel like, even if it were sleek glasses, I could buy that. I mean, these glasses right now, for people who are just listening...
Speaker 1:They're not sleek eh.
Speaker 2:They're not sleek. The Google Glass was better than this. Yeah, they look like... I saw on Saturday Night Live, I think they were comparing these with the Minions, you know, the banana. You saw that? Yeah, it's really funny. Um, but yeah, they're a bit clunky. Still, this is much better than what we would have seen three years ago. Would you wear something like this, Alex? Because you wear glasses normally.
Speaker 1:No, they're too clunky.
Speaker 2:They're too clunky Okay.
Speaker 2:But if you could have, like, you're wearing glasses now, and this functionality were possible in your glasses... Then maybe it's just a matter of time, right? But that's what I'm saying: this is not how it's going to end up looking, but I think if anyone is ready to do this, it's Meta. Yeah, yeah. Because, Alex, for people who are just listening, well, it's not on the screen, but your glasses are just a thin frame, right? And I think that's the main difference: the frame of these glasses is super, super thick. But yeah, true, true. I guess, to be seen what happens. I think also with contact lenses: would you see where someone's eyes are looking?
Speaker 1:Would it look?
Speaker 2:weird, it's a good question.
Speaker 1:Well, it doesn't exist in contact lenses today, so no. But I guess, because, imagine the contact lenses.
Speaker 2:I guess it follows your eyes, right? It's stuck to your eye. So right now, on the image, you saw, like, at the top left there's this and at the bottom there's that, and you're looking here and doing there. But I guess if it just follows your eyes, it will always be fixed in your view pane. It's hard to describe, right?
Speaker 1:It would always be in the top quarter. I don't know if this will ever be possible with a contact lens. But I think with the Orion, if I'm not mistaken, it actually projects it onto your eyes. So it's only virtual that you're looking at something, and it adjusts to where your eyes are looking, to how it gets projected.
Speaker 2:I'm wondering if it's okay for your eyesight if you just have stuff projected onto your eyes. Well, that's what's happening all day long, I guess. Yeah, but like photons. Did you always hear, oh, don't watch TV in the dark because it's bad for your eyesight, and this and that? That's what I heard growing up. That's what our moms told us, yeah. Right, yeah.
Speaker 2:So, mom, if you're listening: did you lie to me, mom? Okay, to be seen, to be seen. I think, yeah, it could be very exciting, though. I could buy into it if it didn't look clunky, indeed. Maybe more on the AI tech, but this is from OpenAI, actually, not from Meta. I saw this, it's experimental, it's called Swarm.
Speaker 2:Did you ever come across this, Bart? No? So, experimental slash educational, so I guess not something to be picked up tomorrow. But basically it's, how do you say, orchestrating agents, handoffs and routines.
Speaker 1:So basically, oh, it's agents talking to each other.
Speaker 2:Yeah, yeah. So this is from OpenAI, and I think that's why it made a lot of noise. OpenAI released a package which is not just a wrapper around their models. Basically, you can create multiple agents with different prompts and then have them interact with each other. So this is supposed to help with that. For example, the example they have here is like: what's the weather in New York? Then there's a triage assistant, and there's a function that says, well, transfer to the weather assistant, and then you go to the weather assistant, which has a different set of prompts and expertise and functions and all these things, and it comes back with 67 degrees.
Speaker 2:So you can also think: if you had a bot on the dataroots webpage, for example, the first step would be to know what kind of question you are asking, so classifying the intents, and then this is a way to do it. So I think it's something really cool, something that is much needed, but I wouldn't use it yet, just because they're saying it's experimental and educational, right? So maybe just something for fun. But did you see something like this?
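The triage-and-handoff flow described above can be sketched without the Swarm library itself. This toy version (all names invented, no LLM calls, keyword-based routing purely for illustration) just shows the shape: a triage "agent" picks a specialist, and the specialist answers.

```python
# Toy sketch of agent "handoff" routing, loosely inspired by the
# triage example discussed above. The "agents" are plain functions;
# a real system would back each one with a model plus tools.

def weather_agent(question: str) -> str:
    # A real agent would call a model and a weather tool here.
    return "It's 67 degrees in New York."

def support_agent(question: str) -> str:
    return "Let me open a support ticket for you."

def triage_agent(question: str):
    # Decide which specialist should take over (the "handoff").
    if "weather" in question.lower():
        return weather_agent
    return support_agent

def run(question: str) -> str:
    agent = triage_agent(question)  # triage hands off...
    return agent(question)          # ...and the specialist answers

print(run("What's the weather in New York?"))
# → It's 67 degrees in New York.
```

In Swarm itself the handoff works similarly: a function attached to one agent returns another agent, and the framework continues the conversation with the returned agent's prompts and tools.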
Speaker 1:I think I read this on Reddit. There was a discussion on this.
Speaker 2:Do you know if there are any other quote-unquote competitors for this or no?
Speaker 1:I think the question is a bit about what the definition of multi-agent orchestration is. There are definitely multiple frameworks that do multi-agent. Yeah, that's true. I think the interaction between those agents is a bit what the discussion is about. True, true.
Speaker 2:So, just something I wanted to share, because one wise man once said that a week keeps the mind at peak. And then, maybe more on the AI, or maybe RAG, to be more specific: pgrag. Or pig rag? Yeah, pgrag.
Speaker 1:This was released by Neon, the company behind the managed Postgres database, with serverless, right? That's the big thing, exactly, yeah, serverless. So they have a way to separate storage and compute for Postgres. I actually use Neon a lot. You like it? I like it. And the reason I know of this is that they sent an update mail, I think last week, where they said that they launched the pgrag extension. It's a Postgres extension, and it allows you to do big parts of the RAG pipeline within Postgres. So you already have some of these things.
Speaker 1:I think the most notable is pgvector. Yeah, but that's just a vector database, just a way to hold vectors. With pgrag, what you can actually do is convert text, like HTML, like Markdown, these types of things, to regular text. You can chunk it. And there is a local embedding model that they use, so from that you can immediately embed it into a vector space that you can then query, and all that in your Postgres database. So it takes a step further than just a vector database.
Speaker 1:It also processes documents and chunks them, exactly, and retrieves them. Exactly, yeah. Where with just a vector database, or just vector storage, if you simplify it like that, it's only the storage and querying of those vectors, this does everything up to that storage, all from Postgres, which is interesting.
Speaker 2:Yeah, interesting. Also, they're showing some examples of how to use it. You can literally just put the prompt here, the query, right: what is dot dot dot, how does it work? And then you can actually, in your select statement, embed it for the query. That's how I imagine it works. So it looks pretty cool. You haven't tried this, I'm assuming? I haven't tried it, no. And Neon, who can use this? Only if you're using Neon? Or only if you're using Postgres? If you're using Postgres.
Speaker 2:Yeah, this is like a regular Postgres extension. Okay, because it's on the Neon docs, but it's open source, I guess? It's open source, yeah. Is Neon open source as well?
Speaker 1:Uh, that's a good question. I know that they have open-sourced parts of their implementation. I'm not sure if everything is open source, to be honest. Or maybe they are source open? Which is a bit... I think they're actually open source, but I'm not really educated on this. It's been a long time; I looked into it a bit when it was just released. That's why I'm quite sure that big parts of it are open source. But really cool.
Speaker 2:I'm not sure of the latest status. And maybe, yeah, you mentioned open source versus source open. I guess the main difference, the way I see it, and maybe correct me if I'm wrong here: open source is something that is community driven, right? So people can contribute code, and it can get accepted and incorporated into the product. Source open is more like the code is there and you can see it, but you're not necessarily going to interact with it. You can report bugs, but who builds and maintains it stays fixed. Is that a good summary?
Speaker 1:I think it's a bit of a simplification. I think source available just means that you can look at the source, yeah, and maybe you can do other stuff with it. Open source is a definition; it really depends on the license you're using. And it's maybe a good segue into the Open Source Initiative, which is, I think, the most well-known organization that has built a number of these open source licenses and manages them, has discourse on them to let them evolve. And last week they actually, after I think more than a year of research, released version 1.0 of the Open Source AI Definition. So maybe, why is this a big deal, and why is this different from just regular open source?
Speaker 2:what's the tricky thing?
Speaker 1:With regular open source, what we're typically talking about is source code, just written lines of source code, which is different from AI models, because an AI model typically has source code, but it also has data, it has trained weights. It's much more. With just the source code, you can't really speak of a model, because it needs to be trained, et cetera. Yeah. And there is a lot of chatter in the community about what are open source models and what not. I think Llama very much states that it's open source.
Speaker 2:But I think they play a bit, because, again, this is, as far as I know, the first open source AI definition, exactly. And before, because there was no third party saying what is open source and what is not...
Speaker 1:It was a bit like anyone could say it's open source, because, yeah, exactly. Well, and to be honest, it's not that OSI, the Open Source Initiative, really has legal standing, like if you use the definition wrong, we're going to sue you. They don't do that, right? But they try to create a community-wide consensus about what is open source.
Speaker 2:These open source licenses have been used in legal settings as well.
Speaker 1:Yeah, yeah, that definitely has, so it has like a legal.
Speaker 2:it is a legal instrument as well.
Speaker 1:It's not just... From the moment that you use a certain license, there is a legal meaning to it.
Speaker 2:Yes.
Speaker 1:But it's not because I say what I do is open source, yeah, yeah, yeah, that it is. And from the moment that I say my code is GPL, yeah, and I do something that is not GPL, yeah, that is not okay, right?
Speaker 2:Indeed. But just to say, with these licenses there is a very legal, practical implication, exactly. There are consequences; it has to be thought through. Like even the HashiCorp thing we talked about, the OpenTofu and HashiCorp thing: they changed licenses in a way that the source was still available, but they were restricting some things, and there were very big legal implications from the moment that you, let's say, implement the license, which is what HashiCorp did. I think it was GPL?
Speaker 2:I'm not sure.
Speaker 1:Yeah, I'm not sure, but from the moment that you implement it, there are implications, right? But if you just don't care... So this is interesting in the sense that, just browsing through this, I don't think Llama is open source.
Speaker 1:I don't know if it's really hot, but I think the interesting thing about this is that it also says something about the how. You need to publish how you trained it; I read that as the source code. Okay. You need to be very transparent about what data you used, how you acquired the data, whether there are ways to acquire the data, either for free or paid. Okay. So that even if you don't have access to the data set, you could reproduce it with the right amount of effort. And typically these open source models also have their weights included.
Speaker 2:Yeah, what they mentioned here is parameters, right.
Speaker 1:Yeah, um, there are some... The definition, and I don't have it exactly fresh in my head, but it says that users can freely use and reuse this without specific limitations. I think the limitation that Llama defines, and again I don't know exactly how it is anymore, says something like: from a certain amount of users, you can't use this anymore, or you need to inquire. That is a bit arbitrary. That is not something like: you need to show that you used this as a base, right?
Speaker 2:Yeah.
Speaker 1:Then still everybody can use this. Yeah. But this is very arbitrary, like from that moment on you can't use this anymore, or you need to ask for permission. Yeah, I see. I think that falls outside the scope of this open source definition.
Speaker 2:This is really cool. I think it's much needed. You also mentioned that they've been working on it for more than a year, something like that, yeah, which I also think is good.
Speaker 2:If you have something like this, I think you need to take time to study it and see how it lands before releasing something, right? This is kind of a one-way door, in a way. Once you release it, you create expectations, and going back or modifying it may not be as simple, because people are going to build on top of this. So I also think it's nice to hear that it was a well-thought-out decision. And I do think it's much needed, right? Like you said, we see a lot of stuff with models and all these things. I think there are a lot of people that want to do well, but they don't know exactly what to do, how to say, yeah, this is proper open source and this is not.
Speaker 1:Agreed. So this is a proper way to educate the community on what is open source, indeed, and what is not.
Speaker 2:And also why not, indeed. And I think, even if people don't fully agree, at least it gives a common ground to discuss these things, exactly, and something you can evolve from.
Speaker 2:Indeed, indeed. Really cool, really cool. Maybe... I also saw here that you can endorse the open source AI definition. So look at that, we have quite a lot of companies here: Probabl, the company now behind scikit-learn; Mercado Libre, a big retailer in South America; Bloomberg, Mozilla. Really cool, really cool. All right, and now maybe on to a lighter topic, maybe still in the food for thought.
Speaker 1:Someone used v0 by Vercel and has some thoughts to share. Yeah, I just think it's cool. Okay, all right, thanks everyone. v0 by Vercel is like their GenAI tool that allows you, with prompts, to generate basically sort of web components. I think it leans very much towards React components, and actually, to be honest, I thought you could only do React components. Maybe I can share my screen, actually.
Speaker 1:Okay, there we go. I thought you could only do React components, but I was playing with this over the weekend and you actually can do more, it's just less performant. Like you can also say: no, this UI with the React component looks cool, but please implement this with HTMX.
Speaker 2:And it does that to some extent as well, but that's probably because it uses a base model that has some knowledge of HTMX, while it's probably fine-tuned on React. Because Vercel is the company behind Next.js, yeah, which is built on top of React. Built on top of React.
Speaker 1:So we're looking at a page now and it says: what can I help you ship? Let me generate a login page specifically geared towards a Brazilian guy. You were going to say this, I was just waiting. Called Murilo, make it super fancy and sleek. I haven't tried this yet. Is this free or paid? Well, what I'm using now is free. Okay. So it's generating a Brazilian login. You see: Bem-vindo Murilo, faça login para continuar, welcome, log in to continue.
Speaker 2:Can we get an applause? That was pretty good.
Speaker 1:Yes, and then the email field already holds a placeholder, murilo@exemplo.com. There's a password field, there's an enter button.
Speaker 2:How do you know it's a password? It just says senha. How do you know what that means?
Speaker 1:I get it from the context.
Speaker 2:It's very yellow with green.
Speaker 1:Yeah, yeah, yeah.
Speaker 2:It looks very Brazilian. Right, it looks very Brazilian and the Portuguese is perfect. Bem-vindo, faça login.
Speaker 1:And if you look just at the aesthetics, it looks really nice, right? Yeah.
Speaker 2:The aesthetics, yeah. But it does look very... I mean, I guess it's very modern. Yeah, that's what I was going to say. I feel like the style is very much like Vercel, which is the modern style, right? You see the borders are rounded, there's a bit of shadow on the background, the font... It looks sleek, it looks really cool. I think it's also TypeScript.
Speaker 1:It's TypeScript. Well, again, this is a TSX file so it's TypeScript, but if I ask it to just do it in JSX, it will do it.
Speaker 2:It will do it, I think, if you look at the underlying code, you would say yeah.
Speaker 1:If you look at the underlying code and the locations of that stuff, it looks a bit like it's been trained on shadcn, which is a typical, very modern, sleek-looking component library, and it has the same principles, right? Like shadcn,
Speaker 2:from what I remember, has a bit of that copy-paste philosophy, which is kind of what they're encouraging you to do here.
Speaker 1:I'll make it even better, make it even more sleek and Brazilian. Wow, it's going to be like samba, football, like the Brazilian times 10. Let's see what it does. And now it starts making this very Brazilian.
Speaker 2:Let's see what it does first. But one thing I also think is interesting: the webpage is in Portuguese, yeah, but the login is all in English as well. Because one time I was writing something in Portuguese... Olá Murilo, ah, this is so funny. You know, jogadores, it's like a football player, it's just player, right? And then e-mail do jogador, the player's email. And then murilo at canarinho. Canarinho is like a canary, a bird, which is the mascot of the Brazilian national football team.
Speaker 2:And then at the bottom it says "Novo no samba".
Speaker 1:Like "new at samba", join us, blah blah blah. There's actually a palm tree, but you don't see it with the overlay. We can, hold on.
Speaker 2:We can change this. Can we take this? Hold on we can do this.
Speaker 1:Let's remove this. And there's a small palm tree, yeah, jumping.
Speaker 2:There's a bit of sun on the top left. So if you do something...
Speaker 1:Brazilian times 10, times 10, you end up with football players. Yes... no, yeah, that's the conclusion of today: it just becomes more stereotypical.
Speaker 2:Right, it talks about samba, about football, about dance, you know? So yeah, it becomes more stereotypical, but yeah.
Speaker 1:You want to one-up this?
Speaker 2:Let's see how racist you get.
Speaker 1:Make it a thousand times more Brazilian. I wonder if, at one point, it would just say... But would you be more comfortable filling this in versus a regular login form? For sure. Credit card? Yeah, I'd do it all. If you do A/B testing...
Speaker 2:Yeah, for sure. Interesting. I like how they did import Coffee, Sun, Palmtree, Music, Umbrella, Flag, Feather, so you can already tell what it's going to be. Oh wow, it's a lot of animations.
Speaker 1:Yeah, they have a lot of animations. Also, I see they went with carnival.
Speaker 2:Now, what does the feather thing do? It doesn't mean anything; I'm not sure, actually. Ah, "email do sambista". A sambista is someone who dances samba, and you usually dance samba at carnival, so that's the carnival theme: sambista, folia, it's all like "caiu na folia". Wow, and this is "novo no bloco". Oh, this is funny. It means "new at the block", because at carnival you usually go around in little blocks, you know. This is great.
Speaker 1:There's a football in the background as well. But aside from, I mean, aside from the Brazilian stuff, it looks good, right? Like, with minimal effort it looks good. Yeah, that's what I really liked.
Speaker 2:I didn't expect it to be so mature in terms of... This almost makes me feel like I could do something like this, right? But I'm also wondering how, because I guess it's the same thing as when you take a template and try to modify it: as soon as you try to get the two things to interact...
Speaker 1:I don't know how easy it will be to put everything together. To me, the challenge with GenAI when it comes to visual stuff, and it would be a good test to do this here as well, but especially with images and videos, is to generate something in the same style, so you have consistency. I think that is very hard. If you now say, do a login page, and then do a home page, and then do another page...
Speaker 2:What did you pass in as input? Can you, like, say...
Speaker 1:Well, with images, maybe. With something like v0 it would be easier, because a lot of this visual aspect is expressed in code, exactly. So it's easier to say, follow this same style: make a calendar application. Let's try it. I hope I don't reach my limits, my credit limit.
Speaker 2:But it's good. It's also free, you said, right?
Speaker 1:Well, what you're seeing now is all free, and it's super easy to use. It's literally just copy this TSX file here. And when I was doing it over the weekend with HTMX, it generated all the files, multiple files that you can all copy-paste. This is really cool. Now we see the calendar page, the same type of look, very similar, same type of...
Speaker 1:Also a lot of animations, the same background, very Brazilian. Yeah, it's actually quite consistent styling. Oh wow, look at that.
Speaker 2:They have the... it looks great, huh? Cool. Yeah, well, I'm not sure I'd make the same stylistic choices, but it's very impressive as a product, huh? So yeah, I was excited about it when I tried it. No, this is really cool. This is really cool. Well, I don't do a lot of things like this, but if I ever have a use,
Speaker 2:maybe I'll give it a try, see how far I can get. I'll stop the sharing. Yeah. Do you think there's a danger in doing things like this? Because you don't necessarily understand the code.
Speaker 1:I think not more than with any other AI-supported code generation, to be honest. Okay. No, but this is really cool.
Speaker 2:I've got to jump on that ship, I think. But like you said, it's not specific to UIs: you can now generate very shitty functional code without knowing how you're doing it, and without being opinionated at all on how stuff needs to be done, right? And for UI it's like: if it looks good on the UI, it's fine, right? And then it's a mess underneath. So yeah, I feel like there's always a bit of a danger there. I think the challenge is very much that, with very limited effort, you can make something that appears very functional, which might not do what you're expecting. Yeah, I think the moment
Speaker 1:there's a bug or you need to change something. Before, if you didn't have the knowledge, making something that appears to be very functional was very hard. So that barrier has been completely removed.
Speaker 2:So I guess, yeah. But I do think one use for ChatGPT is when I say, I don't know, write a recursive function over X and Y with this type of elements, and I kind of know what I want to do; it's just a matter of saving the time it takes me to type it. So if you know what you want to do... I feel like for this, maybe it's too big. But if you know what you wanted to do, and you can read the code, understand it, and make changes, I think that's also valid. I think the issue is when you don't really know what's happening underneath; then it's almost like a house of cards, right? If you need to change one thing later, or if there is a bug, then kind of everything falls on you. But there are valid use cases for it. Alrighty. Really cool, though.
Speaker 2:I feel like we covered quite a lot of stuff today. We did have a few more topics, but we can leave them for next week as well. We also have a long weekend ahead of us.
Speaker 1:Ahead of us? Yeah, right, Monday is also a holiday. Oh wow, I didn't even realize it. Again a long weekend; we just came from a long week. I know.
Speaker 2:That's why I think a lot of people are taking this week as holiday: you have Friday, Saturday, Sunday, then you take the week off, and then you have Saturday, Sunday, Monday off. Why do you tell me this now? I'll tell you earlier next time, I'm so sorry. So I'm assuming you don't have any plans.
Speaker 1:No, not really no.
Speaker 2:Okay. What about you, Alex? Anything? No? I also forgot. Okay, yeah, I didn't know either, to be honest. I forgot; I was filling in my holidays and then I saw, oh yeah, a lot of people are taking holidays. Oh yeah, okay, there's a long weekend. Long weekend, that's smart. But yeah, I also feel like at the end of the year, because in Belgium you have to use your holidays within the year cycle, either you planned well and allocated this time, or you just didn't use holidays throughout the year and have to use them at some point: oh yeah, I can use them here. Or you used them before and then you realize you don't have enough. So it's a bit... okay. But cool, then I guess I don't have anything planned either. But I didn't...
Speaker 2:Well, I kind of knew about this. But I think that's it; I think we can call it a pod. Stay warm, everyone. Thanks, see you next week. You have taste, in a way that's meaningful to software people. Next week we have to think while we're recording. Hello, good morning, sir. I'm Bill Gates. I just talked about it.
Speaker 1:I would recommend TypeScript. Yeah, it writes a lot of code for me, and usually it's slightly wrong. I'm reminded, incidentally, of Rust here. Rust, Rust.
Speaker 2:This almost makes me happy that I didn't become a supermodel.
Speaker 1:Cooper-netties.
Speaker 1:Well, I'm sorry guys, I don't know what's going on.
Speaker 2:Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here.
Speaker 1:Rust, Rust. Data Topics. Welcome to the Data Topics... welcome to the Data Topics podcast.