DataTopics Unplugged

#50 Where Will GPT-4o Take Us? Exploring AI's Future & Latest Updates (Claude 3, Stack Overflow & OpenAI deal and more)

DataTopics

Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society.

Dive into conversations that flow like your morning coffee, where industry insights meet laid-back banter. Whether you're a data aficionado or just curious about the digital age, pull up a chair and let's explore the heart of data, unplugged style!

  • Stack Overflow and OpenAI Deal Controversy: Discussing the partnership controversy, with users protesting the lack of an opt-out option and how this could reshape the platform. Look into Phind here.
  • Apple and OpenAI Rumors - could ChatGPT be the new Siri? Examining rumors of ChatGPT potentially replacing Siri, and Apple's AI strategy compared to Microsoft’s MAI-1. Check out more community opinions here.
  • Hello GPT-4o: Exploring the new era with OpenAI's GPT-4o that blends video, text, and audio for more dynamic human-AI interactions. Discussing AI's challenges under the European AI Act and ChatGPT's use in daily life and dating apps like Bumble.
  • Claude Takes Europe: Claude 3 now available in the EU. How does it compare to ChatGPT in coding and conversation?
  • ElevenLabs' Music Generation AI: A look at ElevenLabs' AI for generating music and the broader AI music landscape. How are these algorithms transforming music creation? Check out the AI Song Contest here.
  • Google Cloud’s Big Oops with UniSuper: Unpack the shocking story of how Google Cloud accidentally wiped out UniSuper’s account. What does this mean for data security and redundancy strategies?
  • The Great CLI Debate: Is Python really the right choice for CLI tools? We spark the debate over Python vs. Go and Rust in building efficient CLI tools.
Speaker 1:

This, you have taste in a way that's meaningful to software people. Hello, I'm Bill Gates.

Speaker 2:

I would.

Speaker 3:

I would recommend, uh, TypeScript. Yeah, it writes a lot of code for me, and usually it's slightly wrong. I'm reminded, incidentally, of Rust here. Rust.

Speaker 2:

Congressman, iPhone is made by a different company, and so, you know, you will not learn Rust while skydiving.

Speaker 1:

Well, I'm sorry guys, I don't know what's going on.

Speaker 3:

Thank you for the opportunity to speak to you today about large neural networks.

Speaker 1:

It's really an honor to be here. Rust Data Topics. Welcome to the Data Topics Podcast.

Speaker 3:

Hello and welcome to Data Topics Unplugged, your casual corner of the web where we discuss what's new in data every week, from AI to cloud, anything goes. Today is the, what's today? Today is the 14th of May 2024. My name is Murilo. I'll be joining you, I'll be hosting you today. Long day. Yeah, I'm joined by the one and only Bart. Good afternoon. And behind the mic, the sound engineer keeping the lights on, Alex. Hello, hello, hello. Um, we're also live on YouTube, we're live on LinkedIn, we're live on X, we're live on Twitch, um, all the goods.

Speaker 3:

So feel free to check us out there. Feel free to leave a comment or question, we'll try to address them in the live stream. And, yeah, hang out with us virtually. So maybe, uh, first things first: how are you doing, Bart? How are you doing, Alex?

Speaker 2:

Yes, I'm doing quite well. Last weekend was a long weekend. Last weekend was a very long weekend here in Belgium. Yeah, very long weekend. Yeah, we had, uh, Thursday and Friday off. Yeah, the way you say it, it was almost too long.

Speaker 3:

You're like I can't. Where are you?

Speaker 2:

It felt a bit like vacation, but with the realization on Sunday that you had to start working again.

Speaker 3:

But next Monday is again a day off, I know. I feel like, uh, so, uh, we work in sprints, right? And then usually, okay, this sprint is two weeks, and it was like, oh yeah, we're gonna get this done, that done, and then halfway through you kind of realize, this is such a short month, there's so many holidays, you know, and then you don't get as much stuff done. But, uh, any special plans? Did you have anything fun on the long weekend?

Speaker 2:

Um, I worked in my garden. Nice, with chickens? The chickens, yes, it's true, I also fed the chickens. Yeah, okay, okay.

Speaker 3:

So what about you, Alex? Anything fun? Yeah, I visited my parents. Nice. And they, they have chickens nearby. Oh wow, I'm feeling so left out right now. It's really a chicken-themed weekend.

Speaker 3:

I know. Most of us, and I'm very, very, uh, pro-chickens, you know. Ever since I heard, or I learned, that here in Belgium it's kind of somewhat common, you know, to have chickens. You give them the leftover food, they give you eggs. It feels like a good deal, and I've always been advocating for chickens. But, uh, I've never been a chicken father. Why don't you get a chicken?

Speaker 3:

I think, I have two dogs, right? It's a bit true. I'm not sure how they are gonna be with the chickens, and I also feel like you need, typically there's like an enclosure. Yeah, but then you keep the chickens, you need a setup, right? And I'm not a house owner as well. So it's like, then we have to modify the house, which you could do, but then it also feels like we're doing something. Modify the house? Like, the chickens don't live in the house, they're in the garden. Well, your chickens, um, you're gonna give them a room, a bedroom? But actually they do have the little houses, right, that they can go in, and then the door closes, you know.

Speaker 2:

You know, a chicken coop? I think so, yeah. Yeah, but it's outside, it's outside.

Speaker 3:

Well, your chickens, you know. But yeah, for people that didn't know, yeah, that's a thing in Belgium. My weekend was also good. We went on a car trip to Normandy, to Étretat. Oh nice. With my dogs as well. The only thing is that on the way back we got a lot of traffic, so one day I was really tired, but today I'm feeling better.

Speaker 3:

Well, yeah, it felt like a short week, because I try to really turn off when I'm on holidays, and we recorded the last one on Tuesday, Wednesday working, then it was already yesterday. Nonetheless, a lot of stuff happened. In fact, last week we just missed the Stack Overflow and OpenAI deal, is that right?

Speaker 2:

We missed the, I don't think we missed it. I think it happened before our podcast, but after our podcast there was a lot of fuss in the community about it. So maybe, what was the deal about, Bart?

Speaker 2:

The exact contents of the deal I'm not really familiar with. What I understand is that it goes both ways: Stack Overflow wants to use OpenAI's capabilities, and OpenAI wants to use the content of Stack Overflow, and they structured a deal around this. And then there was a lot of uproar within the Stack Overflow community, because there was no way for content creators within Stack Overflow, so really, people working actively on giving answers to questions on Stack Overflow, to opt out of OpenAI using their data as training data. And there was no, I still think there still is no way to opt out. So users started deleting content as a protest, I think, and then Stack Overflow blocked users from deleting their own content. So that felt very awkward, of course.

Speaker 3:

Yeah, I think, yeah, the whole thing is a bit awkward. Well, first is, I do feel like Stack Overflow, if they didn't partner with OpenAI, they were just going to die, right? Like, I think a lot of people go to ChatGPT to ask questions instead of, like, scrolling through.

Speaker 2:

ChatGPT definitely overlaps to some extent with Stack Overflow's value proposition.

Speaker 3:

Yeah, I would say a lot.

Speaker 2:

Stack Overflow's value proposition is: I have a problem, a technical coding problem, whatever, that I'm not sure how to solve. I can ask it on Stack Overflow, and probably someone has already asked it.

Speaker 3:

Yeah, yeah, but that's the thing. I feel like a lot of the times when you go to Stack Overflow, someone asked it, but it's not exactly your context, so you still need to change it a bit, or you still need to read another question or two. And I think with OpenAI they already digested it a bit for your context, so it's even more tailor-made, in a way. So in a way it's nicer. There's also another one, it's not GPT, but it's called Phind. I'm also putting this on the screen. This is phind.com, "find" with a PH, which, again, is specialized for development, I guess. But they also, if you have a question, they'll also search documentation and Stack Overflow, so probably what a developer would do as well. And then, again, they also digest it a bit. So it's kind of like a RAG application, but with a search engine and all these things.
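The search-then-digest pattern described here is essentially retrieval-augmented generation: fetch relevant snippets, then have the model answer from them. A minimal sketch of that flow, with the search step stubbed out and all names hypothetical (a real tool like Phind would hit live documentation and Stack Overflow instead):

```python
# RAG-style sketch: retrieve context first, then build a grounded prompt.
# search_docs is a stand-in for a real search engine; the results below
# are hard-coded examples, not real retrieval.

def search_docs(query: str) -> list[dict]:
    """Stand-in for a docs/Stack Overflow search engine (stubbed results)."""
    return [
        {"source": "stackoverflow.com/q/123", "text": "Use pathlib.Path.glob for pattern matching."},
        {"source": "docs.python.org/3/library/pathlib", "text": "Path.glob yields matching paths."},
    ]

def build_prompt(query: str, results: list[dict]) -> str:
    """Inline the retrieved snippets so the model answers from them, with citations."""
    context = "\n".join(f"[{r['source']}] {r['text']}" for r in results)
    return (
        "Answer the question using only the sources below, citing them.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

prompt = build_prompt("How do I glob files in Python?", search_docs("glob files python"))
print(prompt.splitlines()[0])
```

The "digesting" the hosts mention is the final LLM call over this prompt; the retrieval step is what keeps the answer tied to the user's actual context.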

Speaker 3:

So, uh, yeah, again, I do see the move from Stack Overflow. I think it makes sense in a way. There was a bit of friction there that you remove, I think it's better for everyone. But also, I remember that in the beginning of OpenAI, there were reports that OpenAI had jobs for people to solve coding challenges or answer questions as part of their position, to feed the machine, right? Which is kind of like what Stack Overflow is doing in a lot of ways. Yeah, so yeah.

Speaker 2:

What do you think is the? What does the future hold for Stack Overflow?

Speaker 3:

I think it's going to be a flavor of OpenAI.

Speaker 2:

Like a conversational UI? Yeah, but I think it will not be like it is now. Today it's a bit of a forum: you post your stuff and you have a thread on it.

Speaker 3:

Yeah.

Speaker 2:

And then it will be more conversational.

Speaker 3:

I think it will be. Well, I do still feel like there should be a "you post a question and people answer," because technology is always evolving, right?

Speaker 3:

So I still think you need that, but I think it will be a bit of a mix, you know? Like, maybe you ask a question, ChatGPT will try to answer, and it will say, did this answer your question? No? And then you just post it on the question forum. And then maybe, I don't know if they're going to try to monetize this more, like, people that contribute more to Stack Overflow will have some benefits, some perks, right? Because today, people that do contribute to Stack Overflow, you get, like, uh, reputation and stuff, right? True.

Speaker 3:

But I think that's about it, right? There's no additional, good badges? You get badges, yeah, so you have bragging rights. Yeah, yeah. Maybe a fun fact, Jonas, correct me if I'm wrong, co-founder of Dataroots. He's one of the top, uh, Stack Overflow answerers for the R community. Is that correct, Bart? Back in the day. Back in the day. So he has a lot of, uh, bragging rights on Stack Overflow, but he's a humble guy, so he never brings it up.

Speaker 2:

Shout out to Jonas, who is actually top seven percent overall. Top seven percent overall.

Speaker 3:

Now, yeah. Of R, or everything? Really? Yeah. Yeah, calm down. I need to try to put this on the screen. Huh, dude, what's this? Uh?

Speaker 2:

If you just type "Jonas Tundo Stack Overflow," you'll get it. User Jonas Tundo.

Speaker 3:

This is the.

Speaker 2:

This is the one and only. It says top 7% overall there. Wow, look at that.

Speaker 3:

So, Jonas, if OpenAI, you know, they start giving credits to, you know, the top people, Jonas is in a good position. Everyone's gonna be paying back. Everyone's going to be Jonas' best friend.

Speaker 2:

But I do think, like, the future of Stack Overflow, I do think it's interesting to think a bit about this. It will come closer to OpenAI, like conversational, like phind.com, for example. But I do think it's interesting to have this a bit as a backlog and a bit of a training dataset. Like, maybe it's conversational, but if you have validators that say, okay, this is a correct solution, you have a bit of a backlog: at that point, this was the correct answer. Maybe it evolved, and then we will need this data. Yeah, I think it will blend in somehow.

Speaker 3:

Yeah, right. Because, again, also Stack Overflow, they do encourage people to upvote, to reply, to add comments, you know. So maybe now you already have a system that encourages people to do these things, and now with ChatGPT it's validating these things more. So I think it's a good, I mean, I think it could be really interesting.

Speaker 2:

And I think, if we see it as a data source for more RAG-like applications, I think the space that it fills, which the typical documentation around a library does not fill, is the interconnection between libraries and systems. Like, I want to use this in that context. There are a lot of questions you see there which you typically do not see very clearly answered in the documentation of library X. Yeah, yeah. But actually, Phind, I feel like they do a blend between Stack Overflow answers and documentation. So you're saying it already exists.

Speaker 3:

It exists in a version of that, right? But I do feel like this will bring it closer, you know, the platforms and all the things. I think the UI will be a bit nicer, maybe. But, uh, you know, also Stack Overflow, if I'm not mistaken, they're pretty open in terms of their data, right? Like, I think on BigQuery you can actually download a dump of Stack Overflow questions.

Speaker 3:

I think there's a public dataset. I think they update it daily. I don't know if it's all of Stack Overflow, but I wonder as well if, with this, OpenAI will make it less open, which is a bit ironic, because, OpenAI, but anyways. Do you think that Stack Overflow's services will change a lot with this deal, or do you think it will?
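For reference, querying that public dump could look something like the sketch below. The dataset and table names (`bigquery-public-data.stackoverflow.posts_questions`) and the daily-update claim are from memory, so verify them in the BigQuery console; actually running the query also needs GCP credentials and the `google-cloud-bigquery` package.

```python
# Sketch: count 2022 Stack Overflow questions per tag combination via the
# assumed public BigQuery dataset. Nothing runs against GCP until run() is called.

QUERY = """
SELECT tags, COUNT(*) AS n_questions
FROM `bigquery-public-data.stackoverflow.posts_questions`
WHERE EXTRACT(YEAR FROM creation_date) = 2022
GROUP BY tags
ORDER BY n_questions DESC
LIMIT 10
"""

def run(query: str = QUERY):
    # Imported lazily so this file can be read/tested without GCP installed.
    from google.cloud import bigquery
    client = bigquery.Client()  # uses ambient GCP credentials
    return list(client.query(query).result())
```

Calling `run()` would return the top ten tag combinations by question count, assuming the table names are right.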

Speaker 2:

To be seen.

Speaker 3:

To be seen. And, as you mentioned, some people were not happy about this, so they started trying to delete their posts and whatnot. So this is the article as well, and again, everything will be in the show notes. Anti-AI users. Do you understand, maybe, why were people so against it?

Speaker 2:

Or do you know? I think, as with everything in life, you always have people on extreme sides.

Speaker 3:

But what's the main argument for not wanting your data to be used? You just don't want to give more money to the big players.

Speaker 2:

You don't want to, well, if I would try to put myself in the position of the Stack Overflow users that are now angry because they can't opt out, I think it's the inability to opt out. I would see this as my data, because I created it, and I want to have some control over what happens to it from the moment it's used for something other than the purpose I created it for. Yeah, I see. But I guess, if you say that the purpose you created it for is to help other developers, then it is, you know? Like, well, in their shoes.

Speaker 2:

It was for the purpose of helping other users of Stack Overflow, not to help OpenAI train their dataset.

Speaker 3:

Yeah, build their dataset right. Yeah, I see Fair point. For me it's not an existential thing.

Speaker 2:

You know, maybe I would have an opinion, but you know. But maybe you didn't spend 16 hours every week creating content. Yeah, and it's now taken away from you and someone else is profiting off it. Yeah, I think that's the thing.

Speaker 3:

I think it's more like, people are making money off of this. I think we talked about, uh, Ruff, and the guy, Anthony, that really moved the Python linting ecosystem, and then you're saying, okay, now people are just translating it to Rust and now they're making money, and I think it's a bit of that, you know. It's like, when you publish it to Stack Overflow, you're like, okay, this is going to be free, I'm not going to make money off of it. It's there on Stack Overflow, it's in their database. But now someone found a way to make a lot of money from it, and now you're like, ah, yeah, now I don't want this anymore, you know. But I also feel like, the moment that you decide to do this, it's kind of fair game, right? Like, it's on Stack Overflow, now they own the data, technically.

Speaker 2:

No, uh, I don't know the terms and conditions, but I think there's a difference, feeling-wise. If you create something for the community and you say, I'm gonna make this library, and someone else builds a SaaS platform, and one of the hundreds of libraries that is being used is mine. Yeah. Versus: there is this company that just takes my library and builds a hosted service around it, and basically, like, 90% of what they do is take my free stuff and make money off of it.

Speaker 2:

It's a different feeling, right? And I think that is a bit the feeling that...

Speaker 3:

Yeah, no, I, I...

Speaker 2:

...these Stack Overflow contributors have.

Speaker 3:

Have you talked to Jonas about this? Maybe he's one of those. I haven't, actually. Where it's like, one second, and he's like, red, you know, furiously typing away. Okay. OpenAI also, apparently, is going to have more deals coming up, namely with Apple. I don't think this is confirmed. Rumored, right? Rumored.

Speaker 2:

Tell us about the rumored deal.

Speaker 3:

So I don't know all the ins and outs. I saw some stuff on X, Twitter, um, and I saw some stuff, I mean, there were some articles and whatnot. The gist is, as far as I can tell, that Apple reportedly will close the deal, it's a rumor, with OpenAI, um, and then the speculation is that ChatGPT will be the new Siri.

Speaker 2:

And this rumor has come up again because we actually discussed this a few months ago right?

Speaker 3:

I believe so, I think. But again, Apple has been linked to every artificial intelligence lab over the past few months, from Google Gemini on. So I think there was some discussion, I don't remember all the details, so if anyone has this fresh in their mind. But I did remember, um, something about Gemini and Apple. Um, but I think what we all kind of agree on is that Siri is very subpar compared to the others. Siri is shit. I was being polite, but yes, it's pretty shit.

Speaker 3:

So, yeah, again, maybe ChatGPT will be the new Siri. Here's Siri 2.0. What do you think of that? Do you think it's a good move? Do you think it's a bad move? I also saw on the Twitter thread that some people were not very excited about this, because, I'm imagining that Apple still has a Siri team, right? There's two people working on this. But then you go externally, then you're kind of just giving up on your internal people. What do you think of that?

Speaker 2:

Um, I think what we will see is, I think it's good, because Siri needs to improve quickly. Siri is really shitty. Like, if you get Siri to actually do a phone call, you're happy, right? There's a lot that can be improved in Siri. From what we've seen over the last year coming from Apple, they don't have the bandwidth to really go to a Siri V2 that is impressive.

Speaker 2:

In the coming months, with OpenAI backing them, they probably do. And I think what we will see, if it comes to a deal, is that Siri will probably be OpenAI-powered in the coming year, and that Apple, in parallel, will work on their own model.

Speaker 3:

You think they would still work on their own model?

Speaker 2:

I think so, yeah. I think it's so core to what they do. Yeah. And we've already seen a lot of, like, they open-sourced some stuff.

Speaker 3:

We hear some rumors around Apple being involved in LLM research. I think that's the thing that is a bit weird, because it is something very core, but they're going to someone external, right? And I feel like that.

Speaker 2:

They made a lot of noise about the Apple Vision Pro, which is not their core, I would say, but we haven't heard any big announcements about GenAI. No, because I think any big announcements would raise expectations a lot. But, like, we have seen, we know that they're open-sourcing some stuff that is related, that allows you to efficiently work with models on their new M chips.

Speaker 2:

So I think we do have some hints that they're working on this, and I think they will work in parallel to actually have their own model, a bit like Microsoft is doing it. Microsoft has bet very heavily on OpenAI, and it's paying off for them, but they're also still working on their own model, which is called MAI-1, apparently. It probably is a code name, M-A-I-1, where there were a bit of rumors coming out last week on Twitter, yeah, where they're apparently working on a really large-scale model that might even rival the likes of OpenAI. So they're doing this also in parallel, even though they are very heavily leveraged on OpenAI.

Speaker 3:

Wow yeah.

Speaker 2:

And I think we will see a bit the same way if the Apple OpenAI deal goes through.

Speaker 3:

Yeah, this one, right, that you mentioned, MAI-1? Yeah, exactly. Actually, I didn't know about this. Yeah, I see, I see what you're saying, and I tend to agree. I tend to agree as well. Also, for completion, this is the comment from Twitter that I saw: "A very poor business move, if true. Would strongly indicate, in combination with other recent shutterings of long-term projects, that there is a turgidity, an innovation vacuum at Apple." And I think that's the vibe, that's the taste that I get. You know, like, Apple is known to be so innovative, but on the GenAI side they've been really behind, right? Like, Siri is not good, they're not making announcements, no one's been talking about it, which feels a bit, goes a bit against the DNA of the company.

Speaker 2:

I don't know, I don't know, maybe that's, uh, too much. Okay, I mean, they also went, they came from, what was it, the PowerPC chips that they very heavily controlled. They went to Intel because they didn't have something, and then a few years later they actually moved to their own chipsets. I mean, it's the same strategy in the end.

Speaker 3:

Yeah, true. And it is true, like, that's the thing, I do hear about the chips that you can program on, for edge AI stuff, basically.

Speaker 2:

But I find it more, because, like, I was digging into it, and, oh yeah, Apple has these chips and you can program these things and you can try to run it, and it should be more efficient, and they have, like, a vectorized thing. But I feel like I almost found it by chance, you know. Yeah, but I think Apple, if you compare this to other companies, like a Google, for example, Apple, I think, typically does a big announcement when it's really UX-y, something that is decent, that is mature, that is, yeah, there to stay.

Speaker 2:

I have a feeling, no, this is Google, that, uh, very quickly releases something, kills it two months later, releases something new a month later. Like, yeah. Yeah, actually, uh, maybe something we can dive into in another episode, but the Rabbit R1.

Speaker 3:

The Rabbit R1, the device. I also, I was reading some reviews, because we did bring it up a while ago, and then I didn't hear any more about it. And then I was actually looking for these reviews, and people said exactly that: that it felt really like a backwards-engineered sales thing. Like, they have something half-baked, and then they sell it to everyone, they make a lot of noise, and then they are now trying to work on it. Very weird. So, um, I didn't buy one. So you need to do that to get investors on board.

Speaker 3:

I guess, yeah. But now it's like, great hype, but it's very short-sighted, no? Like, you're not gonna go far with this strategy, right?

Speaker 2:

I don't know, because of the hype everybody's a bit here, like. At the same time, you can't develop in stealth for five years and then come out with something. Yeah, you need user feedback as well.

Speaker 3:

But I just feel like it was so hyped, the Rabbit R1 thing. Like, they made the announcement, I remember someone saying, like, oh yeah, the Steve Jobs, like, exactly the same. They put so much money. You know, maybe it's just me, maybe it's a me problem. And are we done with the OpenAI announcements, Bart? Not exactly. No, we're not. OpenAI galore. This week has been busy for them. What happened, what else? I think maybe that's what people would expect us to start diving into. Hello GPT-4o. What is GPT-4o?

Speaker 2:

A new version of GPT-4. I think it's best to look at the video that you just put on the screen, people can follow via audio as well, to have a bit of a view on what that is. Yes, are you playing this one?

Speaker 3:

Hey, how's it going?

Speaker 1:

Hey there, it's going great. How about you? I see you're rocking an OpenAI hoodie. Nice choice. What's up with that ceiling, though? Are you in a cool industrial-style office or something?

Speaker 3:

Well, can you take a guess at what I might be doing, based on what I'm showing you here?

Speaker 1:

Hmm, From what I can see, it looks like you're in some kind of recording or production setup With those lights, tripods and possibly a mic. It seems like you might be gearing up to shoot a video or maybe even a livestream.

Speaker 3:

Yeah, in fact, we've got a new announcement to make.

Speaker 1:

That's exciting. Announcements are always a big deal. Judging by the setup, it looks like it's going to be quite the professional production. Is this announcement related to OpenAI? Perhaps?

Speaker 3:

It is. And, in fact, what if I were to say that you're related to the announcement or that you are the announcement?

Speaker 1:

Me? The announcement is about me? Well, color me intrigued. Are you about to reveal something about AI or, more specifically, about me as a part of OpenAI? You've got me on the edge of my, well, I don't really have a seat, but you get the idea. What's the big news?

Speaker 3:

First impressions? Alex, I know, sorry to put you on the spot. What do you think about this?

Speaker 1:

I just can't get over the laugh. It's the laugh, it's creepy. It is a bit creepy. It is creepy.

Speaker 3:

You know what I think of immediately? The movie Her. You know, it's really like, it's a female voice, you know, "color me intrigued," you know. It has a lot of, that's what I think is creepy. It's a very nuanced voice, huh? Yeah, but the intonation, like, for example, the laughter, that's just a filler that humans do, right? For a computer it makes no sense. Like, why would you need it? Or maybe saying "ums," you know?

Speaker 3:

Like, usually humans do that to give you time to think, and, like, computers, to look human, exactly. But it's very, like, she says "me?", you know, like, even with the intonation, you know. There's a lot of, uh, like, language is very rich. It's not just the actual content, right? It's the delivery, the timing, the pitch.

Speaker 2:

You know, there's a lot of stuff there, um. So the premise here of GPT-4o is that you can use the ChatGPT app on your phone to have this seamless, uh, combination of video, uh, text and audio, um, and it says here you can reason about these things in real time.

Speaker 3:

That's the other thing I was going to say. At least in this, uh, demo video, the response is very snappy, like, they don't wait at all to get the response, right? Which even makes me wonder if they're doing something on-device. But I very much doubt it, because it looks like it's an iPhone.

Speaker 2:

So I'm very curious to try it. I tried it today on my, uh, mobile app, it wasn't available yet. But I was also working on a project where we integrate OpenAI, and it was actually already available in the API.

Speaker 3:

Oh really.

Speaker 2:

Yeah, but I haven't tried it via the API.

Speaker 3:

But via the API. It's like you have to use a voice, something for the API.

Speaker 2:

Well, you can probably stream audio to there. I haven't checked it out, yeah but I think so again.

Speaker 3:

ChatGPT-4o. Actually, before this announcement there was a lot of noise, like, what are they going to do? Even in other articles they were speculating that they were going to announce, like, a voice assistant, because they did the partnership with Apple. Um, but turns out it was this, 4o. Does this mean, is the actual "intelligence," quote unquote, and I put a lot of quotes there, um, is it better, or is it more like the audio, vision and all these things?

Speaker 2:

Is that a question to me? I don't know, like, you're asking me if the underlying model actually changed?

Speaker 3:

I think it's more, uh, more tightly integrated, right? But, uh, yeah. So here, maybe for the people that are just listening: GPT-4o, "o" for "omni." Not sure why they needed to explain that, because to me it didn't add any information, but this is a side note anyways. Um, it "is a step towards much more natural human-computer interaction. It accepts as input any combination of text, audio and image, and generates any combination of text, audio and image outputs." So again, it feels like it's more of the human interface that they're working on, like the sound and the video and all these things, right?

Speaker 2:

So, and in that sense, the reasoning is probably, like, you have a richer context, because it's not just what you're typing in the prompt, but also, potentially, the video that you're showing, uh, everything that you're hearing around you. So you have a richer context. So, yeah, potentially the reasoning is better because of that, right? Yeah, yeah, indeed.
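The "any combination of inputs" idea discussed here mostly comes down to the shape of the request you send. A rough sketch of what a mixed text-plus-image chat message could look like, only building the payload rather than sending it (sending needs an API key, and the exact field names should be double-checked against OpenAI's API reference):

```python
import base64
import json

def image_part(png_bytes: bytes) -> dict:
    """Encode an image as a base64 data-URL content part (shape assumed from OpenAI's chat API)."""
    b64 = base64.b64encode(png_bytes).decode()
    return {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}}

def build_request(question: str, png_bytes: bytes) -> dict:
    """One user message mixing a text part and an image part for a gpt-4o call."""
    return {
        "model": "gpt-4o",
        "messages": [{
            "role": "user",
            "content": [{"type": "text", "text": question}, image_part(png_bytes)],
        }],
    }

req = build_request("What's in this picture?", b"\x89PNG placeholder bytes")
print(json.dumps(req, indent=2)[:80])
```

The "richer context" point maps directly onto that `content` list: the model sees the text and the image in one message instead of text alone.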

Speaker 3:

So maybe I'll answer my own question. In the announcement here, they have a few graphs on the evaluation. So you see, 4o looks like it's a bit better. But, for example, Llama 3 400B, where the B is probably billion parameters, is also up there. GPT-4's initial release is also up there. So the text part, for this benchmark, seems like it's okay, yeah? Like, it looks like it's in the ballpark of other models.

Speaker 2:

It's more or less GPT-4. Yeah, yeah.

Speaker 3:

But then, I guess, if you go to the other ones, yeah. So for audio, they compare against Whisper-v3 on audio translation performance, and they have zero-shot results and a visual understanding evaluation. So they have a lot of stuff there. And I guess the main, but I think OpenAI is, again,

Speaker 2:

The first one to do this omni approach, where we have all these inputs together. Yeah, that's true. That's true, that's true.

Speaker 3:

So, very curious. I leverage ChatGPT quite a bit in my day-to-day work, so I'm curious to see how this will integrate.

Speaker 2:

Do you have a?

Speaker 3:

name for your ChatGPT? No, not really. Do you go like: I'm feeling sad, can you tell me I look nice today?

Speaker 2:

Oh yeah? Is that how you do

Speaker 3:

it.

Speaker 2:

No, I don't. It's fine. No, but I do feel like I get very Her, like the movie Her, vibes a lot. What is interesting about this as well is that it brings gen AI even closer to users, right? And we also see in the long announcement video, somewhere towards the end, there's someone asking, and I'm very much paraphrasing here, can you interpret the emotion on my face? And it answers: yeah, you look excited, something like that. But the potential problem with that, and it links back a bit to our previous episode last week where we discussed the AI Act, is that currently the AI Act within Europe says that interpreting emotions, or doing something with emotions in a work or school context, is a prohibited use. So it's actually completely blocked; it should not be allowed. So we will probably see a limited version with some guardrails in place, where questions like these are blocked for users in Europe.

Speaker 3:

But they have this general thing, very general, and now they're trying to cap it, and I feel like there are always going to be ways around it, right? I think so. So it's not going to be a perfect system. Perfect would be if you never put the data in the training data, right? But yeah, interesting. So I was gonna...

Speaker 3:

I was also thinking about bias in AI, right? Like different faces, different ethnicities, whether it's going to perceive them the same, like the vision part. And there are a lot of different accents as well, different languages, right. I do think that, now that it's something so generic, I would be very surprised if the model is not performing much better for a certain ethnic group.

Speaker 2:

Oh yeah.

Speaker 3:

In terms of vision, audio, everything.

Speaker 2:

There are always going to be ways around it.

Speaker 3:

Yeah.

Speaker 2:

Right.

Speaker 3:

Yeah, yeah. Maybe one question that we had internally: speaking with ChatGPT, do you think it would be totally fine, everyone would do this, it would be the new normal? You'd be waking up, put your AirPods on, just walk around and say: hey, ChatGPT, can you tell me what I'm going to do today? And then it's just going to tell you. Do you think it would be weird?

Speaker 3:

I think we would already do this if Siri were better. Yeah, that's true. But like the Google Assistant on Android, right, there's the "Hi Google" one, and there's also another one... no, I have no clue. But yeah, I feel like none of them are very good. No, but all of today's are very basic.

Speaker 2:

It's like: can you please call this, or can you please send this message to X, or hey, open this application. It's all very basic, too basic to actually do stuff, right? Yeah, I think so. But when I can say: ah, open Spotify, open that playlist and put it on shuffle, I would do it. I would do that today. If I'm in the car, I'm going to just say that out loud. Yeah, I see what you're saying.

Speaker 3:

I see what you're saying. Yeah, I agree. I feel like if it's better, if it's more conversational, people will use it more. Today it's just so clunky that we're not very motivated to do it, right? Today it's super clunky; I'm happy if I get Spotify to open. But do you think, okay, this is more controversial I guess, do you think we'll ever get to more of a Her kind of thing?

Speaker 3:

Where people just chat with it? Because actually I did see, there were AI therapists or something. It's a bit weird, huh? It's really the beginning of a Black Mirror episode. But I do feel like, the more human it sounds, you know, with the laughter, it's like: hey ChatGPT, you want to hear a joke? And then the joke sucks.

Speaker 2:

But it's like: oh my god, you're so funny. It makes me think of, I think last week there was something from this dating website. Bumble, is it correct?

Speaker 2:

I want to say Bumble, something like that. Yeah, they included ChatGPT. So if you have a profile on this website, Bumble, is it called Bumble? If you have a profile there, you get a dedicated chat assistant, and that chat assistant then dates other people's chat assistants for you, to see if there is a match. What, really? Well, that was the announcement.

Speaker 3:

I don't know if they already have it. But then you come back with like five little chatbots. Wow, that's crazy. But so that your chatbot... this is really like Black Mirror. Yeah, exactly.

Speaker 3:

There are a couple of episodes I can think of, because there was one that starts exactly like this. It starts with this premise: there's a husband and wife, the husband dies, and then they tell the wife: okay, based on all the chats, the SMS history, we can create a personalized chatbot to imitate your husband, and then you can chat with it. Hey, how are you? And then, since it has limits, right, you can only have so many conversations, you can upgrade, and then upgrade again. And there was even this humanoid robot that you could actually have. And at first she's like: no, it's not him. And then she starts really getting into it, and her friends have to cut her off. And it ends with, once a year she has a date with her dead husband, which is the robot, you know, and the kids go away.

Speaker 2:

It's super weird, but this becomes very close. Very close, it's creepy. Today we find it creepy; maybe it's normal in 10 years. Maybe in 10 years, let's say you're single, you're looking for a partner, people will ask you: how was your evening yesterday? And you will say: I had five dates. And you just sat on your couch and did nothing, watched TV, but it was your chat assistant at all the dates.

Speaker 3:

But so, Bumble, maybe going back to that: how does it work? Your chatbot goes on dates with other chatbots and then they report back to you, like: hey, I had a really good time with that chatbot? No, I think they just assist you, like how Grammarly works: you write a sentence and it can make it better for you.

Speaker 2:

But I think there was also something like the chatbots would date the other chatbots. Something like that, I don't know.

Speaker 1:

I have a Bumble BFF, so the friend version, and I looked into it, but I couldn't use it yet.

Speaker 2:

Okay, okay, so you can update us on this in a few months. Yeah. So, it's Bumble.

Speaker 3:

That's cool. Dating profile? No, I don't know. Interesting. And also, one comment here that GPT-4o gives Chappie vibes.

Speaker 3:

So, Kiran Paul Singh, I'm sorry, I'm probably butchering your name, but yeah, I do feel like it gives me these Chappie vibes. Chappie is the robot, right? I think I watched it: a robot, and then a gang tries to teach him how to rob stuff, and he's like, no, but I'm a good robot. And yeah, all these things are creepy, but maybe it is the new normal. Maybe I'm just slowly being left behind. So, you think it would be totally normal to just chat with ChatGPT?

Speaker 2:

I think so.

Speaker 3:

That's what most people within dataroots also thought. Five people said they thought it would be weird and no one would do it. And then one person said Zoomers will do it. And call me a boomer, I don't even know what Zoomers are.

Speaker 2:

Gen Z.

Speaker 3:

I guess yeah.

Speaker 2:

Which gen are you?

Speaker 3:

I don't even know. I was born in '95, so I'm...

Speaker 1:

Oh, I think that's right on the cutoff.

Speaker 3:

Yeah, so I can choose. Alex is like, I think so. And you, Alex, you are... I'm definitely Gen Z, definitely Gen Z. And you are, Bart... it's fine. Next question. Yes. Cool, cool, cool, cool.

Speaker 2:

Maybe more AI stuff? Not so much... I know, right? It's very hard to keep up with all of this.

Speaker 3:

It's very hard. Again, we had an idea: we should put the news at the bottom, like the strip for the live stream. You know how news anchors have a ticker at the bottom, a scrolling thing that's like: oh yeah, this got announced. Like a live thing.

Speaker 2:

This model got released, that model got released.

Speaker 3:

And we should have a gen AI bot to crawl the web for that. So it's gen AI reporting on gen AI. Very meta.

Speaker 2:

Very meta.

Speaker 3:

What is this about? So, claude from Entropic, so you actually know more about this than I do, bart Well to me this is a bit old news.

Speaker 2:

It is relevant because... so Claude has a new model, no, sorry, Anthropic has a new model, Claude 3, but it's already, I want to say, two months old now. And when they released it, they had a UI, a bit like a chatbot UI, like ChatGPT, but it was not available in the EU. And now they've released that in the EU. But the model was already available in the EU via the API.

Speaker 2:

And so I think we're, in general, already familiar with it. Claude 3, especially the large version of that, the Opus model, is in my eyes very comparable to GPT-4 Turbo, for example. They also have a middle tier, I think it's called Sonnet, yeah, which is a smaller model with GPT-3.5-ish performance. Did you read Vitalis' message?

Speaker 3:

Because that's exactly what he said. I did, but shout-out to Vitalis as well for sharing this. Yeah, indeed. So the promise was, reading from Vitalis' message, that Sonnet, according to their benchmarks, is better than ChatGPT at coding. Coding only. But then he posted an update: he did a quick test and saw it's very similar to GPT-3.5. So then it's old news, like...

Speaker 2:

This is so two months ago, you know. But I think, and it's a bit like with ChatGPT, if you're really actively using this day-to-day, you're not going to use the free tier. It's good to try it out, but it's not what you're going to use day to day. Yeah, that's true. Unless you have a very specific use case where you need to scale up and cost becomes a factor or something else. But yeah, I guess there's also another thing.

Speaker 3:

There are so many models now that it's like: which one are you going to choose? And to me, today, I would just go for OpenAI. But even OpenAI has so many models. Sometimes it's like, okay, I have this very simple task, and which ChatGPT model, which version, should I use? There's 3.5, 3.5 Turbo, 4... Some of it is obvious, right: if you need a big context window, you just go for this or that. But sometimes I'm like, for this very simple thing, maybe I should just use the cheaper one, because I don't think it's anything super complex. For example, we have some knowledge-sharing sessions and I want to make a quick announcement, so I just say: hey, here's the JSON, create a small announcement based on this information. For me that's a very simple task, so I would imagine 3.5 is good enough. But it's a bit like, yeah... I'm not completely agreeing, but it depends.
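That "which model for which task" routine can be captured in a tiny router. This is purely an illustrative sketch: the `pick_model` helper and its keyword heuristic are invented for this example; only the model names are real OpenAI identifiers.

```python
# Hypothetical sketch: routing a task to a cheaper or stronger model.
# The routing heuristic below is our own toy invention, not an OpenAI API.

def pick_model(task: str, needs_long_context: bool = False) -> str:
    """Pick a chat model tier for a task (toy heuristic)."""
    if needs_long_context:
        return "gpt-4-turbo"        # large context window
    # Tasks that are usually simple enough for the cheap tier.
    simple_markers = ("summarize", "announcement", "translate", "reformat")
    if any(marker in task.lower() for marker in simple_markers):
        return "gpt-3.5-turbo"      # cheap tier is often good enough
    return "gpt-4o"                 # default to the strongest general model

print(pick_model("Create a small announcement from this JSON"))  # gpt-3.5-turbo
print(pick_model("Prove this theorem step by step"))             # gpt-4o
```

In practice you would test each task against the cheap tier first, exactly as discussed above, and only promote it to a bigger model when the output is not good enough.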

Speaker 2:

Well, yeah, if you test it for the use case and it's good enough, then sure. But 3.5 today is like the Stone Age, right, in AI land. That's how it feels, right.

Speaker 3:

And you know, I think I mentioned it before, but at dataroots we did some stuff with GPT-2, I want to say. Yeah, yeah, the AutoJoker.

Speaker 2:

Yeah, before it was cool. And it has come so far in so little time. Back then, it generating a sentence that made sense was like: whoa, it's a sentence! Yeah, it's the same with the generated images.

Speaker 3:

Yeah, even if you go on the Data Topics page and you scroll down through the thumbnails, that was, a while back...

Speaker 2:

That was gen AI, actually. You usually squint a bit, like: oh yeah, I kind of recognize this is an animal. See the one, I think, for Pandas and Friends. Great episode.

Speaker 3:

You had a great guest back then. And yeah, it was like: oh yeah, you can kind of see it, a little, here. This was gen AI back in, when is this, July 2022? And we were like: whoa, you can kind of see a panda here. That's crazy, that's crazy good. And now... oh, actually, I'm not even sharing this. There we go, here. This was the state of the art. Nowadays it's laughable.

Speaker 3:

We got a question. Yes, we do. What do you guys think about companies that are utilizing GPT to build products? It looks like OpenAI is eclipsing all smaller projects built on top of GPT with GPTs and now this Omni model. Thanks for the question. What do you think, Bart?

Speaker 2:

I think it's true.

Speaker 3:

I think it's true, but I also feel like a lot of this was a bit hacky, in a way. It was very low-hanging fruit for them. But to me it was also a matter of time. I think the premise is a bit like this: OpenAI releases GPT-3, 3.5, GPT-4, and it was very basic, right? You just had a very generic chatbot, and what you could very easily do is build a product around that.

Speaker 2:

You add a bit to the prompt, you give it a visual interface, you make something for a specific use case, and you leverage their endpoint, basically, just for the model. And what came then is that you can now create your own custom GPTs. So probably 80% of those use cases, you can now just make a custom GPT, especially now that you also have ways to call external endpoints from a custom GPT. So suddenly there's very rich functionality built into ChatGPT. And then you still had some use cases where you could combine video and stuff like that in something custom, and now they also have that. That is the point that Kieran has been making, and that's very true: it's a danger if you're fully leveraged off something that someone else built. Yeah, and I think...

Speaker 3:

But I was like, huh, I mean, ideas are ideas, right? And OpenAI is set up in a way that they have the money, they have the compute power, they have the people, you know. So if they see a good idea, they're going to be like: why wouldn't I do it?

Speaker 2:

I think the challenge of building products off this, to make a difference, is to not build something generic. Build something for a very specific niche domain where it doesn't make sense

Speaker 3:

To build something for, yeah, for Anthropic or OpenAI. Yeah, and I think a while ago, maybe not years, but a year ago, that was also my opinion, my prediction: that we're going to have more specialized AI models, which we do see some of. But I also feel like the popularity of ChatGPT is there because it is generic. Everyone's like: oh, what are you doing? It doesn't matter, ChatGPT, that's the way to go. And I think everyone was thinking about ChatGPT; that's why it also got very popular.

Speaker 3:

I still feel like, at one point, we will have more specialized things. Not for the general public, but, for example, we talked about Snowflake Arctic, which is very specific to what they want, you know, and I still think there is a space for that. But I also feel like the really generic foundation models, the thing that everyone goes through, people are going to keep working on those more than I had anticipated, you know.

Speaker 2:

They would still be the main thing. Yeah, sure, but I think if you build something very specific, even if you build it off of OpenAI, there is room to do that. Let's say, and I'm taking something completely random here, you have a management tool for a soccer club, and you have a feature there, OpenAI-powered, to suggest the scheduling of teams on a play day. I'm saying something random here. I mean, that is so random...

Speaker 3:

...that OpenAI is never going to compete with you. Yeah, that's so niche. I mean, you know, it's almost like traditional AI: if you have this, you have that, something very... yeah, okay, anyone can do it.

Speaker 3:

But if you have something like: okay, for X-rays and ultrasounds combined we need to diagnose this, something super specific, then yeah, those are the cases where you really should build something very custom, right? It's not the same maybe, but it definitely rhymes. Okay, agreed. No, but that I agree with. And I also think that this space, and I think we talked about it when we talked about the GPT coach, where the science is very well documented but we still need a human interface, maybe even now, with GPT-4o, it will become even more so. I'm thinking of nutrition as well, for example. You can have very clear nutrition plans, but people still go to a nutritionist, right, to say: oh yeah, I ate this and I didn't feel good, I ate that, and, you know. In a way, you could have bots or AI models that could fulfill this. Yeah, the question is whether... I fully agree.

Speaker 2:

But is OpenAI ever going to make it their mission to be the best dietitian? Yeah, that I agree with, that I agree with.

Speaker 3:

Yeah, yeah, no, but I agree, fully agree with you.

Speaker 2:

And while we're on the gen AI train: ElevenLabs previews a music-generating AI model. Yeah, and this comes only a few weeks after Suno released. Suno is another music-generation tool we covered here, when Maddy was here. Well, covered briefly, but indeed. And Suno is already very impressive.

Speaker 2:

So ElevenLabs has been very well known for a few years already for generating synthetic voices and deepfakes. I think they're really at the forefront of that, one of the leading players when it comes to synthetic voices. And now they've released, basically, music generation. Maybe we can play it, the one in the link, you know, this one here. Yeah, I think so, that one. All right, so:

Speaker 2:

It's still a very early preview, is what they say. So it's not something you can already try yet, you know. But it sounds very good.

Speaker 3:

Sounds good. With Suno, I had the impression that you still hear it a bit, like in the vocals of Suno there are more artifacts. Yeah, you can tell it's not perfect.

Speaker 2:

Here, that is much less. It sounds much more natural, I have a feeling.

Speaker 3:

Yeah. I also wonder if they have the same strategy, because actually, after we brought it up quickly when Maddy was here, we were hypothesizing a bit: they probably have a very large sample library and then they kind of stitch it together with AI. So it's still AI-generated that way, but a lot of it is just human, like beats and such. But we don't know, right? We don't know. I'm not sure if this one has a very different approach. So I guess what I'm trying to say is: it's impressive, and if we could look under the hood, maybe it would be even more impressive once we see the different strategies. Because today you say it's AI-generated, but like, yeah, how much, right? Maybe there is still a human in the loop, maybe there are a lot of samples.

Speaker 3:

What I think is also impressive is the lyrics: how they make sense, but they also rhyme, and they also need to fit in a certain span, right? You cannot have very long sentences, and there are some patterns there. I wonder how they do that. I know that at dataroots we also had some initiatives for gen AI music. Actually, if you go to

Speaker 3:

Spotify and you go for the Beats AI.

Speaker 2:

You can see some of the things. We competed in the AI Song Contest.

Speaker 3:

We did, a number of years ago. For people that don't know what it is: it's kind of like Eurovision for gen AI. Let's see if I can find it here. Yeah, they still go strong, I think. So for people that want to participate, we can also put a link in the show notes. But yeah, again, this is another thing that moved very quickly. Even when I was at PyCon Poland in '22, one of the keynotes was about gen AI music, and they also talked a bit about the history. But the evolution is really crazy. It's really mind-blowing. Have you made... actually, I said it's not open, right?

Speaker 3:

I was going to ask if you had made any hits with the ElevenLabs one, but I guess not yet. Not yet, but I think the AI Song Contest next year will just get these types of submissions. Yeah, ElevenLabs ones, yeah. I think so too. It is what it is. And maybe this links a bit with the question that we had: do you think that OpenAI will eventually tap into those domains as well? Music generation, you mean? Yeah. Because voice, like, originally what we saw from ElevenLabs, I can put it back on the screen, it really says: generate voice AI.

Speaker 2:

So it's really just voice generation, which sounds a lot like the demo of GPT-4o. If I were to start a company today that does voice generation or music generation, I would really see OpenAI as a potential threat in the future.

Speaker 3:

A big, big threat.

Speaker 2:

No. Yeah, this seems very easy for them to expand into.

Speaker 3:

Yeah, indeed. If I were starting a company there, I'd be in constant anxiety mode. You know, every day I'd be like: oh, did they do it? No? Okay, okay, maybe. And all these things are hosted on the cloud, right, Bart?

Speaker 2:

Yeah, how robust is that?

Speaker 3:

Yeah, right. Do you trust the cloud, Bart?

Speaker 2:

I used to. Okay... used to.

Speaker 3:

Why am I saying that? Something was announced, I think, last week, I want to say. From The Guardian: Google Cloud accidentally deletes UniSuper's online account due to unprecedented misconfiguration. So what is UniSuper, for the people that do not know? I think it's like an investment... let me see. Something related to money. It's a financial institution, for sure. And they even mentioned that the company had... it's a big pension fund. A pension fund in Australia, that's what

Speaker 3:

I remember. UniSuper has approximately 125 billion, I guess, in funds under management, which I would consider a lot of money. For someone like me, that is a lot of money. And what happened? Apparently, due to a quote-unquote misconfiguration, a quote-unquote one-of-a-kind misconfiguration, Google Cloud deleted the account, the private account, from UniSuper. They were down for about a week, I want to say, and now it's back online. Even if you go to UniSuper... I put the other link here.

Speaker 2:

So, if I understand, what happened is: when you look at Google Cloud at the top level, what you have is an account, which is basically your subscription with Google Cloud. Yeah. And under that you can have projects, which are more like isolated workspaces where you can separate stuff. And it's really at the parent level, the highest level, the cloud subscription, that things got completely removed, due to a quote-unquote misconfiguration. Which is huge, right?

Speaker 3:

Yeah. It's the worst thing that can happen. It's a big fuck-up. Huge. And I think they even said... so at one point here they said: yeah, Google Cloud, they have redundancy, right?

Speaker 3:

So here, wow: UniSuper normally has duplication in two geographies. So again, for people that are not super aware: if you put data in the cloud, a lot of the time they keep two versions of it in two different... so, take another step back. The cloud is really a computer somewhere else, right? There are a lot of computers with a lot of data somewhere else, and we don't know where. People don't disclose the locations, because otherwise, if they did, maybe someone would try to, I don't know, drop a bomb there or something, right? So it's secret. But they have data centers in different locations, and a lot of the time, if you store your data with Google Cloud, they'll actually copy the data to two different geographical locations. And that's because, if there's some environmental disaster, like a hurricane or something, and something gets damaged, you still have the data in the other one. And that's what they're referring to here: UniSuper normally has duplication in two geographies, to ensure that if a server goes down or is lost, it can be easily restored. But because it was the cloud subscription itself that was deleted...

Speaker 3:

It caused deletion across both geographies. So basically, it's almost like someone goes on your Google Drive and deletes your... or maybe like this: you have a Google account, and you have your email and you have Google Drive, right, and normally these things are separate. But someone actually deleted your entire Google account, so you lost everything. So again: UniSuper was eventually able to restore services because they had backups in another place, their own backups. So they didn't fully trust the cloud, and it's thanks to another provider that they were able to recover within a week.
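A toy model makes the failure mode concrete. This is our own illustration, not Google Cloud's actual data model: the point is only that both geo-replicas hang off one parent subscription, so a regional outage is survivable but a parent-level deletion is not.

```python
# Toy model (our own illustration, not Google Cloud's real hierarchy) of why
# geo-redundancy did not help: both regional replicas live under a single
# subscription, so deleting the parent cascades to every region at once.

class Subscription:
    def __init__(self, name: str):
        self.name = name
        # region -> replicated copies of the same data (region names are
        # real GCP region identifiers, used here purely as examples)
        self.replicas = {"australia-southeast1": ["fund-data"],
                         "australia-southeast2": ["fund-data"]}

    def lose_region(self, region: str) -> None:
        """A regional disaster: only that region's copy is gone."""
        self.replicas[region] = []

    def delete(self) -> None:
        """Deleting the parent wipes every region below it."""
        self.replicas = {region: [] for region in self.replicas}

sub = Subscription("unisuper-example")
sub.lose_region("australia-southeast1")
# The second geography still has the data, so this is recoverable.
print(any(sub.replicas.values()))   # True

sub.delete()
# No geography survives a parent-level deletion.
print(any(sub.replicas.values()))   # False
```

Which is why the backup that saved them had to live outside this tree entirely, at a different provider.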

Speaker 2:

Because, they had backups in another place.

Speaker 2:

Which is... wow, that's already smart, right? It's good thinking on their part to spread their dependencies a bit and to have the backups somewhere else. But even then, I think a lot of organizations would have just chosen to have, for example, backups in a separate project space or something like that, because you do not expect your full subscription to be deleted. That is really the worst case. A worst case that you probably discuss in the context of a big fund like UniSuper, but that everybody thinks: okay, we need to have a plan for that, but it will never happen. Yeah, it's like insurance, right? Like, yeah, I mean, even insurance...

Speaker 3:

I think it's more likely you'll use your insurance than that something like this happens. Exactly. And again, they mentioned a one-of-a-kind misconfiguration. So it really feels to me like someone just ran terraform destroy, you know, destroy everything. They did mention they did a root-cause analysis, and they've already put measures in place so this can't happen again. I also think, yeah, maybe one person really messed up, but at the same time, as an organization, you shouldn't have a single point of failure, right? You should set it up in a way that a hundred people need to mess up at the same time for this to happen. And that wasn't the case.
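On the terraform destroy point: Terraform does ship a guardrail for exactly this, the `prevent_destroy` lifecycle flag, which makes any plan that would delete the resource fail with an error. A sketch, with hypothetical resource and bucket names (and of course it would not have helped here, since the deletion happened at the subscription level, above anything Terraform manages):

```hcl
# Hedged sketch: Terraform's lifecycle guard makes "terraform destroy"
# error out instead of deleting a critical resource. The resource and
# bucket names below are made up for illustration.
resource "google_storage_bucket" "pension_fund_backups" {
  name     = "unisuper-backups-example"   # hypothetical name
  location = "AUSTRALIA-SOUTHEAST1"

  lifecycle {
    prevent_destroy = true   # any destroying plan fails with an error
  }
}
```

It only protects against mistakes made through Terraform itself, which is exactly why off-provider backups were the thing that actually saved UniSuper.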

Speaker 2:

These types of things decrease trust in Google Cloud as an enterprise-ready solution. I'm wondering if we will see a detailed... I think what Google should do is provide a detailed debrief of what went wrong, what they fixed, and why it can't happen anymore in the future. Yeah, I think so.

Speaker 2:

And I hope we will see that in the coming weeks. Even if it's embarrassing, I think they should be transparent: what happened, and how are we going to make sure it doesn't happen anymore?

Speaker 3:

Yeah, but I already feel like, if there's a new project, people are going to be like: are you going with Google?

Speaker 3:

No. Oh no, no, this is big. It's going to be like that. That's the most fundamental thing, right? If you have the Maslow hierarchy of needs, I think this is the very bottom. If you cannot guarantee that, even if it's one in a hundred years, it still happened, and people are going to remember that. Even so, this is UniSuper: they made a joint statement. There were also a lot of fears that they were hacked, right? And they said: no, it's not a hack. Which is better and worse at the same time. Something that really shouldn't have happened.

Speaker 2:

We didn't leak anything, we just lost everything.

Speaker 3:

So this is a joint statement from the UniSuper CEO and the Google Cloud CEO. So I think the people involved also kind of highlight how much of a big deal it is. And I thought it was funny that even on their homepage, the first thing they say is: UniSuper services are now back online. This is their new landing page, you know. It's like that meme, you know, with the doodle of the dog and everything's on fire, and he's like: this is fine. It's like that. Yes, that's how I feel. Okay, do we have time for a hot take? We have time for a hot take.

Speaker 2:

Bart, I don't know if we can find the snippet. Hot, hot, hot, hot, hot take! So, curious about this. You brought one?

Speaker 3:

I brought one. So it's my hot take, my opinion. I'm a Python developer, in a way, right? Machine learning engineer, but I guess Python is my bread and butter, and I love Python. But I noticed that a lot of CLI tools... I actually built a CLI tool in Python, but I see a lot of CLI tools...

Speaker 2:

What is a CLI tool?

Speaker 3:

Before we go into this: good question. It's a command-line interface. Okay. So you have a terminal, like if you look at hackers in the movies, you know, they have a black screen and then you just have a cursor, and you type things and things happen. Basically it's a tool for that, for how you manage these things. And there are a lot of CLI tools that are built with Python. Just to name a few: the AWS CLI is built in Python, so you can go...

Speaker 3:

So any tool on the command line is a command-line interface? Yeah, more or less, unless it's a TUI. And I think the difference there is that a TUI is a... textual user interface? It's more or less like you're working in your... A TUI is a terminal user interface. Oh, thank you, yeah. Textual, is that the... yeah, Textual is that framework that builds TUIs.

Speaker 2:

Terminal user interface. That's really like a visual interface built from text characters.

Speaker 3:

Yeah, but the terminals today are actually very powerful. You can actually use the cursor, you can click, you know, there's a lot of stuff. You can even add some animation, some graphs.

Speaker 2:

So with a TUI, you actually enter a UI, even though it's text-based. With a command line interface, you have a tool, but you stay on your terminal, right?

Speaker 2:

You don't enter anything else. Exactly, exactly. That's a bit the way I look at it. And you're saying you build a lot of Python, you actually built a CLI tool recently, and you're going to come up with a hot take now? Yeah. Maybe just, sorry, for the people that are following the live stream.

Speaker 3:

This is an example of a TUI. This is all in your terminal, right? But you can have a lot of interactivity, you can drag, you can do all these things. And a CLI tool is more basic, I guess, in a way. And for the people watching, it's basically what Bart described. But, yeah, just kind of.

Speaker 2:

For example, GitHub has one, or AWS, if you want to do this. Like a prompt, or you have a multiple choice, or you need to enter something like this. Exactly, exactly.

Speaker 3:

So my hot take is: Python, though I love the language, is a very bad programming language choice for CLIs.

Speaker 2:

Very bad. That's not a hot take, that's real life.

Speaker 3:

But the thing is, I put it as a hot take because when you look at it: the AWS CLI is in Python, the Azure CLI is in Python, the Google Cloud CLI is in Python, DNF is in Python, Yum is in Python. If it's really that bad... and again, that's what made me question it, you know. I think it's a very bad decision, and I'll get to some reasons in a second, but you see it everywhere.

Speaker 2:

But isn't this also a bit about how old these products are that you're describing? Because if you say it's a bad tool, what would you choose today?

Speaker 3:

To build this? In Rust or, well, I guess you could... C, or even Go. What do you see for a CLI? I mean, I wouldn't use C, but I think anything that's compiled. Go, even.

Speaker 2:

I would say today. So go for Go or Rust.

Speaker 3:

I mean, Rust is a very popular tool these days. Right, but Go, I would say.

Speaker 2:

Even Go is less complexity, I think, for a lot of things. Depends a bit on the use. But let's say Go or Rust. But these things, especially Go, with the simplicity of Python... Go and Rust were not around at this maturity when all the tools that you just mentioned were being built, right? You also have this legacy aspect to it.

Speaker 3:

No, I definitely... well, but the AWS CLI, the Azure CLI and...

Speaker 2:

And Python was shipped with, and is shipped with, every Linux distro, right?

Speaker 3:

No, that I agree. But, for example: there's a new Python version, I upgrade my Python version, my Google Cloud CLI breaks. There are a lot of issues with DNF.

Speaker 2:

Because we're not actually discussing: why is Python a bad tool?

Speaker 2:

I think the reason is probably: Python is an interpreted language, yes, so you need to have the interpreter, your Python binary, installed next to the files coming from your CLI. And then your CLI might use some libraries, extra dependencies that you also need to get installed. You have ways to pack this all together and then distribute it, but it's a bit of a hassle, and it's very hard to do in a cross-platform-compatible manner.
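As a rough illustration of that coupling (a sketch under my own assumptions, not how any of the CLIs mentioned actually do it): a Python CLI's entry point only works if the user's machine happens to have a suitable interpreter and the right packages installed, which is why defensive checks like this exist at all.

```python
# Hypothetical sketch: a Python CLI is tied to whatever interpreter and
# packages exist on the user's machine, so an entry point may check both.
import sys
from importlib import metadata

MIN_PYTHON = (3, 8)  # hypothetical minimum for this imaginary tool

def runtime_ok(required_dist=None):
    """Return True if the current environment could run the CLI."""
    if sys.version_info[:2] < MIN_PYTHON:
        return False  # wrong interpreter: the tool breaks before it starts
    if required_dist is not None:
        try:
            metadata.version(required_dist)  # raises if the package is absent
        except metadata.PackageNotFoundError:
            return False  # missing dependency: same failure mode
    return True
```

A compiled Go or Rust binary sidesteps both checks, which is essentially the argument being made here.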

Speaker 3:

Yeah, so, to summarize, it's exactly what you're saying. For me to use the Google Cloud CLI, what I need to know is what Python version I'm using, all the dependencies that that Python version uses, and the versions of those dependencies. Right. So every time I update Python, maybe the dependencies are not available anymore, so I need to keep track of all these things. And a lot of these dependencies are just more Python code, or it can be compiled C code or whatever. But if I have three CLI tools and they have three times the same dependency, then potentially I have three, five...

Speaker 3:

I mean, I guess with compiled stuff it doesn't matter... you still have this issue, but then you're also duplicating some files, maybe, right? I think npm also, so that's for JavaScript and Node, right, they also have an approach to try to minimize this. Because from my understanding, and you are more experienced than me on this, for JavaScript projects you usually have the node_modules within your project directory; it's almost like a copy of all the dependencies that you have. But then if you have five JavaScript projects, you have five node_modules, and if a dependency is a dependency for all five projects, then you have the same files duplicated in all five projects.

Speaker 2:

Yeah, but I think, if you look at it: I want to build a CLI that is a bit independent of the projects that you're working on and the tooling you're using.

Speaker 3:

You have the same challenges, for sure. Anything that is interpreted. For sure, exactly, anything that is interpreted. And I think it's like: how should you install Python?

Speaker 3:

And then you have pyenv, and then you have another tool, I forgot the name, asdf or something, for installing. You have Brew, and then you have to make sure that they're not... maybe you don't want to reinstall it again, but maybe you need to have symlinks. And then you have tools like pipx that basically create a different environment for each tool. You have all these things to work around it.
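The trick pipx uses, roughly, is per-tool isolation. Here's a simplified sketch of that idea using the standard library's venv module (my illustration of the concept, not pipx's actual code):

```python
# Sketch of the pipx idea: every CLI gets its own virtual environment,
# so two tools can depend on conflicting versions without clashing.
import pathlib
import tempfile
import venv

def make_isolated_env(tool_name, base):
    """Create a dedicated venv for one tool, like pipx does per install."""
    env_dir = pathlib.Path(base) / tool_name
    venv.create(env_dir, with_pip=False)  # with_pip=False just keeps the sketch fast
    return env_dir

base = tempfile.mkdtemp()
env_a = make_isolated_env("tool-a", base)
env_b = make_isolated_env("tool-b", base)
# Each environment carries its own interpreter configuration,
# so dependencies installed into one never touch the other.
print((env_a / "pyvenv.cfg").exists(), (env_b / "pyvenv.cfg").exists())
```

The cost, as discussed here, is that the user has to know the tool is Python in the first place, and still needs a working interpreter to create the environments.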

Speaker 2:

But to me these are all... but I think it depends a bit. Like, today, also: who are you building this CLI for? Is this for an internal team, tooling around an internal product that is fully built in Python?

Speaker 3:

Yeah, this may be a good argument to... No, I fully agree. And I also think, if you don't want to spend too much time building this...

Speaker 2:

I also think Python is a good choice; you can get going very fast. What I like very much with Go, and you can actually do the same in Rust: when I built a CLI in Go, what you can do is cross-compilation. It means that from, let's say, your builds running in CI/CD in a Linux environment, you can actually cross-compile to Windows-compatible binaries, or to Darwin, which is macOS-compatible binaries. And that whole question of how you build for multiple environments comes out of the box with these things.

Speaker 3:

Yeah, indeed, I feel like yeah.

Speaker 2:

Well, if you go to the other strategy, where you say: I'm going to have this Python CLI, I'm going to need to bootstrap an interpreter, bootstrap the libraries, and I'm going to package all that together in a way that I can execute it... if you do that, because that's how a lot of Python CLIs are distributed, it's very specific to the environment where you're going to deploy this, how you need to do this.

Speaker 3:

But I also think that, and again, at PyCon Poland there was also a talk from, I think, the DNF people from Red Hat. They were saying that if you do sudo python upgrade pip or something, everything will get messed up, depending on the environment, because of the paths and the symlinks and all these things, even if you ship your own. Right? It was a very interesting talk, you know, and they were explaining what they tried to do to solve it.

Speaker 3:

And at the end of the talk he was like, yeah, nothing really worked, right? And even afterwards I talked to him and he said, yeah, we're probably going to go with a compiled language, C++ or something, you know. So again, we make it work.

Speaker 3:

The Python community is a very good one, and I think they make a lot of auxiliary tools for this, but I don't think it's a good choice if I have to build one. The other thing, too, is like: I'm going to install the AWS CLI. There are tools like pipx, right, that mitigate a lot of these issues. But for me to use pipx, I need to know that the CLI tool I'm installing is written in Python, which sometimes you don't know. True. Right, so that was a bit my, uh... So it's not that hard of a take.

Speaker 2:

Well, for this group, no. But maybe for a real, uh, Pythonista. Yeah. Aren't you a bit of a Pythonista?

Speaker 3:

I am a bit. You're a pragmatic Pythonista. Yeah, I identify as that. But I think, indeed, again, I'm not going to pretend... I mean, if all these tools are written in Python and they have success, who am I to say? That's also the evidence, right? So, huh, that was it. We didn't have a tech topic... well, actually, we did, the ElevenLabs one.

Speaker 2:

Yeah, but we have an intention. So we sometimes bring up very, very technical library stuff, these types of things, and we feel it's a nice combination. Yeah, and we're going to try, every episode, to bring up one nice, interesting library in whatever language. One library a day... uh, one library a week.

Speaker 3:

One library a week keeps the mind at peak. Ah, aren't you a poet? Look at that, a wordsmith. The one and only. Can we get an applause for that, Alex? You think? Okay, cool. Indeed, if you think it's a good idea or a bad idea, feel free to reach out to Alex. Thanks, everybody, for listening.

Speaker 2:

Thanks, y'all. Thanks, Alex, for helping us once again. Of course. Yes, see you next week.

Speaker 3:

Actually I'm off next week.

Speaker 2:

I'll see you sometime. Well, the people will see us next week. Yes, we will still publish. Ciao, ciao. When are you going to update this?

Speaker 3:

You have taste in a way that's meaningful to software people. Never. Hello, I'm Bill Gates. When are you going to update this?

Speaker 2:

Oh, I need to update to a new sample still. Yeah, it writes a lot of code for me, and usually it's slightly wrong. I'm reminded of the Rust here. Well, I'm bored in the evening, Congressman.

Speaker 1:

I was made by a different company, so you will not learn Rust while I'm driving.

Speaker 2:

Congressman, I was made by a different company and so you know, I will not learn Rust while skydiving.

Speaker 1:

Well, I'm sorry guys, I don't know what's going on.

Speaker 3:

Thank you for the opportunity to speak to you today about large neural networks.

Speaker 1:

It's really an honor to be here. Rust. Rust. Data Topics. Welcome to the Data Topics. Rust.
