DataTopics Unplugged: All Things Data, AI & Tech

#37 Text2Video with Sora and Blazing-Fast Python Packaging

DataTopics Episode 37


Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. Datatopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society.

Dive into conversations that should flow as smoothly as your morning coffee (but don't), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style!

In episode #37, titled "Text2Video with Sora and Blazing-Fast Python Packaging," we're joined by guests Vitale Sparacello and Lukas Valatka to navigate the latest tech frontiers. Here's what's on the agenda:

Intro music courtesy of fesliyanstudios.com.

Speaker 1:

All right, let's do it. Hello, hello, hello, and welcome to Data Topics Unplugged, your casual corner of the web where we discuss what's new in data every week. From Xcode to UV, wow, anything goes. We're also on YouTube, LinkedIn, Twitch and X, live streaming, so check us out there and leave your comment or your question. Today is February 16th of 2024. My name is Murilo, I'm your host for today, joined by the one and only Bart. Hey, Bart. Hi. Hey. We have two really cool guests today. We have Vitaly, hello, and Lucas, hey, guys. So Vitaly is one of the tech leads of the AI unit here at Data Roots, but I'll give the floor to Vitaly to introduce himself.

Speaker 3:

Yeah, thanks a lot, Murilo. Thanks for having us today. My name is Vitaly, I'm a machine learning engineer and I'm also one of the tech leads of the AI unit here at Data Roots. And, yeah, ready to discuss with you guys.

Speaker 4:

I thought you were going to say rock and roll.

Speaker 3:

Well, yeah, Also could have been a better line.

Speaker 1:

Are you a rock and roll guy?

Speaker 3:

I am. A lot, a lot really. Yeah, all the '70s rock, all those kinds of bands. What's your favorite band?

Speaker 1:

Of all time?

Speaker 3:

Yeah, of all time. Hard to say, hard to say. Maybe Led Zeppelin.

Speaker 1:

Led Zeppelin, yeah. Yeah, I love them. All right. And then we also have Lukas. Lukas is a machine learning engineer and software and Python guru here at Data Roots. But, yeah, I'll give you the floor to introduce yourself as well.

Speaker 2:

Yeah, I'm going to put it well.

Speaker 1:

Yeah, there's nothing to add.

Speaker 2:

I'm a pretty recent joiner at Data Roots, actually. It's been five months, I think. But yeah: Python engineer, software engineer, ML engineer. Yeah, I've tried a bunch of stuff.

Speaker 4:

And he's wearing very nice socks.

Speaker 2:

Oh, that's true.

Speaker 4:

With socks with red beads on them.

Speaker 2:

Yes, they are a gift from our Ukrainian friends. Oh nice, yeah, that's why the beads.

Speaker 3:

Are you going to show it? Going to pop it on the screen, so it's visible?

Speaker 1:

Yeah, there we go, there we go.

Speaker 3:

Nice. Today I'm losing the socks game, yeah.

Speaker 1:

I always lose the socks game, actually. I recently got nicer socks, like more colorful ones. Usually I have, like, this actually is a sports sock, and I wear the long socks on purpose because I know Bart shamed me last time for wearing short socks. But that's how we're going.

Speaker 3:

Why? Why, Bart?

Speaker 4:

I don't think we should go in depth on the socks topic today.

Speaker 1:

All right, where should we go then? What happened? What's new?

Speaker 4:

A lot happened this week. Yeah, a lot. Crazy.

Speaker 1:

Bart also told me in private that he knows the news of next week. Sometimes, sometimes. But this week I do. But did you know that now there's another extremely fast Python packaging tool out there, very recently? Did you know that? Not at all? No. As fast as the speed of light, maybe, like UV light? No? Okay, that was a try, a try. But what is this package? It's called UV.

Speaker 2:

It was released, like, 20 hours ago. Super recent.

Speaker 1:

Yeah, if you want to put it there on the screen, Bart. Yes? I can actually do it.

Speaker 4:

You can.

Speaker 1:

Should I do it? There we go. So it's called UV. It's extremely fast, and that means you should immediately think Rust, because that is the case. Surprise, surprise. Yeah, yeah.

Speaker 3:

That's a surprise.

Speaker 2:

What is it about, Lucas? Yeah, I don't know. There's such a buzz right now that it's hard to see what it is about. But from what I saw in the tweet, and what the authors say on their website, Astral, it's just a fast pip, a fast pip install.

Speaker 1:

It's like pip-tools, right? Or pip-compile or something. Yeah, here, so this is the actual Git repo; they don't have documentation there yet, it still points to Astral. It's designed as a drop-in replacement for pip and pip-compile. What is pip-compile? pip-compile is more low-level, more transparent, let's say, than Poetry and PDM, and it basically has another strategy for requirements and pinning requirements and whatnot, right? So actually, pip-tools equals pip-compile plus pip-sync.

Speaker 1:

The idea is that the workflow would be that you have a requirements.in and a requirements.txt: the requirements.txt acts as your lock file and the requirements.in acts as your dependencies, right? You're discussing pip-compile now, not UV? Yes, pip-compile, but UV is a replacement for that. Okay. I think I read in another article that it's not only that.
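For listeners following along at home, the requirements.in / requirements.txt workflow being described looks roughly like this on the command line. This is a sketch, not verified against any particular version: it assumes pip-tools and uv are installed, and flask is just a stand-in dependency.

```shell
# Top-level dependencies live in requirements.in (flask is just an example)
echo "flask" > requirements.in

# pip-tools workflow: pin everything into requirements.txt (the lock file),
# then sync the environment to exactly that set of packages
pip-compile requirements.in     # writes requirements.txt
pip-sync requirements.txt

# uv's drop-in equivalents of the same two steps
uv pip compile requirements.in -o requirements.txt
uv pip sync requirements.txt
```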

Speaker 4:

And it's faster than pip or pip compile. That is the main premise, yeah.

Speaker 1:

So on this benchmark: pip-sync, 4.63 seconds; PDM, 1.9; Poetry, one second; UV, 0.06.

Speaker 4:

And where does the name come from?

Speaker 3:

You have a question.

Speaker 1:

I think we were speculating a bit, right? But maybe because it's Astral, and you have ultraviolet light. Super fast.

Speaker 4:

Astral is the company behind it. Astral, yeah. But I was thinking, when I saw the benchmark...

Speaker 1:

I can put it there.

Speaker 4:

Maybe they ran the benchmark and they went like that's fast.

Speaker 2:

Yeah, yeah.

Speaker 3:

That could be a better explanation.

Speaker 1:

Also, the F is close to the V.

Speaker 2:

Maybe they wanted to type that and then it was like oh yeah, that's a good name.

Speaker 4:

But it's an alternative, a faster alternative to pip-compile.

Speaker 1:

Yes.

Speaker 4:

Is it a better one as well, or is it just faster? Because this seems a bit pedantic: I'm going to install with Poetry, it's like one second, and now it's like zero seconds. How often do you install thousands of packages at the same time?

Speaker 1:

But I would argue it was the same thing for Ruff originally, right? People were like, oh, this is a very fast Python linter. That was the tagline. And even for me, I didn't use Ruff for a long time, because it's not like I was running out of time to develop because I was linting so much, you know. So I think it's along the same lines, in my opinion. Maybe there's more to it; maybe once you use it you're going to be so much in love that you don't want to go back. For me, the thing that sells me on Ruff is not actually the speed, which is what they advertise most, but that it's a one-stop shop for everything.

Speaker 3:

Yeah, maybe this could be the selling point of UV, right? You have one shop for all the tooling you need to develop and deploy. So this could be the real value, rather than being blazing fast.

Speaker 2:

Maybe. It's a good hypothesis, but I think, if you open up the Twitter thread (I'm not sure if you can), one of the premises the author tries to develop is that it's like Cargo for Python, the stepping stone towards a Cargo for Python. That's the author's idea.

Speaker 1:

Yeah, I also feel like that's big. There's also Rye, right? Another one.

Speaker 2:

Yeah, oh yeah.

Speaker 1:

That's also Rust, or Cargo, Cargo-inspired. Well, at least that's what was said before. So I think it's not a new idea. I think this one is really kind of saying, this is an alternative to pip-compile, pip-tools, which kind of falls in line with the previous one, right? Like Ruff. I don't know if I'm being too blunt here, but it's not like "I have a great idea, let's do this." It's "I'm going to take an idea that already exists and I'm just going to rewrite it in Rust to make it super fast." Which also... I know, Vitaly, you shared a video before from Anthony. What's his last name? Sottile. Sottile.

Speaker 3:

It sounds really Italian. Yeah, when you say it too. In Italian it means thin: sottile, sottile.

Speaker 1:

Oh, actually it's a word. It's a word, yeah. It's like "subtle," yeah.

Speaker 3:

And yeah, indeed, I think there is a sort of trend, right? A lot of open source software has been released in the past 10 years written in Python, for example, and now we have a new kid, Rust, and a lot of companies, or even individuals, are taking an open source package, an open source tool, and rewriting it in Rust. Which is okay, right, because then you have a new tool, you have better performance, and you release it for, let's say, the open source community. But what if you build a commercial company around this? Is it stealing or not?

Speaker 1:

Yeah, I think... I don't think I would say it's stealing, but I think the argument the guy also makes is that the company is not giving back to the projects that set their standards. And he's also one of the maintainers. I think he's one of the big guys behind flake8.

Speaker 1:

Yeah, flake8, exactly. And I think his struggle... I mean, he's upset about it, but he's not bashing Ruff either. He also says that Ruff is an incredible piece of software and everything. But his struggle is that he doesn't feel motivated to contribute to the space anymore, because there are people making a lot of money and there's no financial attribution going back to the original people. And I still think we need people who want to keep investing in this, right? Python is a dynamic thing; there will be new standards, new things that need to be developed. But if everyone's going to be like, well, I'm not going to do this, because if I do, someone else is just going to take what I did, reimplement it in Rust and make a lot of money... Is it good or is it bad, right?

Speaker 3:

Really hard to say, indeed. Because at first I was like, ah yeah, he's right, let's see how this situation evolves. But also, having a big company behind a project, making it available to a lot of people, making it stable so that people can adopt it, is positive for the whole industry and for the open source movement as well. For example, I was thinking of, let's say, the Linux project, right? It was fully open source, but at first it was not so easy to manage, to install, to distribute. Then companies like Canonical, with Ubuntu, created an OS around it, collecting all the pieces of software together into a distribution, and it got adopted by a lot of people. So it created a sort of awareness about open source, about all these kinds of distributions, and Canonical itself is also using the same technology for commercial reasons. And as a user, I'm happy about that, because they can continue to support the product, I get quality software and I can still use it for free, which is always positive. What do you think about it?

Speaker 1:

I think it's the reality. In any case, I think we can debate whether it should or shouldn't be like this, but it is the way things are today. If you're doing something open source, you are kind of accepting that if someone takes it and makes a commercial thing and they can pull it off, it's not illegal; it's something that's in the cards, right? Indeed, I think you can have the same idea but implement it better, and that makes a big difference, which is maybe what happened with Ruff, right? Maybe what happened with UV as well. And everyone had access to the same thing; anyone could have started a company. True.

Speaker 4:

I think Anthony's main gripe was that there is more or less a replacement now for flake8, which basically takes all the learnings from flake8, because they implemented a lot of these PEP standards in a linter. Ruff took that, and now they're making a living out of it. That's basically what it is, right? There is a company behind it now, which feels a bit weird here.

Speaker 4:

I would perhaps also be demotivated to do much in the original flake8 realm after this. But as to the company behind it, I still need to see what Astral will be like as a company behind a linter. I mean, how can you commercialize a linter?

Speaker 3:

That's true.

Speaker 4:

That's maybe an old discussion.

Speaker 1:

Indeed, indeed, but I agree with you as well, Vitaly. I think it's good to have companies that promote open source and can give it a nice product. And talking about companies that promote open source: AMD quietly funded a drop-in CUDA implementation built on ROCm, and it's now open source. Do you know anything about that, Bart?

Speaker 4:

A little bit about it. So I think it's interesting news. In today's world of AI and GPUs and TPUs and whatever, there is only one big player that everybody knows, which is Nvidia, and everybody else is trying to catch up, AMD having been trying to catch up for 20 years already. Well, something like that; I've never known it otherwise. And the difficult thing is that a lot of these implementations around AI and AI-related solutions are built on CUDA at the lowest level.

Speaker 4:

CUDA is a language in which you can basically program Nvidia GPUs, and in order for people to run this stuff on AMD GPUs, they would need to fully reimplement it, because CUDA is for Nvidia, not for AMD. There have been a lot of attempts in the past, and I think this is the latest, to basically make an abstraction layer where CUDA applications become runnable on AMD. And this news dropped, I want to say, the 12th of February.

Speaker 4:

A few days ago. It dropped in my mailbox yesterday. They had been funding, for the last two years, a project called ZLUDA. I think it was originally built to make CUDA applications compatible with Intel GPUs. Someone took up the development, Andrzej Janik.

Speaker 1:

You're very brave in pronouncing this.

Speaker 4:

Can someone pronounce this better? Andrzej? For now I'm going to ignore all the extra letters. So, Andrzej, he took this up and was paid, that's how I understand it, paid by AMD to make it compatible with AMD GPUs. And then AMD decided this year to stop funding him. But he had a contract clause that said that once AMD stopped funding him, he would be able to release this as an open source package, which he did.

Speaker 2:

That's why everybody knows now that AMD had been funding this. Interesting drama.

Speaker 4:

Interesting drama.

Speaker 3:

So there is a okay.

Speaker 4:

But the difficult thing to me is that this is yet another failed attempt to make an abstraction layer that makes CUDA runnable everywhere, or at least on AMD. And I think it's also super hard because, from the moment you make an abstraction layer, you are basically not able to be compatible with all the latest tech, all the latest features: the latest feature on Nvidia probably doesn't exist on AMD and vice versa. So it becomes super hard to build an abstraction layer that allows you to use all the latest features, which, in the end, for this type of development, is extremely important, because you want to optimize everything. And I think that is a bit why this always fails.

Speaker 3:

It's crazy that Apple was able to create a sort of similar piece of software, the Rosetta layer, that basically translates x86 program instructions on the fly for their architecture, and it's working so well. But this latest work for GPUs failed.

Speaker 4:

But of course they have full control over their environment, right? They went from the x86 CPU architecture to the ARM M1s and M2s, but they have full control over how it looked in the past and how it looks now. Here, you need to make something compatible across two highly competing companies, which is not as easy, I think.

Speaker 1:

But I was thinking big picture: it's good for both of them, in a way, because more people hopefully will be building things for them, right? If you have a common layer, you'd hope that would encourage people to put more work into these things, make more applications, bring more attention, all these things. If it's less fragmented, I would hope that would move the field forward.

Speaker 4:

Yeah, I think the challenge is that the gap between AMD and Nvidia is so big that, from the moment you start developing something custom at the level of CUDA, low-level, really a solution that's close to the GPU, you're not going to say, I'm going to do this for AMD. You're always going to say, I'm going to go for Nvidia. Unless...

Speaker 4:

AMD says to you, I'm going to subsidize you, you're going to get everything for free, and maybe then. But in all other cases you're probably going to go and develop for Nvidia. There's such an imbalance that it's hard to correct.

Speaker 1:

Yeah.

Speaker 3:

I agree. Probably only companies like Meta and Microsoft are trying to develop custom hardware for accelerated computation, scientific computation, but AMD is completely out of that game right now. Let's see. They are probably only competitive in the gaming industry, since they power the latest consoles, like the PS5 and Xbox.

Speaker 2:

Yeah, it's still weird that we're so overfit as an industry to CUDA. Like, that does not feel healthy.

Speaker 4:

Yeah true.

Speaker 1:

Yeah, that's also my feeling: just Nvidia, just CUDA.

Speaker 2:

Yeah, it's a monopoly, basically.

Speaker 1:

But also, to be fair to them, they've been there for a long time, right? Even before this big AI wave, let's say, it was always Nvidia, always from the beginning, you know.

Speaker 3:

CUDA, I think, was released years ago, and, yeah, early software like MATLAB was already trying to use the acceleration from CUDA.

Speaker 1:

Yeah, and Nvidia is also in the news, no, Bart? Yes: Nvidia's Chat with RTX is a promising AI chatbot that runs locally on your PC. So they're not sitting on their thumbs either, right?

Speaker 4:

Yeah, they released a small video demo recently, on the 13th of February, where you can download their Chat with RTX, and it's basically a local LLM. They've created an application that allows you to speak to a local LLM, one that is also, I assume, more or less optimized for their hardware. When you look at the demo, it's a very short demo, it gives very much a ChatGPT kind of vibe. And why did I put it on? You hear today a lot of companies, like Dell, like Microsoft, talking about how your next computer is an AI computer, which I think is a very vague definition. But I do think this natural-language chatting with your computer, with your documents, with your appointments, with your whatever, will become a big part of your workflow. And I think, with the work that companies like Nvidia are doing here to also make that practical from a hardware point of view, it exemplifies a little bit what the future can look like.

Speaker 1:

Maybe just to be more concrete, off the top of my head: what is an AI computer?

Speaker 4:

Well, that is the big question I have as well, of course. There are a lot of companies today that are making statements, that are releasing AI computers, where everybody still needs to see what it actually is. But I think what we will see is a combination of hardware, which is already there to some extent, and software that basically makes LLMs much more exposed to your day-to-day workflow, versus having to switch over to ChatGPT, which does not have access to your local files, does not have access to anything.
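As an aside, the "chat with your local files" pattern these tools use is usually retrieval plus an LLM: first find the documents relevant to the question, then paste them into the model's prompt as context. A naive sketch of the retrieval half (the filenames, contents and scoring here are invented for illustration, not how Chat with RTX actually works):

```python
import re

def retrieve(query, docs, k=2):
    """Rank documents by naive word overlap with the query; return the top-k names."""
    q_words = set(re.findall(r"\w+", query.lower()))

    def score(text):
        # Count how many query words appear in the document
        return len(q_words & set(re.findall(r"\w+", text.lower())))

    ranked = sorted(docs, key=lambda name: score(docs[name]), reverse=True)
    return ranked[:k]

# Toy corpus standing in for files on disk
docs = {
    "taxes.txt": "2023 taxes, return filing and deductions",
    "trip.txt": "flight to Tokyo in March, hotel booking",
}

# The top hit would then be handed to the LLM as context for its answer
print(retrieve("where are my taxes?", docs, k=1))  # prints ['taxes.txt']
```

Real systems replace the word-overlap score with embedding similarity, but the shape of the pipeline is the same.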

Speaker 1:

Yeah, this is on our computers, but we had the Rabbit R1, which is a similar promise for mobile devices, right? A natural-language mobile device kind of thing. Yeah, I also think it will be interesting to see in a couple of years how kids are using it, because I feel like there are some concepts... it sounds so old, right?

Speaker 1:

Yeah, because actually I was listening to, I think it was a podcast or an interview or something, and they were saying that the concept of a file system is something that, for us, is a given. But the kids were like, why do you need a file system? You just go in the app and look for your files, you know.

Speaker 4:

Do you still know where the Save button comes from?

Speaker 1:

Come on, I know.

Speaker 4:

Have you ever used it?

Speaker 1:

I think so.

Speaker 4:

Oh, you think so? That's a bit weird.

Speaker 1:

I mean, I used it. There was a game, I know for sure my brother played it more, but I think it was called Sokoban, or at least that's how we said it in Brazil. It was on a floppy disk and you had to move boxes around and stuff like that.

Speaker 4:

And was it the small floppy disk or the big?

Speaker 1:

Probably the small one, probably the small one, I don't know but it looked like the Save icon, you know.

Speaker 4:

No, that's the small one.

Speaker 1:

But yeah, I think it was called Sokoban. Yeah, you know.

Speaker 2:

I remember it from the AI benchmarks or something, Sokoban. It's a typical puzzle thing.

Speaker 4:

Yeah, yeah, yeah.

Speaker 2:

They tried to solve it with reinforcement learning, I think. I do remember it for sure. Or with these path-planning algorithms, because, yeah, you push those boxes.

Speaker 1:

Yeah, yeah, so you had to basically put it there, but it didn't look like that; the UI was worse. It was just like an X, literally just a cross; there was not a person. Yeah, that's the one. That's the one, Sokoban, yeah. But I'm curious, right, because I think a lot of things that we find very intuitive today, once you have this AI computer, we're going to be like, oh yeah, but what is "save"? And it's like, yeah, don't worry about it, you know, it's there, you just ask and it will give it to you. So, curious, curious to see what's going to happen. And I also saw in the article that, at least in the demo, they actually selected between Mistral or something.

Speaker 4:

Yeah, indeed yeah.

Speaker 1:

So you're still running those models, it's just on your own machine.

Speaker 4:

And it's still visible to the end user today, but probably not in the future, right?

Speaker 1:

Yeah, but then, I guess they're using the small versions of those models? And maybe... what are they doing? There's some magic there, I guess.

Speaker 4:

And probably fine-tuned, well, maybe not today, but in the future, fine-tuned to the tasks that you would do locally with these things.

Speaker 1:

Yeah, does that mean we're going to have a Windows comeback? I see we all have a Mac here. Well, that's the Nvidia stuff. There's also...

Speaker 4:

maybe Apple is also working on this.

Speaker 1:

Maybe it is, Bart. I see what you did there.

Speaker 4:

Nice.

Speaker 1:

Apple is working on something, right? What is Apple working on? Apple is reportedly working on updates to Spotlight and Xcode. You, Bart, you know about it, don't you?

Speaker 4:

Well, maybe explain to people what Spotlight and Xcode is.

Speaker 1:

Spotlight... actually, I would show this, but this is not really Spotlight, this is Raycast, that's why it's a bad example. Spotlight is: if you hit Command-Space on an Apple machine, you have the search where you can find your files, or programs, or something. This is not Spotlight; this is actually Raycast, which is a replacement for it. I am a fan; I like Raycast better. But Spotlight is the one that comes installed with your Mac.

Speaker 4:

And Xcode is basically Apple's development environment, right?

Speaker 1:

Yes, if you want to build applications for iOS. I think that's it.

Speaker 4:

And why I put this on is that Murilo was complaining about Spotlight the other day, and that, instead of Spotlight, he uses what is called Raycast.

Speaker 1:

I was complaining.

Speaker 4:

Which is a bit of a smarter alternative to Spotlight, right? That's maybe a good way to put it.

Speaker 1:

Yes, but it's not just AI stuff. It has other capabilities, other things you can do with it, right?

Speaker 4:

And I think Apple heard this, and then the following day published this.

Speaker 3:

There's a point with Murilo, right?

Speaker 4:

Murilo doesn't like Spotlight anymore. We need to do something.

Speaker 2:

Yeah, exactly.

Speaker 4:

And then this came out, and it basically says that Apple is working on bringing LLM-type features to Spotlight and Xcode. For Spotlight I can imagine what it would be: search your files with more intelligence, where in natural language you can say, I want to plan something in my calendar, these types of things, send an email. I'm not sure what it will look like for Xcode, for example; I have a harder time imagining what it means there.

Speaker 1:

Yeah, I think it's the development of an AI-powered code completion tool, similar to Microsoft's Copilot. So I guess it's really code completion. Yeah, code completion.

Speaker 3:

But then I'm wondering: they were the first to release a sort of assistant, Siri, right?

Speaker 1:

Yeah.

Speaker 3:

Then it basically died.

Speaker 4:

Siri still exists, but it gets worse every month.

Speaker 3:

Yeah, it's like. I don't know if anybody's using it anymore.

Speaker 4:

I do, with Apple CarPlay.

Speaker 3:

CarPlay, it's called, right?

Speaker 4:

But the thing is, every month you need to pronounce things more clearly, and shouting harder also helps. So I truly hope that Siri gets better. Like, Siri should be better in this day and age.

Speaker 3:

Yeah, indeed, because they were the first, right? And now, I mean, Siri is just something that automates calling something or someone, and yeah. True.

Speaker 1:

Yeah, I think also... I mean, you also have the ChatGPT voice thing on your phone, right? So if you compare them, it's like a world of difference, right? A world of difference, yeah. But you only use Siri when you absolutely have to, right? Like when driving, you cannot say, hey ChatGPT, so you have to. You know what I'm saying?

Speaker 4:

There's no trigger, right? No, but I would really enjoy it if I could say to Siri, hey Siri, open up this podcast in this app, for example. What if it's, hey Siri, open ChatGPT, and Siri's like, nope, not going to do it? But with the ChatGPT mobile app, the only thing you can do is have a dialogue with it via voice, which is nice, which is cool.

Speaker 3:

Which is cool.

Speaker 4:

It can't do anything for you.

Speaker 3:

I remember... Murilo, do you remember Cortana from Windows?

Speaker 1:

Yes, I remember.

Speaker 4:

You're also dating yourself, huh Vitaly.

Speaker 3:

Yeah, I'm dating myself.

Speaker 1:

But why do you? I feel like you targeted me now.

Speaker 3:

No, no, no, no, because... well, sorry, Murilo, I didn't want to say that you are old as well, but you are. No, I'm joking. But once, trying to ask Cortana something, instead of saying hey Cortana, I said hey Siri, and there was a sort of animation, and then the assistant said: ah, you are probably wrong, but I'm not offended.

Speaker 2:

It makes these kinds of jokes. Yeah, that was a joke, so yeah, that's crazy. Okay, yeah, it's crazy how far we've come with all these things.

Speaker 1:

I was also thinking, yeah, it's insane: AI-generated images. If you go to Data Topics, actually, maybe I'll put it here on the screen. If you go to Data Topics, before it was Unplugged, we had the tools, we had a lot of things, right, and you can actually see some of the AI-generated things where we were like, whoa, this is crazy.

Speaker 4:

Yeah. And then now, these days, you're like, what? I mean, you really need to squint to see what it was.

Speaker 2:

It was like a year ago; that's not so long ago, right?

Speaker 4:

Exactly, this is from July. A panda, or something, I think?

Speaker 2:

Jesus what.

Speaker 4:

That was state of the art, huh.

Speaker 1:

We were like, whoa, wow, this is cool, it knows what a panda is. Now you really need to squint, like, okay, yeah, it's a panda?

Speaker 2:

Yeah, like I need to zoom out, you know.

Speaker 1:

Really, you have to squint. It's a panda.

Speaker 2:

Oh yeah, it's a panda there.

Speaker 1:

Yeah, okay, right, yeah. But it's crazy how far we've come, right? And I'm also saying this because recently we got Sora, from OpenAI.

Speaker 4:

The whole world is talking about Sora.

Speaker 1:

I know, right? So we couldn't not cover it, or we'd have FOMO. We had to also be in it. So, what is Sora?

Speaker 2:

Anyone? I don't know. Sora is everything.

Speaker 1:

So it's from OpenAI, right? It creates video from text, and that's the reason it's making so much noise: it's not even images, it's video.

Speaker 3:

It's video. Yeah, it's insane.

Speaker 1:

So from just a piece of text, and it's actually realistic, right? Super realistic. I think when you're creating images, sometimes you get the more, how do you say, icon-like, less realistic results. There are different styles, but Sora seems to be extremely good at very realistic things. So if you're on the live stream, I can show some of the things here, like a prompt: "A stylish woman walks down a Tokyo street filled with warm..." blah, blah, blah, and you see it here, and it really looks crazy.

Speaker 4:

Really, really, really good. It looks really good, and from a very limited prompt; that's what I noticed. It's a very limited prompt, and you also see all the other things that are moving around the woman walking down the street. Like it has some understanding of how things should function in the world, which is crazy for me to think about. Yeah.

Speaker 1:

I think you really need to look very closely at this one. I don't know, maybe if you look at the shoes and the way she walks, you can see it, because the camera angle also changes. It's just a bit of a janky angle.

Speaker 2:

Yeah, yeah, look at the shoes. And the reflections; actually, if you look at the reflections, it's like... but actually I thought it was pretty good.

Speaker 1:

Oh, look at this.

Speaker 2:

Yeah, a lot of the it's crazy, yeah, it's crazy.

Speaker 3:

Yeah, yesterday, after I saw this, I was like, no, this can't be true. Is it real, then?

Speaker 2:

Yeah.

Speaker 3:

Because at first these were simply some random videos on X. So okay, maybe they are real, maybe they are not, but yeah, it's insane.

Speaker 4:

But this is such a huge step up from what the state of the art is now, like with RunwayML and stuff like this, yeah.

Speaker 3:

Yes. Also, Bart, coming back to, let's say, your sentence from before: if you go up, Murillo, and click on the technical report, they are defining these video generation models as world simulators. So they are actually able to simulate the physics and how different objects interact in the scene. And it is insane.

Speaker 2:

Yeah it's crazy.

Speaker 1:

Yeah, but I thought about it as well. There are some, like this one, the woman in Tokyo: once she gets closer, I feel like it reminds me a bit more of a video game.

Speaker 2:

Yeah.

Speaker 1:

But it's very, very, very realistic. Like you see some other ones, like the prompt: historical footage of California during the gold rush, that's it.

Speaker 2:

And you see, like so much stuff, so much detail everything going there and a lot of them.

Speaker 1:

You really need to know that this is not real; you really need to look for stuff there, right? But even things, for example in the first part. The second example is like giant woolly mammoths approaching, so it's like snow, and you see the snow powder coming up like smoke.

Speaker 4:

It's not something easy to do, like the physics of it, you know, fluid dynamics and all these things, right? But where does the smoke come from? Isn't it in the prompt?

Speaker 1:

Let's see: several giant woolly mammoths approach, snow-covered trees, dramatic snow-capped mountains in the distance.

Speaker 4:

Not really. The video with the car, so it's like mammoths walking towards us ice cold and they let out a huge fart. Sure.

Speaker 2:

I had a different interpretation.

Speaker 3:

But for example, Murillo, I don't know, I just sent you a link about Lumiere, I don't know if I pronounce it correctly, the model from Google, which should be a competitor. It got released a few days ago, actually a few weeks ago, and already back then the samples were quite good. But now we are totally on another level with Sora.

Speaker 1:

Yeah, so these are the videos that they created, right Like. So you see some things here, but, like you see, it's not as fluid, it's not like.

Speaker 4:

We used to think this was impressive.

Speaker 1:

Yeah, yeah, but that's the thing we're getting very like Back in the day.

Speaker 2:

Yeah, like two weeks ago, back like two weeks.

Speaker 1:

This is so January time for the world.

Speaker 2:

You know, it's like, yeah, it's building so fast.

Speaker 1:

Yeah, it's pretty crazy, it's pretty impressive, and there are also some funny ones on the Sora what?

Speaker 4:

I find amazing is that OpenAI keeps outpacing the market.

Speaker 3:

How is it possible?

Speaker 4:

And it was actually, I think, a few days ago that Sam Altman was talking at one conference or another, where he made a statement about GPT-5, about which there is much rumor. There was even a rumor in the community that we are being limited by resources, by the architecture, that we're not sure we can improve on GPT-4. But Sam Altman's statement was that GPT-5 will basically be better across all domains.

Speaker 1:

All the benchmarks On everything, all domains, like video, like I need to do video.

Speaker 4:

Well, no: everything that GPT-4 does, GPT-5 will do better. That's more or less the statement that he made.

Speaker 1:

Yes, here it is: GPT-5 is better than everything across the board, says OpenAI CEO Sam Altman. Which is a bold statement, but honestly, like you said, it can be. You can't really question them.

Speaker 4:

I don't know.

Speaker 1:

It's a very Brazilian comment of me, but it's like, back in 2006, Ronaldinho, the football player, he had this... we always go back to Ronaldinho.

Speaker 4:

At some point, we always go back to Ronaldinho.

Speaker 1:

But he had like these crazy football videos that he was like kicking the ball on the crossbar.

Speaker 4:

And everyone was like it's fake, no.

Speaker 1:

And then people are like, you're a professional athlete. People who play with him say, like, yeah, with him you never know. You can never say it's fake, because the guy is just so good that you just can't question it.

Speaker 1:

Like, it looks really fake, but you don't know, maybe it's possible. And I think the same thing with this: it's a really bold statement, but given the track record of OpenAI and what they've done and how they keep reinventing, it's like, yeah, maybe it's all right, you know.

Speaker 3:

So you are saying that back in 2006, Ronaldinho had Sora and it was exactly yes, no, no, no, no.

Speaker 1:

Everything is possible.

Speaker 3:

But I tried to extrapolate a bit how this model works from the technical report, although I'm a bit disappointed they are not releasing in-depth papers anymore. For example, for Lumiere we have all the details, but not for Sora. They are both diffusion models, and they are very clever, because instead of generating one image, they generate a batch of images. So the diffusion process that goes from random noise to images is done on batches of, I don't know, 32, 64, it can scale up, of images from the same video.

Speaker 4:

So these images are the frames? A batch of images at a time.

Speaker 3:

Exactly, it generates batches at the same time, and those batches are frames. So by concatenating these frames together, you can generate videos. So the mechanism behind it is the same, but Google uses a U-Net with some attention mechanism, so it's a convolutional network at its base, while Sora says, okay, this is a diffusion transformer, and that's why they can scale it up. So can they get these kinds of results simply because of scaling, because it is a bigger model, because they have a bigger training set? And then the question is: where did they find so many videos, right?
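The mechanism described here, denoising a whole batch of frames jointly and then concatenating them in time, can be sketched in a toy way. This is a heavily simplified illustration of the idea, not Sora's or Lumiere's actual code: `toy_denoise_step` is a made-up stand-in for the learned network, and all shapes and step counts are invented for the example.

```python
import random

# Toy sketch: a video diffusion model starts from noise for ALL frames at
# once and denoises them jointly; the frames are then concatenated in time
# to form a clip. `toy_denoise_step` stands in for the learned network.

def toy_denoise_step(frames, t, num_steps):
    # Pretend "denoising": shrink the noise a little at every step.
    scale = 1.0 - 1.0 / (num_steps - t + 1)
    return [[pixel * scale for pixel in frame] for frame in frames]

def generate_clip(num_frames=16, pixels_per_frame=64, num_steps=10, seed=0):
    rng = random.Random(seed)
    # Pure noise for the whole batch of frames, not one image at a time.
    frames = [[rng.gauss(0, 1) for _ in range(pixels_per_frame)]
              for _ in range(num_frames)]
    for t in range(num_steps):
        frames = toy_denoise_step(frames, t, num_steps)
    return frames  # len(frames) == num_frames -> a short "video"

clip = generate_clip()
print(len(clip), len(clip[0]))  # 16 64
```

The point of the sketch is only the shape of the computation: every denoising step operates on all frames of the clip together, which is what lets the model keep them temporally consistent.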

Speaker 1:

Yeah, yeah, I have this question as well. This is becoming more commercial, right? Like OpenAI, you said they're not releasing as much; maybe there's also a commercial reason there. I'm also wondering if this research is getting more commercial or industrial, because you need this hardware, you really need all this compute, and it's something that universities don't have access to.

Speaker 3:

Definitely, definitely, of course. OpenAI has Microsoft backing them, so they can get almost unlimited compute power on the cloud. But I'm also wondering...

Speaker 1:

Like, let's imagine that someone comes up with a new architecture that is actually better and scales better, but because they don't have the compute available, they cannot make noise about it.

Speaker 3:

True, yeah. Maybe they don't have the training data, they don't have the compute.

Speaker 4:

And if we speculate, what kind of training data do we think went into?

Speaker 2:

There was one speculation on Twitter, actually, that I saw, I think a couple of days ago. You cannot quote that, it's just guys speculating, but one of them said that it might be game engines, because it reminds you of them. If you look at it like this, it makes sense, and this is a good example.

Speaker 4:

It looks like.

Speaker 1:

GTA, basically, if you've ever played GTA. So if you're not on the live stream: this is basically a Jeep running through nature, and the camera angle, right behind, like floating around behind the car. It really looks like a video game.

Speaker 3:

So maybe that's it: they created the training set like this, simulating videos via game engines.

Speaker 2:

Have a lot of games.

Speaker 1:

You can, actually. Yeah, like maybe, if you have hyper-realistic games. Maybe you can even automate it, like you don't need people actually playing the video game, right? You can have agents or something.

Speaker 2:

Yeah, what you can actually do, I think, is translate text... Okay, yeah, let's not rebuild this on the fly, but yeah.

Speaker 3:

That's why I don't like OpenAI a lot anymore. Because GPT-4, okay, we know more or less how it works, but not all the details. And now we have this new model: how does it work? Okay, it's a diffusion transformer, but what are all the details? So I think they should rebrand the company a bit.

Speaker 1:

Just AI, yeah. No 'Open'.

Speaker 4:

Maybe this is a nice segue to go to an article that was released yesterday by Nilay Patel, editor-in-chief of The Verge. He has an article on how AI copyright lawsuits could make the whole industry go extinct. I think the biggest one that we've seen in recent times, and we've covered it a little bit, is the New York Times lawsuit against OpenAI, where the New York Times basically has a very strong case: they can really show that, when using certain prompts, you get exactly the same content as their articles. What the article says, more or less, is that in the community and at the major companies like OpenAI, everybody seems to be more or less at ease, like the storm will pass, while from the legal point of view, everybody thinks this could be huge, what they call an extinction-level event for these types of companies. Because, and I think for a lot of these things we're talking about US law, what all these companies are basically relying on is this: there is copyright law, but there is basically an escape hatch.

Speaker 4:

You have a bit of an escape hatch where you can say: this is fair use, and then I can ignore the copyright law. This is, for example, using articles in a classroom; these types of things are fair use. Or a mashup on YouTube, that's fair use: you don't use the full thing, but you use bits and pieces to create something new.

Speaker 4:

Fair use basically hinges on four factors: the purpose and character of the use, the nature of the copyrighted work, the amount and substantiality of the portion used, and the effect of the use on the potential market. And they're arguing that for all four of these, it is very easy to make strong arguments that this is not fair use, and that's why everybody is becoming a bit anxious: what will this mean for the market? But also the other way around: if the New York Times sees this through, goes through court, goes to the highest level... Because what can also happen is that there's a settlement, right? OpenAI says:

Speaker 4:

Ah, here's this amount of money, don't bother us. But I think we are at a stage where companies like the New York Times and others are saying: this is so big, this threatens our viability to live longer, which might make them go all the way to court. But it's also very hard to predict what it would do, because we could see different rulings coming out that affect copyright law as it is known today, and that might have a lot of unintended consequences as well, for artists, for creatives. So it's very risky, from that point of view, for the New York Times to push it all the way through. Like bite the bullet, right, and then what's going to happen?

Speaker 4:

Yeah, no one really knows. But something needs to happen, because at the same time, for a lot of these companies, and I think the New York Times is a good example, this changes everything too, for what they are. And leave it in the middle whether it's good or bad, but this is a major event for these types of companies.

Speaker 1:

Yeah, I don't know. Maybe, does anyone here have any hopes or predictions on it?

Speaker 4:

Hard to say. Well, maybe to make the parallel that we made with the video of Anthony: Ruff just builds on something that already exists and then benefits from it. You can make the parallel here that OpenAI builds on something that exists and then profits off of it. I don't think anyone is against what is happening at OpenAI, but what I hope will in some way or another take shape is that the artists who create the works that are being used by companies like OpenAI also benefit from it.

Speaker 4:

Yeah, true. That is the hope I have for the future. And this is for artists, but also journalists and also authors.

Speaker 3:

I like your positive view on things, because, for example, I don't know, when ChatGPT got released, after a couple of hours I was scared because of the content I was seeing online. I was like, okay, is this real or is this ChatGPT-generated?

Speaker 4:

And at the time it was text okay.

Speaker 3:

Then image models got better, right? Then we saw all the pictures generated by Midjourney, for example; they were super realistic, but videos were still okay, right? Now, also videos, and this I think is really dangerous in general. For example, there will be new elections in the US, and there will be people...

Speaker 4:

Well, and the interesting thing of this, because you're hinting towards the deepfakes, right, the interesting thing is that this is not even covered by copyright law. Deepfakes are not prohibited by copyright law at all. You need to say: this is defamation of me as a person. But there is no clear law that says you're not allowed to do this. It makes it even way harder.

Speaker 3:

And I think now in the US some things are changing thanks to Taylor Swift. I don't know if you know: everything changed because of Taylor Swift, because there was a period where there were a lot of deepfakes of her appearing online.

Speaker 4:

Didn't notice.

Speaker 3:

Yeah, and she's appealing, let's say, to regulators to change something.

Speaker 1:

I'm trying to find it.

Speaker 4:

Don't Google this. Are you honest that you don't know what this is about?

Speaker 1:

I mean I can have a guess right, so I'm not Googling on the page.

Speaker 4:

There were basically a lot of deep fake news of Taylor Swift which triggered a lot of discussion, which made it very visible what are the negative side effects of these technologies to the public at large.

Speaker 1:

Yeah, but I also think, we talked about the elections, for example. In the last elections there were the Russian trolls or bots or whatever, you know, that would have to do this and this manually. And now creating fake content is easier, right? You can ask GPT: hey, write an article about how Murillo burned down the Dataroots building. And then it comes up with something, right, but it's completely fake, the first thing it comes up with.

Speaker 4:

You need to tell us something.

Speaker 2:

Yeah, it's specific.

Speaker 1:

But there's also the interactive thing now, right? You can very easily create a bot where, if you send a message, a person that doesn't exist will reply, right? So it takes it to another level. And what can we do? It's not very hard to do these things either, in my opinion. I mean, not like anyone can do it, but someone that has some programming skills can.

Speaker 3:

Yeah, but there will be people creating UIs, UX for that, so it will be easier and easier. Yeah, indeed. It's really dangerous, I think.

Speaker 1:

Yeah, yeah, it's a bit not so positive.

Speaker 3:

For example, I'm noticing already that there are a lot of people believing this kind of content. In one of my productive evenings I was scrolling through YouTube Shorts, and since I like football, I get a lot of this content, and I was getting videos from a very famous interview of Zlatan Ibrahimovic with the guy from the US.

Speaker 2:

I forgot his name. Do you know him, zlatan?

Speaker 3:

Yeah, but the other guy, the interviewer, is really famous. And there were some pieces of the real interview, and some pieces were generated by deepfakes and voice cloning. For example, he was saying: oh no, Messi, Messi is trash, right, and this kind of thing. And people were like: oh, Zlatan is able to say something like this? And yeah, it seems really innocent, right?

Speaker 1:

But it's also fooling people. But what if, instead of Zlatan, it's someone else, right, a politician? But sometimes these videos, the Shorts on YouTube, sometimes I see them because somehow some things get recommended to me, and I'm like: this is clearly staged. Maybe it's not a deepfake, right, but I don't know. Sometimes I get a video like: oh, this teacher was doing this and this and this, and now they're trying to show how ChatGPT can program the UI for them.

Speaker 1:

And then it's clearly like a student that just uses the classroom computer, they just go in there and open ChatGPT, and then they act a bit, the teacher gets mad at the student, the student goes and shows this and this. For me it's clearly staged, clearly staged, right? And then sometimes I'm like: it's impossible that no one noticed this, right? But then I see the comments, and it's always comments like: oh yeah, people are not ready for the future. And I was like: man, this is clearly staged. So I'm also wondering about the people, because it's also biased, right? The people that do leave comments, and who knows if those people are real or not, right?

Speaker 4:

Yeah, but how much of a problem is the fake content actually? A lot of what you see today is: people shout something, those people have a following, and the whole following believes it.

Speaker 2:

Yeah, yeah, yeah.

Speaker 4:

Like it's not just the content, the fake content that is a problem.

Speaker 1:

Yeah, yeah yeah, I think it's.

Speaker 4:

It maybe accelerates that problem, but yeah.

Speaker 1:

In a way, I think we need to kind of be re-educated to this reality, that you have to be more skeptical, right? Maybe years ago, if you saw something on the news... Even in terms of when information is available, right? Like I said, my dad, when he was writing his thesis in university, if he wanted to know something, he needed to go to the library and get the book, right? And books have editors; they're not going to just put whatever in there. So there were already a lot of layers kind of saying: okay, this is probably trustworthy information. And now it's a whole different dimension of that.

Speaker 4:

Now you Google what is the shape of the earth and then you get like five different answers on Twitter. But I think the thing is like most people are not going to Google.

Speaker 1:

They're going to Google like, prove that the earth is X and you're going to find it.

Speaker 3:

And then people are like: oh, you see, it's here, you see. It's indeed also a generational problem, right? You saw it during the first wave of COVID, right? At first it was...

Speaker 4:

Pizza gate yeah, before COVID.

Speaker 3:

Or during the first vaccination campaign. Initially people were super skeptical; they were putting euro coins here to prove that they were injecting something magnetic in you. Yeah, I love people.

Speaker 4:

I think this was just Italy.

Speaker 3:

It can be, it can be. Yeah, actually it can be. But people, I think, from two, three generations before us, they were born during the TV era. They believed the TV news, so, as you were saying, there were authors, people taking responsibility for telling the truth there. Now they see social networks and maybe they relate them to, for example, television or newspapers, thinking it's the truth. While for us, maybe, who were born in the internet era, it's a bit easier to spot.

Speaker 2:

We're going to have a better discriminator model or something, I don't know, in our heads.

Speaker 1:

That's the idea, yeah. It's kind of also one of the skills you need to have, right? More and more, the skill of critical thinking: should you just take this as the truth, or should you apply some healthy skepticism? How do you navigate all these things?

Speaker 3:

But also education. I would say yeah, yeah, yeah.

Speaker 1:

I mean, I think it starts... I think I saw, actually, that in Italy they were doing courses on identifying deepfakes, or fake news actually, not deepfakes, in schools, for young children in Italy. There were actually courses for this.

Speaker 3:

Oh, okay For fake news. Good job Italy.

Speaker 1:

I don't know.

Speaker 3:

That's just a hundred bucks.

Speaker 2:

But, yeah, indeed.

Speaker 3:

But these are the risks of innovation, right? So we cannot do anything about it.

Speaker 1:

There's the whole story with dynamite, right? The guy that invented dynamite, he saw its destructive power, and then he was like: oh, he's going to give his money to people that make scientific innovations for good, and that's the Nobel Prize or something. Am I making that up?

Speaker 3:

But no, he's a bit soy so.

Speaker 2:

He's a judge.

Speaker 1:

Exactly. Don't believe anyone, okay? All right.

Speaker 4:

Sam Altman will donate everything to the favelas.

Speaker 1:

Yeah, do it. Merle approves All right. Anything else Maybe Google's.

Speaker 4:

Gemini.

Speaker 1:

Oh yeah, I think we should include that.

Speaker 4:

Blows my mind. At least what they state blows my mind.

Speaker 1:

So what is Google Gemini, Bart?

Speaker 4:

So Google has more or less a ChatGPT competitor, right, which is called Gemini Ultra, and it's called, since yesterday, Gemini 1.0 Ultra. And I think Gemini 1.0 Ultra, even though they state it's on par with GPT-4, it's not really on par, to us a bit. Yesterday they released an article about Gemini 1.5, which is supposed to drastically enhance performance.

Speaker 1:

Why 1.5 and not 2?

Speaker 4:

We have a versioning expert here.

Speaker 3:

GPT 3.5.

Speaker 1:

Yeah, but to me yeah, okay, carry on, it seems random yeah, but to me.

Speaker 4:

I think this is safer. Like, if people say: yeah, but it's not really great, then: yeah, but it's only 1.5.

Speaker 3:

Oh, that's, true, yeah.

Speaker 1:

Safer. Then they have like a 1.6 or 1.75.

Speaker 4:

If it's really good, people go: this is only 1.5, what will 2 be? They're like: it's good anyway. It's like win-win.

Speaker 1:

I think it's smart. Okay, I'll give you that.

Speaker 4:

But what blows my mind is the context length. They say that there is a context length of up to 1 million tokens, and within a research setting they used up to 10 million tokens, Jesus. But what they will release now is 1 million tokens. How is it possible? Which is just crazy.

Speaker 2:

How many pages is this? Like A4 pages with text.

Speaker 4:

I can tell you. So, and it's multimodal. They state in the announcement that this type of context length can include one hour of video, 11 hours of audio, codebases with up to 30,000 lines of code, or 700,000 words. Wow, 700,000 words, wow.
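A quick back-of-envelope check of those figures, assuming roughly 1.4 tokens per English word and about 500 words per A4 page (both common heuristics, not official numbers from the announcement):

```python
# Rough sanity check of the announced 1M-token context window.
# Assumptions (heuristics, not official figures): ~1.4 tokens per English
# word, ~500 words per A4 page of plain text.
TOKENS_PER_WORD = 1.4
WORDS_PER_PAGE = 500

context_tokens = 1_000_000
words = context_tokens / TOKENS_PER_WORD   # roughly 714,000 words
pages = words / WORDS_PER_PAGE             # roughly 1,400 A4 pages

print(f"~{words:,.0f} words, ~{pages:,.0f} A4 pages")
```

Which lines up with the 700,000-word figure from the announcement: on these assumptions, the window holds on the order of 1,400 A4 pages of text.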

Speaker 1:

So it's like basically solves the problem of.

Speaker 4:

And what that allows you, and that really blows my mind, is that, because it has this large context length, and because it's apparently also able to actually use this large context length, it has a huge amount of performance on in-context learning. So what they did as an example is that they...

Speaker 1:

Very quickly, what is in-context learning, for the people that don't know?

Speaker 4:

In-context learning is where you add a lot of information to your original prompt, and your prompt is limited by the context length, basically. So if you have a very large context length, your initial prompt can be very, very large, and in-context learning means you give information in that prompt that the model can use later on, in subsequent prompts. It's basically like zero-shot learning: you don't fine-tune your model, you don't retrain the model, you add all the information in the context. And they give an example where they provide a lot of language information about the Kalamang language. Never heard of it; it's a language with fewer than 200 speakers worldwide. They gave a lot of translations in-context, within a single context, and based on that single context and the information inserted there, it was able to translate everything from Kalamang to English and vice versa, which is absurd to think about, right?
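The Kalamang demo boils down to this pattern: all of the "training data" lives in the prompt itself, and nothing is fine-tuned. A minimal sketch of building such a prompt, with made-up example pairs (not real Kalamang) and no actual model call:

```python
# Minimal sketch of in-context learning: the "training data" is inlined in
# the prompt; nothing is fine-tuned. The example pairs below are invented
# for illustration, and sending the prompt to a model is omitted.

def build_translation_prompt(examples, query):
    lines = ["Translate from language A to English.", ""]
    for src, tgt in examples:
        lines.append(f"A: {src}")
        lines.append(f"English: {tgt}")
        lines.append("")
    # The model is expected to complete the final "English:" line.
    lines.append(f"A: {query}")
    lines.append("English:")
    return "\n".join(lines)

examples = [("foo bar", "hello there"), ("baz qux", "good morning")]
prompt = build_translation_prompt(examples, "foo qux")
print(prompt)
```

With a 1-million-token window, the `examples` list could hold an entire grammar book and dictionary, and the model then translates from that context alone, which is essentially what the Gemini demo claims.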

Speaker 2:

That's absurd to think about.

Speaker 4:

Yeah, it's like this Super impressive?

Speaker 1:

Yeah, indeed, indeed, that's crazy, that's crazy. Maybe a question, if I remember well. I'm not an expert, correct me if I'm wrong, but the large context is a problem for LLMs, I guess the traditional LLMs. Can we speculate a bit on the architecture of this? Because I know the state space models were claimed to use the context better. Do you think this is the same architecture? Do you think they do something very different here? What's...

Speaker 4:

here. Actually, let me look this up a bit. Let me just quickly search. So there is a bit of rumor about what exactly is being used, and the rumor is that they were using, what's it called, ring attention.

Speaker 4:

Ring attention, which is a paper that was released at the end of 2023, from Berkeley, from Liu, and I am not knowledgeable enough on this specific paper to give an ELI5 explanation. Maybe we should try for next time. But that this is being used, these are rumors at this point. But pretty cool, pretty cool. Have you used...

Speaker 1:

it yourself, Bart no.

Speaker 4:

So what they say is that they have released it to a bunch of red-teamers, which apparently I am not.

Speaker 1:

Bart.

Speaker 4:

So red-teamers are typically people that are the first ones to get access, that push it to the limits, test whether it's safe to use and behaves as expected. So basically, the cool kids.

Speaker 1:

Except Bart.

Speaker 4:

Yeah, red-teamers, not to be confused with redshirts.

Speaker 2:

What's this?

Speaker 4:

Everybody, no sci-fi fans.

Speaker 1:

So like a Star Trek thing.

Speaker 4:

It's a Star Trek thing actually, but I think it's known more broadly in the sci-fi world. Redshirts are characters that are introduced early in the show and then quickly die, and in Star Trek they typically wore red shirts.

Speaker 1:

They typically like all of them.

Speaker 4:

Yeah, because they were from Star Trek. They were the non-ranked Star Trek staff.

Speaker 1:

So like they did a lot Like the anonymous people, and then they quickly died.

Speaker 4:

They were introduced and then, yeah, when a new episode started, you saw a few people sitting there and you were like: oh, who's that wearing a red shirt? And you knew this person's gonna die.

Speaker 1:

So it's like you get the script, you know? It's like: yes, I'm gonna be in Star Trek! And then: this is your outfit. Fuck.

Speaker 4:

So, not the red-teamers. But it's impressive, it blows my mind. To me, though, the statements they make are a bit on the same level as what we see with the Sora videos. Yeah, it's hard to wrap your mind around.

Speaker 1:

And I think the use case with the language, because I think it's also an endangered language, right? So it's also an interesting use case.

Speaker 4:

Yeah, maybe we can now just send everything in Kalamang to each other. We just translate using you just do it yeah, to keep it alive.

Speaker 2:

Where does?

Speaker 4:

it come from, the language? New Guinea.

Speaker 2:

Yeah.

Speaker 4:

West Papua.

Speaker 1:

I must say I don't know where it is. It's between Indonesia and...

Speaker 4:

Oh there.

Speaker 1:

Yeah, okay, very cool, very cool. And maybe a last question on this. Well, in terms of who's there: there's OpenAI first, Microsoft with OpenAI; Google now is catching up, or trying to catch up; AWS is trying to do some stuff with GenAI as well, I think. It's a different approach, not as commercial; I feel it's more developer-oriented. Apple, like now, we saw the article: very scattered, right. Who else is there?

Speaker 4:

Mistral.

Speaker 1:

Mistral.

Speaker 4:

Something big in the EU is Aleph Alpha, but to me it's a bit less transparent what they're doing. There are a few others. What was the company behind Claude again?

Speaker 1:

I'm not sure, not Claude.

Speaker 4:

Anthropic. Anthropic, yeah.

Speaker 1:

And Meta? Not really. Yeah, Meta has Llama.

Speaker 4:

Llama is true, but I think only the second most used. But I won't put you in a difficult spot there. Yeah, thanks.

Speaker 1:

Bart, I feel very much put on the spot now.

Speaker 3:

I think the most interesting one of all of these is Meta, because all the others are trying to compete on a commercial level with OpenAI and Microsoft, and they are basically losing the competition, maybe except Google nowadays. But on the other side you have Meta, who is developing a lot of new ideas, new models, new research papers, and they are all open. And now even Llama has a very permissive license, so you can use it, you can integrate it in your products. So they are deciding not to fight back on a commercial level but more on a knowledge level. And a few days ago, of course it's a bit biased because it was a tweet from Yann LeCun. Can you say tweet with the new...?

Speaker 1:

Bart doesn't let me usually, but.

Speaker 3:

And.

Speaker 4:

Put an X for it.

Speaker 3:

Yeah.

Speaker 4:

Twitter and X in front.

Speaker 3:

An X tweet, an X tweet, okay. And it was showing that after the release of Llama 2, for example, the stock price of Meta increased. Is it correlated? I don't know, but definitely interesting.

Speaker 4:

And they fired lots of people. Maybe that as well. But I think this is a very, very, very remarkable approach. You can debate, I think we had a discussion on it the other day, what open source actually means here; the Open Source Initiative is having a whole working group on this, to define what open source means when it comes to AI assets. But they have an open source-ish mindset on this, and because of that, you see a lot of usage in the community. Like, there's a big, big community; they're probably by far the biggest.

Speaker 3:

Definitely. And even in the open source world, I think other companies like OpenAI are reusing ideas from Meta. I remember, I think it was in December, Andrej Karpathy, who left OpenAI, there's a point of discussion for today on that, was on stage during a Microsoft presentation describing how LLMs work, and he was showing the results from the paper, the benchmarks of Llama 2. And it's really interesting, right? Maybe they are reusing some ideas from Llama 2 as their differentiation mechanism.

Speaker 1:

But I do think Mistral is doing a bit of both. They're kind of competing with ChatGPT, but they also have the open models for developers; I guess they're trying to cater to both. I feel like OpenAI is the only one that is not open source and is more for commercial purposes, and I guess Google too, but Mistral kind of tries to cater to both. AWS is just for developers, Meta is just for developers, right? Or I guess not just for developers, but it feels like each is catered more towards one than the other, right?

Speaker 2:

Yeah.

Speaker 1:

Well, yeah, let's see, let's see what the other teams give. And also there's Sora, there is RunwayML. I wonder if there's going to be more video stuff as well. All right.

Speaker 3:

Who knows? Next year, who knows?

Speaker 1:

Who knows? And what else is there? I think there's audio generation. Well, audio, yes indeed.

Speaker 3:

Right.

Speaker 1:

Like I heard some stuff there last year but nothing that really made a lot of fuss.

Speaker 3:

It's a bit behind, right? I think so.

Speaker 1:

I think so To be seen.

Speaker 4:

Well, YouTube is working on it together with a number of artists.

Speaker 1:

For audio.

Speaker 4:

Yeah, Together with artists, which is an interesting what?

Speaker 2:

are they still?

Speaker 1:

Big names or.

Speaker 4:

They did state it, but I forgot.

Speaker 3:

Do you think in a few years, when you're at home, bored, and it's raining outside, you'll still say, okay, open Netflix and let's see the catalogue? Or, instead of choosing a movie, you'll type: today I want to see this, this and this.

Speaker 4:

It's crazy to imagine that yeah.

Speaker 3:

It'll create the movie, the music, the script.

Speaker 1:

I don't think that would work for me, because I like movies that are unpredictable. I mean, I guess I could say: make something unpredictable.

Speaker 2:

But then it's like you look wronged.

Speaker 1:

But then it's like, if I say I want to watch a story about this... maybe, unless you say: start the story, I want to be surprised, finish it for me.

Speaker 3:

That could maybe be an approach, yeah. And in the style of your favorite director, yeah.

Speaker 4:

What about music?

Speaker 1:

What is?

Speaker 4:

your favorite movie.

Speaker 1:

Oh, okay, I don't know if it's the favorite, but one movie that I like a lot is called Predestination. Yeah.

Speaker 4:

How does it end? Everybody happy ever after?

Speaker 1:

Not really, you have to watch it. One time I tried to spoil the movie for someone, but it's so complex.

Speaker 4:

But that would be cool, if you could say: I take a movie, let's say The Notebook, a super romantic movie where everybody lives happily ever after, and you just say, generate me an alternative where everybody ends in tears. Something like that: it's more or less the same, but it has surprises.

Speaker 1:

Yeah, okay, this is a bit, it's very much of a cycle.

Speaker 3:

Hold on, yeah, sure. But you could say something like: please fix the last season of Game of Thrones.

Speaker 1:

Oh yeah, Controversial.

Speaker 4:

A reboot of Friends? Also, yeah, also.

Speaker 3:

Okay.

Speaker 4:

Hold on.

Speaker 1:

I'm going to share my screen. I'm going to share this.

Speaker 2:

The future is bright.

Speaker 4:

But actually, if you, like, reboot Friends... it makes me think, because you're saying audio is not really there, but I think audio has been, especially voice cloning of artists, at an extremely high level these days. You have songs of Tupac coming out where, if you didn't listen to the lyrics, which are about today, you would think this is actually Tupac, right? Yeah, this is really high level.

Speaker 1:

There are some things. No, what I wanted to show is, you mentioned whether the movie has a happy ending or not. There's a database called Does the Dog Die? Basically, if you don't want to watch a movie because there's animal cruelty or anything like that, you can go there and check, without spoiling the movie. So you don't feel cheated, like you're going to watch a movie and then there's something that triggers you. You can go there and take a look first. And no, it's not just for dogs, there's a lot of stuff there.

Speaker 4:

Okay, so they can use this as training data, I guess.

Speaker 1:

No, I think it's more like, you're thinking of watching a movie and you want to know: does this happen in it? Does it have a happy ending?

Speaker 4:

A bit of a happy ending, you know. I do think the thought process of what Vitaly introduced here is very interesting: will you use Netflix differently if it can generate content? Or you take an existing movie and just add something random, like everybody's licking an ice cream all the time.

Speaker 1:

And also, this was not a topic, but I'll add it here as well. Actually, you mentioned the movie, and maybe this is not so far off, because this was also something from not that long ago.

Speaker 4:

Yeah, true.

Speaker 1:

This: ASML harnesses AI to make its latest brand film.

Speaker 4:

Yeah, ASML. For people that don't know, they are one of, or the, major manufacturer behind the machines that make chips. And this is a sneak peek of, yeah, I forget exactly, and it's fully AI-generated.

Speaker 1:

I mean, you probably wouldn't be fooled, right? When you look at the animations and you see the changing things, they aren't trying to fool you that this isn't AI-generated.

Speaker 4:

I think they were not part of the red team. No, I probably think so.

Speaker 1:

I don't think so for Sora right. But you see, like what is this.

Speaker 4:

But this is, like, literally, in a year we will have to squint our eyes to see what is happening here.

Speaker 1:

But that's the thing, right? We're saying this now, it's impressive, and this and this, but actually this was a bit before Sora. We've been put to different standards.

Speaker 4:

Now, yeah, exactly, this is so yesterday.

Speaker 1:

But the idea of making a movie with AI is maybe not that far from where we are. So I guess Netflix needs to pick up their game. We have an idea to kill Netflix, basically. All right, cool. I guess, on that note, anything else? Anything else you'd like to share, mention, shout out?

Speaker 3:

Thanks for having us.

Speaker 4:

Thank you. Thanks for being here.

Speaker 2:

I feel left out and it's also a bit weird, because I'm so weird.

Speaker 4:

We can do it.

Speaker 1:

All right, y'all. Thanks everybody for listening, thanks everyone for joining.

Speaker 2:

Oh, there's a chat.

Speaker 4:

Trunet. So someone says in the comments: really curious to see what Runway and Pika are going to do after the Sora release. Would be cool if they also go for Sora's. Well, yeah, this is suddenly a major competitor that steps in and will completely upset the current generative video market. Let's see what gives.

Speaker 1:

Let's see what gives, indeed. Thanks again, thanks everyone for listening. See you all next time.

Speaker 2:

Ciao.