DataTopics Unplugged: All Things Data, AI & Tech

#80 AI Agents Run Wild, DeepSeek Breaks Records, Polars Cloud Expands, and Perplexity Reinvents Search

DataTopics


Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. DataTopics Unplugged is your go-to spot for relaxed discussions on tech, news, data, and society.

This week, we're unpacking everything from AI-powered vacations (or the lack thereof) to corporate drama, and even a deep dive into the quirks of COBOL. Join Murillo, Bart, and Alex as they navigate the latest happenings in data and tech, including:

Speaker 1:

Let's go.

Speaker 2:

You have taste, in a way that's meaningful to software people.

Speaker 3:

Hello, I'm Bill Gates. I would recommend TypeScript. Yeah, it writes a lot of code for me and usually it's slightly wrong. I'm reminded, incidentally, of Rust here, rust.

Speaker 2:

Rush. This almost makes me happy that I didn't become a supermodel.

Speaker 1:

Hello and welcome to Data Topics Unplugged, your casual corner of the web where we discuss what's new in data every week, from trips to pickles, everything goes. Check us out on YouTube. Feel free to leave a comment or question, reach out to us, etcetera, yada yada. Today is the 17th of February of 2025. My name is Murillo, I'll be hosting you today, joined by my faithful sidekick, Bart (hi Bart, yay!), and the one that keeps us in line behind the scenes, Alex (hello!). Hey, Alex, how are you?

Speaker 2:

Good.

Speaker 3:

Good. How are you, Bart? Are you good?

Speaker 1:

Good as well. It's cold, I'm still warming up a bit. Yeah, it got really cold here in Belgium. It's sunny, but I almost froze something off. Say more? No, no. But it is February. I'm waiting for the weather to get better, but here we are still.

Speaker 3:

It's not raining, at least. That is true, that is true. It's just freezing, it's just freezing.

Speaker 1:

Actually, I prefer that to just above freezing but raining. Yeah, true. So maybe let's be positive, let's keep dreaming. And talking about dreaming: we shared our dreams about an AI that will plan our trips for us.

Speaker 3:

Oh, we did yeah.

Speaker 1:

We did talk about it a few times, right, because it's a known annoying problem. It's just grunt work, almost: go here, go there, check this, check that, okay, now try this other thing, okay, this flight, booking cars and all these things. But sadly for us, apparently this future is a bit too far ahead, as the Airbnb CEO says it's too early for AI trip planning. Can you say more about this, Bart?

Speaker 3:

Yeah, this is an article on TechCrunch where the CEO of Airbnb goes in a bit on how they're using Gen AI to improve their service. I think the headline already says it: he thinks that the current state of AI is not ready yet for complex tasks like helping people plan their whole vacation on Airbnb. He compares it a bit to the early state of the internet in the mid-to-late 90s: very promising, but not yet ready to support these kinds of things. Interesting. But they do have plans to go live this summer with better, Gen AI-powered customer support, I think mostly internally, helping customer service handle different languages, that kind of thing, to optimize support and to initially focus on those things before going to automated trip planning. I see, but then the AI part is not for the planning, it's more for languages and all these things.

Speaker 3:

But I think it makes sense. There are so many moving parts to a vacation, you want to make sure that everything is done correctly. Yeah, indeed. But I think, from what I hear as well, it is going to happen.

Speaker 1:

It's just a bit early. There's a lot of promise.

Speaker 3:

Yeah, and I think what we will probably see earlier is AI-powered queries. Like: I want to go on a vacation to a sunny spot, it needs to be a max five-hour flight, I want a bit of an outdoor cabin experience, what can you recommend to me? More like subsets of that trip planning, I think, that we'll see going live sooner than running one query and having it do everything for you. Yeah, indeed. I also think, well, it's the Airbnb CEO.

Speaker 1:

That's a credible source. Airbnb does a lot in tech as well. I think even stuff like Airflow, which is super popular, started at Airbnb, if I'm not mistaken. So yeah, I think the tech stuff is pretty much in their wheelhouse, and it's good to see that they're trying these things out.

Speaker 3:

Yeah, and honestly, these types of companies, Airbnb but also Expedia, will probably benefit a lot from this. For sure, definitely agree. So, yeah, cool.

Speaker 1:

So I guess it will still take a while, but nice to see that people are taking action, because my life would improve tremendously, I think, as soon as something like this is rolled out. What else? Still in the nearer future: Anthropic's next major AI model could arrive within weeks. What do we have here, Bart?

Speaker 3:

Anthropic's current, let's say, state-of-the-art model is Sonnet 3.5, which is, I want to say, roughly a year old today. Maybe we can check. No, it's, let's say, eight to ten months, I think that's a safe statement. And they're working on a new model.

Speaker 1:

That's a quick Google search: Claude, initial release March 2023.

Speaker 3:

Okay, and there are some rumors that within the next few weeks we will see a new model, where the rumor is that it will be a hybrid that can switch between deep reasoning and fast responses, and that you'll have a bit of a sliding scale to choose: do you want fast performance at a lower cost, or do you want very good performance, probably...

Speaker 1:

...slower, at a higher cost. I see, so quality versus speed, kind of thing. Yeah, okay. And Anthropic then, I guess, in this sense, is really a service. They're not just putting out a model, it's really going to be a service that, depending on your selection and the type of query, routes to different models. Interesting. Interesting as well because, arguably, Claude is still the best model for programming, at least for developers.
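
To make that routing idea concrete, here's a purely hypothetical sketch in Python of a quality-versus-speed knob; the function names and the 0-to-1 "effort" parameter are made up for illustration and are not Anthropic's actual API.

    # Hypothetical sketch only; not Anthropic's real API or model names.
    def call_fast_model(prompt: str) -> str:
        # Stand-in for a cheap, low-latency completion call.
        return f"[fast answer to: {prompt}]"

    def call_reasoning_model(prompt: str) -> str:
        # Stand-in for a slower, more expensive deep-reasoning call.
        return f"[deeply reasoned answer to: {prompt}]"

    def answer(prompt: str, effort: float) -> str:
        """Route to a fast or a deep-reasoning model based on an effort knob in [0, 1]."""
        return call_reasoning_model(prompt) if effort >= 0.5 else call_fast_model(prompt)

    print(answer("Summarize this meeting", effort=0.2))       # routed to the fast model
    print(answer("Find the bug in this parser", effort=0.9))  # routed to deep reasoning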

Speaker 3:

I think that's a debated statement, but I use it as such, yeah. I still default to Sonnet 3.5. Yeah, I hear you.

Speaker 1:

So this is what I see on social media, and my experience talking to colleagues as well: ChatGPT, so 4o, 4o-mini, o1, they're good for certain tasks, but not necessarily programming. Anthropic's model is the most reliable. DeepSeek is very good, but it's also very inconsistent, right? So you see some cases where people say, well, DeepSeek does all these things by itself, but then a lot of people say, yeah, but it doesn't always deliver, right?

Speaker 1:

So that's a bit the vibe that I get, and I think Claude is the one most people still default to. I use Cursor as well, which gives you all of these models, and I'm also following along on Reddit and all these things, and I still see a lot of people saying, yeah, I use Claude. So, interesting to see how this is going to evolve. Now, on the other side of the roadmap, the competitor: Sam Altman. Well, I didn't read this as thoroughly, let's say, but Sam Altman went to X and basically put out a bit more on the roadmap for ChatGPT.

Speaker 1:

Well, basically, they first said that they want to be more transparent about the roadmap, which I think is a bit interesting because it came after he said that OpenAI has been on the wrong side of open source, right? So it's like trying to be more open, I don't know if there's something to it. And they basically said that they want something that works well; I think they were just trying to say, we're careful about releasing new models, we want something that actually has better performance. They want to unify the o-series models with the GPT-series models, right? So it's kind of what Anthropic announced, it rhymes a lot. But yeah, it basically talks a lot about the GPT-5 stuff. I think it's funny because it's framed as an OpenAI roadmap update, but I don't know if I'd call it a roadmap, it's more thoughts on the future. But yeah, to be seen what happens there.

Speaker 3:

So they're talking about GPT-4.5 and GPT-5. Yeah, and GPT-4.5 will be their last non-chain-of-thought model. Yes. Oh, that's a big statement, I guess.

Speaker 1:

So they're saying the future is in chain of thought. Well, they're also saying they'll unify the o-series and the GPT series, right, so maybe it will be a mix. The last specific, yeah, the last standalone one. Okay, yeah. And GPT-5 will be a system that integrates a lot of their technologies, including o3. Indeed.

Speaker 1:

It will not be a dedicated model, it will be a group of things. Indeed. It says, we will no longer ship o3 as a standalone model. So basically everything would be a hybrid, right? Then, to keep going on the announcement: the free tier will get unlimited access to GPT-5, subject to abuse thresholds, and Plus subscribers will run GPT-5 at a higher level of intelligence, which, again, when we talk about intelligence, maybe means how much o3 or reasoning-model usage you get, right? And it will incorporate voice, canvas, search, deep research and more.

Speaker 3:

So yeah, to be seen. And interesting that they abstract this away, what type of model is running, or that they pick it for you automatically.

Speaker 1:

It's probably good for an end user, but of course it removes a bit of transparency on which models are performing well. Indeed. But maybe for an end user that doesn't really care, it's a good thing. But I think also as a researcher, someone that's trying to advance these things, the transparency would help a lot, you know, to see what kind of tasks this one is better for or that one is better for. I would prefer to have the transparency on when it's using what. But yeah, I know I'm maybe not the average user, just a consumer of these things.

Speaker 1:

One thing that also came up on my feed quite a bit was Sam Altman's response to Elon Musk's OpenAI buyout offer, which I just wanted to mention here because I thought it was a bit entertaining. So Elon Musk went on Twitter and said, I would buy OpenAI for 97 billion, I think. And then Sam Altman responded, on Twitter, but also in an interview, and that's what I'm sharing on the screen for people following the video: he said OpenAI is not for sale, but we can buy Twitter for 9 billion, which I think is also funny, that they call it Twitter, kind of saying, yeah. And then there was a whole YouTube video I was watching where they were explaining that Elon Musk's offer is for the non-profit part of OpenAI, which is apparently valued at a billion, so he's actually offering twice as much. But the assets are in the for-profit, and OpenAI is trying to move to a for-profit organization, so everything gets a bit more entangled with this friction. Right, and it's not his decision alone: there's the board of OpenAI, they need to decide these things. And I'm here just playing the video, and I thought Sam Altman's response was funny.

Speaker 1:

I feel like he's trying to play it cool, but he looked like he was a bit bothered by it. He was like, I think Elon Musk is just doing this because he's insecure. And then the interviewer goes, ah, and he's like, yeah, actually I think much of Elon Musk's life comes from insecurity and I think he's just reacting to that, I feel bad for him. So I feel like, I don't know, he's trying to play it cool, but it's like the new Kendrick Lamar beef, you know, now it's Elon Musk and Sam Altman. But I do think that this beef has, yeah. I think there's also the whole thing with the US presidency and Elon Musk having the ear there, so there's a bit of all of that in the mix.

Speaker 3:

Not sure I should respond to this. I think, for OpenAI, Musk has always, since the beginning, since their falling out, tried to hinder them one way or another.

Speaker 2:

Yeah.

Speaker 3:

I think this is just the latest example. Altman being frustrated, probably a bit too much, and making these statements about Musk is... yeah, it's a bit... not typical Altman. He's typically very well spoken, very reasoned. And I think it's just another way for Musk to get in the way, right. I think there's not a formal offer, that is the response from OpenAI, it's not a formal offer. But what I understand he's trying to do: OpenAI now wants to move the for-profit out of the non-profit, saying the non-profit is not worth much.

Speaker 3:

So: we can restructure in a more or less efficient manner. Now, having received an offer of almost, what was it, a hundred billion? Yeah. That's much more difficult, because apparently there is a high value in the non-profit, so how can you restructure? I think this is about creating complexity and making it more difficult to move forward with the restructuring.

Speaker 1:

Indeed, indeed. I'm not sure how much practical impact this actual move will have on us, right? Probably not much.

Speaker 3:

I think this is just another day in the OpenAI-Musk playbook. Yeah, but I thought it was...

Speaker 1:

It made me giggle, seeing these powerful people almost acting like children. You know? Yeah, it's crazy.

Speaker 3:

Yeah, I don't know, but I don't understand.

Speaker 1:

Yeah, indeed, a lot more there to dissect, but we'll stick to the data, I guess. Well, on a lighter... well, not lighter, but a different type of news, this came up on Eurogamer. Oh, that's not lighter. No, it's not lighter, but it's different. It's sad, yeah. It's not controversial, maybe. What is this about, Bart? Half-Life 2 and Dishonored art lead Viktor Antonov dies at just 52. I saw this popping up in my feed.

Speaker 3:

Normally we don't really talk about games, but Viktor Antonov, and to be very honest, I didn't know this, was the art director for Half-Life 2 and a bunch of other games, among which Dishonored. I think for a lot of people that grew up playing games, Half-Life 2 was a big thing, especially people my age and a bit younger. Did you play Half-Life 2?

Speaker 1:

I played a bit.

Speaker 3:

Yeah, I played a bit. I think Half-Life, well, not only 2 but also Half-Life 1, is a standard for what you can do with first-person shooters and storytelling, combining that with amazing graphics for the time. Indeed. So yeah, I think it was worth a shout-out. Worth a shout-out. What else do you have?

Speaker 1:

Do we have something else on games?

Speaker 1:

No, not on games, but maybe something kind of like a game, moving towards our next topic. And you know I like to talk about agents, AI and stuff, right. So what they did here, and this came up on LinkedIn: they basically had 1,000 AI agents in Minecraft and they just let them interact with the environment. They also had different communication channels, like Discord and whatnot, so they could talk to each other, and they had a Word document with a quote-unquote constitution. So Slack, Discord, all these things, and they just let the agents run, and they kind of formed their own government.

Speaker 1:

They came up with rules for how they need to interact; they had agents farming, they needed to sell stuff, and I think they mention here that the agent that traded the most was a priest, because he was bribing people to come to his religion. There was also a moment when an agent went missing and the others mobilized to work together to find that person; they showed that they were concerned. So it was a bit of an experiment, this is Project Sid, to see what you can do with agents in a shared environment like that.

Speaker 1:

So what I'm showing here is, yeah, in the video they also show simulations of what would happen, playful simulations, let's say: what would happen if Trump was elected versus Kamala, right? So they would have different constitutions and they would edit them, and I guess it comes down to the prompting of the agents, right, what they would like to do and this and that.

Speaker 3:

I would wonder how much is in the prompting. Like, for example, the example that you give, someone is missing, we need to go find him. Is this prompted somewhere? Or would this be part of an objective function, where you try to optimize for something and you need this person back?

Speaker 1:

But don't you think it would be easy enough if you put in the agent prompt that you are caring, that you live in a society, you know, with the ideals of caring about people and going to their aid if something goes wrong? It's kind of in that prompt, right, you don't have to say it that specifically.

Speaker 3:

I feel like it wouldn't be hard. But that would mean knowing that someone going missing is bad for that person.

Speaker 1:

I mean, you need to have some concept of this. But if I say you're helpful and someone goes missing, yeah, I'll help, right? You assume that's a bad thing; it needs those concepts, that it's a bad thing to go missing. But don't you think, if you think of these agents, that it's probably somehow embedded in the text corpus, right? If you look at news articles and stuff, that concept is there: if someone is missing, you see sadness, you see all these things.

Speaker 1:

What I'm thinking is that it's not hard to embed that in the language model. I'm sure, I mean, I'm not sure, but I would imagine that if I asked ChatGPT, it would give me hints that it's bad if someone goes missing. If I say, hey, my son has been missing for two days, I imagine ChatGPT would say, I'm sorry to hear that, right?
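
To make the prompting point concrete, here's a purely hypothetical Python sketch of how a "caring" disposition could be baked into an agent's system prompt instead of hard-coding a rule for every event; the persona, wording and helper function are made up and are not Project Sid's actual setup.

    # Hypothetical illustration only; not Project Sid's real prompts or framework.
    AGENT_SYSTEM_PROMPT = """\
    You are Mara, a farmer in a small Minecraft village.
    You care about the other villagers and follow the village constitution.
    If something bad seems to have happened to someone (for example, a villager
    has gone missing), consider helping or rallying others on Discord to help.
    You can farm, trade, move around the world, and read or post Discord messages.
    """

    def build_prompt(observations: list[str]) -> str:
        """Combine the standing persona with whatever the agent currently observes."""
        return AGENT_SYSTEM_PROMPT + "\nCurrent observations:\n" + "\n".join(observations)

    print(build_prompt(["Villager 'Tom' has not been seen for two in-game days."]))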

Speaker 3:

Yeah, I'm wondering how everything ties together. I'll check it out. Like, what are the inputs to these agents? You have the environment, but maybe you also have Discord, where this person says, ah, I'm missing. Yeah, yeah.

Speaker 1:

How do you know these things? I thought it was, well, again, I don't know how useful this is, but I do think it's interesting. Also the fact that you have the same agent with access to different tools and different ways of interacting, that's interesting to see.

Speaker 3:

Like, is it just running an LLM? Is there some optimization model? It would be interesting. I'm going to read up on it.

Speaker 1:

Yeah, read up on it, because I didn't read that much, but I thought it was interesting enough to share. So there you have it. What else do we have? Maybe we can talk a bit more about AI models and DeepSeek. Here I see that DeepSeek R1-671B, B as in billion parameters, just broke speed records at 198 tokens per second. Is that what this is? Is that the fastest reasoning model available? What is this about, Bart?

Speaker 3:

Say more about this? It's apparently a new speed record. We discussed last time on the pod, I think, that speed does play a role from the moment you're using it to prompt for code and those types of things; it becomes a bottleneck if it takes a long time to get a full response back. And I just saw this popping up, I think on Hacker News, that apparently one of the DeepSeek R1 models is running at 198 tokens per second. To be honest, I don't really have a strong frame of reference here, but apparently it's beating OpenAI o3-mini.

Speaker 1:

And in terms of size, can we compare the two?

Speaker 3:

To be honest, I don't know what the assumptions are on how big o3-mini is.

Speaker 1:

Yeah, interesting. It's crazy how much noise DeepSeek made, and is still making. Now, instead of DeepSeek, what about deep research? Perplexity's Deep Research is now free for all users. So Perplexity is the search engine, right? It's like a Google competitor, I guess. You type something, it actually crawls the web for sources and gives you an answer based on those sources, but it also gives you the sources. And now, if I understand correctly, they also have a deep research mode.

Speaker 3:

Yeah, basically what OpenAI announced, I want to say, two weeks ago. They have a deep research functionality and now Perplexity also has it. You can use it for free, and what it basically does is, I'm trying to find a way to explain it, but you start with your prompt, it reasons about the first output it gets, and then does some extra investigation to come up with a cohesive report on your research question.

Speaker 1:

So is it like before, they still crawl the web based on this prompt, but now they also have an agentic step? They kind of see what you need, plan it out, do some queries and some follow-up queries.

Speaker 3:

So, to give an example, I launched a query, and it takes a minute or two to finish, actually. I launched a query to do an analysis of a certain domain within the Belgian business landscape, to analyze a bit how big it is. And you can see which sources were accessed, it accessed 77 sources, and you also see a bit of, I'm not sure if chain of thought is the right term here, but the research steps it takes to get to the full report.

Speaker 1:

I see.

Speaker 3:

And everybody can try it, I think, while OpenAI's deep research is still not available here. And would you say the quality was better than what you would have gotten otherwise? Well, I don't know if you tried both. No, I don't think we have access to OpenAI's deep research.

Speaker 1:

No, with regards to Perplexity alone, so the deep research mode.

Speaker 3:

I don't use Perplexity a lot.

Speaker 1:

But were you happy with the results from this search?

Speaker 3:

It was, yeah.

Speaker 1:

It was very quick. Okay, okay. And it's free, but do you need to have an account? I guess you need to have an account, okay. I actually quite enjoy Perplexity, to be honest. It has pretty much replaced my Google searches, except when I'm looking for something very specific, like if I know I want to go to this blog because I want to read this article, then I go to Google. But if I really want to learn something or figure out how to do something, I just go to Perplexity. I feel like it's pretty good. Now, changing gears a bit: COBOL. What is COBOL, Bart? Maybe let's start there.

Speaker 3:

COBOL is a programming language that was mainly used on mainframes. Not only, but mainly. I'm quickly looking this up: it appeared in 1959, and one of the later standard versions is COBOL 2002. And it's still being used by a lot of, typically, financial institutions and government institutions that often still have a mainframe.

Speaker 1:

And is it more of a legacy thing, or do they have strengths in reliability and performance, and that's why they're still used?

Speaker 3:

I think that's a whole other topic. The main thing, not necessarily COBOL, but mainframes and the way they are set up, is that they are very good at handling financial transactions in a single-threaded way but at a very high scale, so that you're sure that everything happened, and only happened once.

Speaker 1:

Yes, I see, I see. And does, or did, COBOL default to 1875-05-20 for corrupt or missing dates?

Speaker 3:

Yeah, there is actually a thread on Stack Overflow asking if COBOL used 1875-05-20 for missing dates. It links to a thread on Hacker News I read, and I thought it was interesting, because the answer is no, because COBOL didn't have a date type; date types don't exist. With COBOL, and I think a lot of older programming languages, there were not always the right types in place for things we take for granted now, like dates, so you had to think a bit about the art of selecting the right type to do what you need to do. I see. And what you saw is that a lot of things were very domain-specific. So, for example, this organization was created in 1875, and what we're modeling in this column is how long...

Speaker 3:

...have you been saving for a pension at this organization since that date. So, for example, you use an integer, and if you don't fill it in, if it's missing, it depends on how you should interpret this field; maybe it looks like 1875.

Speaker 3:

I see. It's really a bit more about thinking: how do we use the limited set of types that we have to represent a lot of things in the real world, things that today, in modern programming languages, are expressed in more complex types?

Speaker 1:

I see.

Speaker 3:

And so you had these things with unexpected behavior, like a default, so a weird date. But it's actually not in COBOL, it's more in the logic around it. I see.

Speaker 1:

Right, because I did see 1875 come up a few times, again on the Musk news, because, and I'm checking quickly here, Musk claimed some recipients of social security checks are 150 years old. And then someone said something about COBOL.

Speaker 3:

Oh, is it? Is this triggered by that?

Speaker 1:

I saw COBOL come up because of this, because 150 years takes you exactly back to 1875. No? Okay, you can do some quick math, but that's what I remember seeing. And then, yeah, because you have the DOGE team, right, from the government, and the COBOL and all these things.

Speaker 3:

Oh, I didn't know that this triggered the whole discussion.

Speaker 1:

I saw it in that context. Okay, I don't know if it's related or just a coincidence, right?

Speaker 3:

It's probably related then, because it's a very recent thing on Stack Exchange.

Speaker 1:

Yeah, because someone was saying Elon Musk claimed this, and then someone said, well, maybe they just need to learn COBOL, because it defaults to this, so if something is missing, that's why it shows 150. But what you're saying here is that it doesn't really default to 1875, it's just that people were trying to be creative in how to express these things. But then why 1875? COBOL has evolved since then, I imagine; does COBOL today have datetimes? Did someone further down the line decide on 1875? No? So it's just a bit of an arbitrary thing.

Speaker 3:

Whether this links to that, yeah, I'm not sure. But say someone in the organization that created this mainframe application decided, we start counting at 1875. Then, if they used an integer, a value of one means 1876, a value of two means 1877.

Speaker 2:

And a value of null.

Speaker 1:

It should mean 'we don't know', but if you interpret it wrongly, you assume it is 1875. Yeah, okay, I see, interesting. I'm not sure if it's connected, whether the one caused the other.
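
As a toy illustration of the convention Bart describes, where an integer counts years since a domain-specific epoch and a missing value has to be interpreted by the surrounding logic: the 1875 epoch matches the example above, but the field and function names are hypothetical, and this is Python, not actual COBOL.

    # Toy illustration: an integer field meaning "years since 1875", where 0 (or an
    # empty record) means "unknown". Field names and epoch are hypothetical.
    EPOCH_YEAR = 1875

    def decode_year(years_since_epoch: int | None) -> int | None:
        """Correct interpretation: 0/None means 'unknown', not the year 1875."""
        if not years_since_epoch:
            return None
        return EPOCH_YEAR + years_since_epoch

    def naive_decode_year(years_since_epoch: int | None) -> int:
        """A consumer that forgets the convention turns every missing value into 1875."""
        return EPOCH_YEAR + (years_since_epoch or 0)

    print(decode_year(2))        # 1877: a value of two means 1877
    print(decode_year(0))        # None: correctly treated as unknown
    print(naive_decode_year(0))  # 1875: how a "150-year-old" record can appear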

Speaker 3:

That could very well have been the trigger for the thread.

Speaker 1:

Yeah, or at least it maybe became more popular because of it. Right, now going to something else.

Speaker 3:

There's actually a very cool interview, I think it's a Changelog episode, with a professor that is teaching COBOL, and I can really recommend it. Let's see if I can find it, but I want to say it's from two years ago or something. We can maybe try to link it in the show notes.

Speaker 1:

Maybe, is it this one: Mainframes are still a big thing? Yeah, that one. I'm sure it's a very interesting listen. Yeah, indeed, I remember. Well, I don't remember everything, but I remember it was interesting.

Speaker 3:

Yeah, it's from two years ago. They're talking to Professor Cameron Seay... how do you pronounce this?

Speaker 1:

Alex, I'm looking at you for this one. Cameron Seay. Thank you, there we go.

Speaker 3:

He's a professor at East Carolina University and a member of the governing board of the Open Mainframe Project. So yeah, check it out if you're interested. You can actually run mainframe emulators and stuff as well, so it's fun to play around with. Yeah, I remember.

Speaker 1:

I remember from that episode as well that it's fun to play around with, and also that it's a big career opportunity.

Speaker 3:

Yeah, honestly, there's a very, very strong shortage of COBOL developers, while a lot of large institutions still run on COBOL. Yeah, that's what I was going to say; I think that's what was really surprising to me from that interview, that a lot of people still use COBOL.

Speaker 1:

A lot of people rely on it, the world still relies on this, it's really needed. So, indeed, if you want to make a lot of money, I think COBOL... and arguably, I don't know how these LLMs do with COBOL. Well, I would hypothesize pretty badly, because there's not a lot of COBOL code out there. Indeed. So if you become a COBOL guru, you know, you can make a lot of money for sure. A lot of financial institutions that pay really well still use COBOL, no?

Speaker 3:

Yeah.

Speaker 1:

Career tip there. Now, do we have the money sound, Alex, actually?

Speaker 1:

There we go. Now, going to the other end, something that is not even done yet: Polars Cloud. So Polars is the Rust-based pandas alternative, I'll just call it that, so performant and all these things. They started a company, Polars Cloud, and they announced that they're building a distributed Polars. I saw this before from the creator of Polars on LinkedIn, a comment saying we're building distributed Polars, which I thought was really exciting, because then Polars can be thought of as an arguably better alternative to Dask. You can run stuff on your own computer, which is already faster, but if you have something that is too big for even one computer, even with the things that Polars offers, like streaming, then you can distribute it. It's also almost like a guarantee: on any project you can say, we start with Polars small, but if we ever get to the point where the data is too big, we can still distribute it and we'll be fine, right? Well, I didn't read this as thoroughly, I know it's not finished yet, and here they only show the Rust API for it. So Polars is written in Rust, but they have a Python API. They tell you that you can run on GPU or CPU, and they talk about different strategies for distribution here. The thing that was not clear to me is whether this is going to be available as an open source thing or not. So Polars Cloud is their company.
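
For listeners who haven't used Polars, here's a minimal sketch of the single-machine Python API that a distributed offering would extend; the file name and columns are made up, and exact flags (for example how streaming is enabled) can vary between Polars versions.

    # Minimal single-machine Polars sketch; file name and columns are illustrative.
    import polars as pl

    lazy = (
        pl.scan_csv("transactions.csv")          # lazy scan: nothing is loaded yet
        .filter(pl.col("country") == "BE")
        .group_by("customer_id")
        .agg(pl.col("amount").sum().alias("total_amount"))
    )

    # Streaming execution processes the data in chunks, so it can handle more than
    # fits in memory on one machine; distributed Polars aims to spread the same
    # kind of query over multiple machines.
    df = lazy.collect(streaming=True)
    print(df.head())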

Speaker 1:

Some people equated this to Modal, so Modal Labs. You remember Modal, Bart? Let me see if I can find it here. Modal: high-performance AI infrastructure. The difference is that Polars doesn't have an orchestrator, so Polars is just compute, at least for now. But if I want to deploy this on my own, I don't know, Kubernetes cluster, or on my own AWS setup, could I go for it?

Speaker 1:

I saw that Ritchie, so again, the creator of Polars, did mention that he was working on a license that allows you to drop this into your Kubernetes cluster if you want, but it wouldn't be open source, I think, or I don't know if it would be. I don't know if it would be source-available either, so I think there would be some restrictions, but you would be able to deploy it on your own Kubernetes infrastructure if you want. Again, I'm not sure. Does this work for any distributed cluster? Like, if I go on AWS and I want to drop something there? I know on GCP you could spin up Dask clusters and all these things. Is this something I could do or not? I'm not sure, because, again, it is a business, right, so I'm not sure how.

Speaker 1:

Let's see how this would go, but I do think, well, I got really excited, because that's something I was really hoping for.

Speaker 1:

You're a Polars fan, right? I am a Polars fan. I like the API. I mean, yeah, it's fast and all these things, but the API is the thing that's really nice. So I got excited about this, but at the same time, if this is not available for people to just run, then I'm also a bit less excited. Not to criticize Polars by any means, I know they're a company, they need to make progress, they need to make money. But it's something that came up on my feed and made me raise an eyebrow. Yeah, interesting to see how they want to roll this out.

Speaker 3:

It mentions cloud. Will it be purely cloud-based? What about organizations that want to do it more, quote-unquote, on-prem, or in their own cloud?

Speaker 1:

Let's see, indeed, let's see. And what else do we have here? Maybe Get Pickle. This is something shared by our good friend and colleague Bato at Dataroots: make your Pickle today. Have you seen this before? No. Did you see what this is about? No, I don't know. So I'm going to ask you what you think about it, okay. Make your Pickle today: replace your camera on Zoom, Twitch, TikTok and more.

Speaker 3:

What is the URL that you're going to?

Speaker 1:

Get Pickle AI. Okay, it doesn't show, but how this works: you record a five-minute video of yourself, wait 36 hours, and they train an AI on your appearance. Then, when you're using Zoom or something, you can switch your camera to a Pickle camera, and basically it will use that AI avatar; in real time, as you talk, it's still you behind the scenes, but the video shows the avatar lip-syncing what you're saying.

Speaker 3:

So you can lie down in bed or sit on the toilet.

Speaker 1:

Exactly. So for me, that was a bit... I mean, even in the third step, 'you talk', they're showing someone who is driving, which, as a marketing move, I'm not sure is the best, I don't know. The person is driving, and the video actually shows the avatar talking as if it's sitting in the living room or something. What do you think about this?

Speaker 3:

This feels like a bit of a dystopian future.

Speaker 1:

Yeah, right, like no one talks to anyone real anymore. But to me it's like, the use case is what? You're in your bed, you're driving, I guess, from what they're showing here, or maybe you're having a meeting and you want to go for a walk because it's a nice day out, and...

Speaker 3:

The next step is that you don't respond yourself, you have an LLM respond, right. That's what I thought it was at first, but I was like, this is a bit... But yeah, it's not a big step from there. That's true. Indeed, I thought it was going to be more like that.

Speaker 1:

At first, I thought you could have this agent and be in five calls at the same time, with the agents talking for you. I also feel like this misses the point a bit, because if you want to be driving, or in your room, or going for a walk while you have a meeting, then I think we should normalize that, instead of creating an AI avatar so you can get away with it. I feel like there's a bit of dishonesty in this. If you're having a call from your couch and you think it's not a big deal, then you should be able to have a call from your couch, you shouldn't have to cover it up.

Speaker 3:

You see what I'm saying? Yeah. I'm wondering what their business model is behind this, because I can imagine that this requires a lot of capital, to do all this inference in real time. Yeah, true. I'm not sure, it just says join Pickle.

Speaker 1:

I looked for pricing but I couldn't find anything, so maybe everything is paid, right? Yeah, I'm not sure. I understand why they did it, but I feel like it misses the point a bit. Maybe we should all just work, as a society, to normalize having calls while you go for a walk, for example, or not having your camera on if you can't really be there. I just feel like this creates a bit of a... yeah, it's weird.

Speaker 3:

So how would you do it? What is your preferred setting to use your avatar?

Speaker 1:

To use my avatar? No, I wouldn't use an avatar. But I do see it: maybe you want to go for a walk and you have a call, and you want to just take the walk while you're on the call. For me that's fine, I think, depending on the type of meeting you're having.

Speaker 3:

That's fine, right.

Speaker 1:

And then the idea is that you have an avatar.

Speaker 3:

Yeah, but then for me it's like, no. Yeah, but you're saying you wouldn't use it? I wouldn't use it.

Speaker 1:

That's the thing. Because it's like, even if I was using the avatar, I feel like I'd still say, oh yeah, I'm going for a walk.

Speaker 3:

So I'm just letting you know: you still see the avatar, but you still know that I'm not there. Because I feel like this is just weird. I just don't...

Speaker 1:

I don't see myself ever using it. Would you ever use this?

Speaker 2:

I hope not. What about you, Alex? Um, I don't think so, but it also said that it's used for TikTok. So does that mean that people would just put up a video? I have no idea, I don't know how that would work either. Like you just do a lecture on TikTok, and then you don't have to sit in front of the camera, you just let your avatar do it.

Speaker 1:

Yeah, I have no idea, I really don't understand. I mean, it's a cool idea, but I wouldn't make a product out of it. This is the kind of thing that could be fun to just build and show people, but I wouldn't build a product around it.

Speaker 2:

I just feel like... I do see potentially students using it, but I feel like it's a bit...

Speaker 1:

I mean, yeah, I see people using it, but I feel like there is no way you can justify using this.

Speaker 2:

I feel like you're being dishonest, right.

Speaker 1:

So, I don't know, but you see the reviews here: ah, try it yourself; I was able to take walks during meetings, this is a game changer; remote work will never be the same. I thought it was a bit much. Another thing, so, changing the subject a bit, and this is a very big change, but I was just looking at these reviews... do you know what neo-brutalism is? Do you know what this is?

Speaker 3:

Neo-brutalism, neo-brutalism. It's a design style, yeah. Yes. Do you know this, Alex?

Speaker 1:

Alex knows what. How would you describe it, alex?

Speaker 2:

Well, a while back I was trying to find out what the style was called, and it's neo-brutalism. It's like the... I don't know if you're going to show it.

Speaker 1:

I'll put it there. There we go. Yeah. So how would you describe this for people that are just listening?

Speaker 3:

You've opened a page that has components, I would assume React components, in neo-brutalism style.

Speaker 1:

Yeah, right. So in that style, it's like... rounded shapes, yeah.

Speaker 3:

Yeah, I have a hard time explaining it, but it's a bit of a retro-modern combination. Nothing 3D, thick outlines, drop shadows, but not transparent drop shadows, solid black drop shadows.

Speaker 1:

Yeah, indeed, it's not trying to pretend it's real, it's really a retro, kind of drawn look, with colors that stand out.

Speaker 3:

What is it?

Speaker 1:

What you're showing, these are React components? Yeah, these are React components that you can use, based on shadcn.

Speaker 3:

ShadCN.

Speaker 1:

So yeah, it's made with Tailwind and all these things, I believe it's React, but I just wanted to share it because I thought it was so funny. They have a 'loved by our community' section with quotes like: this library is complete garbage, I don't believe there are people who actually use this; this thing frankly sucks, I want to vomit; I thought Material UI was the worst-looking UI library, hahaha; imagine using this, in all caps. And they just put it there. I thought it was pretty cool. And then they have a whole bunch of pre-built components, accordions and so on. I think shadcn is basically copy-paste stuff into your code, right? So this is a way you can do that with Tailwind, and you get a lot of cool stuff here.

Speaker 3:

So I thought it was nice. I think the people that live a bit in the same bubble that we do...

Speaker 1:

...will know it from the MotherDuck website. Yes, indeed. But I thought, I think it was quite popular in the last two to three years, but I think the world has also moved on. Yeah, but everything comes and goes, right, everything comes and goes. We wait 20 more years and it's back. Or maybe you're super ahead, you know.

Speaker 1:

Instead of being behind, you're just really, really, really ahead. Who knows? Okay, how much time do we have? We have time for a few more. So, food for thought: Vim after Bram, the core maintainer, and how they've kept it going. Tell me more about this, Bart?

Speaker 3:

Do you know, Vim?

Speaker 1:

I know Vim. I struggled with Vim a bit. I've had people trying to convince me to use Vim inside VS Code, but they failed to convince me fully.

Speaker 3:

So Vim, what is... Okay, so you used VS Code first and then Vim came?

Speaker 1:

No, I mean, I kind of use it for Git. Actually, I started with PyCharm, right, but then you can use Vim bindings within VS Code, and people were like, oh yeah, look how cool Vim is, you can do this and you can do that. No, no, I mean, only when I commit and I have to write the message, then I use Vim, because it's in the terminal and I just use it. But I'm not more efficient with Vim.

Speaker 3:

In any case, Vim is an improved version of vi. Vi, yeah, vi. I want to say it stands for visual editor, but that would be weird, right. You're Googling it? Yes, I'm Googling it. And I think Vim stands for visual editor improved. But you're Googling what the 'i' is.

Speaker 1:

Yeah, so the name vi, this is from Wikipedia, I guess, is derived from the shortest unambiguous abbreviation for the command 'visual' in ex; the command in question switches the ex line editor to visual mode.

Speaker 3:

So vi just stands for visual, yeah. And Vim stands for vi improved, that's Vim. Ah, vi improved. Yeah, I see. Vim was created, I want to say, late 80s, early 90s. Okay, I'm not sure, to be honest. Ah, 1991. Oh wow, that's good. By Bram Moolenaar, a Dutch software engineer.

Speaker 3:

Another Dutch one, a lot of Dutch people, a lot of Dutch people. And he unfortunately passed away recently. But the Vim project is used by a lot, a lot of people. There's also Neovim nowadays, no? There's also Neovim. But typically, if you're not an IDE user, if you're more of an editor-in-the-terminal user, you're either a Vim fan or an Emacs fan.

Speaker 1:

Yeah.

Speaker 3:

It's a bit how the world is split. Right, that's one way to split it. And I understand where you're coming from: you're not more productive with Vim bindings because you never really used Vim.

Speaker 1:

So maybe, yeah, for people that never used Vim before: you have the hjkl keys, I think, instead of the arrows for moving around; to quit, you have to type a colon command; to delete a word it's dw; a to append, and all these things. So it's like...

Speaker 3:

But I think if you learned to use these editors before you had a visual interface, and you knew all these key bindings, you can be super, super efficient. Yeah, there are people that are super fast, people that come from that background and now are in VS Code or PyCharm and use these bindings.

Speaker 3:

They can be very, very efficient. But anyway, there is definitely an audience for this, and the challenge when the core developer, Bram Moolenaar, passed away is: how do you keep this going? The article, I think, is a nice read, you should read up on it, but it touches on a few key points. There is a way on GitHub to define who inherits a code base when you pass away. Ah, really? But he didn't use it, because it also implies that your account needs to go inactive, and what I think the community wanted, not necessarily him, I'm not sure, is that his family would still have active access to the code base. He had also been, since he started, the benevolent dictator for life, very well respected, and there were these types of things.

Speaker 3:

You had this new core maintainer coming in that really wanted to pick it up. I forgot the name, I read the article this morning, maybe you can find it, it's at the beginning of the article. It's basically from Christian Brabandt. Is he Dutch as well? I don't know. And he explains a bit how he tried to take it over, and he did take it over; he's, I think, the new core maintainer. There were a number of challenges. For example, Vim received donations for a specific nonprofit in the Netherlands that he wants to keep going. But how do you do this?

Speaker 3:

Because the person that was the admin for all of this is no longer there. You need to clarify, what was the procedure? You could also use these donations to basically vote for new features, like if you make a donation you could vote for a feature.

Speaker 3:

But it was very hard for him to link these together, the votes and the donations. And also, at the same time, another core maintainer left the project, so he had to bring in new people. He had to basically align with Bram's family to see, what can I do? I see.

Speaker 3:

It's a very interesting read on a project where I think a lot of people think, okay, there are a lot of commits, but what happens when this benevolent dictator for life is suddenly gone? Yeah. And what we see now is that there is still a lot of activity on the repository, and, being optimistic here, this is not the end for Vim, this was just a phase for Vim.

Speaker 3:

I agree, which I think is also nice for Bram Moolenaar's legacy. Yeah, for sure, this is a continuation of what he built with his community. Indeed, this is legacy, I think.

Speaker 1:

I think most open source maintainers would be very happy to see their projects kept going, right. You also have some skeletons in the closet, like...

Speaker 3:

He found out that the website was running on a super, super old PHP version, and some files were hosted on a very old FTP server, and they had to negotiate with the organization behind it, like, how do we handle this? In the end, I think he just switched to GitHub downloads. But these are the things everybody worries about: okay, this is my project, I need to hand it over, what are they going to think about this small...

Speaker 1:

...thing that I just neglected for the last 10 years? That's a bit of a shame, right.

Speaker 3:

Yeah, exactly, I was in a rush and it worked. That's nice.

Speaker 1:

It's a nice read, yeah, it's a nice read. I think it's sad that he passed away, of course, but it's nice how this is bigger than tech, you know, how everything comes together, because in the end it's people. We talk a lot about tech, but in the end it's people. Very nice. And Vim, a lot of people love Vim. I'm not super proficient, but I know people that are. Do you use Vim, by the way?

Speaker 3:

When I don't have access to a visual environment, I use Vim.

Speaker 1:

Yeah, okay, but if you have VS Code, do you use Vim?

Speaker 3:

No, but I used to, back in the day when I was not on a Mac. I was always on Linux, and then it was Vim all the way.

Speaker 1:

Yeah. Vim, I know a lot of people that are very efficient with it, and it's impressive. I think it's a nice, good project as well. On that note, for people that do want to learn Vim, one thing that I came across is Vim Adventures. Have you seen this, Bart? Yeah. So you have a little character and it kind of teaches you how to use Vim.

Speaker 3:

Vim's key bindings.

Speaker 1:

Basically, yeah, indeed, the key bindings. So, for example, in Vim, if you want to delete a word, you can combine commands, right. So for people that want to try it out, we'll put it in the show notes. Do we have time for one more, Bart? I think we do. Maybe one more, so we'll just go for this one.

Speaker 1:

Working Fast and Slow. This is something that I read a little while ago, and I felt seen. I think Working Fast and Slow is a bit of a wordplay on that book, Thinking, Fast and Slow. I haven't read the book so I cannot comment, but this post I read through; I'll summarize it a bit.

Speaker 1:

He mentions that he's someone that can be very productive at times. Some days he can sit down and focus and he gets a lot of shit done, but then there are days that he can't be bothered, right. And I'm maybe mixing things a bit, because I also listened to a podcast, an interview with a guy advocating about ADHD and all these things, and I also felt a bit seen there. I'm not sure if I really have ADHD, but I do have some traits where I really felt seen. Basically: you can focus a lot one day and then the next two days you cannot focus, but in software engineering you have the stand-ups and all these things, and you're supposed to talk about the progress you made over the past days.

Speaker 1:

If you're very productive on day one but not productive on days two and three, you just look like someone that is slacking off, like you could be very efficient but you choose not to because you're lazy, right. And what he says in this article is that he had a lot of resistance towards this, but now he has learned to go with it. He understands that some days his battery is low, and he knows he's going to make up for it the next day, and he organizes his work, his tasks and his schedule based on how he's feeling that day, and it's fine, right. And that's why I felt seen, because I also have this: some days I can't be bothered, but then there are days when things just flow.

Speaker 1:

Maybe it's the environment as well, you know, maybe some days are more quiet and all these things. But some days I just feel like I'm in the zone, and some days I really struggle, right. Do you have this as well, Bart, or is this something you relate to?

Speaker 3:

I relate to it as well, but I try to fight it. You try to fight it?

Speaker 1:

Yeah. I think, maybe, again, that's not something I do, but one thing the author here does: he says that when he is in the zone, he also leans into it, so if that means staying up later, he just does it.

Speaker 1:

Yeah, he really doesn't hold back, basically. I still try to keep a bit of a balance and I still resist it a bit, but I'm also more forgiving towards myself if one day I feel like I'm not very focused. He also says that sometimes there are deadlines, so sometimes you have to power through. Yeah, of course, right. But I'm also...

Speaker 1:

I'm also a bit more forgiving, you know. Sometimes I'm a bit stuck and then I just go for a walk. I can't focus on something? Okay, maybe just go for a walk. Or maybe you're feeling very tired; if you're working from home, take a 10-minute nap, you know, and sometimes you feel much better afterwards. And over time I've been trying to re-educate myself to be a bit more understanding. As long as I keep my commitments, I still try to keep a grasp on the long term, right, like, how are we progressing on these things? But I guess I'm just more forgiving in a way. I understand that's how I work and all these things. What do you think, Alex?

Speaker 2:

I was just going to say, he also mentions, because I read it too...

Speaker 1:

I read this, yeah, yeah, because I found it interesting.

Speaker 2:

Oh okay. He also mentions that he will work on the more low-priority tasks when he isn't feeling as productive, so I understand that. He'll put the focus on those, and then, when he has the time and the energy, like you said, he'll be in the flow for a long period and work over like 12 hours, so he does make up for it. But yeah, I understand that when you're in the flow, the quality of the work will also be better. So instead of forcing yourself to do it when you don't feel like it, you just focus on the easier tasks and then you go into the flow later.

Speaker 1:

Yeah, indeed, he really leans into it, and I think that's very well put.

Speaker 1:

If you don't feel like it, maybe you can book some meetings or send some emails, something that is more like grunt work that you just do, and it's fine. There's less of a distinction between low-quality and high-quality output there: if you have to book a meeting, you just book it, you find a date, you put in a description, it doesn't need to be top notch. I also played a bit with this. When I was commuting to my client, on the train, if I had to send a quick message or book a meeting, these small tasks, I would try to take advantage of that time. So I think it's a nice way to reflect, to try to schedule your work tasks based on how you're feeling, to basically be more in harmony with yourself, because you can't be a hundred percent every single day.

Speaker 1:

No, that's true, indeed. And on being in harmony with your biological state, even for me, with booking meetings: is it better for me to book meetings in the morning or in the afternoon? What do I prefer? That's something I've thought a lot about, and I go back and forth a bit. I don't have a best strategy, but it is something I think about. Do you do that as well, Alex?

Speaker 2:

Yeah, I definitely separate things depending on how I feel. I usually try to do the harder tasks earlier in the morning because I'm more focused, and then obviously, as the day goes on, I get more tired.

Speaker 1:

But I think there's also a big distinction between creative tasks and more operational tasks, let's call them. I was talking to my wife about this as well: I do think that for creative tasks you need more time, you need more mental space. I remember we talked about this even for the podcast, right? If you book it at 3 pm on a Wednesday and you have meetings before and meetings after, your mind needs to be in a certain state to be able to take a breather. And I think there's a difference between tasks in that sense. If you're designing something, something creative, you cannot be rushed, right?

Speaker 2:

But for me, designing anything like that is actually my last task, because it's easy, right? If I have to think of an idea or something, that's really the first thing on my list. And research, if I have to research something, I really put that as my first task.

Speaker 1:

Interesting. And do you have something like this as well, Bart, or do you just do stuff? Like, "I'm just a machine, I just do it, just get it done"?

Speaker 3:

No, I don't just get it done. It depends a bit on the type of task. If it's something very new, like you need to create something that doesn't exist yet, I tend to need a bit of headspace for it. But for other things I kind of force myself. So I agree, with the podcast preparation you need to be in a certain state, but I think you can force it, for me at least. You just say, okay, now I start, even though you don't feel like it, and then if you focus on it for half an hour, you get into the right mindset. It's often like that: you don't always feel like doing it.

Speaker 1:

Yeah.

Speaker 3:

A lot of the, let's say, quote-unquote programming projects I do are in the evening hours, and I often don't feel like, okay, let's spend another three hours, because I'm tired.

Speaker 2:

Right.

Speaker 3:

But okay, let's just do it, and after some time you get into a certain flow and it still works.

Speaker 1:

Yeah, I kind of relate to what you're saying. I don't think I'm like you in that sense, but I do think that sometimes, if you just start doing something, you just get into it. For me it doesn't always work; I think it's a personal thing. But I wish I could be like that.

Speaker 3:

Yeah, I think it's good, but you need to get to that point of, okay, now I'm just going to do it.

Speaker 2:

It's kind of like going to the gym.

Speaker 1:

Yeah, that's what I was thinking. Sometimes I don't want to go to the gym, but I just hop on my bike and go, and once I'm there, you get through it.

Speaker 2:

I think with a lot of projects it's like that: you just need to show up. Also, I noticed for myself, I don't know if this is the same for other creative people, but at night I get so creative. I get all my thoughts at night.

Speaker 1:

If I have to work on something, I get my ideas at night, right before I sleep, which is probably not good. Yeah, I used to have that too, sometimes even now: I'll have an idea right before going to bed and I'm afraid I'll forget it by the time I wake up. So then I'd use a notebook. You have a notebook, right? A notebook is actually good, because sometimes I'd try to grab my phone and do stuff, but that messes up my sleep as well.

Speaker 3:

Sometimes I wake up and I think I have an idea, and I think, wow, this is really smart. You write it down.

Speaker 2:

The next morning.

Speaker 3:

I wake up and it's gone.

Speaker 1:

Oh really, it's gone? Or like when you've had a few beers and you think, oh, this idea is great, and you write it down, and the next day you wake up and think, what the fuck, this is so stupid, this is horrible. I wonder if, for you, with the creative stuff before bed, being a bit tired is almost like being a bit tipsy: the filter is gone. Well, I read one time, or I don't know if I read it or saw it somewhere.

Speaker 1:

Creativity has two parts: there's coming up with ideas, just generating stuff, but then there's also the filtering of ideas. For example, kids have a lot of ideas, but a lot of them are bad ideas. To come up with good stuff you need a bit of both: you need the time where you just throw out a lot of crazy, outrageous stuff, and then you say to yourself, but these three are good ideas, right?

Speaker 1:

So maybe when you're in that state, a bit tired, you just say stuff, and later you pick out the good ones. Cool, very cool. I think we can call it a pod here today. Anything else you want to share?

Speaker 3:

Maybe one last thing I thought was interesting, the last point on our list. It's from Fly.io. They released an article two days ago titled "We Were Wrong About GPUs." Oof. Is that a hot take? Do we...

Speaker 2:

have the hot, hot, hot oh, hot, hot, hot, hot hot hot, hot, hot, hot.

Speaker 1:

Unless they mean something else by saying they were wrong, I don't think it's really a hot take.

Speaker 3:

The article is interesting if you read it. They basically invested a lot in both hardware and in building the software to make it work, so that you can have Fly GPU Machines. For the people who don't know Fly: it makes it very easy to spin up a virtual machine with your web app, or whatever workload it is, and have it go live. Fly.io makes that very easy for developers. They also created GPU Fly Machines, where you basically get access to a GPU, and what they realized is that their typical type of developer who uses Fly.io doesn't really care about GPUs.

Speaker 3:

People care about having access to LLMs these days, and you have, maybe I'm being generous here, like five to ten major providers, and you want easy access to those. At the other extreme, and I think that's not the typical audience for Fly.io, you have people who want to run heavy GPU loads, but the type of GPUs they invested in aren't really suited for that. Ah, I see. So they basically made a bit of a bad gamble. It's not that it's going away, but they're realizing this and thinking about how to shape the future going forward.
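[Editor's note: to make the "spin up a machine" part concrete, here is a minimal sketch of creating a Fly Machine programmatically via the Machines REST API in Python. The endpoint shape, field names (for example `gpu_kind` inside the `guest` config) and the app name are assumptions for illustration; check Fly.io's current documentation rather than treating this as their exact interface.]

```python
import os
import requests

# Assumption: the Fly Machines REST API is served at api.machines.dev and
# authenticates with a bearer token (e.g. the token printed by `fly auth token`).
FLY_API_TOKEN = os.environ["FLY_API_TOKEN"]
APP_NAME = "my-example-app"  # hypothetical Fly app name

resp = requests.post(
    f"https://api.machines.dev/v1/apps/{APP_NAME}/machines",
    headers={"Authorization": f"Bearer {FLY_API_TOKEN}"},
    json={
        "config": {
            # Any OCI image; nginx is just a stand-in workload here.
            "image": "registry-1.docker.io/library/nginx:latest",
            "guest": {
                "cpu_kind": "performance",
                "cpus": 2,
                "memory_mb": 2048,
                # For a GPU Fly Machine you would also request a GPU here,
                # e.g. "gpu_kind": "a100-pcie-40gb" (field name is an assumption).
            },
        }
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the newly created machine
```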

Speaker 1:

Beyond the technical side, I think it's nice that a company puts this forward like that: "We made a mistake, we gambled on this." It normalizes that you have to make decisions and sometimes you're wrong, and this is something they got wrong as well. I really like that. I like Fly as a product as well; I used it a bit and it's very smooth and very easy to use. Very cool.

Speaker 2:

Cool, cool, cool.

Speaker 1:

Anything else you want to share, Alex? No? Alright, then I think we can call it a pod. Thanks.

Speaker 3:

Thanks for listening everyone.

Speaker 1:

You have taste in a way that's meaningful to software people. Thanks for listening, everyone, and usually it's slightly wrong.

Speaker 3:

I'm reminded, incidentally, of Rust here. Rust, Rust.

Speaker 2:

This almost makes me happy that I didn't become a supermodel.

Speaker 3:

Kubernetes, boy. I'm sorry guys, I don't know what's going on.

Speaker 1:

Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here.

Speaker 2:

Rust Rust Data topics.

Speaker 3:

Welcome to the data. Welcome to the data topics.
