DataTopics Unplugged: All Things Data, AI & Tech
Welcome to the cozy corner of the tech world where ones and zeros mingle with casual chit-chat. DataTopics Unplugged is your go-to spot for relaxed discussions around tech, news, data, and society.
Dive into conversations that should flow as smoothly as your morning coffee (but don't), where industry insights meet laid-back banter. Whether you're a data aficionado or just someone curious about the digital age, pull up a chair, relax, and let's get into the heart of data, unplugged style!
#69 From Engineer to CEO: Alex Gallego on Building Red Panda
In this episode, we’re joined by a special guest: Alex Gallego, founder and CEO of Red Panda. Together, we dive deep into building data-intensive applications, the evolution of streaming technologies, and balancing high throughput and low latency demands.
Key topics covered:
- What is Red Panda and why it matters: Red Panda’s mission to redefine data streaming while being the fastest Kafka-compatible option on the market.
- Batch vs. streaming data: An accessible guide to understanding the classic debate and how the tech landscape is shifting towards unified data frameworks.
- Scaling at speed: The challenges and innovations driving Red Panda’s performance optimizations, from zero-copy architecture to storage engines.
- AI, ML, and streaming data integration: How Red Panda empowers real-time machine learning and AI-powered workloads with ease.
- Open source vs. enterprise models: Navigating licensing challenges and balancing business goals in the hybrid cloud era.
- Leadership and career shifts: Alex’s reflections on moving from technical lead to CEO, blending engineering know-how with company vision.
You have taste in a way that's meaningful to software people.
Speaker 2:Hello, I'm Bill Gates. I would recommend TypeScript. Yeah, it writes a lot of code for me and usually it's slightly wrong. I'm reminded, incidentally, of Rust here. Rust. This almost makes me happy that I didn't become a supermodel.
Speaker 1:Kubernetes. Well, I'm sorry guys, I don't know what's going on. Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here. Rust. Data Topics. Welcome to the Data Topics. Welcome to the Data Topics podcast.
Speaker 2:Hello and welcome to Data Topics Unplugged Deep Dive, your casual corner of the web where we discuss all about red pandas. My name is Murilo. I'll be hosting today and I am really excited to be joined by Alex Gallego. Am I saying this well?
Speaker 1:Yeah, you can say Alex Gallego with a hard J, but Gallego is fine too.
Speaker 2:That's how you would say it back home? We were just talking now about how you're originally from Colombia, but you did high school and college in the US.

Speaker 1:Yeah, yeah, yeah, that's how I would say it back home: Gajego.

Speaker 2:Gajego. Gajego. Is that it? Okay, I'll work on that. It's funny, I'm from Brazil, actually, and I also went to university in the US, but I didn't do high school there, and after that I actually went to Europe. So I came to Belgium, I did a master's in AI, and this is where I'm working today. So really nice to have you here, taking some time, as I mentioned, out of a busy schedule to chat with us all about Red Panda. And we'll definitely get to Red Panda, but for the people that don't know you yet, would you like to give a quick intro: who you are, what you did, what you do? And we can start from there.
Speaker 1:Yeah, so my name is Alex. I am the founder and CEO of Red Panda. I actually grew up into being the CEO of this company through being a principal engineer. Prior to this, I sold a company to Akamai, where I was the CTO, and then I went on to become a principal engineer at Akamai, working on some large-scale problems. And really, I guess at the end of the day, I kind of wanted to build something that looks like Red Panda. One of the freeing things that I like to share with people is that when you're an engineer, you don't have to ask for permission; you just write it. And so that's how Red Panda came to be. But no, I haven't coded for a few years now, and I am the CEO of the company.

Speaker 2:Okay, very cool. But you have coded some of the Red Panda stuff?
Speaker 1:Oh, I wrote the first full implementation. I wrote the storage engine and the RPC layer and parts of the consensus layer.

Speaker 1:And even some of the original Kafka compaction. I should say bugs; I was the one that wrote it.

Speaker 2:Wow, that's super cool. So your commits are still there. If you go into the Git history and run git blame, one of your engineers is going to be like, who wrote this? Git blame... oh, Alex. Oh, never mind, it's a feature, not a bug, actually.
Speaker 1:Yeah, I'm not sure if it's a bug, but let me see if my contributor profile is still there.
Speaker 2:That's really cool. And before, you said you worked for Akamai. Did you found Akamai, or were you working at Akamai?
Speaker 1:No, no, I didn't found Akamai. Akamai is a multi-billion dollar company.
Speaker 2:I was going to say right.
Speaker 1:It's the largest content delivery network in the world. I think at the time that I worked there, something like 70% of the non-video traffic of the web actually moved through the Akamai servers, through, like you know, reseller contracts and agreements and things like that, or more. This is all public info, but yeah, they're massive, massive scale, a company streaming all sorts of data. They grew up in the dot-com boom era.
Speaker 1:And so I think their claim to fame was hosting Apple content for a long time. That was their marquee customer that took them to IPO, and then from there on, it sort of caught on. At some point they did all of Twitter. They still do some of the world's largest events. I'm not sure of the recent customer count, but I think even Facebook at some point. So they're really one of the largest content delivery network companies in the world.
Speaker 2:I have a shirt. Fun fact, I have a shirt from Akamai. I think they did a partnership with Uniqlo and then they had like some stuff there. So I actually have a shirt. It's a cool one too. And so you were a principal engineer at Akamai, right? But before, if I'm not mistaken, you also founded another company.
Speaker 1:No, CTO of this company called Concord that I co-founded. And you know, before Red Panda, we were just three poor kids from Brooklyn, New York, trying to make it, and so, Concord. We all grew up as infrastructure people, right? So Shinji Kim, who was the CEO at the time (she actually just started a new company too, a few years ago), was my roommate at the time, and she went to Waterloo for her undergrad in engineering.

Speaker 1:She was a PM at the company I was working at, this ad tech company in New York City. I was one of the early engineers there, and so on our subway rides we were just brainstorming. I was like, hey, the world should be easier with some of the ad tech infrastructure that we were trying to work on, and that literally became the birth of the company. Concord was born on subway rides from Brooklyn to the city, to Manhattan, I guess, for people that aren't from New York or haven't been in New York for a while. And so that was Concord, and we joined Akamai through the Concord acquisition in 2016.
Speaker 2:Ah, I see, I see. And is there any connection between leaving Akamai and starting Red Panda, or was there some time in between? How did that play out?
Speaker 1:So I guess, for a little bit of history, I've been doing what has been classically known as streaming. And by the way, I hold a different opinion from the consensus of the world: as nerds who have spent decades working on streaming tech and trying to point out nuances between batch and real time, I think all of that is going away, so we can talk about that later. But yes, I've been doing streaming for a long, long time. When I joined that ad tech company, the problem was: how do you run a real-time multi-armed bandit algorithm, basically with a second-price auction? And so the idea is that when you serve an ad, you don't charge the most amount of money; you charge the first placement the price of the next best bidder. An example: say you're willing to pay ten dollars for a spot on the New York Times website, and someone else is willing to pay five dollars.

Speaker 1:Then you still get placed at the top, but you would be charged five dollars and one cent, and so, you know, you're sort of incentivizing the behavior of the highest bidder.
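To make that pricing rule concrete, here is a minimal sketch in Python, with made-up bids and the one-cent increment from the example:

```python
# Second-price auction: the highest bidder wins the placement,
# but pays the runner-up's bid plus a small increment.
def second_price(bids: dict[str, float], increment: float = 0.01) -> tuple[str, float]:
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] + increment  # pay the next best bid, not your own
    return winner, round(price, 2)

# The example from the episode: you bid $10, the next bidder offers $5.
print(second_price({"you": 10.00, "other": 5.00}))  # ('you', 5.01)
```

Because the winner's own bid never sets the price, bidding your true value is the safe strategy, which is the incentive Alex is describing.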
Speaker 1:But long story short, if you think about ad tech, this whole idea is basically as close to real time as streaming gets, right? Actually, ad tech is a great place for a bunch of technologies to be developed; I think some foundational technologies of the web came out of it. The reason is that there's very little money to be made and the volumes are massive, and so you're really making, you know, pennies on the dollar at large scales. It's really hard to make a ton of money unless you have, basically, a money-printing business like a Google. But if you're in ad tech, it's just hard; it's a game of volume and scale.
Speaker 1:And so how this is tied to Akamai and Concord: Concord was, hey, we were struggling at the time with this project called Apache Storm. For those of us that were doing real-time processing back in the day, it meant getting stuck with Clojure stack traces and Java and Scala stack traces, which are really gnarly to debug. The JVM projects at the time especially had this really hard problem to debug, which is like a Schrodinger's cat problem: is the cat dead or is the cat alive? From the point of view of the operator, the JVM was up, but there was no progress. And you're like, well, I don't really care if the JVM is up; I care that the data is actually moving.
Speaker 1:And so Concord solved, or attempted to solve, that problem on the computation side. And Red Panda: through the years, I still couldn't find a storage engine that would keep up with the volumes of data. So where Concord was the compute layer, Red Panda was the storage layer. And so, yes, Akamai had an influence, in that I ultimately saw more of the operational side. I think I've seen similar volume scale; maybe Akamai was a little bigger. But anyway, it's sort of the complement of the compute problem I had attempted to solve before, right?
Speaker 1:So it's like okay, what storage engine could keep up with this new paradigm of thinking about the world?
Speaker 2:I see. So it's like you just tackled the problem from one way, and then after a while you're like, well, what about this other part that I haven't tackled yet? And that's how Red Panda came to be?

Speaker 1:Basically, yeah. And at some point I was really into, well, I had done some work with DPDK, and this was before Jens (I think I'd probably mispronounce his last name) came up with io_uring, right?
Speaker 1:So there was this huge split in the networking space of, like, hey, do you do this kernel-bypass tech that maps effectively 8-kilobyte pages straight from the kernel into application memory, so you could do as close to zero copy as is physically possible by the hardware? And so I was really into testing those things in 2016, 2017 or so. And then those ideas sort of started coming together. It was like, hey, what if you could create something that was as close to zero copy on the hard drive? And so you would have zero copy because you would write your TCP buffer right, and then you can tweak which core is the session manager for this particular handshake. Usually, by default on the Linux kernel, it's a hash of the five-tuple. You get round-robin? Well, no, it's just the hash mod of whatever the five-tuple of the TCP connection is.

Speaker 2:Like, you know, source port, destination port, whatever. Not that it matters.

Speaker 1:And then it lands on a particular core, and I was like, okay, now that it lands on this core, I want the kernel to then write this page onto the application stack. And from there I just kind of kept playing with those ideas. It's like, well, how do I take this format and then write it to disk as untouched as possible? And it turns out that, at the time, most computers were little-endian architecture, and if you were looking at the Kafka protocol, there was this three-layer translation between big-endian and little-endian and a bunch of other things. Long story short, Red Panda was born out of trying to understand and do as close to zero copy as possible, all the way to the NVMe SSD controller. And then we stopped there, but only because we ran out of time. At some point we even thought of writing our own, basically, region manager for the NVMe SSDs. It's like, how could you even just keep pushing? And I still think there's an inordinate amount of space to continue to push the envelope on the performance side. But most people are fine. Basically, we're still the fastest thing that exists in the world, and no one is pushing the latency envelope. So, long story, we stopped at the file system, before we got to playing with the NVMe SSD controller itself.
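For listeners who haven't hit the endianness problem: the Kafka wire protocol encodes its integers in network byte order (big-endian), while most CPUs today are little-endian, so a naive broker byte-swaps every field on the way in and out. A small Python illustration of the mismatch:

```python
import struct

offset = 123456789  # e.g. some 64-bit offset field on the wire

big = struct.pack(">q", offset)     # network/big-endian, as in the Kafka protocol
little = struct.pack("<q", offset)  # native order on a little-endian CPU

print(big.hex())     # 00000000075bcd15
print(little.hex())  # 15cd5b0700000000: same value, reversed bytes

# Reading a big-endian wire value on a little-endian machine forces a swap;
# minimizing layers of this kind of translation was part of the zero-copy goal.
assert struct.unpack(">q", big)[0] == offset
```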
Speaker 2:Okay, I feel like there's a lot of stuff to unpack there, and I'll already tell you that I'm a machine learning engineer, so I feel like this is a bit lower level than what I'm used to. So I am going to ask a lot of questions, and I'll ask you to bear with me. But it sounds very interesting as well, the zero-copy stuff. And if you think there's still space to push the envelope even more, is this something that you're still interested in, even as a company, at Red Panda? Is there a research part that's trying to see how far we can still go with this?
Speaker 1:Yeah, it's a great question. I think: not right now. And the reason is, where the ML folks and some of these people sit, where people are trying to unify their data processing stack, is really more on the throughput front than on the latency front. And so is there money to be made in finance and FinServ and AdTech and others by continuing to push the envelope? Yes, for sure. But today we're still the fastest thing that exists in the world, and so we don't see real market pressure on becoming even faster, because we're already the fastest.
Speaker 1:And I think what we did well, not to toot my own horn, was really to map the hardware as a function of message passing, which is like, okay, if you actually look at hardware, it is all asynchronous and it's all message passing.

Speaker 1:The only thing that is synchronous is the mental model of the programmer, because the kernel suspends or blocks the thread, or suspends the fiber or whatever, so that it can go and do useful work.

Speaker 1:And so even if your mental model is a synchronous model, the execution is always asynchronous. And at the very core of Red Panda, it is all asynchronous, with explicit accounting, static memory allocation; there's a bunch of things. And so if we were pushed to keep lowering the latency envelope, or, I guess, improve the latency mark even further, I think the architecture is there for us to extend that. But where the world is today, it's really, I think, around pushing cost efficiencies with object storage systems. And so our upcoming release is probably going to be about reducing costs for these ML-style, AI-style workloads, where it's like, hey, I actually have a bunch of data, right? So it's a throughput thing, and I want to leverage more cloud-native technology. So even though there's room on the latency side, I think, from a market perspective, there's more money to be captured by the company if we optimize on the throughput side of the house.
Speaker 2:Yeah, I see. And you also mentioned throughput and latency, right? For people that are not as familiar with all these terms, or the differences between them, how would you explain them to the regular Joe, let's say?
Speaker 1:Yeah, I like to joke that latency is the sum of all your bad decisions. But latency is the time that it takes to get from point A to point B. And so, with an RPC for example, there are multiple parts to that latency.
Speaker 1:So it's like, hey, there's the latency of the full round trip. There's the latency of sending from the client to the server, there's the latency, obviously, of the server processing time, and then there's the latency of sending the response back from the server to the client. Those are the classic latency components, and people tend to group them into a full round-trip latency.
Speaker 2:That would be like end-to-end: you send the message and you consume the message.
Speaker 1:And what's the round-trip latency? That's what seems intuitive for most people that are programming, and so if you go on a website, you see the latency spectrum. What's fascinating about latency is that it's a rat hole where you can spend your lifetime, really, because you can optimize all parts of it.
Speaker 1:It's like, you know, time for the client to send the first byte, in case the client is doing any sort of batching or optimization or compression. There's the coalescing, there's the bouncing in here. Then what happens on the server side? Then time to first byte of the server. Do you stream your server? Do you chunk your data? Long story short, latency, for those listening, is the time that it takes to do something from point A to point B. Typically in computing systems, it's the time for a message to move from point A to point B. So that's latency. And then throughput, you can think of it as the capacity, like how much. An example: if you have a one-lane highway and every car is running at 70 miles an hour, that's your latency; it takes you one hour to traverse 70 miles. But you can have a five-lane highway: five times the throughput, and so you can have five cars running at 70 miles per hour. And so throughput tends to be, you can think of it as the size, like the space.

Speaker 2:Even though you're not getting faster cars, you get more cars across because you have wider lanes, basically.
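Putting rough numbers on the highway analogy (these are just the figures from the episode, nothing more):

```python
# Highway analogy: latency is the trip time; throughput is how many
# cars arrive per unit time.
distance_miles = 70
speed_mph = 70

latency_hours = distance_miles / speed_mph  # 1.0: one hour per trip, per car

one_lane_throughput = 1   # cars arriving together on a one-lane road
five_lane_throughput = 5  # five lanes: five cars arrive side by side

# Widening the road multiplies throughput but does nothing for latency:
print(latency_hours, one_lane_throughput, five_lane_throughput)
```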
Speaker 1:Yeah, basically. At a more technical level, I was going to dive into ideas of parallelism and concurrency, where concurrency is the structure and parallelism is the simultaneity; that's related to throughput. But before we dig into that, we could keep it there, at those latency and throughput analogies.

Speaker 2:Yeah, indeed. So thank you for that. And you also mentioned that for Red Panda, the ideas that you were experimenting with reduce latency, basically, right? But today, from what I understood of what you said earlier, the world cares more about throughput, in a way. So they are still related, but if you can just build more lanes and you cannot make the car faster, it's also okay. I guess, is that what you said before?
Speaker 1:Yeah, well, what I said is that it's like A builds on B, right? In other words, we do care a great deal about latency. Latency for us is a feature, but we are already the fastest thing that exists in the world for the Kafka API, and so we don't see market pressure there; we see a bigger opportunity for the company elsewhere. And this is perhaps the shift, for me at an individual level, from being a principal engineer to a CEO. As a CEO, you kind of have to do what makes money. At the end of the day, it's sort of a less idyllic form of building a company than when you start.

Speaker 1:You always want to sort of focus on the hardest technical challenge, and we did, right?
Speaker 1:We built our own consensus layer.
Speaker 1:We built, I think, one of the best multi-Raft implementations that exists in the world, with tons of optimizations; there's probably a paper we could write about how we made it fast, whether it's the literal election timeout or how you do transactions. There are so many technical things that we went and tackled first.
Speaker 1:Those were sort of the really fun challenges. But I think the shift for me is that, at the end of the day, the company has to produce profits, because that's what gives you the room to build a great culture. You can't have a great culture if the company is going to fail, right? And so ultimately, it all comes down to making sure that people are being successful in production. And what we're seeing is that, while we can make a certain part of the market really successful, the market keeps growing, both because use cases keep expanding and because people are building the future differently from how they were building it before. There is a bigger market opportunity for us to continue to chase in a different dimension, one that is not latency sensitive.
Speaker 2:I see. And maybe also, you touched on the change from an individual contributor to CEO, basically, right? One is very technical, and you've definitely demonstrated your technical expertise, but today you're a CEO. You even said in the beginning (I'm not sure if it was before or after we started recording) that you haven't coded in a while, right? To me these feel like two very different profiles. How did you navigate this change? And I think a lot of people also think that to progress professionally, you kind of end up in a less technical role. How do you feel about all these things: career progression, different profiles, the challenges, and whatnot?
Speaker 1:Yeah, I would encourage people to divorce these two ideas, right? Monetary attainment has been, I think, incorrectly attached to a managerial career, and then there are the things that make you happy, and technical attainment, and people have classically thought both can't be true. That's not necessarily the case. There is some level of it, in that some managers tend to make a little bit more money, but not significantly.
Speaker 1:Even when we do internal career ladders, the track maps senior ICs relative to managers, right? I don't have the exact terms, but you're going to have a manager be, like, an M3, and an engineer be, like, an E5, and they could be on the same pay scale altogether. The shift for me was really: I was already a CTO for a company. I had already been the lead engineer for hundreds-of-millions-of-dollars kinds of projects, and those were fun, and I still continue to enjoy that. But I wanted to try to be a CEO. I'm like, why not me? Why do you have to hire a business person? And by the way, it's not like the business part is particularly difficult.
Speaker 1:So look at sales, especially if you take an engineer's view. First-principles thinking, and that's something that Silicon Valley has really thrived on, is like: why is this hard? I just want to understand, right? And I think it's a lot of work.

Speaker 1:It's still difficult. I have huge empathy for the sales team, especially because I'm in a bunch of sales calls. But it's not hard intellectually; I think it's hard work. And the analogy I like to give people is that everyone in the world probably knows how to be healthy: you have to sleep well, you have to eat more vegetables, you have to exercise at least three times per week, you have to get your heart rate above 80 percent, whatever.

Speaker 1:The basics, everyone knows. And yet how many unhealthy people do you meet? They know, and they have the money and the means. So even if you reduce the world to the population that has the money, that can afford a gym membership, that can afford to eat well, very few people actually do it. And I think it's the same: there's a lot of art in sales as well, in nailing this down, and people that are very sophisticated at it are as skilled as a principal engineer. But, you know, understanding the basics is not hard.
Speaker 2:Yeah. But I do think it's a different set of skills, right? There's also the resiliency. And like you said, I do see a trend: people like you like to say, why not, and just kind of go for it and give it a try. And I do think there are a lot of people that sell themselves short before they even start, right? They have these preconceived ideas, like, yeah, that's not for me, that's too hard, I don't have the background, I don't have this, and they wait to be ready before they start. And I definitely see that. Well, in your case, I definitely don't see that.

Speaker 2:Right, I feel like you're more of a why-not person. You were experimenting with zero copy and streaming, and it's like, why not? Why can't we push the envelope? Why can't we try to make something out of this? I think we have something valuable here, let's give it a try, and if we're wrong, that's fine, right? Maybe we're wrong, but why not give it a try?
Speaker 1:Oh, and by the way, no one has really figured anything out. It's been fascinating to be a CEO, because you have a lot more empathy for other CEOs. You don't have a lot of people that you can talk to other than other CEOs, because you're not going to talk to people that work with you about your problems. But when you talk to other CEOs, they're like, yeah, we're all trying to figure this out; the market is so dynamic. And so I think that, for people in engineering roles, or more IC-focused roles, really, as an engineer, you still have no idea either.

Speaker 1:You're like, hey, I have an idea of where I need to go, and then you're going to go solve a bunch of little problems in between. It is exactly the same on the go-to-market side. No one really knows, but you set a goal. You're like, I need to be here, and you sort of solve all the little problems in between, whether it's we need to create more demand, or we need to invite users so they connect with each other and can help each other be successful, because in the long term that helps the company be successful. It is quite a similar creative path. The main thing that changes is the feedback loop. In tech, it's super short, right? When you're writing C++, if you compile, the thing is going to tell you you're wrong. With go-to-market, it's not like that. It's like, well, maybe this works, and then you have to wait six months. So there are differences, for sure, but I think the creative process and the problem-solving skills remain constant across both.
Speaker 2:And I do think there's a creativity aspect, right? Like, oh, we tried this, it didn't work; but what if you tried that, or tried this with these people, or tried this in a different way? That's how I see it, at least. But okay, cool. So we talked a bit around Red Panda without diving too much into it. You already mentioned the streaming and all these things, but for people that haven't heard of Red Panda yet, how would you describe it?
Speaker 1:So the way I like to think about it now is: the best way to build data-intensive applications. The world really settled on a protocol called the Kafka API for doing real-time messaging of sorts, whether you're connecting components or you're consuming from some database with a change data capture feed and pushing into Snowflake, et cetera. The Kafka protocol sort of became the lingua franca for streaming, similar to how S3, the protocol, not the implementation, became the way for people to talk to object storage, or how Postgres became the lingua franca of databases. Right? You just expect new databases to speak the Postgres protocol. Whether it's good or not ultimately doesn't matter.
Speaker 2:It is what it is right.
Speaker 1:Yeah, it's like TCP. Could you do better? Yes. Is anyone going to rewrite the world for a better TCP? No. I think people are going to work with it for a really long time. So anyway, I would say Red Panda is the best way to build data-intensive applications, and the reason for that is, if you need to move data reliably, it's actually really challenging, and that's what Red Panda specializes in. So for those of you in the audience that have any Kafka deployment, we should have a conversation. The easiest thing for us is to be a drop-in replacement for Apache Kafka or any other Kafka protocol implementation in the world.
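Drop-in here means the application code doesn't change: a stock Kafka client is simply pointed at a Red Panda broker address instead of a Kafka one. A minimal sketch with the kafka-python library (the address and topic name are placeholders):

```python
from kafka import KafkaProducer, KafkaConsumer

# The only thing that changes is the bootstrap address: an unmodified
# Kafka client pointed at a Red Panda broker instead of a Kafka broker.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("transactions", b'{"amount": 5.01}')
producer.flush()

consumer = KafkaConsumer(
    "transactions",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating when no new messages arrive
)
for message in consumer:
    print(message.offset, message.value)
```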
Speaker 1:That's kind of where we make the most money as a company, where we have the strongest fit. However, as the company evolved, we evolved in two major, technically differentiated ways. One is the connectivity piece. I think in May we bought the largest connectivity company for the Kafka API. If you need to move data from anywhere to everywhere, there's probably a plug-in. We have 270 connectors; I don't even know the exact number, to be honest. Whether you're listening on an MQTT protocol, or you need to push to Salesforce or to Snowflake or to SQL, or consume from Mongo and push to Postgres, or Redis Streams or whatever: almost anything you can think of, it's connected.

Speaker 1:And all of those have been worked on for around a decade by the company that we bought; Ash is the main contributor there. And then recently we announced this idea. Remember, at the beginning of the call, we started talking about, hey, how do you see the world being different, and where are Red Panda's opportunities? And so we are releasing a new storage engine that allows the developer to choose between latency and cost, or throughput and cost. If you want super ultra-low latency, there's probably no better storage system than Red Panda for that. And then if you want high latency, low cost...
Speaker 1:Because, think of it like application logs. You access them during an incident, and maybe you're having one incident a week if you have tons of customers, right? Or a couple of incidents per week where somebody has to log into a system. If you have thousands of customers, hundreds of thousands of developers, and millions of applications, something is going to go wrong somewhere, and so you're going to go into that system and check out the logs. But you're not really accessing them in real time. So that cost spectrum exists for developers, and I think Red Panda gives people the ability to dial it in. You're like, hey, I care about latency because I'm doing ad tech or trading or stock analytics, or I'm powering a prompt for something like OpenAI's GPT-4, where you're typing in, and if it were to give you an answer 12 hours later, it would be useless, right?
Speaker 2:But because it's interactive, it's useful.
Speaker 1:So that's the real-time front. And then on the analytical front, maybe you care about multiple seconds, and that's fine, because you're just shipping application logs. So I'll stop there. But what Red Panda is, in one sentence: it's the simplest way to build data-intensive apps. If you're moving a ton of data in your app, then Red Panda is a good base. We're not everything, but we're sort of ring zero, the smallest part of how you would start.

Speaker 2:But that part is the best one there is today.
Speaker 1:Yeah, it's the most fun, at least for us to be in and for me to continue to work on.
Speaker 2:Maybe a question as well, a bit of a side note, but why Red Panda? Why the name?

Speaker 1:Man, that one is... it's so interesting. So I've been a systems engineer for a long time, and when I started the project, I needed a code name. And I asked a bunch of friends, and for, I think, 80% of the friends that I asked, I had put a red panda in the list as an easter egg, because, like, people are not going to respond to this thing; we all get a bunch of spam, effectively. So I just sent it to friends and was like, hey, what do you think?

Speaker 1:And so I gave a list and I said vote, and they voted Red Panda to be the main name. And so then the project was named Red Panda, and the company was named Vectorized, for what I thought was at the time a cool nerdy thing, for vectorization instructions and so on. And no one in the world remembered what in the world Vectorized was. They're like, I don't know what this means.

Speaker 2:I don't know if it's your accent. Is it vectorized or vectorised, with a Z or...?

Speaker 1:And so we just ended up unifying the name to Red Panda, just like Mongo did with 10gen. No one knew what the hell 10gen was, but everyone knows what Mongo is. And so, similar thing for us: the product and the company had to unify, and everyone loved the little red panda.

Speaker 1:You know, the Super Mario Bros-inspired 8-bit character. It's cool, and it's one of the best things that we have in the company, the brand and the character. Really, for anyone listening, let us know; we can ship you a T-shirt in the US pretty easily. So there we go, it's very cute.
Speaker 2:Yeah, yeah, it's super cute. And if you go and research Red Panda a bit, you may get some cute actual red pandas as well, sprinkled in here and there. So really, really cool. So you mentioned data-intensive applications, and, for the people that are just listening, I also put the Red Panda landing page up here; it says: the unified streaming data platform. And we did hint a bit at streaming, real time, batch. How would you describe the space, how it is today and maybe how it was a bit before, to someone that is not as familiar with it?

Speaker 1:Yeah. So, classically, and I suppose this has changed relatively rapidly in college right now, but when I went to school, when you learn to, you know, write your own terminal or your own operating system or whatever, the way you interacted with these ideas of files and file handles... and I forget the last name of the person that invented the pipe; Doug, his last name is escaping me. Anyway, the idea is that you would do these things with files, right? So you have an unsorted list of strings, a classic first-program kind of course at a university: you have these unsorted strings on disk, and you need to sort them and unique them. And so you would cat the file, pipe the content into another program called sort, and that would give you that, and then you could pipe it into another program if you wanted more things, if you want to unique them, or dedupe them, or do a frequency count, or whatever it is, right?
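That pipeline is the classic cat strings.txt | sort | uniq -c, and the pipe idea is credited to Doug McIlroy of Unix fame. The same batch idea in Python, each stage consuming the complete output of the previous one:

```python
from collections import Counter

# Batch version of "cat strings.txt | sort | uniq -c":
# every stage runs over the complete output of the previous stage.
with open("strings.txt") as f:
    lines = [line.strip() for line in f]    # cat: read the whole file first

unique_sorted = sorted(set(lines))           # sort + uniq: order and dedupe
frequencies = Counter(lines)                 # uniq -c: per-string counts

for value in unique_sorted:
    print(frequencies[value], value)
```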
Speaker 1:And so we were taught to think of these ideas in discretized chunks, in stages. But the world doesn't happen like that, right? If you think about a sporting event, or credit card transactions, or buying food, the world doesn't happen in these neatly packaged horizons where, by the way, all of your consumers are going to come at 1 p.m. on Friday. That doesn't happen, right? There's a continuum of things. And so the reason batch was really invented is that it was easy to reason about. I think largely we were just educated to think about the world in batch; it's intuitive to the brain by now, I suppose.

Speaker 1:But it's like: to take a complex problem and process it, you divide it into chunks. The first chunk is you sort it, the second chunk is whatever. And so that's where batch really came from. And the best companies in the space, I think, would be Snowflake and Databricks; they are really good at these things. So there's this idea that you process things on a time horizon. For those of us that have written Spark jobs or SQL queries, you know that when you run your Snowflake queries, whether it's for cost or other reasons, you run them at some time horizon, like every hour, every so often. And so that was batch, classically: you take the world and you discretize it into time horizons. Top of the hour, bottom of the hour, six hours, 12 hours, 24, once a month, whatever.
Speaker 2:Like at the end of the quarter.
Speaker 1:How much money did we make, kind of thing. And so on the other end, you really have how the world works, which is this continuum, right? For as long as your business exists, business events will happen. For most companies, think about target.com: as long as there are stores that are open, for target.com or walmart.com or whatever, there are just always going to be transactions, and they happen in whatever random distribution users show up in. And so that's where streaming has been really, really strong: it's a way of modeling the world that is more natural, I think.
Speaker 1:And so now, where they're finally converging is with platforms like Red Panda, and there are others as well, where you're removing that distinction of having to process things in groups of events, and you can say to the engineer who's building applications: hey, you can have this with one primitive, which is a timer callback. That's it. That's the keystone that makes the clock tick. You can process both bounded and unbounded events. An unbounded stream is one where you never know when the credit card transactions of, let's say, target.com are going to end; basically, they end when the business ends. But you still have to produce useful results.

Speaker 1:You cannot wait for the computation to finish, because the computation never finishes, and so you have this incrementalism built into the thing. And then batch is really easy, because you just say, hey, sum up all of the events before the bottom of this hour, and that's your first report. So, long story: batch is about processing groups of events, and streaming has classically been about processing one event at a time. And I think the world is merging, where you can have the same framework giving you ways of processing both groups of events and real-time events, so to speak. So anyway, I'll pause there.
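A toy sketch of that single primitive: state updated one event at a time, plus a timer callback that flushes an incremental report each window, so the same code serves both the streaming view and the bottom-of-the-hour batch view. The event shape and window size here are invented, and a real engine would arm an actual timer rather than piggybacking on event arrival:

```python
import time
from collections import defaultdict

class RunningTotals:
    """Consume an unbounded stream one event at a time; a timer callback
    periodically emits the batch-style summary without ever stopping."""

    def __init__(self, window_seconds: float = 3600.0):
        self.window_seconds = window_seconds
        self.next_flush = time.time() + window_seconds
        self.totals = defaultdict(float)

    def on_event(self, event: dict) -> None:
        # Streaming: update state per event; never wait for the stream to end.
        self.totals[event["store"]] += event["amount"]
        if time.time() >= self.next_flush:  # stand-in for a real timer
            self.on_timer()

    def on_timer(self) -> None:
        # Batch view: "sum up all of the events before the bottom of this hour".
        print("window report:", dict(self.totals))
        self.totals.clear()
        self.next_flush += self.window_seconds

agg = RunningTotals(window_seconds=5.0)
events = [{"store": "brooklyn", "amount": 5.0}, {"store": "manhattan", "amount": 12.0}]
for event in events:  # in production this loop never terminates
    agg.on_event(event)
```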
Speaker 2:Yeah, that's what I would define it as too. And I think in the data and AI world, you hear a lot about, yeah, ETL and all these things, right?

Speaker 2:On the real-time side, and maybe this is more my own experience, when you're going to deploy machine learning models, a lot of times, and that's what I see on the cloud providers and whatnot, it's really usually behind a REST API, right? Which is a different approach. They say real time, or I say real time, but I guess it's more of an on-request basis, right? It's more passive: you don't have a schedule to trigger things; it just waits for a request to do something. I have heard of, or I have seen, but not as much, trying to combine streaming data with machine learning and all these things. Do you think this is a good case for it? Or if not, what would be the best use case? When would you stay away from streams, or further away from batches?
Speaker 1:Well, what I can share is that Red Panda today powers some of the world's largest AI companies. We can't talk about their names, unfortunately, but they're in the top five AI companies in the world. And what those companies are using Red Panda for, or streaming technologies for, is a lot of the fact recording, and actually as a way of providing a unified access layer to data.
Speaker 1:And so sometimes you need to combine things in real time, and sometimes you need to reprocess the data, and as long as you can do it cost-effectively, it makes operational sense to have fewer moving pieces and fewer access APIs around it. To make it concrete, let's take the example of a fraudulent credit card transaction, a classic example where a lot of sophisticated machine learning and AI models are being used. That is a core use case of streaming tech. When you transact on your Visa terminal at a point of sale, if you're in Brooklyn, New York, and you go to a bodega and buy a piece of bread, that happens instantaneously, right? But behind the scenes it's all powered by technologies that look like Red Panda. And what you do is you merge two streams. You merge, hey, what's the vendor ID, and, is that person actually located in New York, so that you don't expect a Nigerian-prince transaction to happen while you're living in New York. All of these environmental things have to happen in real time, and you have to combine those things. So even though, to the application developer, it looks like a request-response, behind the scenes, when you get sophisticated about your data modeling, streaming is a really easy way to continue to extend that and make it better and better over time without fundamentally changing your access patterns.
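A toy version of that two-stream merge: each swipe is enriched against the latest state from vendor and cardholder streams before any model scores it. All the names and the mismatch rule here are invented for illustration:

```python
# State tables kept up to date by two other streams (vendor facts, profiles).
vendor_regions = {"bodega-42": "New York"}
cardholder_homes = {"card-123": "New York"}

def enrich(txn: dict) -> dict:
    """Merge environmental facts into the raw swipe before scoring."""
    txn = dict(txn)
    txn["vendor_region"] = vendor_regions.get(txn["vendor_id"])
    txn["home_region"] = cardholder_homes.get(txn["card_id"])
    txn["region_mismatch"] = txn["vendor_region"] != txn["home_region"]
    return txn

def looks_fraudulent(txn: dict) -> bool:
    # Stand-in for the real model; a mismatch alone just raises suspicion.
    return txn["region_mismatch"]

swipe = {"card_id": "card-123", "vendor_id": "bodega-42", "amount": 3.50}
print(looks_fraudulent(enrich(swipe)))  # False: the bodega purchase looks local
```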
Speaker 1:So when you ask about ML or AI use cases, that's one very specific example that tends to be super, super popular. We also power some of the world's largest financial companies, including market makers. You can think of a market maker as a trillion-dollar fund; if you move a trillion dollars (kind of an arbitrary cutoff, but roughly), then you can be considered one of these market makers. What they do, technically, is make illiquid assets liquid, like gold and silver. You can't ship a ton of gold; it would be absurd to have a ton of gold shipped to your house. But you can trade a ton of gold online. Long story: for those kinds of companies, even though the request-response from your trading terminal is, hey, you click a button, I want to buy whatever, $500 million worth of gold, behind the scenes that's all powered by infrastructure that looks like Red Panda.
Speaker 2:Yeah, I mean, it's a real cornerstone of the world, right? Moving data and stuff. You get more and more data.
Speaker 1:The more data, yeah, the harder it gets, I guess. Moving data reliably is surprisingly difficult.
Speaker 2:Yeah.
Speaker 1:Or, you know, detail-oriented, I suppose. Yeah.
Speaker 2:And maybe one more thing. I heard that the true ROI of AI in the enterprise is at least two years out. Is this a statement that you agree with, in terms of AI in the enterprise? And if so, why would you stand behind it?
Speaker 1:I think the world is still stuck in RFCs and design documents, and very few people are truly making money, generating revenue streams, from AI. It's probably easier on the commercial side, with consumers, than in the enterprise, and I don't have visibility into that, so obviously take everything I say with a large grain of salt: Alex only talks to people at large enterprises.

Speaker 1:For those people, some of the best examples are things like 10-K document summarization, or legal work that a lawyer would charge $800 an hour for now taking some of these models three minutes to think through, right? Those are the best use cases from the most advanced, super forward-leaning companies in the space. But I haven't seen it by and large. Personally, I talk to probably hundreds of large enterprises every year, and the company overall talks to thousands, maybe reaching 100,000 or so a year, but I'd say thousands of large enterprises. And we just don't see it. What we see is that people are excited about it, but they haven't graduated use cases to production.
Speaker 1:And if you look at the go-to-market for enterprises: you build the tech (which, by the way, is not built; it's being prototyped right now), then you ship the technology to production, and then you have a six-to-nine-month sales cycle. So right there it's already, whatever, 18 months. So I think two years sounds about right. People have the right use cases, they feel good about the technology; it's going to take them six months to get to a production-stable build, and then it takes six to nine months, or maybe 12, depending on the enterprise sector you're in, to go and convert and start making money on it. So I think two years seems pretty reasonable for a large part of the market to actually make revenue on these AI models.
Speaker 1:But where I see it today, most people are just stuck in design documents, and the reason is the enterprise envelope: authentication, authorization, access controls, audit logging, understandability, predictability, tracing of the model. They just aren't there for most companies, and so that's really the long pole. Plus, legal has to approve whatever flavor of model that counsel happens to have sophistication in. And what's interesting is, I've seen all the models be approved by large companies, except, I think, Qwen and some of the Chinese models, less so in the West. But largely all of the other models are approved, and yet every company says you can only use one of them, and I don't understand why.
Speaker 2:Do you think it's more fear? Because it's not proven, not tested, and so people are extra cautious with these things?
Speaker 1:I think so. I think people don't trust it.
Speaker 2:And again, I was going to ask what's going to change, and why two years, but I think you already explained that. To summarize what I understood from what you said: we're in an exciting time, people are excited about it, people are starting to tinker with these things. So the two years is about the feedback loop, and also about people becoming more confident with these things. Is that a good paraphrase?
Speaker 1:Plus, I also think things are going to get better. I think the shift away from text generation, image generation, sound generation, video generation is going to move into agentic workloads. And let me define agentic; I'm not an AI or ML person, so people on this podcast are probably going to correct me, but I would define it as things that do an end-to-end task. That is basically how I see it. In my mind, agentic is: you take a bunch of these things and you chain them together, and they actually accomplish an end-to-end task. And that's when things get really fun and interesting from an end-user perspective. An example: my EA will help us book a calendar entry, and that would be a great example for an agent to come in and finish the end-to-end task.
Speaker 1:It's like, this podcast was scheduled on a Thursday, and we want it before noon, because whatever. And so I could just say, hey, can you get these two agents to go and schedule it in this block of time, and they'll go figure it out, even if there are dynamic changes on the calendar. I think those end-to-end tasks... so yes, because the tech is there, and the security is quickly getting pretty good, I actually think there is a lot of money and a lot of great companies building in this space. But also, things will simply get more interesting, in ways that I cannot predict.
Speaker 2:Yeah, true, and I think it moves very quickly, even for people in the space. If we had tried to predict five years ago that we would be here today, I think most people wouldn't have guessed it. Maybe another thing, and I'm looking here at the notes: open source versus enterprise. There are a lot of enterprises investing heavily in this, but open source is also catching up in some ways, right? How do you see the balance between the two?
Speaker 1:Do you mean from a model perspective, or?
Speaker 1:There's just so much to it. I would say that my thinking on it (and it's no longer, I think, as insightful as it was earlier in the year) is that there's no real breakout model. I mean, even OpenAI has sort of come out and said that the way to build models in the future, like their new reasoning model o1, is different from their previous models. And the whole thing here is that Meta in particular crushed it with their open source models; Llama 3, and Mistral followed up, and there are a bunch of other models that are super optimized. And then NVIDIA takes Llama 3 and tweaks it and produces a new model that is really good at something else.
Speaker 1:And so the takeaway is that the quality of output of the best closed source models is on par with the quality of output of the best open source models. Well, a few percentage points here and there, but one or two percent; you're not getting a categorical improvement. In other words, you're not going from 10% to 80%; you're going from 80% to 83%, or from 84% to 89%. It's fine. And when you fine-tune, and this is really where the enterprise opportunity is for the businesses that we talk to, your private data set will always be better than a general model. If you think about what some of these large-scale model companies are doing, they're hiring humans to filter and train this thing with very specific information, to generate, basically, quality training data. And if you're a business, if you're, I don't know, a credit line for a particular segment of the market, or you're building airplanes, or you're building this or that: when you take one of these open source models and you fine-tune it with your specific data, you get better quality of responses, most of the time, than you would with the generic kind.
Speaker 1:And so that's where I see it. When you think about both closed source and open source models, at least on the inference side, it's like: hey, the data is the thing that is going to be the huge differentiator. And why do I think that way? Where we see it is, we sort of facilitate this movement and inference; we're not in the model space, we're really on the runtime part of it. It's: how do you feed these models this real-time information, so that they can predict and be useful, so that you can make money at the end of the day with these things? Because otherwise it's just a bunch of wasted CPU clock cycles.
Speaker 2:And maybe also, you already outlined the role of Red Panda in this story. Is there also other AI features or something? Because I do see a lot of products now embedding AI somehow. Is this something that Red Panda has done, will do, or is currently doing?
Speaker 1:Yeah, so our AI story is, I think, different and pretty strong relative to where we sit in the market.
Speaker 1:The first one is that we allow people to deploy these state-of-the-art models from Hugging Face, basically with a single line of code, and do real-time inferencing.
Speaker 1:And so, for an example, let's just take the case of the credit card company that worked with us. They're shipping these things, and they're like, hey, I want to use Llama 3 for PII removal, right? So, I'm going to ship a sample of these logs to a different topic for a partner to consume, great, and I want to remove the private information from all of these transactions. And so where we sit, the thing that we've done is, okay, we have a computational framework that says: if you deploy a pipeline through Red Panda that has a model, we're going to automatically spin up a GPU-accelerated instance, we're going to download the model from Hugging Face, and we're going to load the configuration and then filter, so that the mental model for the engineer isn't about figuring out how to load the GPU or CUDA libraries or anything like that.
Speaker 2:The mental model is you have a YAML pipeline that says this is the input and this is the output.
Speaker 1:And I want to use this model, and all of that is handled and scaled: you have authentication, you have authorization, you have ACLs. That's the true value that we have delivered to the world, that sovereignty. And then we went and added API connectivity to, you know, OpenAI and Anthropic and Bedrock and Gemini and Pinecone and QuestDB and all of these things, and so you get to choose whether you're connecting to external APIs or running local inferencing.
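To make that concrete, here is a minimal sketch, in the Benthos-style YAML that Redpanda Connect descends from, of what such a pipeline could look like for the PII-removal case above. The topic names, the prompt, and the exact processor and field names are illustrative assumptions, not copied from Redpanda's docs; check the Redpanda Connect catalog for the processors actually available.

```yaml
# Minimal sketch of a Redpanda Connect style pipeline (assumed names):
# read raw transactions, ask a chat model to redact PII, and write the
# scrubbed records to a topic a partner can consume.
input:
  kafka:
    addresses: ["localhost:9092"]
    topics: ["transactions"]          # raw events, may contain PII
    consumer_group: "pii-scrubber"

pipeline:
  processors:
    # Hypothetical chat-completion processor; swap in a local-model
    # processor here if you want local inferencing instead of an API.
    - openai_chat_completion:
        api_key: "${OPENAI_API_KEY}"
        model: "gpt-4o"
        prompt: |
          Remove all personally identifiable information from the
          following record and return only the redacted text:
          ${! content() }

output:
  kafka:
    addresses: ["localhost:9092"]
    topic: "transactions-redacted"    # safe for the partner to consume
```

The point being made in the conversation is that everything around a file like this, GPU provisioning, model download, authentication and authorization, is handled by the platform rather than by the engineer.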
Speaker 1:That's kind of the extent of what we've done from a product standpoint. And then internally we use a ton of tools to get insight, whether it's product utilization or sales forecasting or documentation or things like that. That's kind of where we're using it. And then we have a running pilot today for engineers: is it Claude 3.5 that generates the best aid for experienced engineers, or is it OpenAI or whatever? And so we're playing with all the models to try and understand, as a CEO, how you build that company. Yeah, I think it will continue to change. I don't have an answer, but I think it'll look different. I just don't know what different means today.
Speaker 2:Okay, cool. So maybe to repeat what I understood from what you said: Red Panda has features in the model deployment part, so whether you're deploying a model from Hugging Face or something, or you have an external API, they make it easy to hook up. So whenever you have streaming data coming in, you can also hook up the ML inference in that streaming engine.
Speaker 1:Yeah, exactly. And if you want to pull up ai.redpanda.com, for people listening in or watching on YouTube as they see me fidget in my chair, they'll see what I mean by that. It's like three lines of YAML, for example, if you scroll to the bottom of ai.redpanda.com. If you want to pull it up.
Speaker 1:Yes, there we go. So yeah, if you scroll to the bottom, maybe that passed it, go up one more time, a little bit, that's it. That is a screen that says: hey, you're going to read from Red Panda, then you're going to run a Llama 3.1 prompt over everything in the articles topic, and it's going to summarize the articles for you. That's all the user would type in, and then you get things like access controls, OpenID Connect, centralized authentication, tracing, logging, auditing. It pipes into your SIEM system, whether you're using Splunk or whatever. Nobody cares about those things, and yet those things are the linchpin of getting your models into production, to actually be able to generate money; it's one of the things that nobody wants to think about.
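The on-screen example being described could look roughly like the sketch below. This is a hedged reconstruction from the conversation, not a copy of the ai.redpanda.com page, and the processor and field names are again assumptions rather than confirmed API.

```yaml
# Hedged reconstruction of the demo discussed above: read from the
# "articles" topic, have Llama 3.1 summarize each article, and write
# the summaries back out. All names are illustrative assumptions.
input:
  kafka:
    addresses: ["localhost:9092"]
    topics: ["articles"]

pipeline:
  processors:
    - ollama_chat:                    # assumed local-inference processor
        model: "llama3.1"
        prompt: "Summarize the following article: ${! content() }"

output:
  kafka:
    addresses: ["localhost:9092"]
    topic: "article-summaries"
```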
Speaker 2:Yeah, it's very cool. So again, if it wasn't clear for the people listening: first, if you're listening, get on YouTube, check it out, and also go to ai.redpanda.com. It's basically a YAML file, so very, very simple, and you can just specify there the system prompt and everything and, as it says here at the top, yep, it's that simple. So really cool. Also, you mentioned data movement, but Red Panda doesn't stop there, right? It also has the transformation bit as well, like an ETL, let's say. Another thing, you also mentioned open source AI and whatnot. Red Panda is also open source, or am I mistaken?
Speaker 1:So Red Panda the platform is a pretty big code base, multiple repos. You have Red Panda, the storage platform, and then if you look at Connect as an example, Connect is a collection of 270 connectors that move data in and out, so you can think of it as a Fivetran or Airbyte kind of alternative. It's written in Go, not Python, so it's super fast and lightweight. It's like a 100-megabyte binary, so think of it like loading an Instagram landing page; that's basically how long it takes. So that's the first component. About 99 percent, somewhere around there, are permissively licensed Apache 2 components. So really, by frequency count, the most popular sources and sinks are Apache 2 licensed.
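As a rough illustration of Connect as a plain data-movement tool, independent of any AI processors, a pipeline that lands a topic in S3 might look like the sketch below. The bucket, topic, and field names follow the Benthos lineage Connect is built on and should be treated as assumptions.

```yaml
# Sketch of Connect as a Fivetran/Airbyte-style mover (assumed names):
# stream events from a Kafka-compatible broker into S3 objects.
input:
  kafka:
    addresses: ["localhost:9092"]
    topics: ["events"]

output:
  aws_s3:
    region: "us-east-1"                           # hypothetical values
    bucket: "my-data-lake"
    path: "events/${! timestamp_unix_nano() }.json"
```

In recent versions a config like this can be run via the rpk CLI, something like rpk connect run pipeline.yaml, though the exact invocation is worth verifying against the current docs.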
Speaker 1:Then, on the storage engine front, the main Red Panda repo is source available, and then it turns into Apache 2 in four years, so that repo right there. The reason is we basically wanted to prevent the hyperscale clouds, not to single out any particular cloud, from not allowing us to run a company, and so the core engine is source available. People can see it; even the enterprise bits that you still need a license for are still source available. So that's another point.
Speaker 1:And then we have a UI, which is the most popular UI for the Kafka protocol. It's called Red Panda Console; it's a different repo, it's a long story. By and large, we have a tremendous amount of code that is open source, and then we have other pieces there basically to make sure that we can build a company. At the end of the day, you have to pay engineers and people, and cloud hosting is expensive, and so this is what we felt was a good balance for us.
Speaker 2:That's cool, yeah. I don't know if it was this year or last year, there was the Terraform stuff. There's also Elastic, which became open source again. It's just a tricky story, and nowadays there's the whole WordPress stuff booming on the news feeds.
Speaker 2:So yeah, it can be tricky, I feel. It's hard. We're nearing the end of the time here, but maybe, do you have anything else that you would like to say? Ah, one last thing I remember now. You mentioned that Red Panda is open source and stuff, but not all of it, so there is an enterprise side. You also have a cloud offering, right, which I guess is like everything's taken care of for you: the cloud, the infrastructure and all these things. Do you want to tell a bit about it, get people excited about it, why people should join this or who should join this? Is there anyone that shouldn't move away from Kafka, or should everyone just rush to Red Panda?
Speaker 1:That's the goal. I mean, I think on many levels, we have some of the world's largest companies running Red Panda for mission-critical workloads. I think that's where we specialize. If you really care about safety, about performance, about not losing your data, things like that, I think Red Panda is a really great product. But where our cloud shines is if you don't want to manage it yourself. So if you want a multi-cloud deployment, if you want it in Azure, Amazon or Google, it's super easy to get started. And I wonder, actually, if you've tried the free trial; we should do this live. So go to start free, and then go to the serverless offering. Serverless cloud, I guess.
Speaker 2:So go to redpanda.com/try.
Speaker 1:So we can count the number of seconds that it would take you to spin up a Red Panda. You can sign in with Google or GitHub; you have to accept the terms at the top. Oh, there we go, just count the number of seconds. Okay, you just click, and we can count live: one, two, three, four, five, six, seven, eight, nine, ten, and we get the cute logo.
Speaker 2:Oh, there you go.
Speaker 1:You now have a cluster that is globally deployed, and it walks you through. So it's, let's say, 15 to 20 seconds to a globally deployed cluster. And if you really want the best developer experience, I think that's what we try to do with the cloud. The alpha for us is creating the best developer experience in the world.
Speaker 2:Wow, this is really cool. And yeah, like you said, Kafka is the standard, for good or bad, right? And you have a drop-in replacement, that's what you said earlier. So no one really has an excuse not to move to Red Panda; it's really easy to give it a try as well. I'll definitely give it a try. So yeah, really cool, really, really cool. Thanks for chatting with me. I know you're a busy, busy person, so I'm also trying to be very mindful of your time here. But is there something else that you want to say before we call it a pod?
Speaker 1:No, I think that's it. All right, thanks a lot.