304: It’s Chile Up Here in The Cloud!

Episode 304 May 22, 2025 01:16:54
tcp.fm

Hosted By

Jonathan Baker, Justin Brodley, Matthew Kohn, Ryan Lucas

Show Notes

Welcome to episode 304 of The Cloud Pod – where the forecast is always cloudy! Justin, Ryan and Matt are in the house tonight to bring you all the latest and greatest in Cloud and AI news, including AWS’s new Chilean region, the ongoing tug of war between OpenAI and Microsoft, and even some K8s updates – plus an aftershow. Let’s get started!

Titles we almost went with this week:

Follow Up 

01:53 DOJ’s extreme proposals will hurt consumers and America’s tech leadership 

AI – Or How ML Makes Money 

09:20 OpenAI Expands Leadership with Fidji Simo 

OpenAI Hires Instacart CEO Simo For Major Leadership Role 

11:43 Introducing OpenAI for Countries  

Introducing Data Residency in Asia

13:42 Justin – “They are supposed to be in other countries…but they could be built in the US on the Stargate infrastructure for other countries as well – that’s a possible scenario.” 

14:10 Microsoft and OpenAI may be renegotiating their partnership

14:48 Matt – “It’s amazing to me that Microsoft wants to put all of their eggs in the OpenAI basket.” 

 

Cloud Tools 

17:03 Terraform AWS provider tops 4 billion downloads, 6.0 now in public beta  

21:14 Justin – “You’re going to want to make sure you don’t have secrets in the user data, because this will not be hashed in the state file – they’ll now be in plain text in Terraform plan and Terraform apply diff.” 

AWS

23:43 In the works – AWS South America (Chile) Region

24:55 Introducing Amazon Q Developer in Amazon OpenSearch Service

25:40 Ryan – “This is just adding natural text descriptions to the product; but couldn’t it just be a part of OpenSearch?”  

GCP

27:36 Kubernetes 1.33 is available on GKE!

29:58 Justin – “I do find it funny that it’s taken this long to get pod resizing. To be able to change the CPU and memory requests assigned to containers that are in a running pod seems like something that would have been needed a while ago.”

33:22 Evaluate your gen media models on Vertex AI

Azure

Just so everyone is aware – Matt is making us do this, so here goes nothing…

34:56 Build Predictions

46:46 Microsoft’s Virtual Datacenter Tour opens a door to the cloud

49:50 Empowering multi-agent apps with the open Agent2Agent (A2A) protocol

50:18 Unlock seamless data management with Azure Storage Actions—now generally available

54:32 Matt – “In AWS terms a storage account is an S3 bucket – so each bucket you might want different things to happen in. And then in Azure, because they don’t really understand the cloud still, you can say this is one zone – versus multi zone versus – replicated to DR multi zone – versus replicate to DR single zone. And each of those has to be done at the storage account, AKA S3 bucket level, not the container level.”

1:00:59 Unlock what’s next: Microsoft at Red Hat Summit 2025

1:03:48 Announcing new fine-tuning models and techniques in Azure AI Foundry

1:06:19 Ryan – “It’s a continuance of the trend of more and more customization of these large language models. At the beginning, everyone was training their own bespoke models, but now with RAGs and RFTs and a whole bunch of grounding you can really tailor your existing model to your workload.” 

After Show 

1:07:22 Linux to end support for 1989’s hottest chip, the 486, with next release – Ars Technica 

Closing

And that is the week in the cloud! Visit our website, the home of the Cloud Pod, where you can join our newsletter and Slack team, send feedback, or ask questions at thecloudpod.net – or tweet at us with the hashtag #theCloudPod


Episode Transcript

[00:00:00] Speaker A: Foreign. [00:00:07] Speaker B: To the Cloud Pod where the forecast is always cloudy, we talk weekly about all things AWS, GCP and Azure. [00:00:14] Speaker C: We are your hosts, Justin, Jonathan, Ryan and Matthew. [00:00:18] Speaker A: Episode 304 recorded for May 13, 2025. It's chilly up here in the cloud. Good evening Matt and Ryan. How you doing? [00:00:28] Speaker C: Doing well. [00:00:29] Speaker A: Yeah. Well, Tuesday. Yeah, it is Tuesday. It's always Tuesday here at the Cloud Pod, funny enough, but the, you know, the reality is we published our 300th episode blog post last week written by our show note writer Heather. So you get her perspective of trying to make sense of all the craziness we talk about here on the show. So do check that out on our website if you're interested, to see what Heather had to say. It was very nice writing. I want to thank her very much for doing that as I had no time to write a 300th episode blog post and neither did you two. No, Jonathan's out. So I was the best thing I do. I like I'm going to outsource this to somebody else. So I appreciate that quite a bit. But also, you know, I would like to call out Datadog who sent us a lovely, a lovely bottle of whiskey commemorating 300 episodes. I'm sure they probably talked to me about sales but you know, I still appreciate the gesture. [00:01:24] Speaker D: Hey, where's mine At Ryan's? [00:01:26] Speaker A: Well, I'm gonna save this one for when we get together so that we can all enjoy it together. So I mean, I guess Ryan and I are gonna have to go to New Jersey because. [00:01:32] Speaker D: No, no, I'll come to you guys. You're better off. [00:01:35] Speaker A: I mean there are three of us here versus one of you there but you know, so definitely. [00:01:40] Speaker D: But I choose like the cloud conference. We'll all meet up there. 
[00:01:43] Speaker A: Yeah, but yeah, we appreciate that from Datadog, despite the fact that we sort of sometimes give them crap about their over complicated pricing model. But it's okay. I mean they can still sponsor the Cloud Pod if they'd like to someday, so we would gladly take their money and then I could even, I could even monitor the Cloud Pod with Datadog if they'd like. If they wanted to offer free because I'm not gonna pay for that. Sorry. But anyways, let's get into follow ups this week. So a few weeks ago we talked about DOJ's extreme proposal. Oh, sorry. We heard about Google's antitrust case and basically the DOJ just wrapped up their two week remediation trial and Google was not happy about it. So every day they posted multiple tweets about how wrong the DOJ was and so they've summarized it all up into a lovely article they posted to their blog which makes them look so good. So, so good. So basically they have proposed many remedies and they say that these remedies are going to only hurt consumers and America's tech leadership in the world. So a few of the things they said first of all DOJ's proposal to ignore. Oh sorry. Ignores the immense competition across the industry. So basically one of the DOJ's claims is that you know, they're dominant in search, they're dominant in ads and that you know, this is a monopoly in the making and all these things. And so basically Google saying no we're not. The Department of Justice ignored how intense competition has transformed the industry. Well funded services like ChatGPT, Grok, DeepSeek, Perplexity and Meta AI are all rapidly gaining users and distribution, adding innovations at a breakneck pace. Evidence at the trial showed OpenAI believes it has what it needs to win and then their ability to enter into promotional agreements isn't holding back this new generation of competition. Apple recently chose to feature ChatGPT in Apple Intelligence while Motorola has already integrated Perplexity. I mean who uses Motorola? 
Is that an Android phone these days I assume. Yeah, I haven't heard about Motorola in a long time. And then Microsoft Copilot apparently ended up in new Razer devices which I also didn't know existed. This is the problem being in the Apple ecosystem. I ignore all other phones same. Google goes on to say DOJ's proposals would leave consumers with worse experiences and fewer choices. They point out Apple's SVP of Services, Eddy Cue, said Apple chooses to feature Google because it's the best search engine that it keeps improving with new innovations like AI overviews. In contrast, the Department of Justice proposals would make it harder for people to get search engines they prefer which I mean it's not that hard to change your search engine and I will admit that I've tried to use like DuckDuckGo and things like that and I just go back to Google because when you have all that stuff there you sort of feel like you're getting lesser results. So I mean they are so good that it's hard to compete with Google and so I do kind of agree with you Jay on this one a little bit. Sorry. Google, they also say that will hurt widely used browsers because they send a lot of money to Mozilla to be the preferred search engine of the Firefox browser. They're saying mandating data disclosures would threaten America's privacy. And this is because privacy expert Dr. Chris Culnane testified that DOJ proposals demand even more data than Europe's Digital Markets Act, threatening to reveal people's personal information and behavior and threatening widespread privacy breaches. This is basically making their search data and intelligence about what you're searching for available to the world. They say that's a big risk and it's better if they can just sell it to you through our closed marketplace of Google Ads, which is crazy. And he said even a witness from Microsoft, the company that stands to gain the most from the Department of Justice proposals, admitted that privacy concerns are not made up. 
And then they say de facto divestiture would hold back innovation at a critical juncture and basically saying that, you know, AI and everything around with that and the amount of money that they spend on R and D, you know, allows them to be competitive in the global markets. And Chicago economist Dr. Kevin Murphy said that forced sharing of data and intellectual property would reduce incentives for rivals to innovate, basically shortening America's dominance in technology. And then finally they're saying divesting Chrome would break it and many other things which I do agree would definitely make my life on Google Cloud much harder without Chrome, as a lot of their IAM stuff is tightly integrated. And they said basically you'd make it a shadow of the current Chrome and that the browser likely become insecure and obsolete and replaced by others, which is maybe what the Department of Justice wants. So there you go. [00:06:02] Speaker D: The Chromium project is used on, across all web browsers at this point, or pretty much all web browsers. So, like, I don't feel like that's going anywhere anytime soon. There's too many other people that will jump in and help, you know, move it to the next level forward and even if Google has to divest it. [00:06:19] Speaker A: Yeah, well, you mentioned earlier before the show started that you just learned last year at Build, they announced an Edge browser for enterprise, which I didn't know about either. So, you know, that's built on Chromium. Edge is Chromium because Internet Explorer screwed it up so badly. So, you know, here we are. Everyone's using Chromium as the basis of their browser. [00:06:39] Speaker C: Yeah, I think it's that, that sort of base that I don't, I don't necessarily agree that we'd get a shadow version of Chrome if they divested. And I'm, you know, I've been sort of anti Chrome integration into all things Google Cloud just because it, it feels very antitrusty to me. 
Like where it's like my search engine, my cloud provider and my local browser all have to talk to each other for this ecosystem to work. It's. And then if you're not in that, like, how do you, what do you do? And so there's, there's access rules in Google Cloud that you can only use if you're using the Chrome browser. I don't, and I'm not really a big fan of that. So, you know, like, I kind of get where they're going. [00:07:19] Speaker D: Okay, Mr. Firefox. [00:07:20] Speaker C: Yeah, exactly. Yeah. [00:07:24] Speaker D: No, I mean, it does make sense, you know, and that having that tight, you know, it would be no different than if Microsoft said you had to use Edge, like the EU would be up in arms about that or Azure Console. So I kind of get where you're coming from too. It's like, can't force people into a platform like you're doing. There's sure businesses are one thing or another. Your security department over there says you have to use Chrome because they have their DLP or Edge because their DLP is integrated with the enterprise version. But for an average consumer, you know, it's. I don't think it's going to affect much. I guess the only thing I could think of is if you end up with forks like, you know, Redis and Valkey or OpenSearch and Elasticsearch where it ends up forking and then security updates don't make it into all the browsers, you know, and all the different forks at that point. But at that point it's up to whoever's maintaining the fork or the branch of a repo to make sure that everything gets merged in. [00:08:26] Speaker A: I don't even know where to go from there. But I agree with what you said. I just, you know, thinking through like, you know, we'll see what DOJ finally recommends as their, their remedy. But they were definitely, you know, Google definitely has been pushing back pretty hard that this is the wrong solutions to the problem. Although I don't think their solutions are much better. 
I don't think they're, they solve a lot of the heavy problems either. I think AI is a threat, but I do think Gemini is holding its own quite nicely in the AI threat that they're seeing. So I don't think it's a situation where, you know, Google doesn't have a greater than, you know, 50% chance of being, you know, the primary AI solution in the world. They invented the technology, the R&D behind it. I mean like they have a lot of the momentum. They were behind to announce it to the world with ChatGPT, but they've done a lot to catch up. All right, let's move on to AI, or how ML makes money. OpenAI is expanding their leadership team. They hired Fidji Simo. She was the Instacart CEO prior to this, and she'll be coming in as the CEO of Applications. Basically, this represents a major leadership restructuring at OpenAI. Sam will continue to oversee research and infrastructure teams that are core to the company's AI development while leaving the rest of the company to Simo. So basically she's just a CEO, but they're giving her a CEO title over applications. One of the key areas Simo will focus on is managing executives. Under Altman, turf wars festered and sometimes key decisions were delayed after receiving requests for computing or bigger headcounts. And this is an area that she apparently excels at as well, as she is very familiar with e-commerce and commerce transactions from being at Instacart and other e-commerce companies where she will help grow probably the revenue side of the business quite nicely at OpenAI. We'll see where this goes from here, but look forward to seeing something new potentially coming out of this with costs and changes and how AI is structured financially. For sale? [00:10:25] Speaker D: You think it's for sale or for. [00:10:27] Speaker A: I mean, just how they sell it? [00:10:28] Speaker D: Ipo. [00:10:29] Speaker A: Sorry? I mean, I don't think you can IPO as a nonprofit. Right. [00:10:33] Speaker D: Oh, that's true. 
Yeah, I forgot about that. [00:10:35] Speaker C: Yeah, it's an interesting. Kind of like a. It's a mess. Right. Like they're, they're, they're trying to run like a, you know, a typical tech startup, but then because of that, you know, nonprofit status, they don't have IPO as a target. So, you know, how do they, how do they sort of exist in that ecosystem? Yeah, so it's, it's kind of interesting. [00:10:55] Speaker A: Yeah. How do you monetize. How do you, you know, I mean, that's what they want. The, the for profit entity to be restructured underneath the open, you know, the nonprofit entity. And there's a way they're calling it a for profit public benefit corporation. And then there's a way to get some type of money out of it so that they can basically liquidate all those shares and options. I'm sure those people have, because if they can't do it, they're all going to run to Claude or Google or everybody else. And so they have to figure a way to monetize the investment. People have made into the company other than, you know, Microsoft, who just sells OpenAI services at a markup or, you know, or even the same price but with a discount from OpenAI. All right. OpenAI is also releasing two features for countries. First is OpenAI for countries and the second is a data residency for Asia countries. So basically they announced OpenAI for countries, a new initiative within the Stargate project. Through formalized infrastructure collaborations and in coordination with the US government, OpenAI will partner with countries to help build in-country data center capacity, provide customized ChatGPT to citizens, continue evolving security and safety controls for AI models, and together raise and deploy a national startup fund. I'm a little concerned that they purposely called out in coordination with the US government. Doesn't feel like spyware at all. Like we're going to go, the US government's going to help ChatGPT or OpenAI go build data centers in other countries. 
Sort of a strange, like, why did you put that in the announcement? Like, even if that's what you're going to do, it still seems a little weird. So, you know, it sounds like there's some partnership with some of the countries for, you know, DOD type work, you know, where countries like Australia, et cetera. But still, it's a little bit ominous how that was written. Don't think that was what they intended. That's how I took it. And then the, the Asia sovereignty thing is announcing data residency for Japan, India, Singapore, South Korea for the ChatGPT Enterprise, ChatGPT Edu and the API platform that lets the organizations meet local data sovereignty requirements when using OpenAI products in their businesses and building new solutions with AI. [00:12:58] Speaker C: Okay, I completely misread that. Originally I thought there was collaboration with the US government to, you know, achieve data residency in Japan. Wait a minute. [00:13:09] Speaker A: Yeah, yeah, this doesn't make any sense. Yeah. What are you talking about? Yeah, no, so these, these are two announcements I kind of put together into one set of stories. Just because they were both foreign countries, I didn't think it made sense to talk about them separately. But yeah, I can see the confusion. So sorry about that. Ryan. [00:13:24] Speaker C: Is the OpenAI for countries foreign? Like I, I, I read that as just, you know, big government data centers and I mean, yeah, they are 51. [00:13:31] Speaker A: Where they're, they are supposed to be, you know, in other countries, basically, I believe. But yeah, I mean, they could be built in the US on the Stargate infrastructure for other countries as well. That's a possible scenario, you know, but I would, I don't know why you would have data sovereignty and then say we're going to have country Data in. [00:13:49] Speaker D: The U.S. so who knows, hosting the NBC's. What could possibly go wrong with that? [00:13:54] Speaker A: Yeah, nothing go wrong with that. 
Yeah, sure, sure. And then our final story again, a lot of OpenAI. TechCrunch is reporting that OpenAI is in tough negotiations with Microsoft. The AI startup is trying to restructure itself, of course, which we talked about, with its business arm into a for profit public benefit corporation. While its nonprofit board will remain in control. Microsoft is apparently the key holdout. After investing $13 billion, they need to approve the restructuring plan. The main issue is how much equity Microsoft will see in the for profit entity. And the companies are also apparently renegotiating their broader contract with Microsoft, offering to give up some of its equity in exchange for access to OpenAI tech developed after the current 2030 cutoff. These negotiations of course are complicated due to increasing competitive pressure between the two companies. [00:14:37] Speaker D: It's amazing to me that Microsoft wants to put all of their eggs in the OpenAI basket at this point. You know, and they did the same thing with Redis on the announcement where you know, they said we're just going to, we're going to back Redis on this, you know, on the license change. And here it kind of feels like they're doing the same because you can't get the other models in their platform. So if you want something you have to go to your competitors. And even, you know, GitHub has Claude, but that runs on AWS. So it seems interesting that there's so much embed with OpenAI at this point and they're not looking to offer, you know, their customers more, you know, options into what they want to do and what tools they want to use. [00:15:22] Speaker A: Yeah, you know, I know the GitHub Copilot, we've heard for a while that they're actually connecting that to Claude on AWS. Is that still the case now? Even with. Because I know they have recently put Claude Anthropic stuff into, into Vertex or sorry, into Azure's AI Foundry. Sorry, these products saved my life. 
But that's in AI Foundry now, so I would assume that they would have swung over from AWS. That was probably how they had to start because they didn't have it up yet. [00:15:48] Speaker D: But still on GitHub website. Just checked. Claude 3.7 Sonnet is hosted on AWS. [00:15:53] Speaker A: Interesting. [00:15:54] Speaker D: Anthropic and Google Cloud platforms. [00:15:57] Speaker A: I wonder if that's going to change in the future. We ought to keep an eye on that one maybe. [00:16:00] Speaker D: Yeah, 3.5 is only on AWS. [00:16:03] Speaker C: Got it. [00:16:04] Speaker A: Well, I mean, 3.5 makes sense. That would. [00:16:06] Speaker D: Yeah. [00:16:07] Speaker A: Not going to deploy new 3.5. You're going to deploy 3.7 if you're going to deploy something new. But yeah, GitHub Copilot does support 3.7. Right. [00:16:14] Speaker D: It supports 3.7 and I thought 3.5. I mean, at this point I think it's always 3. [00:16:18] Speaker A: 3.7 because, I mean, I logged into the console what, six weeks ago. So by now it's probably a 3.7. I just, I can't keep it up. [00:16:25] Speaker D: Yeah. But with all the GitHub pricing changes, there's pro, you know, there's reasons why you might want to downgrade to 3.5 depending on, you know, how many premium calls or whatever it is that you use. [00:16:37] Speaker A: Yeah. Which no one knows how many premium calls you need until you get the bill and then you find out how many you need. And it was expensive. That's. That's the challenge there. [00:16:45] Speaker D: Yay. FinOps. [00:16:46] Speaker A: Yep. Moving on to cloud tools, Terraform's AWS provider has now topped 4 billion downloads and is rapidly approaching 5 billion downloads. In fact, as well as the 6.0 Terraform AWS provider is now in public beta. They have already downloaded 569 million versions of the Terraform provider this year alone, which I think it's in a 12 month period. It can't be just, you know, January to May. Right. That'd be crazy. That would be crazy. 
But it might be true. I don't know. [00:17:13] Speaker D: I don't know how many. I've probably downloaded it alone like, like 100 or 200 times. And that's just, you know, me on side projects. [00:17:21] Speaker A: Oh, for. Yeah, because you don't. You download it for every one of your repos, right? [00:17:24] Speaker D: Yeah, every repo. If you have a lot of different folders in it to isolate your blast radius, it adds up quickly if you're running through like a CI CD pipeline. So it's a fresh build every single time, grabbing the latest cache of it. So it will add up. [00:17:39] Speaker A: Yeah, it will. Well, the 6.0 Terraform provider is now in public beta, which brings a lot of exciting changes to the provider, which said a lot of exciting changes. And then there was one. So I was like, okay, that's cool. But basically the biggest improvement in 6.0 is enhanced region support. So previously for the Terraform provider, if you needed to access multiple regions, you had to download multiple versions of the provider because you were only able to target one provider at a time or. Sorry, one region in the provider at a time. This limitation meant that practitioners had to update every configuration file individually if they wanted to change the configuration of a particular resource. And for global companies, this could mean editing the same parameter in 32 separate configuration files for each region. I want to know the person who's actually deployed in all 32 regions, because that's crazy but impressive. Now you can support multiple regions with a single configuration file. The new approach leverages an injected region attribute at the resource level, simplifying configuration efforts, and this reduces the need to load multiple instances of the AWS provider, lowering memory usage overall. So basically the single provider config lowers memory, makes it simpler and easier. 
The new region attribute allows you to set that at any resource level, which is nice because I've run into that one a couple times, dealing mostly with the global resources like IAM, CloudFront, Route 53, as they'll remain unaffected as they operate globally. But in previous versions of the provider, you actually had to specify a different configuration for them because they were all in us-east-1, which is fun. And then the Terraform plugin framework updates will also now use AWS API client maximum to support per region API client mappings. And you can now use the import enhancement to allow the @region ID suffix to allow importing a resource from different regions as well as they're going to be improving the documentation and testing to ensure backwards compatibility. I can't wait until all of my resources now require a region profile in addition to everything else that I have to set up in my file. So that'd be good. [00:19:32] Speaker D: This is massive. [00:19:33] Speaker A: It's a big change. [00:19:34] Speaker D: Yeah. I mean even just, you know, when you wanted to launch something with, you know, let's say, you know, DynamoDB streams, you know, you want to set it up with your DR region or you know, anything else along those lines. Obviously not 32 regions, because unless you're spending all your money on AWS, which I hope you're not. But you know anything, if you're doing DR or your read-only replica of your RDS in a secondary region, it becomes a real pain real fast. And that's where I ran into it. It's also where I learned that ACM is required in us-east-1 for CloudFront. That one bit me in the butt a few times. [00:20:10] Speaker A: Yep, CloudFront. And then there's something else, a couple other things that are us-east-1 only that you have to deploy there, particularly with a WAF for CloudFront and a couple other things and so yeah, that was the one that always burned me. 
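The us-east-1 dance and the new per-resource region support discussed here can be sketched roughly like this. The domain name is a placeholder, and the 6.0 syntax comes from a public beta, so details may change before release:

```hcl
# Pre-6.0: a second, aliased provider block just to reach us-east-1,
# e.g. for the ACM certificate that CloudFront requires there
provider "aws" {
  region = "us-west-2"
}

provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

resource "aws_acm_certificate" "cdn" {
  provider          = aws.use1
  domain_name       = "example.com" # placeholder
  validation_method = "DNS"
}

# 6.0 beta: one provider block, with the region injected per resource
resource "aws_acm_certificate" "cdn_v6" {
  region            = "us-east-1" # new resource-level attribute in 6.0
  domain_name       = "example.com" # placeholder
  validation_method = "DNS"
}
```

The win is that a global configuration needs only one provider instance in memory instead of one aliased copy per region.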
It was like, oh yeah, I need to specify this special parameter and put a different configuration into the file. And it was always a pain every time you do it. There's a couple other things that are minor updates that they didn't put in the blog post, but I went and looked at the actual GitHub code. So they're going to update the diffs of the user data to show user data changes instead of hash values. And all I can say is hallelujah on this one, because that has been a change that I make a lot. It's user data and when you can't actually see what's changing, you kind of have issues. Now, I will say that this is the moment you're going to really want to make sure that you don't have secrets in the user data, because this will also not be hashed in the state file, I believe is how this will work out because it'll now be in plain text in the Terraform plan and the Terraform apply diff. And so that'll end up probably that way in the backend. So if you haven't moved to secrets management, you should do so now, quickly with your user data. There are several services being deprecated. Amazon Chime, which we all knew about. CloudWatch Evidently, which I missed. Amazon Elastic Transcoder and AWS Elemental MediaStore, which I don't use either of those, so I wouldn't have cared. I was a little sad about Evidently, because I remember making so much fun of it. It was like, why did you put this feature into CloudWatch? And apparently everyone else agreed too, because it only lasted a little while. Yeah, and then a bunch of already deprecated services, including the Elastic Inference, the Elastic Graphics, the OpsWorks Stacks and AWS SimpleDB domains are all being removed from this as well. So they're saying if you still use SimpleDB, the five of you out there, you need to stay on the 5.x version of the provider, as well as they will be removing the S3 global endpoints in the providers and going to the local endpoints with the new region configuration. 
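Since user data diffs will now render as plain text, one safer pattern is to keep the secret out of the configuration entirely and have the instance fetch it at boot through its IAM role. A minimal sketch, where the AMI ID and secret name are placeholders:

```hcl
# Anti-pattern: this secret would now show up verbatim in plan/apply diffs
resource "aws_instance" "bad" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"
  user_data     = "export DB_PASSWORD=hunter2" # visible in state and diffs
}

# Safer: user data only references the secret by name; the instance
# pulls the value at boot using its instance profile credentials
resource "aws_instance" "better" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"
  user_data     = <<-EOF
    #!/bin/bash
    DB_PASSWORD=$(aws secretsmanager get-secret-value \
      --secret-id prod/db/password \
      --query SecretString --output text)
  EOF
}
```

With this shape, the user data diff only ever reveals the secret's name, never its value.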
So that's good news as well. So I'm excited about this one. Some good ones in 6.0. I don't look forward to breaking everything I have, but I do have AI to help me fix it now, so that's nice. Versus me, you know, going through hundreds of commits trying to figure out what they did to my code. [00:22:16] Speaker D: I didn't realize that OpsWorks was EOL too. [00:22:19] Speaker A: That was OpsWorks Stacks. [00:22:21] Speaker D: Oh. What's the difference which one's which? [00:22:23] Speaker A: I don't know. I don't. [00:22:24] Speaker D: Okay, cool. [00:22:25] Speaker A: I don't use OpsWorks. [00:22:27] Speaker D: Yeah, I don't think I've used OpsWorks in 10 years, so. [00:22:30] Speaker C: Yeah, I don't think anyone else does either. [00:22:32] Speaker A: I mean, I wouldn't be surprised that OpsWorks died. [00:22:35] Speaker D: Any. OpsWorks is one of those services that's in US East 1, too. Only East 1. [00:22:40] Speaker A: I mean, it's been dead since May 26th of 2024. It's been dead for a year, and we didn't know, so I'm just looking. Oh, this is. So this is a Chef feature, basically, to allow you to do stacks, a container for AWS resources. You define them as a stack in Chef, and then you can deploy them in OpsWorks. But, yeah, I can't imagine OpsWorks is around long for the world. Or maybe thousands of customers use it just like Elastic Beanstalk. Every time I make fun of it, I get messages like, oh, a lot of people use it. Dozens of us. Dozens. I swear. All right, moving on to AWS proper. In the series I was going to say about Terraform. Sorry, the AWS provider. All right. On AWS, they're announcing plans to launch a new AWS region in Chile by the end of 2026. This will be a region consisting of three availability zones and will join the Sao Paulo and the Mexico region as the third in Latin America. So, looking forward to the new Chile region sometime at the end of 2026. [00:23:41] Speaker D: So reinvent 2026 will be GA probably. 
[00:23:44] Speaker A: Yeah. Right around that time. Yeah, they don't really. They don't really tie region availability to reinvent. [00:23:50] Speaker D: No. Just thinking, like, that's how far out it is. How far out they're announcing it to. [00:23:55] Speaker A: I mean, it takes about a year, I assume, to build a data center. And it seems like it's been kind of stretching out to more like 18 months now because you have to negotiate power and all the other things you need for a data center of the size of an Amazon data center. [00:24:06] Speaker C: Still think the Cloud Pod should send me to review the build process. Pitch. [00:24:13] Speaker A: Put you up in an apartment next to the data center location so you can just take regular photos of it. [00:24:18] Speaker D: Sure, Chile. We might be able to afford that. We'll give you a few bucks. [00:24:22] Speaker A: If Datadog sponsors us, we might be able to do it. I don't know. They have a lot of money over there. All right. Amazon Q Developer is now being added to Amazon OpenSearch Service. This is for many companies who've made the terrible mistake of using OpenSearch as their observability platform and they're storing all their operational and telemetry signal data. They use this data to monitor the health of their applications and infrastructure and hopefully their OpenSearch cluster because it will crash. However, at that scale, the sheer volume and variety in data makes the process complex and time consuming, leading to high MTTRs. To fix this, Amazon is introducing to you Amazon Q Developer support in OpenSearch. This allows an AI assisted analysis. Both new and experienced users can navigate complex operational data without training, analyze issues and gain insights in a fraction of the time. And Q Developer will help reduce your MTTR by integration of generative AI capabilities directly into OpenSearch workflows. 
[00:25:14] Speaker C: I'm not sure I like this. This is just adding sort of natural text descriptions to the product. And a lot of the tools are doing this weird branding thing, like the Q Developer thing. It's just like, couldn't it just be part of OpenSearch? [00:25:30] Speaker A: I mean, that was the thing I was thinking too, because Elasticsearch had ML quite a while ago. Yeah, it didn't have natural language, but did you just take the ML capabilities that you got from the open source version and then add the Q natural language AI processing on top of that? Yes, that's all you did. But we have to show that we're doing things with AI. That makes sense. So you have to put AI on everything, and then if we have Q as our developer tool, then make it Q Developer, even though it has nothing really to do with Q. Right. [00:25:59] Speaker C: Yeah, it's this weird branding thing. I kind of feel like Copilot has gone the same way. There's like six different Copilots, and which one's which? [00:26:08] Speaker D: And you're a section ahead. Or maybe that's your announcement, maybe that's your prediction. [00:26:14] Speaker C: Maybe. [00:26:16] Speaker A: I mean, maybe we'll see where he gets to. All right, well, I'm not looking forward to Q in OpenSearch. I'm not looking forward to using OpenSearch in general, so I try to avoid it as much as possible. But I appreciate the thought, and if I was in a situation where I needed it and I was doing that terrible pattern again, which I hope to never do in my lifetime because the scars are deep, I'm glad there's at least an option that maybe makes it easier. [00:26:40] Speaker C: I will never use that type of database technology if I am not the producer of the data. [00:26:49] Speaker A: I mean, I will use it for full text search all day long.
Like for, you know, actual full text search use cases, like my website needs a search box. Okay, yeah, I'll use OpenSearch for that. I'm going to use Grafana if I need metrics and observability. Again, not going to go down this terrible path. All right. Kubernetes 1.33 is now available to you on GKE in the rapid channel, which hopefully none of you use in production. But if you do, you get access to a bunch of new features in the 1.33 version, including in-place pod resizing. Thank you. That took forever. Also Kubernetes dynamic resource allocation, containerd 2.0 runtime support, multiple Service CIDR support, and Google themselves wanted to point out that they contributed multiple things, because they contribute a lot of code, including coordinated leader election, compatibility versions, z-pages, streamlined list responses, snapshottable API server caches, declarative validation, and ordered namespace deletions. Thanks, Google, we appreciate it, and so does Amazon and Azure and everybody else who also benefits from their contributions, the. [00:27:49] Speaker C: Entire open source community. Yeah. [00:27:53] Speaker A: You're stunned with the in-place pod resizing. [00:27:54] Speaker C: Yeah, you know, it's hard to get me to care about a Kubernetes upgrade. [00:28:01] Speaker A: If it was ECS, I suppose, would you be more excited? [00:28:02] Speaker D: As long as you're not doing it, you're good. [00:28:04] Speaker C: Yeah, no, I mean, that's the thing. If it's ECS, it would be like usability features. I like that. There's always these upgrades, and then it's like, you know, we've made an upgrade to the containerd runtime, we've streamlined the list responses, and. [00:28:22] Speaker A: It's just like, ah, I don't care. I do find it funny that it's taken this long to get in-place pod resizing though.
I mean, to be able to change the CPU and memory requests assigned to containers in a running pod seems like something that would have been needed a while ago. I just think about, oh well, in ECS if I need a new container size, I just deploy a new thing, and it spins up a new container that's the new size and shuts down the old one. It's super easy. I guess that's not resizing a running container, but I don't know if a running pod is the same thing as a running container in this Kubernetes context either. Because again, I get confused between pods and ReplicaSets and all the other Kubernetes buzzwords. [00:28:59] Speaker C: Well, I mean, that's the thing. You have the service endpoints, and those are separate from the pods, and I believe that it's a common pattern to just replace the pod and then have the new pod sort of take over the traffic percentage. [00:29:13] Speaker D: Yeah, but that's even what ECS does. It just launches a new one and routes the traffic over. So I think here it's actually. [00:29:23] Speaker A: Resizing the running container, resizing the existing one. [00:29:26] Speaker D: So it's not doing a, for lack of a better term, blue-green deployment. It's just, hey, you now have 512 megabytes versus 256. [00:29:37] Speaker A: This is going to be a problem for all my startup scripts that dynamically change parameters based on the container size that I built for. Because if you change it dynamically, then my configuration for my memory and stuff is now no longer valid. So that's a problem. [00:29:52] Speaker D: So yeah, sounds like you use a lot of Java. [00:29:55] Speaker A: Not a lot of Java, but a lot of Apache tuning in my day. [00:30:00] Speaker D: I've had enough Java scar tissue where I see, oh, why isn't this working here? Why is it only using X amount of megabytes on the box? We launched it with 48 gigabytes. Why is it only using two?
Well, you have this hard coded in there. [00:30:14] Speaker A: Yeah. Reading the containerd 2.0 release notes. And what's that? [00:30:20] Speaker C: He beat me to it. [00:30:21] Speaker A: It's like reading kernel release notes. It's so boring. I don't know what any of these things are you're talking about, and I don't know that I want to care or need to know. So just tell me why you need to update to 2.0. Because you're deprecating 1.0, and I'll be happy. Because, adding a no-sync option to boost BoltDB performance in ephemeral environments? I don't know what that means. [00:30:44] Speaker C: You can now use multiple CNI plugin binaries. [00:30:47] Speaker A: Yeah, they added content create events. Okay, maybe I know what an event is. I like content. But containerd 2.0 is definitely like, we're playing in kernel land. And I'm like, I'm out. I made this mistake a long time ago, you know, subscribing to the Linux kernel mailing list, and it was about a day and I was like, I'm good. This is it for me. Either it's them arguing, which is annoying, or it's so technically complicated that I have no idea what they're talking about. I'm like, I don't know enough C for this. [00:31:18] Speaker C: And sometimes it's both. [00:31:20] Speaker A: Yeah, exactly. [00:31:23] Speaker D: Oh, Ryan watches the kernel mailing list. I just learned. [00:31:28] Speaker C: I used to. I used to. Especially in the FreeBSD days. I was very interested. [00:31:34] Speaker A: I typically will get linked to it from a Hacker News article where they're talking about someone arguing with somebody, or someone rage quitting. And so that's how I typically get to it nowadays. And then I'll read whatever the thread was, and I'm like, I'm good. I scratched the itch for that. I don't need to join this. It's all on the web, so you don't have to even subscribe to it anymore if you don't want to.
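Back on the Kubernetes 1.33 discussion above: in-place resize works by declaring a per-container `resizePolicy` and then patching the pod's resources instead of replacing the pod. A minimal sketch (pod and container names are made up; this assumes the beta resize behavior as shipped in 1.33):

```yaml
# Hypothetical pod spec: CPU can be resized live, while a memory change
# restarts only this container, not the whole pod.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx:1.27
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired
        - resourceName: memory
          restartPolicy: RestartContainer
      resources:
        requests:
          cpu: 250m
          memory: 256Mi
```

A resize is then a patch against the pod's resize subresource, something like `kubectl patch pod demo-app --subresource resize --patch '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"500m"}}}]}}'`, rather than a blue-green replacement. Which is also Justin's startup-script concern: anything that reads its container size once at boot won't notice the change.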
[00:31:52] Speaker D: I think the last time I really started with the kernel was when I played with Gentoo, and I ran Gentoo for a while. But, you know, enabling or disabling every single different kernel mod, whether you want it on or off, and optimizing that last tiny bit. [00:32:08] Speaker A: It's sort of funny to me that you just told us that you ran Gentoo, because I'm pretty sure like five episodes ago you had this exact same story where you told us about running Gentoo, and it's very apropos of how you had to manage your Gentoo instance. Like, just forget what you knew because it changed yesterday. So I kind of enjoy the emerge world, you know, the parallels to the real world, that you don't remember you talked about that five episodes ago. [00:32:27] Speaker D: I don't know what I did three days ago. [00:32:29] Speaker A: You did have a baby between then and now, so it makes sense that you've forgotten. But it was just funny, because you literally said it almost the exact same way you just said it there. [00:32:37] Speaker D: I have no memory of that. But I also can't tell you what I did last week. [00:32:41] Speaker A: So very nice. [00:32:46] Speaker B: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one or three year AWS savings plan with a commitment as short as 30 days. If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace.
[00:33:25] Speaker A: All right, well, if you are evaluating your media models using Vertex AI, Google has a new tool called Gecko, which I will never remember the name of when I actually need the tool. That's one problem with this name. But Gecko is available through the Cloud Vertex AI evaluation service. Gecko is a rubric-based and interpretable auto-rater for evaluating generative AI models that empowers developers with a more nuanced, customizable, and transparent way to assess the performance of image and video generation models. This is ideal to replace traditional human evaluation, which is the gold standard of evaluation but can be costly and slow (hey, come on, Google, be nice to me), hindering rapid development cycles as generative AI innovates rapidly. One of the challenges that Gecko solves is that traditional auto-raters lack the interpretability needed to understand model behavior and pinpoint areas for improvement. For instance, when evaluating whether a generated image depicts a text prompt, a single score doesn't reveal why the model succeeded or failed at generating that thing. Gecko offers you a more fine-grained, interpretable, and customizable auto-rater, and this is based on a DeepMind research paper showing that an auto-rater can reliably evaluate image and video generation across a range of skills, reducing the dependency on costly human judgment. Notably, beyond its interpretability, Gecko exhibits strong performance and has been instrumental in benchmarking the progress of leading models like Imagen. There you go. [00:34:42] Speaker D: I heard Geico when I first read this, and that's all I still think about every time you say it. [00:34:46] Speaker A: As I was coming up with show titles tonight, I was thinking about Geico quite a bit, but I couldn't make it work. All right. Matt's making us do a terrible, terrible thing. He's mad at us for making him do GCP predictions, and so he said we have.
You did win. I know. Why is this our punishment for you winning? Like, come on. He's asking us to do Build predictions, so we have attempted to come up with some, and Matt is the best prepped to win this one, which. [00:35:15] Speaker D: Would be no, I'm not. [00:35:16] Speaker A: Which just makes it really good if Ryan or I win, because we don't use Azure like Matt does. But I use the tools available to me, which means I used Gemini Deep Research to go do heavy research on this. And as I was reading through that, I had a couple other ideas as well. So I've got a few. I think Ryan came up with some, and then Matt came up with some. And so in typical fashion, we did roll the dice before the show. Ryan rolled 11, which he was very mad about. [00:35:42] Speaker C: We need more time. [00:35:43] Speaker D: Justin rolled first, and he was very mad about his, though. [00:35:46] Speaker A: I was mad because I was last, because I think I have a good one that I'm pretty sure one of you is going to take. But we'll see. Maybe I'll make it through this. But anyways, we're back on draft order. Ryan, you're first. Ryan, what's your first Build prediction? [00:36:00] Speaker C: I think they are going to announce an enhancement to GitHub Copilot that allows for agentic co-development and tracking of agentic tasks via GitHub Issues or some sort of work management process. [00:36:16] Speaker D: Mad, because that's actually one of mine, and I really thought that was not going to be something that you would have pulled up. [00:36:26] Speaker A: See, we're all gonna get to this agentic thing. That's what's happening. All right, Matt, that puts you on the clock. [00:36:33] Speaker D: I'm gonna go with the ARM processor that they have called Copilot. They're gonna release a new series of it. So I think it was like 100, so maybe 200 or whatever they wanna call it. But essentially a new ARM processor, or like a next-gen V2, whatever you. [00:36:50] Speaker A: Want to call it. Isn't.
Isn't there ARM? You said their ARM processor is called Copilot. I didn't think it was. [00:36:54] Speaker D: No, Cobalt. Sorry. [00:36:55] Speaker A: Oh, Cobalt. Sorry. [00:36:58] Speaker D: It's highly possible I said the wrong thing. [00:37:00] Speaker A: You might have. I don't know. Well then, good, you guys didn't take mine. So. All right, so basically I gave you a bunch of clues at the beginning of the show to help you try to figure this one out. But basically, if you look at Microsoft and OpenAI, things are not going very well. And I gave you a bunch of stories about this in the last few weeks. And so I'm looking at it and I'm saying to myself, if you're in the middle of negotiating with OpenAI for equity and these other things that you want, wouldn't this be a great time in negotiations to announce your own LLM? I think Microsoft is going to announce their official LLM that will be competitive with Anthropic and Gemini and OpenAI, to put further pressure on OpenAI in their negotiating tactics. That's my prediction. [00:37:48] Speaker C: Yeah, it's a pretty good one. [00:37:50] Speaker D: Isn't Phi their AI? [00:37:52] Speaker A: That is their SLM, small language model. I think they're going to have a true large language model. You guys look angry. That's all I can see. [00:38:02] Speaker D: Well, I had one related to that, but my thought process was incorrect about which was their SLM versus their LLM. Yeah. [00:38:10] Speaker A: Fair enough. All right. And that puts you on the clock for your second choice. [00:38:14] Speaker C: All right, which one do I want to use? I'm going to do this one, because I think this is the one that most likely you guys are still. I think they're obviously going to announce more quantum computing exaggerations like they did the last time. [00:38:27] Speaker A: I did have something about quantum computing. So you're thinking a new chip, or you're not thinking a new chip?
Right, because they just announced that, I assume. Something around quantum something-ish. [00:38:38] Speaker C: Well, it was based off of the last chip. You know, they just announced it. And not only did they announce the chip, but it was like a new type of matter, and there was a lot of suspicion that maybe they were exaggerating the capabilities there. So I suspect that they'll double down and they will continue to announce quantum computing abilities that are far beyond what is considered to be the normal sort of progress. [00:39:12] Speaker A: Yeah, it's not exactly what I was going to say, but it's close enough that I won't use it now. All right, Matt, you're on the clock for your second. [00:39:21] Speaker D: You're still on those. I'm going to go completely left and go with new Surface hardware. I feel like they've been pushing it a whole lot, so I think we're going to see something in a new generation of Surface hardware. I'm not sure if that's this conference or the other conference. I'm at a disadvantage, because I think. [00:39:42] Speaker A: It's the other one. Yeah, I think it's the other one. [00:39:44] Speaker D: It might be. [00:39:45] Speaker A: I could be wrong. Again, this is your idea. [00:39:49] Speaker D: I don't have good ideas. Haven't you guys learned this? I work in Azure. I thought that was clear. [00:39:54] Speaker A: Yeah. So for my third choice, I think we're going to see a major upper... upgrade. Oops, sorry. What? [00:40:02] Speaker D: Second choice? [00:40:03] Speaker A: Yeah, my second choice. I think we're gonna see a major upgrade to Microsoft Office Copilot with the inclusion of MCP capabilities inside of it. [00:40:14] Speaker C: That's close to what I have. But I like your specificity on MCP. That's cool. [00:40:21] Speaker A: Yep. [00:40:22] Speaker C: I hope they do that. [00:40:23] Speaker A: Me too. Because right now, Copilot for Office is terrible.
So if they could make some major upgrades to that, I'd be much happier with it. All right, Ryan, that puts you on your third and final. [00:40:33] Speaker C: All right, I'm going to stay in theme with the Office products, and I'm going to say that they're going to announce some sort of augmented or virtual reality experience for virtual meetings in Teams, with AI something, something, something. You know, craziness. [00:40:47] Speaker A: Augmented virtual reality for Teams. I will hate you if this is true. [00:40:52] Speaker D: I just thought of it, like, riding in a PowerPoint, like moving through a PowerPoint slide, waving your arms as you're talking. [00:41:00] Speaker C: Totally gonna be a thing. I swear to God. [00:41:03] Speaker A: I mean, like, when did they become Meta? Like, the metaverse is a thing. I don't know. That one feels like a Hail Mary, but we'll see where it goes. See where it goes. [00:41:13] Speaker C: You should. Well, I've got my honorable mentions. You'll see that I was pulling pretty deep. [00:41:21] Speaker D: We've covered most of my ideas, is the sad part. [00:41:24] Speaker A: Yeah. Matt, you're up for your third. I mean, I said, do you really need three? And you're like, yes, I really need three. [00:41:30] Speaker D: I was trying for, like, seven. And yeah, we managed to use them. Apparently we all think alike. I'm gonna just make one up as I'm talking right now. [00:41:41] Speaker A: Cool. Best way to go. It's probably the one that's gonna win you the game. So, yeah. [00:41:46] Speaker D: I think they're gonna make a major update to the App Service service in Azure. So there'll be a new release of the App Service platform. Might be me wishing a little bit, but, you know. [00:42:04] Speaker A: All right, I mean, that sounds cool. Oh, that puts me on for my third one. And things are getting dire at the bottom of the barrel here. Microsoft. Oh, I know what I'll go with.
I think they're going to come up with a competitor slash killer for Agentspace from Google, or Glean, as we know it, because they need something to cross all of the knowledge silos they've created for all the IT workers in the world that'll become AI-centric. So an Agentspace or Glean-type competitor, or Amazon Q Business, if you remember that. Interesting. All right, so that brings us to our tiebreaker, which we probably will need, which we said was going to be the number of times Copilot is mentioned in the keynote. I think this is reverse order, correct? So I've got to go first. [00:43:01] Speaker C: Typically. [00:43:02] Speaker A: Yeah. Oh, man, it's going to be a lot. [00:43:10] Speaker D: Like, is a thousand too high? [00:43:11] Speaker A: Right? Like, I mean, how long is this keynote? I don't know anything about it. [00:43:18] Speaker D: I think it's an hour and a half. It's slotted for, I think, noon east coast time on Monday. [00:43:25] Speaker A: Okay. You're going to have to watch this, because I don't know if I'll be able to stomach it. But I'm going to go with 55. All right, that puts it to you, Matt. How many times? [00:43:36] Speaker D: Well, I think it's gonna be more than 55. I'm gonna go 75. [00:43:39] Speaker A: Okay. Oh, and then Ryan goes 76. [00:43:46] Speaker D: Are you. [00:43:47] Speaker C: Yeah, I thought about it. Yeah. [00:43:50] Speaker A: Does he go one up? Does he go 76, or does he go 56, or does he go completely off the board? [00:43:56] Speaker C: So I was only a few away from 55, my first number that I was sort of imagining. So I think it's going to be somewhere in that ballpark, which puts me in that uncomfortable middle. [00:44:08] Speaker D: 256. [00:44:14] Speaker A: Like 63 and a half, right? Between Matt and I. Yeah. [00:44:21] Speaker C: Actually, I'll do 62. [00:44:23] Speaker A: All right. There you go. [00:44:24] Speaker C: Yeah.
[00:44:24] Speaker D: So then the next question is, what happens if we're all wrong and it's under 55 somehow? [00:44:29] Speaker A: Then you know what? Jonathan gets a point. How's that? Well done, Jonathan, who isn't here. There, solved the problem. There we go. All right. Yeah. [00:44:43] Speaker C: I love it. [00:44:45] Speaker A: I mean, I'd be happy with any of these things you mentioned, other than the augmented virtual reality for Teams. I will cut you, Ryan. But other than that, I don't hate any of these, really. My major one was, I was thinking some type of major upgrade to the quantum development toolkits to leverage the new Majorana chip, but yours was vague enough and broad enough that I was like, ah, it's too close to mine. So I didn't go that way. But I did mention at the beginning of the show that I used deep research to help me out with this. It did not give me the LLM one. In fact, it told me that I'm probably wrong about that. When I asked it, do you think they've created an LLM, it was like, probably not, because they have a partnership with OpenAI. I was like, okay, you're not very good at business politics. But yeah, I got quite a few things. Lots of mentions of all the heavy investment in Azure AI Foundry. It told me that there are probably going to be some major things there. An advanced AI agent platform, which you guys covered. Expansion of Copilot integration, which I think Microsoft 365 was in that vein. Serverless data services, which we actually have later in the show today. So it was a little bit ahead of time. Advances in hybrid AI with Azure Arc, which I think has already kind of been released as well. So again, it was stuff I've seen in the last few weeks, and I don't know how current Gemini's data set is, so I feel pretty good. It did pull a lot of websites and data together, which was good, and we'll see what happens.
Well, maybe this story tipped you off on the idea of virtual reality, Ryan, because Microsoft is giving you virtual data center tours. So if your auditors love that data center tour, or if you have general curiosity about what a Microsoft data center looks like, which I have zero desire for, I've seen enough rooms with blinking lights to be done. Microsoft is giving you the new Virtual Datacenter Tour, where customers can explore the infrastructure and data center design that powers over 60 data center regions and 300-plus data centers globally. Microsoft wishes they could take you to the data center, but prohibitive security, safety, and staffing issues prevent this from happening. And so they're bringing the data center to you with the new Virtual Datacenter Tour microsite, which includes a 3D self-guided virtual journey that allows you to interact with a Microsoft data center firsthand. You can check out recent innovations like Microsoft's zero-water cooling data center design, which eliminates water use in data center cooling, plus Majorana 1, the world's first quantum chip powered by a topological core. I did click into it because I was curious what that looked like, and it was just a promotional video for the Majorana 1. So the Recent Innovations room is not as exciting as it sounds; don't get too excited. I do think it'd be kind of cool if this were available in, like, Meta Quest or Oculus or whatever they call those things now, or the Apple Vision Pro. It'd be kind of cool to be able to just wander around it, but it's not that kind of 3D. I clicked on it far enough to get to the lobby, and then I lost interest pretty quickly, because I was like, this is kind of slow and not really wow. You can jump around if you find a little menu that tells you all the sections you can jump to. So you can go see things like the mechanical room, if you're so inclined. But nice. You know, these are always interesting for people who are super into data centers.
Oh, it does include an AI assistant, so you can ask the AI questions about the data center too. [00:47:56] Speaker D: Yeah, the first thing I asked it was, what's the address of the data centers I can go visit? It will not tell you. [00:48:03] Speaker C: I sort of envisioned, like, a Google Street View type of interface, you know, where it's like, click the arrow, go forward in the row a little bit. [00:48:10] Speaker A: I mean, it does sort of have that. It has little boxes you can click, and there's an arrow, and then you can go into the different things. But yeah, it's very canned photos, or canned things that are 3D shots. So you can kind of have an idea. But even some of the things look like they're digitally inserted into the picture. [00:48:28] Speaker D: Yeah, like the person in the lobby. [00:48:31] Speaker A: Yeah, like the person in the lobby. Or, like, now I'm in the receiving room, the circular room as they call it, and there's racks, and it looks like they're digitally inserted racks made to look real. So, you know, I appreciate the approach, but maybe a little miss on execution. But Amazon has similar things where you can go see photos of their data centers. Google has similar things. So this is just the next evolution of that, and I'm okay with that. If I don't ever have to visit a data center again in my lifetime, I'll be happy. [00:49:04] Speaker C: They gotta make it cold and noisy or it's not accurate. [00:49:06] Speaker A: Yeah, well, I spent a very long outage in a data center once, so every time I walk into a data center my throat starts constricting. It's like a triggering event for my body. Like, no, I can't do this again. Don't do it. Anyways. All right, well, Microsoft knows a good open source project when it sees it and wants you to know that it is committed to advancing open protocols like agent-to-agent, which they did not create.
Coming soon, though, to Azure AI Foundry and Copilot Studio: they will enable agents to collaborate across clouds, platforms, and vendors with support for agent-to-agent. So thanks, thanks for coming. And also one from my list for next week. [00:49:44] Speaker C: Yeah. [00:49:47] Speaker A: Now, Matt, you're going to have to teach me on this one a little bit. Azure is announcing the general availability of Azure Storage Actions, their fully managed platform that transforms how organizations automate data management tasks for Azure Blob and Data Lake Storage. Today, customers use disparate tools to manage their data estates, and depending on dataset sizes and use cases, they may use analytics queries with inventory reports, programs or scripts to list all objects and metadata, or subscribe to storage events or the change feed for filtering. Apparently the key advantages of Storage Actions are eliminating complexity, boosting your efficiency, driving consistency, and hands-free operations. Now, before the show you said this is Amazon Batch, and so I suspect that you're going to probably tell us how that is the case. [00:50:26] Speaker D: Oh, steal my thunder, sorry. Yeah, so essentially, from the little bit of research I did, because I kind of read this article multiple times trying to figure out what it was, it's essentially the management of the data. So if you have billions of objects, you're actually able to run some computation. But it's all serverless, unlike Batch, which launches an ECS cluster for you. I think they might have changed that by now anyway. So it lets you kind of go through and operationalize stuff. So if you have a bunch of data, you want to process it, it kind of lets you process it, and then visualize the output and put the data somewhere else. So if you have, hey, I'm collecting all of this metadata about my endpoints or some IoT thing and putting it into Blob Storage.
Now you're able to go take a look at it and query it all, run your computation, and maybe store it in a Cosmos DB or something else. It lets you kind of do that without actually having to manage compute or anything else. [00:51:30] Speaker C: I mean, I don't know how many times we've created, like, S3 actions to Lambda things, you know, to do stuff to objects as you're storing them. Maybe it's data enrichment or adding tags. [00:51:43] Speaker D: Or, you know, that's different though. Replication. Because that's S3 events, essentially, is what you're thinking of. [00:51:50] Speaker C: Well, but it's the S3 events plus Lambda, right? [00:51:54] Speaker D: No, S3 events is real time as it comes in. Batch is after it's already in and you want to process it; the object is there. [00:52:03] Speaker C: So this isn't based off of the object going into Azure. [00:52:10] Speaker D: I didn't think so. I could be wrong. [00:52:13] Speaker C: I don't know. [00:52:15] Speaker A: So, clicking into the article, there is a frequently asked questions section, and there's a question here that they answer that just doesn't make any sense to me. So there's an AI-powered assistant created with Copilot Studio on the website. And I was like, oh, I'll just ask it to explain this to me like I'm dumb. And I tried to copy-paste the answer, and it has a character limitation, so I can't actually ask it the question. So I'm going to have to ask you, Matt. [00:52:38] Speaker D: Sounds like a bad quiz. [00:52:40] Speaker C: Yeah. [00:52:40] Speaker A: How many Storage Actions resources can I create in my subscription? So I know a subscription is a billing account, so I got that part. [00:52:51] Speaker D: Yes. [00:52:51] Speaker A: It says I can create up to 5,000 storage action task definitions in my subscription. [00:52:58] Speaker D: Okay.
[00:53:00] Speaker A: Each task of those 5,000 may have up to 5,000 task assignments. [00:53:06] Speaker D: 25,000. No more than that. [00:53:10] Speaker A: And each subscription may have up to 10,000 task assignments. [00:53:16] Speaker D: Well, you're the Microsoft licensing expert. That's right. [00:53:22] Speaker A: So I'm like, okay, so you can have 5,000 storage action tasks, and each can have 5,000 task assignments, but you can't have more than 10,000 per subscription. But then, so I'm like, I sort of think I understand this. But then they throw in this last line, which really throws me: each storage account, though, can have up to 50 enabled task assignments. [00:53:41] Speaker C: Oh my God. [00:53:42] Speaker A: How do you get to 10,000 if you can only put 50 on a storage account? How many storage accounts can you have in a subscription? [00:53:49] Speaker C: I think you can have a lot. I don't know. [00:53:51] Speaker A: Is that a pattern that you're doing, like with Blob Storage? [00:53:54] Speaker C: Yeah. [00:53:55] Speaker D: Well, okay, in AWS terms, a storage account is an S3 bucket. So each bucket you might want different things to happen in. And in Azure, because they still don't really understand the cloud, you can say this is one zone versus multi-zone, versus replicate to DR multi-zone, versus replicate to DR single zone. And each of those has to be done at the storage account, aka S3 bucket, level, not the container level. The closest example to that is a prefix. And did I officially lose you yet? [00:54:38] Speaker A: Yeah, you lost me at storage accounts being equal to a bucket. My mind was like, what? But I'm okay now. I did see this other question that's very helpful for Ryan, though. The question is, what are the charges for using Azure Storage Actions? And it says, with Azure Storage Actions, you pay only for what you use.
When your task assignments are executed, you are charged based on the count of objects targeted for scanning and the count of operations performed. Creating task definitions, previewing their effect on your data, and monitoring task assignment execution are free. So if you had used the GUI, Ryan, you would have known how much it was about to charge you. [00:55:14] Speaker C: How dare you bring that up. It was only 70 or 700k. [00:55:23] Speaker A: Only 700k over budget for that month. [00:55:28] Speaker C: And the only valid excuse for using a UI over an API that I've ever had. Yep. [00:55:33] Speaker A: It would have warned you: this will cost you a lot of money. Yeah. Okay. Well, that's a terrible blog post, by the way. Microsoft should be ashamed of how bad that is. [00:55:45] Speaker D: I'm still trying to understand the frequently asked question, how many storage resources can I have in my subscription? [00:55:50] Speaker C: Oh, there's no way I'm gonna figure that out. [00:55:52] Speaker A: I'm gonna have to go put it in Claude later. I was very disappointed that the Copilot on the website wouldn't let me paste the entire answer to explain it to me. It's just shameful of their AI. I'm like, I'm gonna have to go put this in Claude later. [00:56:04] Speaker D: Or ask my Microsoft account rep to explain this to me. [00:56:06] Speaker A: Yeah, because again, if a storage account is a bucket and then you can only put 50 of these things on a bucket, I'm just like, oh my goodness. Okay. [00:56:17] Speaker D: Storage accounts are also like FSx, where they could be file shares. [00:56:22] Speaker A: Well, but yeah. Okay, then would Azure SAN also be a storage account? [00:56:27] Speaker D: No, I think that's its own service. I'm not that dumb in my day job. [00:56:33] Speaker C: I think you were right when you... yeah. Because this is like the only Azure service I've ever really used, is why.
[00:56:39] Speaker A: You used Azure SAN? [00:56:41] Speaker C: Blob store. Blob store. [00:56:43] Speaker A: Okay. [00:56:43] Speaker D: I was like, why would you use Azure SAN? [00:56:46] Speaker C: So there's no configuration that I remember reading about, or being answered about, of the SAN service. And so it really is just sort of like, do you want to manage this as an object or as a directory? You know? [00:56:57] Speaker D: Yeah. And SAN is its own service in the console, outside of storage. [00:57:03] Speaker A: Yeah. [00:57:04] Speaker C: Which I assume you're, you know, plugging into your VM as block storage or something. [00:57:10] Speaker A: All I know is I don't want to use Azure. That's all this is confirming to me. But why would they make a bucket a storage account? Like, why? [00:57:20] Speaker D: A long time ago, I was managing a customer and they were going to CIS-harden their Azure. And one of the things that got flagged was about the minimum TLS version for storage accounts. It took me a really long time to understand, until I actually started doing Azure daily, that the term storage account isn't actually accurate, because my brain on AWS went, why do I have to have a different account for each storage container? And then I finally used it. I finally realized it's just called a storage account because Microsoft hates everyone. [00:57:58] Speaker C: Well, yeah, the storage account is like... if you think about it, you could set up separate permissions for the bucket, right? Like, you know, access to objects. It's kind of outside of the AWS IAM sort of ecosystem. And so the storage account is sort of a takeoff in that direction, where you can manage access to either your object storage or your file system storage very granularly. [00:58:27] Speaker D: Yeah. [00:58:28] Speaker A: So now I'm even more mad at this.
So Claude. Claude has helped explain this like I'm dumb. So I pasted the question and the answer and I said, explain this to me like I'm dumb. So it said, explained simply, you can create... which was nice. Thanks for changing dumb to simply. I appreciate you, Claude. [00:58:47] Speaker C: We're friends. [00:58:49] Speaker A: So it says you can create up to 5,000 different storage action tasks in your subscription, and you should think of these like different types of jobs you can set up. And each of these tasks you can assign to work on up to 5,000 different things. So one task can be used in multiple places, but your whole subscription can have up to 10,000 total assignments across all of your tasks. Okay. For each storage account you have, you can only have 50 active task assignments running at once. And so then I said, okay, so then how many storage accounts would I need before I hit the limit on the subscription or task? And it goes, well, that would be taking the 10,000 number for the subscription and dividing it by 50 active task assignments, which means you can only cover 200 storage accounts. I've had companies that have had millions of buckets in AWS. [00:59:37] Speaker D: Yeah, but the default for AWS was like 50 or 100, I thought. [00:59:42] Speaker A: I mean, yeah, the initial limit. But you can get up to thousands and thousands of buckets. But yeah, I don't know. We gotta move on because my brain can't handle this anymore. [00:59:56] Speaker D: I mean, I definitely would say I have over a thousand at work, if not more than that. [00:59:59] Speaker A: Yeah. So you would have to have multiple subscriptions now to break it up, to use this process across all of those storage accounts. Unless that would be the... [01:00:06] Speaker D: Soft limit you could adjust. [01:00:08] Speaker A: I mean, maybe, but why wouldn't the FAQ say it that way?
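The limit math the hosts work through can be written down as a quick sanity check. The figures (5,000 task definitions, 5,000 assignments per task, 10,000 assignments per subscription, 50 enabled assignments per storage account) are as read from the Azure Storage Actions FAQ on the show; treat them as illustrative rather than authoritative:

```python
# Sanity check of the Azure Storage Actions limits discussed above.
# Numbers are as read from the FAQ on the show, not verified defaults.
MAX_TASK_DEFINITIONS_PER_SUBSCRIPTION = 5_000
MAX_ASSIGNMENTS_PER_TASK = 5_000
MAX_ASSIGNMENTS_PER_SUBSCRIPTION = 10_000
MAX_ENABLED_ASSIGNMENTS_PER_STORAGE_ACCOUNT = 50

# The per-subscription cap binds long before the per-task cap does:
# a single task could take 5,000 assignments, but two such tasks
# already hit the 10,000 subscription ceiling.
assert 2 * MAX_ASSIGNMENTS_PER_TASK == MAX_ASSIGNMENTS_PER_SUBSCRIPTION

# If every storage account is maxed out at 50 enabled assignments,
# the subscription cap is exhausted after only 200 storage accounts.
covered_accounts = (MAX_ASSIGNMENTS_PER_SUBSCRIPTION
                    // MAX_ENABLED_ASSIGNMENTS_PER_STORAGE_ACCOUNT)
print(covered_accounts)  # 200
```

Which is exactly the 200-storage-account ceiling Claude arrived at, and why a subscription with a thousand-plus storage accounts couldn't cover them all.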
This is a soft limit you can adjust. Anyways. All right, we got to move on. Red Hat Summit 2025 is next week, competing with Microsoft Build. And apparently Microsoft has so much money, they're a platinum sponsor of the Red Hat Summit while doing Build at the same time. And they are showcasing several new capabilities with Red Hat. So first up, one that I thought was going to come a long time ago: the Windows Subsystem for Linux, which has always supported Ubuntu, will now support Red Hat if you have a Red Hat developer subscription, which, you know, hey, I like having options. Azure Red Hat OpenShift is going to get some updates. There's a new Red Hat landing zone for Azure, so you can find all of your Red Hat resources in one simple Azure landing zone. Application awareness and wave planning in Azure Migrate, to migrate all of your Red Hat workloads to Azure. And for those of you who are still stuck on JBoss, you poor bastards, there's new JBoss EAP on App Service and JBoss EAP support on Azure virtual machines. And you should also then look at LinkedIn to find a new job, because if you're still doing JBoss at this point in time, I'm not sure you should still work there. But that's nice. [01:01:15] Speaker C: Unless you're like the JBoss guru. [01:01:18] Speaker A: Yeah, you gotta really know JBoss. [01:01:19] Speaker D: Yeah, that sounds horrible. [01:01:21] Speaker C: Maybe then it's okay. Yeah, exactly. I mean, I'm kind of excited about the Windows Subsystem, but again, the developer license sort of kills that for me. [01:01:34] Speaker A: How much does a Red Hat developer license cost these days? [01:01:36] Speaker C: I have no idea. [01:01:40] Speaker A: Red Hat Developer... it's just a VM. [01:01:43] Speaker D: That's what I thought they run for WSL. [01:01:45] Speaker A: It is. It's a really stripped-down version of Linux. [01:01:48] Speaker D: Right. [01:01:49] Speaker A: Oh, it says here there's a no-cost Red Hat Enterprise Linux individual developer subscription.
Oh, okay. So you don't even have to pay for it. That's nice. [01:01:57] Speaker D: Yeah, you just have to get their spam forever. [01:02:00] Speaker C: Yeah, so I'm probably already on that list. [01:02:03] Speaker A: Yeah, it's been a while since I've actually looked at what's supported in WSL because I don't use Windows that much. [01:02:08] Speaker D: That list was a lot better than I thought. [01:02:09] Speaker A: Ubuntu, Debian, openSUSE, Kali Linux and Fedora Remix, all available. [01:02:14] Speaker C: Oh really? I didn't know that at all. [01:02:17] Speaker A: And on the Ubuntu side it's supporting 18.04, 20.04 and 22.04, so quite a few options. Apparently Oracle Linux as well. [01:02:25] Speaker C: Yeah, I mean, that's all the stuff that's on extended support for Ubuntu, so that makes sense. [01:02:29] Speaker D: Well, they missed 24.04. [01:02:32] Speaker A: Oh no, 24.04, they don't have that one yet. Sorry. Yeah, 22.04 is available. [01:02:39] Speaker C: Yeah, 24.04 is brand new, right? [01:02:43] Speaker A: Yeah, well, I mean, as of last year. [01:02:45] Speaker D: But sure, can we even call it new anymore, for their LTS versions? [01:02:50] Speaker A: I don't know. I just installed 25-something, which wasn't a long-term support release, so I was playing with it. All right, moving on. Azure is announcing three enhancements to model fine-tuning in Azure AI Foundry. First up is reinforcement fine-tuning, or RFT, with o4-mini, coming soon. Supervised fine-tuning for GPT-4.1 nano and the Llama 4 Scout model is available now. For those of you who don't know what any of those things are, reinforcement fine-tuning introduces a new level of control for aligning model behavior with complex business logic, rewarding accurate reasoning and penalizing undesirable outputs.
RFT improves model decision-making in dynamic or high-stakes environments, and RFT is best suited for use cases where adaptability, iterative learning, and domain behavior are essential. RFT should be considered for the following scenarios. Custom rules, where decision logic is highly specific to your organization and cannot be easily captured through static prompts or traditional training data. Domain-specific operational standards, where internal procedures diverge from industry norms and where success depends on adhering to those bespoke standards; RFT can effectively encode procedural variations, such as extended timelines or modified compliance thresholds, into the model's behavior. And finally, high decision-making complexity: RFT excels in domains with layered logic and variable-rich decision trees. When outcomes depend on navigating numerous sub-cases or dynamically weighing multiple inputs, RFT helps models generalize across complexity and deliver more consistent, accurate decisions. And luckily, there is a customer use case. All I could think is, wow, you could really take a lot of bad things and codify them. The first one is wealth advisory at Contoso Wellness. So basically they're saying Contoso Wellness, their fictitious wealth advisory firm, uses RFT, and the o4-mini model learned to adapt unique business rules, such as identifying optimal client interactions based on nuanced patterns like the ratio of a client's net worth to available funds. This enables Contoso to streamline their onboarding processes and make more informed decisions faster. So my mind went to something like insurance denial processes: here's all the ways that we want you to deny insurance claims, and here's all the rules that we use to deny them. That was what I was thinking, you know, because again, I'm a bad person. But you know, that's where we're at.
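The "rewarding accurate reasoning and penalizing undesirable outputs" idea behind RFT can be sketched with a toy grader. To be clear, the function name and scoring scheme below are hypothetical illustrations of the concept, not Azure AI Foundry's actual grader API:

```python
# Illustrative sketch only: a toy grader in the spirit of reinforcement
# fine-tuning (RFT), which rewards outputs matching the required decision
# and penalizes forbidden ones. Names and scoring are hypothetical, not
# Azure AI Foundry's real grader interface.
def grade_response(response: str, expected_decision: str,
                   forbidden_phrases: list[str]) -> float:
    score = 0.0
    text = response.lower()
    # Reward: the model reached the decision the business rules require.
    if expected_decision.lower() in text:
        score += 1.0
    # Penalty: the model produced output the policy forbids.
    for phrase in forbidden_phrases:
        if phrase.lower() in text:
            score -= 1.0
    return score

print(grade_response("Approve the onboarding request.",
                     "approve", ["guarantee"]))  # 1.0
```

During RFT training, a score like this would be fed back so the model learns to maximize it, which is how bespoke rules (or, as noted above, denial processes) get encoded into behavior.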
And then finally, supervised fine-tuning allows you to instill your models with company-specific tone, terminology, workflows and structured outputs, all tailored to your domain. This is well suited for large-scale workloads like customer support automation, where models handle thousands of tickets per hour with consistent tone and accuracy, as well as internal knowledge assistants that follow company style and protocol when summarizing documentation or responding to FAQs. They do have a use case for that one too, but I think that one's more self-explanatory. [01:05:24] Speaker C: I mean, it just continues the trend of more and more customization of these large language models, right? At the beginning everyone was sort of training their own bespoke models, but now, you know, with RAG and RFT and a whole bunch of grounding, you can really tailor the existing models to your workload. [01:05:46] Speaker A: It's kind of cool. Indeed. Well, we made it, guys. That's the end of the show for the week. [01:05:52] Speaker C: Excellent. [01:05:53] Speaker A: And predictions. [01:05:54] Speaker D: That's pretty good. [01:05:55] Speaker A: Yeah, not bad. [01:05:56] Speaker C: Yeah. [01:05:56] Speaker A: Well, I guess we will see you next week here on The Cloud Pod. [01:06:00] Speaker C: All right, bye everybody. [01:06:01] Speaker D: Bye everyone. [01:06:05] Speaker B: And that's all for this week in cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services. While you're at it, head over to our website, where you can subscribe to our newsletter, join our Slack community, send us your feedback and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode. [01:06:38] Speaker A: Well, I do have an aftershow for you guys. So there was a headline on Ars Technica that I had to read twice because I was like, what?
And that headline says the Linux kernel is leaving 486 CPUs behind, only 18 years after the last one was made. I was like, what? So apparently the Linux kernel supports 486s and has for quite a while. And I first of all want to say I had no freaking idea. I didn't even know you could still make a 486 run. I'm sure it would work if I had one, but I got rid of it a long time ago. [01:07:12] Speaker C: How do they test it? How do they get it? [01:07:13] Speaker D: I was thinking, what's their regression testing protocol for that? Do they have automated tests for it? [01:07:18] Speaker A: Maybe their last chip finally died. That's why they're dropping support. I don't really know. So I was curious, can I still get a 486 chip? And so I went online, and yes, I can get them on eBay, and I can get one built as late as 2007, which is when Intel stopped building them. Which I was like, 2007? That seems crazy to me. So yeah, this is the end of an era, apparently. And basically Ingo Molnar, quoting Linus Torvalds that there's zero real reason for anybody to waste one second on 486 support, submitted a patch series for the 6.15 kernel that updates its minimum supported features. Those requirements now include TSC (the timestamp counter) and CX8 (the fixed CMPXCHG8B instruction), features that the 486 lacks, as do some early non-Pentium 586 processors. So yeah, if you have some old hardware lying around and you want to run something on it, apparently Linux is the way to go. Which I already knew. But wow, that's impressive. [01:08:17] Speaker C: Yeah, I'm shocked that it's gone this far. It sounds crazy to me how long that's taken. [01:08:26] Speaker A: I mean, I don't think there's any distro that you can get that supports the 486. So you'd have to be running this as, like, a really old Linux where you're updating the kernel manually by compiling it on the system, or something else. I have no idea how you're running this.
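Those two required features show up as flags in the `flags` line of `/proc/cpuinfo` on Linux, so a rough check of whether a CPU meets the new 6.15 x86-32 minimums can be sketched like this (the sample flag strings below are abbreviated illustrations, not full real `cpuinfo` lines):

```python
# Check whether a CPU's feature flags satisfy the new kernel 6.15 x86-32
# minimums discussed above: TSC (timestamp counter) and CX8 (CMPXCHG8B).
# On a real Linux box you'd read the "flags" line from /proc/cpuinfo.
REQUIRED = {"tsc", "cx8"}

def meets_minimum(flags_line: str) -> bool:
    flags = set(flags_line.split())
    return REQUIRED.issubset(flags)

# A Pentium-era flags line (abbreviated) passes; a 486 lacks both flags.
print(meets_minimum("fpu vme de pse tsc msr pae mce cx8"))  # True
print(meets_minimum("fpu vme de pse"))                      # False
```

This is also why some early non-Pentium 586 chips get dropped along with the 486: they pass the family check but still lack one of these flags.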
But yeah, crazy that you would want to do it at all, because what are you using this computer for? Like some really old, terrible system that only runs on a 486? Like, wow, virtualization is your friend. [01:08:54] Speaker C: I think maybe you can finally replace that one server you're afraid to unplug. [01:08:59] Speaker A: Right? I mean, if it's been around that long and your business still requires a 486 to run, I'd be concerned. You should also go to Indeed.com. [01:09:06] Speaker C: Yeah, yeah, exactly. I was surprised that i386 was removed in 2012, which is the other thing in this article. [01:09:16] Speaker A: I mean, at least that was 2012. I'm like, okay, I don't know if I'd be as shocked. I mean, I probably would. Yes, I lied to you. [01:09:22] Speaker C: I would say that about that too, but I mean, 486 being 10 years later than that is crazy. [01:09:29] Speaker D: All right, so FreeBSD, Alpine, Gentoo, Tiny Core Linux, Damn Small Linux, LV and Plop Linux all support 486. [01:09:41] Speaker C: Well. [01:09:43] Speaker A: New versions of them, or, like, what's the last version of those? [01:09:46] Speaker C: Until this Linux kernel change goes in, no one's going to drop it, right? [01:09:52] Speaker A: Well, Gentoo, no shock there, as we know Matt's a big Gentoo fan. I really haven't used it in years. [01:10:00] Speaker D: Alpine Linux I know is pseudo-based on Gentoo; it has a lot of the same people that go from one to the other. So FreeBSD, I mean... [01:10:09] Speaker A: I mean, most people who go to Alpine only go there because Docker told them to. I don't think anybody who's using Gentoo was like, oh, I want to go to Alpine. [01:10:17] Speaker D: I'm thinking of Arch Linux. Sorry. [01:10:18] Speaker A: Ah yes, that makes sense. [01:10:20] Speaker D: Okay, yeah, sorry, my brain.
[01:10:21] Speaker A: The only reason why anybody adopts Alpine is because they're trying to do containerized development and someone said their container was too big. Yeah, that's really the only way. [01:10:31] Speaker D: It feels like they have an extra thing. Like, are these all OSs where you can customize the kernel more? [01:10:37] Speaker A: No, I have no idea. [01:10:38] Speaker C: I don't know. [01:10:40] Speaker A: Anyways, end of an era. I'm sure Jonathan has a 486 somewhere running; he would probably be the one. He seems like a guy who never throws away anything. Maybe not running. [01:10:48] Speaker C: But in his house somewhere. [01:10:50] Speaker A: Somewhere, or in a storage shed somewhere, you know, just waiting for the day when it can come back. [01:10:54] Speaker D: What's older than the 486? [01:10:56] Speaker A: The 386, and then a 286. [01:10:59] Speaker D: The 386 has more support than the 486, according to this random website. [01:11:05] Speaker A: So the 486 was the first one that ran Windows 3.1, which is really kind of the moment that Windows took off and the personal PC era really began and got out of the hobbyists. The 386 was out there, but the other one that was big at the time was the Apple II, which was also pretty popular with the tech community at the time. I had an Apple II. That was fun. Booting off the floppy drives. I had a TI-86 before that. That was fun too. [01:11:34] Speaker D: No, this is DistroWatch. I haven't used this in years. [01:11:39] Speaker A: That's a name I haven't heard in a while. [01:11:41] Speaker D: I know, right? I called it a random website, but when I looked at what the website was, I was like, oh, I've heard of this. [01:11:48] Speaker A: I'll put a link to this in the show notes so people can check it out, if they have an old computer somewhere they want to give their kid that can't actually do anything on the Internet. This is your...
[01:11:57] Speaker D: Apparently you can run NetBSD on your Dreamcast. [01:12:01] Speaker A: Oh, I mean, that's something I've always aspired to do, I suppose. Nice. Next you're gonna tell me I can run Linux on my fridge. Crazy talk. [01:12:12] Speaker C: It's weird. The more I dive into this, the more weirdness. This was replaced by the Celeron. [01:12:21] Speaker D: Oh, I forgot about that. [01:12:23] Speaker C: It just seems so disparate in time. Like, these things... mid-1990s. Like, I didn't think the Celeron came out in the mid-1990s, but it must have, if that's what this is saying. [01:12:34] Speaker D: Yeah, I assume there's someone on the kernel team that worked somewhere that needed support for this, and that's why they kept it in, you know, like an old mainframe or some random software that they needed to keep up. Like, why else keep it in there? Or we're all just lazy developers. [01:12:56] Speaker C: I mean, I think that was sort of Linus Torvalds' point: there's no reason to do this. Yeah, it's probably just in the process, you know. [01:13:07] Speaker A: Well, I mean, if people are actually actively testing it, I would be shocked. [01:13:13] Speaker C: You'd have to emulate it at this point, I imagine. [01:13:15] Speaker A: Oh yeah, you can buy the chips. Like I said, I found them on eBay. I can buy a whole 486 computer if I want to, for like nothing. [01:13:21] Speaker D: It's probably more in shipping, I guess. [01:13:25] Speaker C: You know, maybe they do that for testing, apparently. [01:13:27] Speaker A: I don't know. Yep. I can get a Blue Chip rare 486 DX computer with a Colorado 350 tape drive, untested, for $137. Wow. Or vintage... just the keyword. This one here is a vintage Turbo 486 desktop computer, mobo and case, $179.99. It's good times. [01:13:49] Speaker C: What a steal.
[01:13:51] Speaker A: Kind of expensive, actually, for what it is. [01:13:52] Speaker C: Yeah, no, I was kidding. That was sarcasm. I'm like, that's almost 200 bucks. [01:13:57] Speaker A: Yeah. [01:13:58] Speaker C: For a calculator. [01:13:59] Speaker A: I mean, if you want just the chip, I can get you an Intel 486 DX2 66 MHz CPU for $23.75. [01:14:07] Speaker C: They don't look all that pretty, so that's the only thing I could do with it. Exactly. [01:14:14] Speaker A: I didn't actually know AMD had a 486 equivalent. Really? Yes. I can get one. Yeah, it says right here I can get one for $9.99 on eBay. [01:14:23] Speaker C: This is so weird. [01:14:25] Speaker D: Texas Instruments also. [01:14:26] Speaker C: Yeah, I feel like this is like an alternate reality for me. Like, I don't know, we won... [01:14:35] Speaker A: The lottery? And all of a sudden, like, what's Ryan doing these days? Well, he's just collecting old computers, because, you know, that's what he decided to do with his fortune. I need to live in a different time world now. Yeah. [01:14:46] Speaker D: I didn't realize AMD and Intel went back that far, because I remember AMD with, like, their 2500s and ones like that. When it was like they were the thing... like, I felt like that point was when they were coming into the market. [01:15:03] Speaker A: What if I told you... [01:15:04] Speaker D: I guess they've been around forever. [01:15:05] Speaker A: What if I told you that Intel was founded in July of 1968, and AMD was founded in May of 1969? AMD is only one year younger than Intel. [01:15:19] Speaker C: That doesn't... see, again, none of this makes sense. Did I fall asleep and wake up in a different timeline? This doesn't make sense. [01:15:31] Speaker A: Now, AMD, when they first started... their early products were memory chips.
They did not get into the microprocessor market until 1975, to compete with Intel, its main rival at the time. So they've been fighting each other since 1975. That's crazy. Although I think AMD is winning at this point. Intel's not doing well. [01:15:51] Speaker D: Yes. [01:15:52] Speaker C: Yeah, Intel seems to have had its peak. [01:15:54] Speaker A: I mean, I don't know if AMD is technically winning either, other than they make expensive video cards and Intel does not. But on the processor front, ARM is winning. [01:16:05] Speaker C: Yeah, well, I mean, AMD was winning for a little while just because it was the same performance as Intel, but cheaper. [01:16:10] Speaker A: Yeah, yeah, exactly. But yeah, they're definitely winning on GPUs, not so much on processors. But they do make some nice Ryzen chips. I do like them quite a bit. All right, gentlemen. Well, that was a fun trip down memory lane with 486 computers. I can't believe that Linux supported it till 2025. That's an argument I want to see; I should go find that thread on the Linux kernel mailing list. [01:16:35] Speaker D: Yeah, see when they were arguing to remove it or not, and if anybody was arguing for it. [01:16:40] Speaker A: That would be great. What do you need it for, please? [01:16:43] Speaker C: Yeah, if you find that, report back, because that sounds awesome, for sure. [01:16:48] Speaker A: All right, gentlemen, have a great week. We'll see you next week. [01:16:51] Speaker C: See you next time.
