283: You’ve Got Re:Invent Predictions

tcp.fm | November 27, 2024 | 01:13:36

Show Notes

Welcome to episode 283 of The Cloud Pod, where the forecast is always cloudy! Break out your crystal balls and shuffle those tarot decks, because it’s Re:Invent prediction time! Sorry we missed you all last week – the plague has been strong with us. But Justin and Jonathan are BACK, and we’ve got a ton of news, so buckle in and let’s get started! 

Titles we almost went with this week:

A big thanks to this week’s sponsor:

We’re sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You’ve come to the right place! Send us an email or hit us up on our Slack channel for more info.

General News  

01:27 The voice of America Online’s “You’ve got mail” has died at age 74

AWS 

03:04 It’s Time for Re:Invent Predictions!

Matt

  1. A large green computing initiative announced on the main stage
  2. LLMs at the edge
  3. Something new on S3

Ryan (AI)

  1. Improved serverless observability tools
  2. Expansion of AI-driven workflows in data lakes
  3. Greater focus on multi-account or multi-region orchestration, centralized compliance management, or enhanced security services

Jonathan

  1. New edge computing capabilities and better global application deployment features (maybe a Cloudflare competitor)
  2. New automated cost optimization tools
  3. Automated RAG/vector embeddings for S3

Justin 

  1. A managed Backstage or platform-engineering-like service
  2. A new multi-modal LLM as a replacement or upgrade to Titan
  3. A competitor VM offering to Broadcom/VMware

Honorable Mentions

Jonathan:

Deeper integration between serverless and container services

New Region

Enhanced observability with an AI-driven debugging tool

Justin:

Multi-cloud management in a bigger way (an Anthos competitor)

Agentic AI tooling

A new ARM Graviton chip

How many times will AI or Artificial Intelligence be said: 

Justin – 35

Jonathan – 72

And now it’s time for Pre:Invent announcements: 

20:09 Introducing Express brokers for Amazon MSK to deliver high throughput and faster scaling for your Kafka clusters

21:10 Jonathan – “It seems like it would be a no-brainer: if you’re running enough standard brokers to meet their capacity, then switching to these, as long as you maintain your redundancy, would be kind of a no-brainer. I wonder what they’ve done exactly to make this new class of instances. They’re not just bigger instances, surely.”
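
If you want to kick the tires on the new broker class, a minimal boto3 sketch might look like the following. The express.m7g.xlarge instance-type string follows the announcement’s naming, and the subnet and security group IDs are placeholder assumptions, so verify everything against the current MSK docs:

```python
# Minimal sketch: provisioning an MSK cluster on Express brokers with boto3.
# The "express.m7g.xlarge" string, subnets, and security groups are
# illustrative assumptions, not tested values.
import boto3

kafka = boto3.client("kafka", region_name="us-east-1")

response = kafka.create_cluster_v2(
    ClusterName="demo-express-cluster",
    Provisioned={
        "KafkaVersion": "3.6.0",
        "NumberOfBrokerNodes": 3,  # one per AZ to keep the redundancy Jonathan mentions
        "BrokerNodeGroupInfo": {
            "InstanceType": "express.m7g.xlarge",  # Express broker class
            "ClientSubnets": ["subnet-aaa", "subnet-bbb", "subnet-ccc"],
            "SecurityGroups": ["sg-0123456789abcdef0"],
        },
    },
)
print(response["ClusterArn"])
```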

22:13 Amazon EBS now supports detailed performance statistics on EBS volume health

22:44 Justin – “So, you know, in the early days of auto scaling, one of the things a lot of customers would do was run tests: when the node came up, they would actually test the I/O throughput to the EBS volume, because they were not always created equal. And so if you got a bad EBS volume, you’d create another one, or rescale, or kill that node and try again until you got one that performed to your specifications. So now they’re at least exposing this to you so you can just monitor it from CloudWatch, which is a much simpler way than running a bunch of automated tests.”
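
A minimal sketch of the monitoring approach Justin describes, polling CloudWatch instead of running synthetic I/O tests at boot. VolumeReadOps and VolumeTotalReadTime are long-standing AWS/EBS metrics; the eleven new detailed statistics may surface under different names, so treat the metric name here as an assumption:

```python
# Hedged sketch: pull recent EBS volume latency data from CloudWatch.
import boto3
from datetime import datetime, timedelta, timezone

cw = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

resp = cw.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeTotalReadTime",  # assumption: check the new metric names
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    StartTime=now - timedelta(minutes=10),
    EndTime=now,
    Period=60,
    Statistics=["Average"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    # Average time spent on reads in each 60-second window
    print(point["Timestamp"], point["Average"])
```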

24:00 EC2 Auto Scaling introduces provisioning control on strict availability zone balance

24:24 Justin – “…one of the things: if you are in a region with three zones and you want three nodes in your auto scaling group, it’ll spin up A and B, and then say C doesn’t have the capacity. It’ll just keep spinning away at C, letting you know that it’s not launching that server, forever, which is just terrible. So now you can at least say, look, I still want segmentation, I’d still want at least two zones, but if that third node can’t spin up in C, you can just put it in B or A.”
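
A sketch of what turning this on might look like. The AvailabilityZoneDistribution parameter and its “balanced-only” / “balanced-best-effort” strategies are taken from the announcement; treat the exact strings as assumptions and everything else as placeholder values:

```python
# Hedged sketch: an ASG with the new zone-balancing control.
import boto3

asg = boto3.client("autoscaling", region_name="us-east-1")

asg.create_auto_scaling_group(
    AutoScalingGroupName="demo-zone-balance",
    MinSize=3,
    MaxSize=9,
    DesiredCapacity=3,
    LaunchTemplate={"LaunchTemplateName": "demo-template", "Version": "$Latest"},
    VPCZoneIdentifier="subnet-aaa,subnet-bbb,subnet-ccc",  # one subnet per AZ
    AvailabilityZoneDistribution={
        # "balanced-only" refuses to launch out of zone (strict balance);
        # "balanced-best-effort" falls back to another AZ, Justin's scenario.
        "CapacityDistributionStrategy": "balanced-best-effort"
    },
)
```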

25:55 Amazon Bedrock Prompt Management is now available in GA

26:19 Jonathan – “Yeah, you can always ask AI to write a prompt for you, which has always worked really well for me. Yeah, this is kind of nice. I’ve been using LangChain in Python recently; I think it’s also available for TypeScript as well. But LangChain supports creating prompt templates, and then you can string a whole series of things together and build agents and all kinds of stuff. So it’s nice to see that they’re kind of catching up with what the open source community already has in terms of usability for this.”
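
For flavor, here is a heavily hedged sketch of creating a versioned prompt template through the bedrock-agent client, roughly what LangChain’s prompt templates give you. The field names follow the CreatePrompt API as I understand it, and the model ID is just an example; verify the shapes against the current boto3 docs before relying on them:

```python
# Hedged sketch: Bedrock Prompt Management via boto3 (field names are assumptions).
import boto3

agent = boto3.client("bedrock-agent", region_name="us-east-1")

prompt = agent.create_prompt(
    name="summarize-ticket",
    defaultVariant="v1",
    variants=[{
        "name": "v1",
        "templateType": "TEXT",
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # example model
        "templateConfiguration": {
            "text": {
                "text": "Summarize this support ticket in two sentences: {{ticket}}",
                "inputVariables": [{"name": "ticket"}],
            }
        },
    }],
)
# Snapshot the prompt so applications can pin a stable version.
agent.create_prompt_version(promptIdentifier=prompt["id"])
```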

27:03 AWS Snow device updates

28:11 Jonathan – “It’s interesting, kind of in hindsight, we wondered who really used these things to begin with. And maybe it was just a good idea. Maybe it was internally used and they thought other people would want to use them and there just wasn’t a market for it.”

29:57 AWS Lambda SnapStart for Python and .NET functions is now generally available

30:58 Jonathan – “Wow, I mean, just think of the cost saving in usage, let alone the virtual capacity increase they’ve just got if everyone suddenly starts using this. Even if it’s just two seconds per invocation that they’re saving, that’s two seconds they can sell to somebody else.”
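
Turning SnapStart on for an existing function is a small change. A minimal boto3 sketch, with a placeholder function name; SnapStart snapshots are taken when you publish a version, and heavy imports or one-time init should run at module load so they land in the snapshot:

```python
# Minimal sketch: enable SnapStart on a Python Lambda and publish a version.
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

lam.update_function_configuration(
    FunctionName="my-python-fn",  # placeholder
    SnapStart={"ApplyOn": "PublishedVersions"},
)
# Wait for the config update to finish, then publish; the snapshot is
# captured as part of publishing the version.
lam.get_waiter("function_updated_v2").wait(FunctionName="my-python-fn")
lam.publish_version(FunctionName="my-python-fn", Description="snapstart-enabled")
```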

31:51 AWS Lambda turns ten – looking back and looking ahead

36:15 Centrally managing root access for customers using AWS Organizations 

39:12 Jonathan – “It’s wonderful. We no longer have to explain to the security team that setting the root password to some 64-character random password and then discarding it was actually a secure option, which I still think it was.”
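
A hedged sketch of the new root-session flow: a central security account vends short-lived, task-scoped root credentials for a member account. The sts assume-root call and the root-task policy ARN follow the announcement’s description of the allowed tasks; treat the exact names as assumptions:

```python
# Hedged sketch: task-scoped root session to unlock a broken bucket policy.
import boto3

sts = boto3.client("sts", region_name="us-east-1")

creds = sts.assume_root(
    TargetPrincipal="111122223333",  # member account ID (placeholder)
    TaskPolicyArn={"arn": "arn:aws:iam::aws:policy/root-task/S3UnlockBucketPolicy"},
    DurationSeconds=900,
)["Credentials"]

# Use the temporary, task-scoped credentials for exactly one job.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.delete_bucket_policy(Bucket="locked-out-bucket")  # placeholder bucket
```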

40:30 Introducing Amazon Route 53 Resolver DNS Firewall Advanced

41:35 Amazon DynamoDB lowers pricing for on-demand throughput and global tables

41:58 Justin – “…one of the interesting things I found in this article was that it points out that while provisioned capacity workloads were reasonable in the past, the new on-demand pricing benefit will result in most customers achieving a lower price with on-demand, which will still meet the capacity need without having to capacity plan or scale that capacity throughput. So they’re actually saying that, because of this price adjustment, the cost benefit is much better, and you should definitely consider moving back to on-demand DynamoDB.”
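
A quick back-of-envelope on what a 50% on-demand cut means. The prices are illustrative us-east-1 list prices as we understand them (about $1.25 per million write request units and $0.25 per million reads before the cut), so treat the numbers as assumptions:

```python
# Illustrative arithmetic for the DynamoDB on-demand price cut.
writes_per_month = 500_000_000    # 500M write request units
reads_per_month = 2_000_000_000   # 2B read request units

old_cost = writes_per_month / 1e6 * 1.25 + reads_per_month / 1e6 * 0.25
new_cost = writes_per_month / 1e6 * 0.625 + reads_per_month / 1e6 * 0.125

print(f"before: ${old_cost:,.2f}/mo  after: ${new_cost:,.2f}/mo")
# before: $1,125.00/mo  after: $562.50/mo, with no capacity planning required
```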

43:52 Introducing resource control policies (RCPs), a new type of authorization policy in AWS Organizations 

45:54 Justin – “…it sounds boring, but then when you think about it, it’s like, this is actually really cool.”
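
To make the S3 example from the post concrete, here is a sketch of an RCP that denies S3 access to any principal outside your organization, created and attached at the org root with boto3. The deny-unless-PrincipalOrgID pattern follows AWS’s data-perimeter guidance; the org and root IDs are placeholders, RCPs must already be enabled on the organization, and a production perimeter also needs carve-outs for AWS service principals:

```python
# Hedged sketch: an org-wide RCP restricting S3 to in-org principals.
import json
import boto3

orgs = boto3.client("organizations")

rcp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "EnforceOrgIdentities",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "*",
        "Condition": {
            "StringNotEqualsIfExists": {"aws:PrincipalOrgID": "o-exampleorgid"}
        },
    }],
}

policy = orgs.create_policy(
    Name="org-only-s3-access",
    Description="S3 resources may only be used by principals in this org",
    Type="RESOURCE_CONTROL_POLICY",
    Content=json.dumps(rcp),
)
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-exampleroot",  # placeholder organization root ID
)
```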

GCP

46:31 Dataplex Automatic Discovery makes Cloud Storage data available for Analytics and governance

47:41 Justin – “…you know, data is the new currency, so finding your data in your organization can be somewhat of a needle in a haystack, because everyone stores data where they think they need it, and then you have different enterprise systems and different SaaS applications in use… so, you know, to have a system that’s kind of inside of your environment that’s able to automatically scan and find your data assets and then pull them into a data lake, even if you don’t need them, that’s just incredibly valuable just for discovery.”

49:39 Shift-left your cloud compliance auditing with Audit Manager

50:56 Jonathan – “I wonder if compliance auditors in general will eventually die off (not literally), but I wonder if Google or Amazon or somebody else could actually build a tool where you say, ‘I want to be compliant with X framework,’ and we reach a point where it can be trusted enough to go and do assessments, collect data, generate reports, and then give you findings without the involvement of the PwCs of the world.”

53:20 65,000 nodes and counting: Google Kubernetes Engine is ready for trillion-parameter AI models

53:51 Justin – “You’re gonna need to communicate with your account rep before you spin up your 65,000 GKE nodes.”

Azure

55:55 Windows Server 2025 now generally available, with advanced security, improved performance, and cloud agility   

53:51 Jonathan – “Wow, that’s a lot of new stuff. I guess I was thinking, well, who, you know, in the cloud, they typically don’t allow virtualization anyway, so who would need all these features? Well, they need it for themselves. They built this; this is the Windows 2025 Azure release.”

1:02:35 Enhance the security and operational capabilities of your Azure Kubernetes Service with Advanced Container Networking Services, now generally available

1:05:04 Unlocking the future: Azure networking updates on security, reliability, and high availability

1:07:00 Announcing the availability of Azure OpenAI Data Zones and latest updates from Azure AI  

1:07:52 Jonathan – “Prompt caching is probably a poor name for it, actually; it isn’t really caching the prompt so much as caching parts of the prompt. It’s not reloading tokens into memory before inference, so you can reuse the same or common parts.”
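
A toy illustration of the idea Jonathan is describing: if many requests share a long common prefix (system prompt, context documents), the expensive work for the prefix can be done once and reused, paying only for the unique suffix. Real prompt caching reuses the model’s KV cache; this stand-in just memoizes a fake “encode” step:

```python
# Toy sketch of prefix caching: pay for the shared prefix once.
import hashlib

_prefix_cache: dict[str, list[int]] = {}

def expensive_encode(text: str) -> list[int]:
    # Stand-in for running tokens through the model.
    return [ord(c) for c in text]

def encode_with_prefix_cache(prefix: str, suffix: str) -> list[int]:
    key = hashlib.sha256(prefix.encode()).hexdigest()
    if key not in _prefix_cache:
        _prefix_cache[key] = expensive_encode(prefix)  # computed once
    return _prefix_cache[key] + expensive_encode(suffix)  # only suffix redone

system = "You are a helpful assistant. " * 100  # long shared prefix
encode_with_prefix_cache(system, "Question 1?")
encode_with_prefix_cache(system, "Question 2?")  # prefix work is reused
```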

1:08:57 Introducing Hyperlight: Virtual machine-based security for functions at scale  

1:10:04 Jonathan – “I think it will complement Firecracker really nicely, because it’s meant for function-based workloads, not VM-based workloads. So, a millisecond startup time… that’s close enough to zero to be zero, compared with 125 milliseconds for a Firecracker cold start. And to be fair, an eighth of a second to start up a VM is amazingly impressive, but one to two milliseconds to fire up a virtualized function is just great. Wow.”

Closing

And that is the week in the cloud! Visit our website, the home of The Cloud Pod, where you can join our newsletter and Slack team, send feedback, or ask questions at theCloudPod.net, or tweet at us with the hashtag #theCloudPod.


Episode Transcript

[00:00:07] Speaker A: Welcome to the Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP and Azure. [00:00:14] Speaker B: We are your hosts, Justin, Jonathan, Ryan and Matthew. Episode 283, recorded for the week of November 12, 2024. You’ve got Reinvent predictions. Actually it’s the 21st, I guess, but we didn’t record last week, so. Because we all came down with a plague. So I had no voice. I think you were dealing with migraine headaches. Ryan had no voice. Matt was like, I’m not doing this by myself, so don’t blame him for that. [00:00:39] Speaker A: Yep. [00:00:41] Speaker B: And unfortunately they’re still missing because everyone’s heading out for Thanksgiving. So we, but we, we had to get a prediction show in before Reinvent. We are not recording next week for Thanksgiving, as we’ve decided to start taking that off, because then we get into wall-to-wall coverage for Reinvent and then, you know, basically have a very busy end of the year for all of us. And you can still hear my voice is still a little croaky at times, so apologies for that. So Jonathan is going to help us out with some hosting duties here through the show today, so I don’t have to talk the whole time, which I appreciate because it is two weeks of news; even filtered down, it’s a lot of stories, so we should get right into it. But first of all, Jonathan, I’m sorry to report that Elwood Edwards has passed away at the age of 74, one day before his 75th birthday. And if you don’t know who that is, I can help remind you. A little sound clip here for you. You’ve got mail. Yes, yes. Sadly, the creator of the iconic “You’ve got mail” from AOL has passed away. This started in 1989 when Steve Case, who was the CEO of Quantum Computer Services at the time, which later became America Online, or AOL, wanted to add a human voice to their Quantum online service. Karen Edwards, who apparently worked as a customer service representative, heard Case discussing the plan and suggested her husband, Elwood, who was a professional broadcaster in Virginia. Edwards recorded all the famous phrases that you know, including the “Welcome,” “File’s done,” and “Goodbye” audio, on a cassette recorder in his living room, and was paid $200 for his services. His voice is still used today to greet users of the current AOL service, which is a long legacy for your voice to be used. I don’t think our voices will be used that long, Jonathan. [00:02:15] Speaker A: Probably. Probably not. And $200 per time that’s been played? [00:02:18] Speaker B: That’s a very low royalty rate for sure. Did not negotiate well on that one. [00:02:26] Speaker A: Yeah. What an iconic piece of history. [00:02:29] Speaker B: Yeah, it’s gonna be a lot of that these days. Things that were in the 90s that we grew up with will be resulting in people passing away, and it’ll be sad. But. Yeah, but you were there. You got mail. And it even inspired a movie, which is not very good. Does not hold up well. [00:02:47] Speaker A: No, that’s right. I forgot about that movie. [00:02:52] Speaker B: Well, we have to do predictions. We have predictions from both Ryan. Well, not really Ryan. We have an AI version of Ryan that we’ve created, Jonathan’s created in the lab, and Matt has provided his predictions to me offline, so I have them, so we can do our Reinvent predictions once again. So in our virtual rolling, Matt scored the highest roll, Ryan scored the second highest, Jonathan the third highest, and I came in fourth, which is always a bummer. We won’t have a tiebreaker other than Jonathan and I.
So, I mean, they had a negative one point already just because they weren’t here to tell us what their predictions were directly. That’s how I see it. So nothing. Nothing like the future to get you excited about Reinvent, which is sadly just two weeks away. So. So any thoughts before we get into predictions, Jonathan, about creating your picks this year? [00:03:42] Speaker A: It’s been. I think it’s tough this year. I think this year has been the most detached I’ve been from AWS in general. So apart from keeping up stuff on the podcast, I think it’s been hard to know where they’re going. And also the cloud of AI stuff just gets in the way of everything. [00:03:58] Speaker B: I mean, just looking at the pre-invent stuff, I’m just like, that’s stuff that would have made stage so many times, and now it’s relegated to being announced before Reinvent, which is kind of sad. [00:04:08] Speaker A: So, yeah, my. My strategy was kind of look at what’s been announced in the past six weeks and figure out what that could lead to as a. [00:04:18] Speaker B: What are the blank spots they haven’t announced? [00:04:20] Speaker A: Yeah, yeah, around that. [00:04:21] Speaker B: All right. [00:04:22] Speaker A: Yep. So we’ll see. But I’m. I’m not missing going to Vegas this time of year. I’m. [00:04:27] Speaker B: No, I am definitely not. I. You know, we’re much more attached to the Google world now. Even then, I don’t think I’m good at Google predictions as much as I used to be good at AWS predictions. But their conference is at least in April, which is so much better. [00:04:42] Speaker A: Yeah, I just remember the bone-dry lips and various other things from Vegas, though they had snow recently, which is just insane. [00:04:49] Speaker B: That is crazy. Well, and right now, F1 races just happened, or are happening right after. It’s going to be a mess in Vegas. [00:04:58] Speaker A: That’s right. It was like that last year as well, wasn’t it? [00:05:00] Speaker B: I think the race was right after Reinvent last year, and so there was a bunch of construction. I think it’s before Reinvent this time. But you know, that that time frame in Vegas is starting to not become very much fun to go. And I know it’s cheap for Amazon, but you know, you have to get people to go there, and it’s already expensive to fly because you’re typically flying post-Thanksgiving weekend, which is a very busy travel schedule. Plus it’s right before the holidays and it’s kind of a bad period of time. [00:05:24] Speaker A: Yep. [00:05:25] Speaker B: All right, well, getting to predictions. Matt’s first prediction that he gave me was a large green computing initiative. He’s really pulling for. Pulling for something there. So he’s saying they have some large green computing initiative that they’re going to announce on the main stage at Reinvent, which, you know, it could be something. They might talk about their nuclear ambitions, they may talk about their climate pledge goals. I think I’ve previously tried to do a climate thing and failed miserably. So better luck to Matt this time. [00:05:55] Speaker A: Yeah, you know, I, maybe, maybe I’ll adjust my first one to a “not large green computing” Reinvent prediction, because with, with the, with the cost, with the energy demands for AI, everyone’s backtracking on their climate pledges. So I don’t see how that’s a thing. Especially with the news that it looks like the nuclear reactors are not going to be approved, at least for quite some time. So it’s good.
[00:06:25] Speaker B: Are better. He might, he maybe should have changed his order up, but you know, he put it there. So. First one he gave me. So that's how it works. [00:06:33] Speaker A: Okay. [00:06:33] Speaker B: All right, I'll let you take your Ryan virtual AI pick. [00:06:37] Speaker A: Yep, yep, Ryan. Ryan AI picked some good ones, actually. I should have used a crappier model, probably. So with CloudWatch application signals now available for Lambda AWS will likely extend this capability to other serverless frameworks such as Step Functions or Amazon EventBridge and introduce more advanced tools for predicting performance bottlenecks in distributed apps. So the prediction is improved serverless observability and performance optimization tools. [00:07:11] Speaker B: All right. I don't know if that's a mainstage topic, but I like it. [00:07:15] Speaker A: Who knows? [00:07:16] Speaker B: You never know. All right, well, Then. And then your first pick? [00:07:20] Speaker A: My first pick was new edge computing capabilities and maybe like better global application deployment type features. I think Cloudflare is beginning to eat some of their lunch and I think they'll put more focus back on edge. [00:07:42] Speaker B: Yeah, I haven't looked at Cloudflare's earnings to see are they talking about seeing a lot of business coming that way from the edge compute stuff. But yeah, I don't think it's a bad pick. I think you have a better chance. You'll argue with me here for Matt's next pick because he has something that's kind of similar to yours. So we'll see if you think it's too close to yours. I'll just say another one of his picks. [00:08:04] Speaker A: All right, so what's yours? What's your pick? [00:08:05] Speaker B: So my first pick. So you know, this year has been all about platform and building platform teams, platform capabilities. And so I could see Amazon giving a managed backstage service. [00:08:21] Speaker A: Okay. [00:08:23] Speaker B: I just think it's the. It's the, you know, dev. Dev rel thing. You know, everyone's talking about it that the next evolution DevOps. I just think it makes sense. So just naturally they don't have anything like that. I checked today because I was like, maybe they have something, an idea, maybe they come up with something more decision, you know, AI related. But, you know, they have some stuff already kind of in that space. But this one, they don't really have anything. So I felt like this is right up their alley to steal. [00:08:47] Speaker A: Yeah. That's nice. I like it. It's something everyone needs. Well, maybe not everyone, but a lot of people need and nobody hosts as a managed service. So that's. That's cool. [00:08:55] Speaker B: Yeah. All right, moving on to Matt's second pick, Matt. And this one, if you think it's too close to yours, let me know. He said an LLM at the edge that runs on the Edge computer. [00:09:08] Speaker A: Now you can have that one. [00:09:12] Speaker B: You said like new edge computing capability. I'm like, oh, LLM a capability. [00:09:17] Speaker A: Yeah, he can take that one. [00:09:21] Speaker B: All right, Ryan, AI number two, write. [00:09:24] Speaker A: AI is expansion of AI driven workflows in data lakes. [00:09:34] Speaker B: In data lakes or. [00:09:35] Speaker A: And data lakes in data lakes. 
So the advancements in cloudtrail lakes, such as natural language querying and dashboard customizations, suggested that AWS will enhance other data centric services like Redshift or Athena with gen AI capabilities in enabling easier data exploration and automated insights. [00:09:55] Speaker B: Wow. All right. It's definitely not a pick that Ryan would have made normally. [00:09:59] Speaker A: It's not. No. [00:10:02] Speaker B: All right, and your second pick. [00:10:05] Speaker A: Mine is new automated cost optimization tools. [00:10:10] Speaker B: I said I have some more on my list, so take that one from me. New automated cost optimization tools. Yep. I definitely agree. All right, well, that he took that one. I think they're going to announce on main stage a new multimodal LLM model. Either an update to Titan or a full replacement for the Titan model. [00:10:31] Speaker A: Okay. It's been a while since I released. Released one. [00:10:35] Speaker B: Well, and Titan, the original one is not multimodal. So I think that's a key thing. [00:10:40] Speaker A: Yeah, that's interesting. They have so many different AI services for individual things, text and speech and language. Bringing it all together would be an interesting idea. [00:10:52] Speaker B: Well, I think part of the reason why we question Amazon's commitment is they don't have a solid foundational model. They have good partnerships with Anthropic, they have good partnerships with others. But you know, like Amazon, even Microsoft is building their own foundational model in addition to being a partner of OpenAI. And then you have Gemini on the Google side. So I feel like each of the major cloud provider has to have a world class foundational model that empowers a bunch of other things that you want to do off of it. And so I just, I feel like Titan has to be a big part of Reinvent this year. [00:11:23] Speaker A: Okay, that's a good one. I like it. [00:11:25] Speaker B: All right. And then Matt's the final one. He has three here. I don't like any of these last three that he gave me. So I'm gonna go with. He said something new on S3. It's very vague. I don't think it's gonna happen. So I'm gonna. Even though it's very vague, the other choice was something a dollar amount figure over a million dollars discussed. Which like that could be anything on a slide or a new region. And I don't think a new region is a good choice. But I mean they probably might announce one, but. So I'll give him something new on S3. But it has to be something really new on S3. [00:12:04] Speaker A: Some earth shattering. Yeah, yeah. [00:12:06] Speaker B: Like you like we built a fully managed AI service built on top of S3 that you. All you do is upload data to a bucket. Something like that. [00:12:15] Speaker A: I will go with this LLM thing at the edge. Maybe I can see a AI integration like RAG to S3 maybe from somewhere. [00:12:26] Speaker B: RAG test 3. Be cool. That'd be. [00:12:28] Speaker A: Should have picked that one. [00:12:28] Speaker B: Well, if it happens, if it happens, we can, we can debate if we want to give that to him or not because he wasn't here to. [00:12:33] Speaker A: Oh yeah. Automatic Embeddings creation from S3 documents for Vector search. [00:12:39] Speaker B: Yeah, that would be good. You should have had that as your pick. [00:12:42] Speaker A: I should have done well, funnily enough. [00:12:45] Speaker B: All right, well, Ryan AI is up for his third pick. [00:12:48] Speaker A: Ryan I. Number three. A great. 
A great focus on secure multi account and multi region sort of orchestration. So I think like. [00:12:56] Speaker B: So something beyond organizations. [00:12:58] Speaker A: Yeah, I think improvements to centralized compliance management or enhanced security services designed for large multi account setups. [00:13:08] Speaker B: Okay. Yeah, that's. That's not a bad one. That one I feel like Ryan would have come up with. [00:13:13] Speaker A: Yeah, yeah, I had some options. I picked that one because I thought that one was sort of kind of thing you would care most. [00:13:17] Speaker B: Ryan. Yeah. How about your third pick? [00:13:20] Speaker A: Well, I don't know about. Because like maybe, maybe I should go with the. I don't like my number three. [00:13:28] Speaker B: Now that's the problem when you do these is that you know, things kind of come into play. [00:13:33] Speaker A: Yeah. Well, I will go with some kind of automated rag to S3. [00:13:41] Speaker B: Yeah, I think you should service. You came up with it? [00:13:43] Speaker A: Yeah. [00:13:43] Speaker B: Give that to you. [00:13:44] Speaker A: Yeah, I'll go with that one. So like automatically creating like embeddings for vector search from. [00:13:50] Speaker B: Yeah, RAG or vector. [00:13:51] Speaker A: Yeah, yeah. [00:13:52] Speaker B: Task three. [00:13:53] Speaker A: Yeah, that's what I'm going to go with. Yeah, it's way better than. That's a great choice. [00:13:58] Speaker B: All right, that brings me to my final pick. I've got a couple of good ones to pick with. I'm going to go with this one because I think it's something. It's very on brand for Amazon. I think they are going to announce a VM platform other than VMware for their customers to move their Broadcom workloads to. [00:14:19] Speaker A: Ooh. Oh wow, that's cool. [00:14:23] Speaker B: That way they can still UI, still boot your VMs don't lift and shift, still do all that. But you come to our platform, it's going to be based on probably one of the open source models, but we do the migration and management for all that for you and you get out of your VMware licensing nightmare that you're in. I think it's very on brand for Amazon. [00:14:40] Speaker A: Why wouldn't they just migrate people to EC2 though? In that case what would the advantage be of a non vm? [00:14:46] Speaker B: You get the VM hypervisor, you get the lack of transformation. Pluria Platform EC2 instance is a little. [00:14:51] Speaker A: Bit of work versus the same kind of ecosystem compatible with existing images. And maybe a live migration. [00:14:59] Speaker B: Yeah, exactly. [00:15:00] Speaker A: Yeah. That be. Yeah. [00:15:03] Speaker B: And it's kind of like, you know, when they poke the eye of Oracle, you know, do the same thing to Broadcom. Did you have any honorable mentions you want to talk about? [00:15:12] Speaker A: I. Let me check my list. I think I'm kind of wondering if they'll go for a new region. So I was kind of agreeing with Matt. I don't, I don't think I use that for my honorable mention, though it may be. Nah, I'll skip that one. I don't like that one. [00:15:33] Speaker B: It's honorable mention. You don't get any points. So you can say region expansion. [00:15:36] Speaker A: Wow. I know. But it's still, it's still worth something. [00:15:41] Speaker B: Bragging rights really. [00:15:45] Speaker A: I think with the, with the new VPC lattice integration with ecs, I think we may get some deeper integration between serverless and container services. 
[00:15:56] Speaker B: Yeah, that's a good guess. Between serverless and containers. Oops. [00:16:02] Speaker A: Okay, maybe, maybe like a unified deployment or management layer across Lambda and Fargate is kind of what I'm thinking there. [00:16:11] Speaker B: Got it. Makes sense. Any other honorable mentions you want to add to the list? [00:16:17] Speaker A: No, nothing. That's nothing. That's likely. [00:16:21] Speaker B: I had a couple others. I do sort of feel this could be the year that Amazon goes big on multi cloud with multi cloud management tools. Like maybe an Anthos competitor, other type of things that will help you kind of centralize so that Amazon, some other shit. But the workloads are wherever you want them to be. [00:16:41] Speaker A: Okay, I'd like that. [00:16:43] Speaker B: I would too. [00:16:44] Speaker A: I don't think it's likely, but I do like that. [00:16:47] Speaker B: I mean they sort of like have kind of come around to the fact that people are going to be multi cloud. So I feel like if you've now admitted that like it opens up the possibilities for product. [00:16:56] Speaker A: That's what I think. [00:16:58] Speaker B: Let's see. I also thought maybe an agentic AI studio, runner builder, slash tooling capabilities. I was looking at their portfolio. They have, you know, AI agents, but it does sort of some of this. But it is not really what I envisioned to be an agentic AI that runs on itself. So that's kind of some tooling around that perhaps. And then I. You had, you think you mentioned something about automated cost optimization tools. So I had dynamic Spot instance management system that would just seamlessly move you between on demand and spot and those things without having to do all the heavy lifting you do today with Fleets. But I'm hoping yours is going to get it. So I didn't use that One. And then of course I had a new ARM processor. I assume they're gonna have a new Graviton this year. [00:17:43] Speaker A: But I did have one honorable mention, which is with the, with the enhanced observability in Lambda, I kind of wonder if they're going to go with some kind of AI driven automated like debugging tool or something. Some. Some way of helping it find problems. [00:18:01] Speaker B: A Q has some of that in the code debugging. But yeah, like at a app, at a operational, like in production level, that'd be cool to have something there. [00:18:08] Speaker A: Yeah, yeah, yeah, yeah. [00:18:10] Speaker B: So that's, I think that's, that's a pretty good list. [00:18:12] Speaker A: Yeah, I like it. [00:18:13] Speaker B: I don't feel terrible. I think there's maybe a winner or two in there. We'll see. Or we'll be, you know, we'll be here after Thanksgiving and saying, well, we were wrong 100%. Sorry. [00:18:23] Speaker A: What should we pick for the tiebreaker? We got to go with the number of mentions of AI gen AI again. That's kind of. [00:18:29] Speaker B: How many times say AI on stage? Yeah, we should do that. How many times will AI or artificial intelligence be said? I'm going to go 35 times. [00:18:40] Speaker A: Main stage, first day or both days? [00:18:43] Speaker B: All three. All the keynotes. Line them up. [00:18:47] Speaker A: Wasn't it hundreds the last time they did it? It was an enormous amount. [00:18:51] Speaker B: Yeah, I know but like we razzing pretty hard. I'm thinking maybe we'll take the feedback. [00:18:57] Speaker A: Is it price is right still, so. [00:18:58] Speaker B: Yeah, prices. 
[00:18:59] Speaker A: Well, I’m gonna, I’m gonna go with like. Well, since there’s nobody else here and you picked 35, I suppose you could go 36? 36. [00:19:06] Speaker B: Yeah, that’d be rude. [00:19:08] Speaker A: No, I’ll go with 70. 72. [00:19:12] Speaker B: All right, sounds good. All right, well, let’s get into the rest of the pre-invent announcements, of which there have been a bunch. Again, things that would have typically been announced on main stage that they luckily, you know, took off the table for us, so we didn’t make those dumb predictions. First up is introducing Express brokers for Amazon MSK to deliver high throughput and faster scaling for your Kafka clusters. This new express broker is designed to deliver up to 3 times more throughput per broker, scale up to 20 times faster and reduce recovery time by 90% as compared to standard brokers running Apache Kafka. Express brokers come preconfigured with Kafka best practices by default, support Kafka APIs and provide the same low-latency performance that Amazon MSK customers expect, so that they can continue using existing client apps without any changes to your code. That’s pretty good. Cost-wise, for the standard broker versus express broker: the express M7G 4XLarge, with 16 vCPUs and 64 gigs of memory, is $3.26 per hour, versus the standard broker, which is 16 vCPUs and 64 gigabytes at $1.63 an hour. Now, if it’s 20 times faster in cluster rebalancing or 3X throughput per broker, then that is a cost savings for you, if you actually can use 3x throughput per broker. If you don’t need the throughput, then it’s much more expensive. So definitely make sure you know your workload before you choose either the Express broker or the standard broker for MSK. [00:20:37] Speaker A: Yeah, it seems like it would be a no-brainer: if you’re running enough standard brokers to meet their capacity, then switching to these, as long as you maintain your redundancy, would be kind of a no-brainer. I wonder what they’ve done exactly to make this new class of instance. They’re not just bigger instances, surely? I guess they’re moved to Graviton, I’m sure. [00:20:57] Speaker B: Yeah, I’m going to say it’s probably a combination of CPU architecture and optimized silicon, and then, yeah, things you can do in code and such to streamline and multithread. [00:21:07] Speaker A: Yeah, cool. [00:21:08] Speaker B: But yeah, it’s kind of weird that that wouldn’t just become part of serverless too. But maybe that’ll come later, that those things will move into serverless as well. [00:21:13] Speaker A: Yeah, as we go through the list of things today, we should write down the things that we should have put down as Reinvent predictions. [00:21:21] Speaker B: I definitely would not have put that on as a Reinvent prediction, but I would definitely have seen it, you know, five years ago at Reinvent; that would have been an announcement. Yeah, Express brokers. That’s a huge throughput thing. Amazon EBS now supports detailed performance statistics on EBS volume health. This is in the category of thank you. Finally. Check that box for that feature request I had 10 years ago this week. CloudWatch gets detailed performance metrics for EBS volumes, which allows you to see real-time visibility into the performance of your EBS volume, making it easier to monitor the health of your storage resources and take action sooner. There are 11 metrics at up to per-second granularity to monitor input/output statistics of your EBS volumes, including driven I/O and I/O
latency histograms. So, you know, in the early days of auto scaling, one of the things that a lot of customers would do is they would create testing. When the node would come up, they would actually test the I/O throughput to the EBS volume, because they were not always created equal. And so if you got a bad EBS volume, you’d create another one, or rescale, or, you know, kill that node and try again until you got one that performed to your specifications. Now they’re at least exposing this to you so you can actually just monitor it from CloudWatch, which is a much simpler way than running a bunch of automated tests. [00:22:26] Speaker A: Yeah, that is cool, because I remember lots of hassles with trying to build dashboards on those metrics that they used to provide, because it’s like integrated over time, and you’re like, what does this really mean, exactly? So yeah, that’s nice. Of course, testing also changes the way it works, because some of the stuff is thin provisioned, I think. And so as you’re running your tests and then test again a second time, it works faster. [00:22:50] Speaker B: But there was also a very distinct. Like when you got a bad EBS volume, you got a bad EBS volume. It was very clear. So yes, there you also got some of the benefit; you were no longer being thin provisioned as well. But that’s, that’s kind of gone away. I mean, that was a problem in the early EC2 days; that sounds like a problem now. It was. Well, in a feature that I am actually very excited about, Amazon EC2 auto scaling is introducing a new capability for customers to strictly balance their workloads across AZs, enabling greater control over provisioning and management of EC2 capacity. Previously, if you wanted to do strict balancing, it would require you to create multiple auto scaling groups and invest in custom code to modify the ASGs based on what the lookups were and lifecycle hooks, and maintain multiples of all of this mess. And I’m just glad this is here, because one of the things, if you are in a region with three zones and you want three nodes in your auto scaling group, it’ll spin up A and B, and then let’s say C doesn’t have the capacity, it’ll just keep spinning away at C, letting you know that it’s not launching that server, forever, which is just terrible. So now you can at least say, look, I still want segmentation, I would still want at least two zones, but if that third node can’t spin up in C, you can just put it in B or A; I’ll be okay with that. [00:24:00] Speaker A: Yeah, so it’s sort of less strict than you would have thought, but ensuring that you still have the nodes available. [00:24:07] Speaker B: Okay, yeah, I think there’s some customization. I haven’t looked at this too far yet, but I think there’s some options of like how you want that to be done, you know, what your policies are, like, how you want, you know, maybe you still want, you know, most of your nodes to be spread across three, but you’ll allow, you know, everything past three to be more stacked if you want it to be. So you have a lot more options. And yeah, to do this previously, it was a lot of Lambda code and spackle. [00:24:31] Speaker A: Yeah, I remember deploying like Mongo servers and things like that, and you couldn’t do the auto scaling because it wouldn’t place them in the right place, and if a zone went down you’d have quorum problems and all kinds of things. So I guess if you’re going to enforce balancing across zones or nothing at all, then that would be great for those kind of things. If you want to bring up services that need some kind of quorum to.
If you want to bring up services that need some kind of quorum to. [00:24:52] Speaker B: Work reliably, it's also good way to probably get screwed by a cross az transfer fees. So just keep an eye on that too. [00:24:59] Speaker A: Yep. [00:25:01] Speaker B: Amazon is announcing the general availability of Amazon Bedrock Prompt Management with new features that provide enhanced options for configuring your prompts and enabling seamless integration for invoking them in your generative AI apps. Amazon Better prompt Management simplifies the creation, evaluation, versioning and sharing of prompts to help developers and prompt engineers get better responses from foundational models for their given use case. Which is sort of ironic because I saw an article this week that prompt engineering is dead. So prompt Engineers, a new career that was created by AI, is now already going away because you can just run the prompts against two different AI models and get better AI prompts. [00:25:34] Speaker A: Yeah, you can always ask AI to write a prompt for you, which has always worked really well for me. Yeah, this is kind of nice. I've been using LangChain in Python recently. I think it's also available for TypeScript as well. But LangChain supports creating prompt templates and then you can string a whole series of things together and build agents and all kinds of stuff. So this is. It's nice to see that they're kind of catching up with what the open source community already has in terms of usability for this. Yep. [00:26:03] Speaker B: Well, Jonathan, Amazon is the big bully on the playground today. Knocking your snow cone to the ground. [00:26:09] Speaker A: Knocking my Snowman down. [00:26:10] Speaker B: Yeah. Yeah. So apparently effective November 12, 2024, AWS has discontinued three previous generations of end of life snowball device models. Specifically the storage optimized 80 terabyte, the edge compute optimized with 52V CPU and the compute optimizes GPU devices. And you will no longer be able to order these models. And if you have one right now, you need to return it within the next 12 months or else you will not be able to get your data off of it as well as they're retiring our lovely snowball or snow cone, which is so cute, you know, Single drive hardened enclosure, dead, not gonna come anymore. The only snowballs that you will continue to be supported are the storage optimized 210 terabyte devices with the NVMe storage and the compute optimized with 104 VCPU with full SSD 28 terabyte NVMe for edge workloads. If those two options don't work for you, they recommend you look at AWS outposts, which now come in 1U, 2U and 42U configurations, which I guess I missed the 1U and 2U announcement for outpost. But it does give you a lot more flexibility than what these non rackable devices could do in your data center. [00:27:12] Speaker A: It's interesting, kind of in hindsight we wondered who really used these things to begin with and maybe it's just a good idea. Maybe it was internally used and they thought that people would want to use them and there just wasn't the market for it. I know that the small one we had was, I mean it was novelty, but you couldn't run anything intensive on it in slices. [00:27:35] Speaker B: You couldn't run serious things on it. But I mean, I think it really made sense for like Edge locations that don't have good Internet connectivity, where you want to be able to do sort of a data collection and eventually get it to the cloud with some type of eventual persistence. 
But I mean, the reality is 5G has become so ubiquitous now. And if you don’t have that, then you can get that from Amazon too; they can sell you a 5G network, or transfer to a satellite dish. And then of course you had Outposts too, which gives you a lot more flexibility, because now you’re getting full AWS services, not just the limited EC2 capabilities or GPU capabilities of the Snowcone or the Snowball. So I think it had its moment in migration time, and I think, you know, we’re seeing these go away because the use cases are getting solved by other methods. [00:28:17] Speaker A: Yeah, I mean, it’s hard to compete against an $80 Raspberry Pi when you charge such a lot of money for basically the same thing. Yeah, yeah, it’s sad. I kind of wish they’d sell them off or something so people can repurpose them. I wonder what their plans are going to be. [00:28:32] Speaker B: Maybe you can buy one on auction somewhere. They’ll show up on eBay or something. You can own one forever. Or you just order one, I guess, right now, and then just not return it. [00:28:41] Speaker A: Just not return it. [00:28:42] Speaker B: But then they’re going to charge you a lot of money for it. [00:28:44] Speaker A: Yeah, yeah. I recall they were obnoxiously expensive if you didn’t return them. [00:28:48] Speaker B: Yes, even the small Snowcone was very expensive to even rent on a monthly basis. Like, I can’t imagine what it cost to buy. [00:29:00] Speaker A: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it’s really simple. Archera gives you the cost savings of a 3-year AWS savings plan with a commitment as short as 30 days. If you don’t use all the cloud resources you’ve committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace. [00:29:38] Speaker B: All right, do you remember a little feature called Lambda SnapStart, Jonathan? [00:29:43] Speaker A: Barely. [00:29:44] Speaker B: Yeah, barely. I saw this come across something and I said, oh yeah, isn’t that the thing that only worked for Java? And yes, it is the thing that only worked for Java two years ago. This was announced in 2022, and I think even then we said, oh, it’ll most likely support other languages eventually, and then eventually has finally gotten here in 2024. Two years later, they now support Python and .NET. Lambda SnapStart, for those who don’t remember what it is because they don’t use Java, caches and reuses a snapshot of the memory and disk state of any one-time initialization code, or code that runs only the first time the Lambda function is invoked. For Python functions, startup latency from initialization code could be several seconds, and when you add dependencies this can balloon up to 10 seconds. SnapStart can reduce latency from several seconds to as low as sub-second for these scenarios, and for .NET functions they expect most use cases to benefit from it, because .NET just-in-time compilation takes up to several seconds to do, and latency variability associated with the initialization of Lambda functions has been a longstanding barrier to Lambda adoption for the .NET community. [00:30:38] Speaker A: Wow. I mean, just think of the cost saving. [00:30:40] Speaker B: Yeah.
[00:30:42] Speaker A: In usage, let alone like the capacity. Like the virtual capacity increase they just got. If. If everyone just suddenly starts using this, even if it’s just 2 seconds per invocation that they’re saving, that’s 2 seconds they can sell to somebody else. [00:30:55] Speaker B: Yeah. To use this, you do have to make some changes to the way you compile the code, and there’s some libraries you have to plug in from Amazon to help it define those things that have to get put into the memory store state. But it’s not too difficult to enable it in any of these applications. In the examples I saw, it’s really like a line of code in most cases. But yeah, that’s nice to see. I don’t know why it took so long. [00:31:19] Speaker A: No, no, that’s. That’s crazy. Amazing, though. I just wish I had a use for it anymore, because I did. I did a few years ago, but not. Not anymore. [00:31:28] Speaker B: Yep. Well, speaking of Lambda, it’s having a birthday. I mean, a lot of Amazon core services are having birthdays lately, if you’ve noticed. But this one is Lambda, which is now turning 10, and I’m sure we’ll talk about all of the milestones for all of the services as they turn 10, because there’s a lot of them. But at least the ones that, you know, changed my life or changed your life. Maybe we’ll talk about the ones that are sentimental to us. Jeff Barr writes in this blog post that the over 1.5 million Lambda users out there run tens of trillions of function invocations per month. In its journey across the last 10 years, it’s had quite a few milestones, which I think is kind of interesting, because you forget what it didn’t have when it first came out. So it was announced originally in 2014 in preview, ahead of Reinvent. It was not a main stage announcement, with support for only Node.js, which I forgot that it only supported Node.js originally, and it had the ability to respond to event triggers from S3 buckets, DynamoDB and Kinesis Streams. In 2015, it went general availability, supported SNS notifications as triggers, and now supported functions also written in Java. So originally only Node.js and Java. 2016, which is when I started using it, introduced Python support, increased function duration to up to 5 minutes, which was then later increased to 15 minutes, and the ability to access resources in a VPC; and the serverless application model was launched, as well as the launch of Step Functions in 2016. So this is why Step Functions was a pretty big deal for me. I remember Python support was a pretty big deal. So this is probably when I started playing with it, right around this time. 2017, it got X-Ray support, and we didn’t care. 2018, SQS support came, CloudFormation extensions, and the ability to write Lambda functions in any language was supported. So if you want to run Go or Ruby or any other language that you want to, you could start doing that in 2018, which doesn’t seem that long ago. I mean, I guess it’s almost six years, but it seems like it was just yesterday. 2019, you got provisioned concurrency to start helping with some of the issues with cold starts; 2020, savings plans and PrivateLink support, so you can stop burning up all your IP addresses in your VPCs, which was introduced here. That was the side effect. 1 millisecond billing granularity. And you can now use up to 10 gigs of memory and 6 vCPUs in your Lambda function. And remember, this was also very controversial, as people were like, oh, that much memory and 6 CPUs, that’s just too much for a serverless function. 2021, S3 Object Lambdas; 2022, 10 gigs of ephemeral storage, which was also very controversial. [00:33:55] Speaker A: Yep.
2021 S3 object Lambdas, 2022, 10 gigs of ephemeral storage, which was also very controversial. [00:33:55] Speaker A: Yep. [00:33:56] Speaker B: Because that was like, oh, you shouldn't be putting data onto your lambda function. And there was a big debate about, you know, are we eliminating the use cases or not? [00:34:04] Speaker A: Is it serverless or not? That's what it was. I thought this was supposed to be serverless. [00:34:08] Speaker B: Exactly, yeah. And then 2024 you got new observability capabilities logs, Java functions that use ARM processors, recursive loop detection and new IDE methods in the AWS console. And Jeff Barr said, looking ahead across the next couple of decades of serverless, he believes serverless will become the default choice. There'll be a continued shift towards composability of applications. You'll get automated AI optimized, infrastructure management, extensibility and integration of lambda and security, threat detection and AI system remediation will work to make your serverless apps more secure over time. So look forward to the next 10 years of Lambda. And the more and more we don't talk about cold start, the happier I am. [00:34:43] Speaker A: Yeah. Or DNS changes that they made and broke a bunch of stuff. Yeah. It had its fair share of problems. [00:34:51] Speaker B: It was a new computing paradigm that no one had ever dealt with and really started. I would say it started a big drive towards event based architectures was the. You could do this. [00:35:00] Speaker A: Definitely. I really think it was. It's like the yardstick by which we measure anything serverless now. Yeah. Fantastic. What a great tool that. I mean it's my go to if I'm going to build something now in Amazon, it's going to be a lambda function. [00:35:18] Speaker B: Yeah. Unless you need all the heaviness of a container or server, heaven forbid. Lambda just makes it easy. Why add all the stress and work. And then they didn't mention the RDS proxy for lambda, which is. I think this is an RDS feature technically. But the ability to reduce pool exhaustion on your database servers from lambda functions. I mean they've done so many things to this thing that just makes it just the default choice. All right. We've been seeing Amazon slowly tiptoe towards this for years now and it's finally here. IAM is launching a new capability to allow security teams to centrally manage root access for member accounts. And AWS organizations finally easily manage root credentials and perform highly privileged actions forever. AWS accounts have been provisioned with a highly privileged user, which is terrible. And it shows up in all of your logs. If you're using a cnapp which had unrestricted access across the entire account, how powerful? It posted a significant security risk. Many customers built manual approaches to ensure MFA was enabled on their root accounts. Regular root credential rotations and secure storage or credentials and things like LastPass and 1Password. However, this became problematic as you scale into the hundreds of accounts that most enterprises run. How do you deal with a big problem during a pandemic? Well, if the MFA device that's shared across all the cloud people is in the office and we're not at the office, how do we get access to the root accounts? There's lots of challenges that really cause a lot of problems. In addition, specific actions such as unlocking S3 bucket policy or SQS resource policy still required the root credentials. 
But now, with this new ability, you get centrally managed root credentials and root sessions. Together, they offer security teams a secure, scalable and compliant way to manage root access across AWS Organizations member accounts. So centrally managed root credentials basically remove the long-term root credentials, prevent credential recovery (so you can’t recover the root password once you’ve removed it), provision secure-by-default accounts for all your new accounts, and help you stay in compliance. And then there are those rare occasions where you do need that root access, and for that they’re launching root sessions, which are a secure alternative to maintaining long-term root access. Now you gain short-term, task-scoped root access to member accounts, and the root session benefits include task-scoped root access, centralized management and alignment with AWS best practices. The new capability isn’t giving you full root access, just temporary credentials to perform one of the following actions: auditing root user credentials, re-enabling account recovery, deleting root user credentials, unlocking an S3 bucket policy, or unlocking an SQS queue policy. So thank you, Amazon. It’s only taken you 14 years. [00:37:49] Speaker A: Yeah. What a great feature, and such a long time coming. I’d say they’ve slowly been eating away at the things that did require root access to begin with. But yeah, I guess the bucket policy thing’s still a bit. Still a big issue, but I mean. [00:38:00] Speaker B: If you can vend out temporary access for when I need to do it from a central security manager account, that solves the problem. [00:38:07] Speaker A: Yeah, I mean, the fact that you needed a root account to do it in the first place, by design, was a little strange. [00:38:13] Speaker B: I mean, the fact that it’s S3 and SQS, which are such old services. This tells you how they were like, well, we have two choices. We can fix this problem and move it out of root credentials, or we can create a whole root vending account in the system. That’s going to be less work for us to do than undoing the core thing we messed up, you know, 15 years ago. [00:38:30] Speaker A: Yeah, I guess it makes sense, because S3 existed before, you know, the majority of the existing IAM ecosystem. So it was probably quite closely tied in with the old root account. But yeah, it’s wonderful. No longer have to explain to the security team that setting the root password to some 64-character random password and then discarding it was actually a secure option, which I think, I still think it was a secure option.
Yeah, where's one locked in the safe at the office and one locked in the cloud office and you need it, you went and got it and you use other root credential and then you're good. [00:39:46] Speaker A: Yeah, cool. [00:39:47] Speaker B: Finally no longer have to do that. Retire the pink pod, the pink iPads or ipods. All right, this is a lesson to Amazon to be careful when they hire Azure product managers because they're announcing another flavor of Route 53 Resolver DNS firewall, this time Advanced, which is a new set of capabilities to the existing firewall that allow you to monitor and block suspicious DNS traffic associated with advanced DNS threats such as DNS tunneling and domain generating algorithms or DGAs, designed to avoid detection by threat intelligence feeds or difficult for threat intelligence feeds alone to track and block in real time. So yeah, so you can now switch from DNS firewall to DNS Firewall Advanced to get these two capabilities which why they're not as checkboxes in the UI and additional costs I don't understand. But again, this is the Azure model, so again, be careful about those Azure product managers. They bring bad habits sometimes. Yeah, case here. That's what it feels like. [00:40:43] Speaker A: Yeah. At least it was advanced and not premium. That just have been. Yeah, what a. What a horrible way to add new features. Like why. Why make it a new thing? It's. [00:40:52] Speaker B: Yeah, why a new sku? Like please just make it optional features that cost me more money. I much prefer that. Well, if you want to save money. Amazon DynamoDB engineering team has been focusing on efficiency and throughput and they have identified a savings and they're going to pass some of those savings onto you effective November 1st. So it's already happened and you're good at in this bill if you're a Dynamo DB customer, they've reduced prices for on demand throughput by 50% and global tables by up to 67%, making it more cost effective than ever to build, scale and optimize your applications. One of the interesting things I found in this article was that ad points out that while provision capacity workloads were reasonable in the past, the new on demand pricing benefit will result in most customers achieving a lower price with on demand nodes would still meet the capacity need without having to capacity plan or do scaling of that capacity throughput. So they're actually saying that because of this price adjustment the cost delta is not that great or sorry, the cost benefit is much better and so you should definitely consider moving back to on demand DynamoDB versus using provisioned as it'll save you a bunch of headache and toil and you do the same throughput at lower costs which. That's pretty darn good. [00:41:59] Speaker A: Yeah, that's. That's great. No commitment. Although if you're on the commitment then. Then that could have got you discount too I guess. [00:42:05] Speaker B: Yeah, I mean yeah, you can negotiate those and savings plans and things like that but again if you're. You don't want the hassle of having to manage the provision through because that's one of the things like when they started doing all these provisioned throughputs or you know, the IO. I don't want to do that. I don't want to capacity plan that. I just want it to work and I want to meet my needs when I need it to do. [00:42:23] Speaker A: Yeah, it's very uncloudy, isn't it too. [00:42:24] Speaker B: Yeah. 
[00:42:25] Speaker A: Give us your three-year forecast for usage on this table, for this service you haven't even launched to your customers yet. It's pretty hard to plan that kind of thing. I wonder what's driving this cost change. Do you think they've moved it all to Graviton on the back end or something?
[00:42:39] Speaker B: I'm sure Graviton is part of it. I'm sure they've maybe figured out different models for how they store and replicate the data that are cheaper. Maybe they're passing some bandwidth savings on to us, because bandwidth, despite what they charge you on egress, has actually gotten cheaper in most of the world. Oh, that should have been someone's Re:Invent prediction.
[00:42:58] Speaker A: Lower bandwidth costs? I don't know.
[00:43:00] Speaker B: It's never going to happen. That's the margin of AWS right there.
[00:43:03] Speaker A: Yeah. Las Vegas will freeze over before they lower the bandwidth costs.
[00:43:09] Speaker B: But I still dream. That's why it's at least in my honorable mentions.
[00:43:12] Speaker A: Yeah.
[00:43:13] Speaker B: All right then, the last Amazon story. They're introducing a new Resource Control Policy, or RCP as they call it for short, which is a new authorization policy in your AWS organization that can be used to set the maximum available permissions on resources within your entire organization. They're a type of preventative control that helps you establish a data perimeter in your environment and restrict access to resources at scale. It currently only supports S3, STS, KMS, SQS and Secrets Manager. And so you might be asking, what's the difference between a service control policy and an RCP? The answer is that SCPs limit the permissions granted to the principal, so the IAM role or the user, while RCPs limit the permissions granted to the resource itself. RCPs are evaluated when resources are accessed, regardless of who is making the API request. They gave some key use-case examples in the article: organization-wide resource control, like ensuring S3 buckets can only be accessed by principals within your organization, preventing unauthorized external access even if a developer accidentally configures an overly permissive policy on their bucket. Combining SCPs and RCPs gives you the ability to set maximum available permissions from different angles, principals versus resources; used together, they create a comprehensive security baseline for organizations needing strict access controls. There's a sketch of what that bucket example looks like below.
[00:44:25] Speaker A: That's a really cool feature.
[00:44:26] Speaker B: Yeah, it sounds boring, but when you think about it, it's really cool.
[00:44:30] Speaker A: Yeah. What's interesting to me, having done a lot of work in the IAM world lately, is that Amazon's IAM is fantastic, and Google tried to do a different thing: they went down the path of attaching policies directly to resources, which they're backtracking on. So I think we're kind of meeting in the middle. Amazon is adding resource-specific policies while Google is sort of reinventing their own IAM stuff so that policies can exist as native objects rather than having to be attached to something or bound to something. They're all kind of meeting in the middle and realizing, after 15 years, that they all needed each other's features all along.
[00:45:14] Speaker B: You need both ways.
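A minimal sketch of that organization-wide S3 perimeter, modeled on the data-perimeter example in the launch post. The aws:PrincipalOrgID condition key and the Organizations create_policy call are real; the organization ID, root ID, and exact statement wording are placeholders, not a drop-in control:

```python
import json
import boto3

# An RCP that denies S3 access to any principal outside the organization,
# no matter how permissive an individual bucket policy becomes.
rcp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnforceOrgIdentities",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {
                "StringNotEqualsIfExists": {
                    "aws:PrincipalOrgID": "o-exampleorgid"  # placeholder
                },
                # Don't break AWS services acting on your behalf.
                "BoolIfExists": {"aws:PrincipalIsAWSService": "false"},
            },
        }
    ],
}

orgs = boto3.client("organizations")
policy = orgs.create_policy(
    Name="s3-org-perimeter",
    Description="Restrict S3 access to principals in this organization",
    Type="RESOURCE_CONTROL_POLICY",
    Content=json.dumps(rcp),
)

# Attach at the org root so it applies to every member account.
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # placeholder root/OU/account ID
)
```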
[00:45:16] Speaker A: It makes total sense that you would want to be able to put a policy on a resource that says nobody should ever have access to this, and nobody should ever be able to do something to this. What a great protection to have. So yeah, it's great indeed.
[00:45:30] Speaker B: All right, to save our listeners my voice, which is still straining here, I'll let you take the GCP stories and I'll comment for a change, something I've never done before. I'm kind of looking forward to it.
[00:45:42] Speaker A: Well, the first couple of stories will normally involve lots of laughing and giggling because I'm out of practice reading things, but: Dataplex automatic discovery makes Cloud Storage data available for analytics and governance. Your ever-growing structured and unstructured data continues to make it a challenge to locate the right data at the right time, and a significant portion of enterprise data remains undiscovered or underutilized, often referred to as dark data. To help address this, Google is announcing automatic discovery and cataloging of Google Cloud Storage data with Dataplex, part of BigQuery's unified platform for intelligent data-to-AI governance. It automatically discovers valuable assets residing within Cloud Storage, including structured and unstructured data such as documents, files, PDFs, images and more. It can harvest and catalog metadata of the discovered assets, keeping schema definitions up to date with built-in compatibility checks, and it enables analytics for data science and AI use cases at scale with auto-created BigLake external or object tables, eliminating the need for data duplication or for manually creating table definitions (there's a sketch below of the kind of manual table definition this replaces).
[00:46:44] Speaker B: Yeah. Data is the new currency, and finding your data in your organization can be a needle-in-the-haystack problem, because everyone stores data where they think they need it, and then you have different enterprise systems and different SaaS applications in use. So having a system inside your environment that can automatically scan, find your data assets and pull them into a data lake, even if you don't need them, is incredibly valuable just for discovery. Like, maybe I don't need it now, but at least I have it if I ever do need it in the future. Or at least I now know the objects exist, so I can secure them properly, because a lot of times companies don't even know where the data lives or that it needs to be secured, because they don't have this capability.
[00:47:25] Speaker A: Yeah, I guess it's great for security. Somebody takes a copy of some customer data and puts it in a bucket somewhere; it's going to be good to be able to find those things. I guess the goal of this is not really security focused, though. It's more about indexing for AI, I guess.
[00:47:44] Speaker B: Yeah, there are those side benefits that they don't talk about. But the fact that it's there means you can now run DLP against it, you can run all kinds of other Google services.
[00:47:52] Speaker A: Yeah. I wonder what the price is like, because usually anything that touches Cloud Storage is expensive, just because of the per-object costs.
[00:47:58] Speaker B: Well, they'll scan it for free, I'm pretty sure, but that BigLake piece they're going to charge you a pretty penny for, through BigQuery. So scanning probably comes for free. I didn't look at the pricing on this one, but it could get pretty pricey if you're not careful.
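To make the "no more manual table definitions" point concrete, here's roughly the boilerplate Dataplex discovery is automating away: hand-defining an external table over Parquet files in a bucket. A sketch with the google-cloud-bigquery client; the project, dataset, and bucket names are all hypothetical:

```python
from google.cloud import bigquery

# Manually defining an external table over Cloud Storage, the chore that
# automatic discovery now does for you, per file group it finds.
client = bigquery.Client(project="my-project")  # hypothetical project

external_config = bigquery.ExternalConfig("PARQUET")
external_config.source_uris = ["gs://my-data-lake/sales/*.parquet"]
external_config.autodetect = True  # infer the schema from the files

table = bigquery.Table("my-project.analytics.sales_external")
table.external_data_configuration = external_config
client.create_table(table, exists_ok=True)

# Once defined, it queries like a normal table, with no data duplication.
rows = client.query(
    "SELECT COUNT(*) AS n FROM `my-project.analytics.sales_external`"
).result()
print(next(iter(rows)).n)
```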
[00:48:09] Speaker A: Yeah, I wonder if they scan on write, or whether... it would make sense to scan on write; that's the opportune time to do it and index it somewhere, rather than going back and iterating.
[00:48:18] Speaker B: You do a one-time scan of existing objects, and then you basically tie into the upload process and scan as you upload. Makes sense.
[00:48:27] Speaker A: Yeah. Underutilized data is great, though, because that's a prime candidate for deletion, or for moving to a different tier and saving money. So that's cool. All right: shift left your cloud compliance auditing with Audit Manager. Audit Manager from Google is now generally available. It can help you accelerate your compliance efforts by providing a clearly shaped responsibility outline, which is a matrix of shared responsibilities that delineates compliance duties between cloud providers and customers, offering actionable recommendations tailored to your workloads. Automated compliance assessments can evaluate your workloads against industry-standard technical control requirements in a simple, automated manner. Audit-ready evidence generates automatic, comprehensive, verifiable evidence reports to support your compliance claims. That's fantastic. And actionable remediation guidance.
[00:49:22] Speaker B: Yeah, anything that means I don't have to provide audit-ready evidence myself is a win in my book. And this is actually something I've seen a lot more now in compliance organizations: they're buying tools to help with the evidence collection, the continuous compliance monitoring, the ability to see these things. And the great thing about this tool is you can give it to your audit people. It's designed in a way that you can hand it to them and they can do all the stuff themselves without having to bother me, the cloud team, Jonathan, or anyone else in the group.
[00:49:51] Speaker A: Yeah, that's great. I wonder if compliance auditors in general will eventually die off, you know, not literally, but I wonder if Google or Amazon or somebody else could actually build a tool where you say, I want to be compliant with X framework, and we reach a point where it can be trusted enough to go and do assessments, collect data, generate reports and then give you findings, without the involvement of the PwCs of the world or anybody else.
[00:50:25] Speaker B: Well, what I think it does is reduce the cost of the PwC auditor, because you still need PwC's brand, and they're trusted. But if they trust the tooling, and they can poke at it and do what they need to do and get evidence quickly, then it reduces the amount of time they have to be on site and the amount of time the audit takes. So you probably get reduced costs, and PwC likes it because they can perform a higher number of audits per year per resource, which helps increase revenue for them. So it's a win-win for both sides: you get cheaper compliance testing, and they get more audits, which is what they want.
[00:51:01] Speaker A: Do you think there's more prestige around having some companies do your audits than others?
[00:51:06] Speaker B: Yes, 100%.
[00:51:07] Speaker A: Yeah.
[00:51:07] Speaker B: So if you're a SaaS company and you hire a small firm, right, and you're selling to companies that don't know who that firm is, all of a sudden they're going to scrutinize you more. Versus if you have EY or PwC or Deloitte or one of the other big consulting partners do it, they have a different opinion, because there's more trust; those firms are typically inside their companies already, auditing them and their financial statements. So it's much easier for them to trust a big consulting firm versus a mom-and-pop audit shop. Now, if you've never done a SOC or an ISO or those things, it's much cheaper to go with a small mom-and-pop the first year, get through all the challenges you're going to have in that first year of getting to compliance, and then pivot to one of the bigger companies the year after, once you've figured out how you were all messed up and fixed it. So that's pretty nice, too.
[00:52:00] Speaker A: Yeah. Who audits PwC? That's interesting.
[00:52:03] Speaker B: Probably one of the other ones. They probably have a three-way partnership: we audit you, you audit them.
[00:52:11] Speaker A: I've been looking forward to this story: 65,000 nodes and counting, Google Kubernetes Engine is ready for trillion-parameter AI models. And for the masochists out there, you can now support up to 65,000 nodes, which GKE believes is 10 times more than what either AWS or Azure can do. And why would you want 65,000 nodes? Well, they say AI, but I have to interject here and ask: how would you get a quota allotment for that kind of thing?
[00:52:43] Speaker B: I mean, you're going to have a problem with that. I assume you're going to need to communicate with your account rep before you spin up your 65,000 GKE nodes. Or you need to be very diversified on your instance types: I'd like to use all the C types and all the N types you have available, please. Yeah, I know the spot market on GCP is bad.
[00:53:07] Speaker A: I mean, the forecasting I've heard of people being asked to do, especially in new regions, for compute use... they have a very tight handle on these things. They're not going to give you 65,000 nodes.
[00:53:24] Speaker B: Not without coordinating in advance, that's for sure.
[00:53:26] Speaker A: No, no, this is a pre-commitment. This is: I'm going to need 65,000 nodes, take my money, and then they'll deliver it in 12 months' time or something.
[00:53:36] Speaker B: Who really needs it? If your AI workload is that big, you'd have to be Anthropic or, you know, Gemini itself. Let me put it this way: if you are not building a foundation model and your AI project requires 65,000 nodes, I'm going to say you probably did something wrong. Unless you're doing DNA modeling or something really advanced and scientific, I just don't see how you would need that kind of capacity in a day-to-day enterprise.
[00:54:07] Speaker A: Yeah. It's cool that it can support 65,000 nodes. I don't know what the real obstacle was that they had to solve, other than general optimization of the performance of managing that many nodes in a cluster. But yeah, that would be a massive cluster. Thinking of each node as its own instance, that's a lot. That's a lot of coordination.
That's a lot of network traffic, and that's a lot of GPUs that they don't have.
[00:54:37] Speaker B: Yeah, you're definitely not getting 65,000 TPU nodes, I can tell you that much right now.
[00:54:41] Speaker A: No. Wonderful.
[00:54:44] Speaker B: Good. All right, well, let's finish this off with Azure, shall we?
[00:54:48] Speaker A: Yep.
[00:54:49] Speaker B: All right. First up, Windows Server 2025 is now generally available, with advanced security, improved performance and cloud agility. The only reason this really matters to me is that it means Windows Server 2019 is entering its end-of-servicing period. Wow.
[00:55:03] Speaker A: We just migrated to Windows Server 2019.
[00:55:05] Speaker B: It'll reach end of support in January 2029, at least a few years away. Windows Server 2016 will end its support period in January 2027. So if you were excited that you got off all those 2012 boxes onto 2019 or 2016, you get to start your project over, so you're welcome for that. Microsoft's goal is to deliver a secure, high-performance Windows Server platform tailored to meet the diverse needs of their customers. This release is designed to let you deploy apps in any environment, whether it's on-premises, hybrid or in the cloud. And I think when Azure says hybrid in this context, they mean edge. I looked up some of the key investment areas of Windows Server 2025. Advanced multilayered security: AD gets new security capabilities, including improvements in protocols, encryption hardening and cryptography. You mean you're actually using hardened TLS now for AD? Who would have thought? File services get Server Message Block (SMB) hardening, including support for QUIC to enable secure access to file shares over the Internet. SMB security also gets hardened firewall defaults, brute-force attack prevention, and protections against man-in-the-middle, relay and spoofing attacks. Thank you for that, because it's been forever. Delegated managed service accounts, or dMSAs: unlike traditional service accounts, dMSAs don't require manual password management, since AD takes care of all of that for you. That's great. Cloud agility anywhere: hot patching, enabled by Azure Arc, so if you want hot patching so you don't have to reboot, you can do that with Azure Arc. You now get an SDN multisite feature for software-defined networking, and unified policy management, allowing centralized management of your network policies on the Windows Server 2025 boxes. And of course, because it's 2025, it has to support AI, so it has built-in support for GPU partitioning and the ability to process large data sets across distributed environments; Windows Server 2025 offers a high-performance platform for both traditional applications and advanced AI workloads, with live migration and high availability on top of Hyper-V. On NVMe storage performance, Windows Server 2025 delivers up to 60% more storage IOPS compared to Windows Server 2022 on identical systems, which means they had really bad, unoptimized drivers before. Storage Spaces Direct and storage flexibility: Windows has supported a wide range of storage solutions such as local, NAS and SAN for decades, and Windows Server 2025 delivers more storage innovation with native ReFS support for deduplication and compression, thin-provisioned storage spaces, and storage replica compression, now available in all editions of Windows Server 2025. And Hyper-V now scales up to 240 terabytes of memory and 2,048 virtual processors per VM.
So, pretty big upgrades in Windows Server 2025. We didn't mention it in the Amazon section because we knew we were going to talk about it here: Amazon already has AMIs available for Windows Server 2025, and I'm sure Azure has images too. Google, I assume, has them as well, though they haven't made an announcement yet.
[00:57:48] Speaker A: Wow, that's a lot of new stuff. I was thinking, who needs this in the cloud, where you typically can't do nested virtualization anyway? Who'd need all these features? And then I realized: well, they need it for themselves.
[00:58:02] Speaker B: Exactly.
[00:58:03] Speaker A: They built this. This is the Windows 2025 Azure release. There's a whole ton of features here.
[00:58:08] Speaker B: And you need it for all those companies going back to on-prem, right? They need it too.
[00:58:12] Speaker A: Yeah, yeah. Basecamp, looking at you. No, that's great. I still wish you could join the domain without having to reboot the machine.
[00:58:20] Speaker B: Yeah. They said there were AD improvements, and I was like, yes, is this finally the moment? No, but it got hardened, though. Because that's the annoyance: you set up a new Windows domain on Windows Server 2022 and the domain still has security vulnerabilities. It's like, come on, please default to higher security, Microsoft, instead of lower.
[00:58:38] Speaker A: Yeah. It still bothers me that things like new cryptographic support should have been rolled out years ago, and they weren't, because they're tied to this release cycle for the whole OS.
[00:58:53] Speaker B: Well, changing the ciphers could break things, and they want to be backwards compatible, so they maintain all this bad legacy garbage code that gets compromised and exploited. People get annoyed that Apple kills things when they upgrade to new versions of macOS, but at least deprecating APIs gets you to a more secure place.
[00:59:13] Speaker A: Yeah, it kind of makes me appreciate the value of package managers on Linux. An upgrade is one thing: test it, roll it back, whatever. On Windows, it's just not good.
[00:59:25] Speaker B: You have NuGet and you've got Chocolatey and those kinds of services; they give you sort of a package manager for Windows. But core Windows is still very problematic. And, you know, I was kind of thinking they would eventually move away from the Registry, and it still exists in Windows Server 2025. So the things you've always hated about Windows still exist.
[00:59:43] Speaker A: I kind of like the Registry. I mean, the Registry is better than 5,000 INI files on disk, until you corrupt your Registry. Yeah, that is an issue, and that's why they keep more than one copy.
[00:59:53] Speaker B: On your Apple computer, you have the Applications folder, and each application has a container where all those configuration files live. If the application corrupts itself, you just throw it away and reinstall it. I don't have to reinstall my whole OS. Whereas with the Registry, it's, oh, we corrupted the Registry, this may be a rebuild.
[01:00:10] Speaker A: Yeah, that's fair. Those plist files are awkward to work with sometimes on Apple. But I do like the Registry.
It's nice to have one place to go to, one place to store configuration for everything, instead of having to rely on random files on disk. But yeah, I appreciate that it has its issues; I just wouldn't like to see it go away.
[01:00:34] Speaker B: Yeah, I mean, I think at some point Microsoft will have to accept that that platform can't be built on any further, and they're going to have to do something. But I don't think they're there yet. There's probably another 10 years of Windows as it is.
[01:00:45] Speaker A: Yeah, I kind of wonder what's next for Windows, really.
[01:00:49] Speaker B: I imagine disappearing, probably.
[01:00:53] Speaker A: Yeah, interesting. You know, side topic: I saw Microsoft was being challenged in Europe over their pricing for server licensing in different clouds. Hopefully they lose that battle against the EU and they're forced to have fair pricing: either they put their prices up for Azure customers, or they lower the prices for other clouds' customers. That'd be really nice to see.
[01:01:21] Speaker B: Well, you're getting enhanced security and operational capabilities for your Azure Kubernetes Service with Advanced Container Networking Services, now generally available from Azure. Advanced Container Networking Services, or ACNS for short, focuses on delivering a seamless, integrated experience that allows you to maintain robust security postures and gain deep insights into your network traffic and application performance. This ensures your containerized applications are not only secure but also meet your performance and reliability goals, allowing you to confidently manage and scale your infrastructure. ACNS includes observability features such as node-level metrics, Hubble metrics (DNS and pod-level metrics), Hubble flow logs and service dependency mapping. The ACNS container network security feature includes fully qualified domain name (FQDN) filtering, delivered via the Cilium agent and a security-agent DNS proxy; there's a sketch of what that policy shape looks like below. There's a quote here from Magnus Wilson, Engineering Manager, Container Platform at Agent M Group: "At Agent M Group, platform engineering is a core practice supported by our cloud-native internal developer platform, which enables autonomous product teams to build and host microservices. Deep network observability and robust security are key to our success, and the Advanced Container Networking Services feature helps us achieve this. Real-time flow logs accelerate our ability to troubleshoot connectivity issues, and fully qualified domain name filtering ensures secure communication with trusted external domains only."
[01:02:36] Speaker A: I don't have a whole lot to say about that, honestly.
[01:02:40] Speaker B: It's pretty complete, or you're just not impressed?
[01:02:44] Speaker A: I'm trying to figure out exactly how that repositions AKS against something like GKE or EKS. Is this catch-up?
[01:02:54] Speaker B: It's a catch-up feature. GKE already has this, and so did EKS. Fully qualified domain filtering on outbound traffic has been there for a while. Hubble metrics are just a nice way of doing Prometheus-type metrics, which you've already been able to do on EKS and GKE for a long time. Node-level metrics have always existed, in CloudWatch for example. So yeah, it's great, I'm glad you have it, but it's very much a me-too feature.
[01:03:19] Speaker A: Yeah, and stuff like the Cilium agent. They rely on a lot of third parties, I find, when they deliver their own services, and that kind of bothers me, rather than it being native.
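For the curious, FQDN filtering in this model looks roughly like an upstream Cilium policy. A minimal sketch applied with the official Kubernetes Python client; we're assuming ACNS accepts the standard cilium.io/v2 CiliumNetworkPolicy shape, and the app labels and domain are hypothetical:

```python
from kubernetes import client, config

# Egress policy: pods labeled app=myapp may do DNS lookups (so the DNS
# proxy can learn name-to-IP mappings) and talk only to api.example.com.
policy = {
    "apiVersion": "cilium.io/v2",
    "kind": "CiliumNetworkPolicy",
    "metadata": {"name": "allow-trusted-fqdn", "namespace": "default"},
    "spec": {
        "endpointSelector": {"matchLabels": {"app": "myapp"}},
        "egress": [
            {
                # Allow DNS to kube-system and ask the proxy to inspect it.
                "toEndpoints": [
                    {"matchLabels": {"k8s:io.kubernetes.pod.namespace": "kube-system"}}
                ],
                "toPorts": [
                    {
                        "ports": [{"port": "53", "protocol": "UDP"}],
                        "rules": {"dns": [{"matchPattern": "*"}]},
                    }
                ],
            },
            # Only this domain is reachable; everything else is dropped.
            {"toFQDNs": [{"matchName": "api.example.com"}]},
        ],
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="cilium.io",
    version="v2",
    namespace="default",
    plural="ciliumnetworkpolicies",
    body=policy,
)
```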
[01:03:29] Speaker B: I mean, Cilium is part of the Cloud Native Computing Foundation, so it's pretty popular in the overall ecosystem. And I can get the Cilium agent on EKS, I can get it on GKE, I can get it anywhere. So, nothing special.
[01:03:42] Speaker A: All right.
[01:03:43] Speaker B: All right then. Azure has several networking updates as well, for all your security, reliability and high-availability needs. Security enhancements this week: the Bastion Developer SKU is now generally available, so you don't pay so much for your dev Bastion hosts. You get virtual network encryption, which is FPGA-powered encryption for VM-to-VM communication, if you can't do TLS for some reason. And DNSSEC is now in preview. On the reliability front, you get ExpressRoute Metro SKUs, giving you maximum resiliency with four independent ingress paths into Azure, and a new guided configuration for multi-site ExpressRoute, which is handy if you're doing private connectivity. Load balancer improvements give you cross-subscription support, so you don't have to run multiple load balancers anymore, and enhanced health status monitoring with detailed reason codes for why a health check is failing. And for scaling: increased IP address support, up to 1 million routable IP addresses per virtual network; IPAM, now in preview, because you can't manage a million routable addresses without it; and Virtual Network Verifier, static analysis of packet-flow validation at the network layer for your networking team.
[01:04:46] Speaker A: That's cool. I think the biggest one for me there is the FPGA-powered encryption between VMs on the virtual network, even without TLS. That finally makes me more comfortable about terminating TLS on the load balancer and then having it, quote-unquote, unencrypted back to the VM, if it's actually still going to be encrypted on the wire end to end. That's a great feature.
[01:05:08] Speaker B: This is why you used to do all kinds of weird tunneling and other things. This is really for legacy apps. If you have an app you're currently developing, you should support TLS, but if this is a legacy thing you've had forever and it's in the cloud because you're getting out of your data center, this is a great alternative to fixing some piece of code that your business relies on but you don't own.
[01:05:28] Speaker A: Yeah, it's encryption without having to change the code. That's a great migration option.
[01:05:34] Speaker B: All right, we're going into the Data Zone with the new OpenAI Data Zones for the US and EU, a new deployment option that provides enterprises with more flexibility and control over data privacy and residency needs. It ensures your data is stored and processed within specific geographic boundaries, meeting regional data residency requirements while maintaining optimal performance. Azure has also enabled prompt caching for o1-preview, o1-mini, GPT-4o and GPT-4o mini on Azure OpenAI Service. So between the prompt caching and this new capability, you're getting a 50% discount on cached input tokens, and you're getting a provisioned global deployment offering that keeps the data in the region it needs to be in, based on GDPR, et cetera.
And they also mentioned in this note that they have multiple new models, including Ministral, Cohere, and fine-tuning of the Phi-3.5 family.
[01:06:23] Speaker A: It's AI stuff, that's interesting. Prompt caching is probably a poor name for what it actually is, because it really isn't... well, it's kind of caching, I guess. It's caching parts of the prompt. It's not reloading the tokens into memory before inference; you can reuse the same prefix.
[01:06:44] Speaker B: I don't think it makes sense for, like, a chat app. But if you're using AI in an existing application to do, say, doc summarization, and you're always going to provide the same prompt inputs, then caching that part of the prompt is helpful. There's a little sketch of the pattern below.
[01:06:56] Speaker A: Prompt caching. Oh man. It's like stored procedures for AI, except...
[01:07:02] Speaker B: Without the business logic, thank God.
[01:07:05] Speaker A: Yeah, yeah. But a huge discount on cached tokens as well, so if you can make use of it, that's great. Which means there's a large cost in loading the context into the model in the first place, which makes sense. That's neat.
[01:07:19] Speaker B: All right, you're going to have to walk with me on this one for a little bit, because it's a little complicated.
[01:07:25] Speaker A: All right.
[01:07:26] Speaker B: But I think I figured it out, and we'll see what you think. So, Microsoft is introducing Hyperlight: virtual machine-based security for functions at scale. Microsoft's Azure Core Upstream team is excited to announce the Hyperlight project, an open-source Rust library you can use to execute small embedded functions using hypervisor-based protection for each function call, at scale. Doing this at speed enables each function request to have its own hypervisor for protection. Hyperlight is a library for executing functions as fast as possible while isolating those functions within a VM. Developers and software architects can use Hyperlight to add serverless customization to their applications so they can securely run untrusted code. Hyperlight enables use cases like IoT gateway function embedding, high-throughput cloud services, and so on. Hyperlight can create a new VM in one to two milliseconds. While this is still slower than using sandboxed runtimes like V8 or Wasmtime directly, with Hyperlight you can take those same runtimes and place them inside a VM, protecting you in the event of a sandbox escape. Hyperlight is so fast that a one-to-two-millisecond cold start per VM makes it practical to create VMs as needed in response to events, also making it possible to scale to zero, meaning you might not need to keep idle VMs around. Microsoft is planning to submit this to the Cloud Native Computing Foundation. Reading this, I said, well, it's just Firecracker. Then I went and did some research, and Hacker News told me, no, it's not quite like Firecracker, it's a bit different. Let me find where they said that... sorry, I should have put the text right in the notes, but I didn't. What do you think, Jonathan?
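A minimal sketch of the doc-summarizer pattern just described, using the openai Python SDK against Azure OpenAI. Prompt caching is automatic for requests that share a long identical prefix (reportedly 1,024+ tokens); the endpoint, deployment name, and API version here are assumptions, so verify against the current docs:

```python
from openai import AzureOpenAI

# Prompt caching is automatic: requests sharing a sufficiently long
# identical prefix get that prefix served from cache, billed at the
# discounted cached-input rate.
client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # hypothetical
    api_key="YOUR_KEY",
    api_version="2024-10-01-preview",  # assumed; check current docs
)

# Keep the big static instructions first, so every request shares a prefix.
SYSTEM_PROMPT = (
    "You are a document summarizer. Follow this style guide:\n"
    + "- Always cite the section you summarized.\n" * 400  # ~1k+ tokens
)

def summarize(doc: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # deployment name (hypothetical)
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},  # cacheable prefix
            {"role": "user", "content": doc},              # varies per call
        ],
    )
    # Newer SDK versions report how much of the prompt came from cache.
    details = getattr(resp.usage, "prompt_tokens_details", None)
    if details is not None:
        print("cached prompt tokens:", details.cached_tokens)
    return resp.choices[0].message.content

print(summarize("Quarterly report text goes here..."))
```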
[01:09:02] Speaker A: I think it will complement Firecracker really nicely, because it's meant for function-based workloads, not VM-based workloads. And, I mean, a millisecond startup time is close enough to zero to be zero, compared with something like 125 milliseconds of cold start time for Firecracker. And to be fair, an eighth of a second to start up a VM is amazingly impressive. But one to two milliseconds to fire up a virtualized function? That's just great.
[01:09:39] Speaker B: Yeah. I found the comments here. They basically said that Firecracker can run ordinary Linux general-purpose OS VMs and unikernels, and unikernels can run inside Firecracker. Unikernels, of course, are focused on single applications, whereas general-purpose operating systems are focused on multiple applications, and this is focused on running functions embedded inside a host program. So it's fairly different from other things out there, in a class of its own from Firecracker. And this person goes on: they say Firecracker boots up a runtime that has a full-blown OS in it, and Lambda just happens to call a known program with a known function. In that sense, sure, it provides similar functionality, but it's really quite different; that's not why Fly uses Firecracker, for instance. Think of it this way: QEMU and Firecracker are in the same space, while this is completely different in how it thinks about running a guest function inside a host program's operating system.
[01:10:23] Speaker A: Yeah. I guess the downside of this is that it's new.
[01:10:28] Speaker B: Yeah.
[01:10:28] Speaker A: And unlike Firecracker, where you can run anything in a small VM without specialization, this will require specialized guest programs to support it. But why wouldn't you, with that kind of benefit on startup times?
[01:10:47] Speaker B: Yeah, I definitely think it's interesting. I don't know that Firecracker gets a lot of support outside of Amazon; I don't know of anybody running Firecracker themselves, and I don't know if anybody will run this. But it's going to get donated to the Cloud Native Computing Foundation, and I don't know if Firecracker was ever donated to an open-source foundation. So I'm curious to see the adoption and what happens to this over time. We'll keep an eye on it.
[01:11:12] Speaker A: Yeah, that's cool. I can see something like this being fantastic for edge compute, where you've got constrained resources and tens of thousands of customers who want to use them. The ability to swap things in and out quickly is going to be crucial for optimization. Oh wow, that's exciting. And the fact that they're announcing it, open-sourcing it, and already have plans to donate it is very unusual for Microsoft.
[01:11:43] Speaker B: Yeah. But apparently it's important, so I'm glad to see them trying to do something right by the community in this space.
[01:11:51] Speaker A: Awesome.
[01:11:52] Speaker B: Well, Jonathan, my voice made it through.
[01:11:54] Speaker A: Excellent.
[01:11:54] Speaker B: Thank you for your help through the GCP section. And you have a great Thanksgiving. We will not be recording at Thanksgiving, and we'll be recording late, after the Re:Invent keynotes, to see how we did, obviously. This episode will come out during the week of Thanksgiving, and then our next episode will actually come out after Re:Invent, probably around the 9th or 10th of December, an early Christmas present. And then we head into our wrap-up of the year, looking at our favorite announcements and predictions for 2025, which... I don't know where it's going to go. We'll see.
[01:12:26] Speaker A: Give me an A. Give me an I.
[01:12:29] Speaker B: But yeah, we'll catch you up on the latest cloud news. Typically the episode after Re:Invent is just a Re:Invent episode, because they'll announce a ton of stuff that we'll cover in one episode, and then we'll get back to normally scheduled programming, covering Azure and GCP at the end of the year, and then do our recap. So, exciting last events of the year here. Hopefully all of our hosts will not be sick and not dealing with other issues, and it'll be a fantastic wrap-up to a fantastic 2024. All right, Jonathan, Happy Thanksgiving.
[01:12:58] Speaker A: Thanks. See you later. And that's all for this week in cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services. While you're at it, head over to our website at thecloudpod.net, where you can subscribe to our newsletter, join our Slack community, send us your feedback and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.
