330: AWS Proves the Internet Really Is a Series of Tubes Under the Ocean

Episode 330 November 21, 2025 00:50:27
Hosted By

Jonathan Baker Justin Brodley Matthew Kohn Ryan Lucas

Show Notes

Welcome to episode 330 of The Cloud Pod, where the forecast is always cloudy (and if you’re in California, rainy too!) Justin and Matt have taken a break from Ark-building activities to bring you this week’s episode, packed with all the latest in cloud and AI news, including undersea cables (our favorite!), FinOps, Ignite predictions, and so much more! Grab your umbrellas and let’s get started!

Titles we almost went with this week

Follow Up 

02:08 Microsoft sidesteps hefty EU fine with Teams unbundling deal

General News 

08:30 It’s Earnings Time! (Warning: turn down your volume) 

Amazon’s stock soars on earnings, revenue beat, spending guidance

06:14 Justin – “There’s a lot of investors starting to question some of the dollars being spent on (AI). It’s feeling very .com boom-y. Let’s not do that again.”

06:46 Alphabet stock jumps 4% after strong earnings results, boost in AI spend

08:03 Matt – “The 15% revenue growth for Google Search year over year feels like massive growth, but I still don’t really understand how they track that. It’s not like there’s 15% more people using Google than before, but that’s the piece I don’t really understand still.”

08:27 Microsoft (MSFT) Q1 2026 earnings report

09:27 Azure Front Door RCA

14:23 PREDICTIONS FOR IGNITE

Matt

  1. ACM Competitor – True SSL competitive product
  2. AI announcement in Security AI Agent (Copilot for Sentinel)
  3. Azure DevOps Announcement

Justin

  1. New Cobalt and Maia Gen 2 or similar
  2. Price Reduction on OpenAI & Significant Prompt Caching
  3. Microsoft Foundational LLM to compete with OpenAI

Jonathan (who isn’t here)

  1. The general availability of new, smaller, and more power-efficient Azure Local hardware form factors
  2. Declarative AI on Fabric: This represents a move towards a declarative model, where users state the desired outcome, and the AI agent system determines the steps needed to achieve it within the Fabric ecosystem.
  3. Advanced Cost Management: Granular dashboards to track the token and compute consumption per agent or per transaction, enabling businesses to forecast costs and set budgets for their agent workforce.
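Jonathan’s cost-management prediction is really per-agent metering. A minimal sketch of what that kind of tracking could look like, assuming hypothetical agent names and made-up per-1K-token rates (none of this is a real Azure API):

```python
# Hypothetical per-agent token metering: accumulate usage per agent and
# convert it to an estimated cost with assumed per-1K-token rates.
from collections import defaultdict

RATE_PER_1K = {"input": 0.0025, "output": 0.01}  # assumed prices, not real ones

def record(usage, agent, input_tokens, output_tokens):
    """Accumulate token counts for one agent."""
    usage[agent]["input"] += input_tokens
    usage[agent]["output"] += output_tokens

def cost_report(usage):
    """Return estimated spend per agent, the kind of number a dashboard would chart."""
    return {
        agent: round(
            counts["input"] / 1000 * RATE_PER_1K["input"]
            + counts["output"] / 1000 * RATE_PER_1K["output"],
            4,
        )
        for agent, counts in usage.items()
    }

usage = defaultdict(lambda: {"input": 0, "output": 0})
record(usage, "billing-agent", 12_000, 3_000)
record(usage, "support-agent", 4_000, 1_000)
print(cost_report(usage))
```

Forecasting and budgeting per "agent workforce" is then just aggregation over these per-agent numbers.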

How many times will they say Copilot: 

Honorable mentions from Claude:

23:00 Matt – “

Cloud Tools  

26:47 Apptio expands its FinOps tools for cloud cost control – SiliconANGLE

33:03 Matt – “I’ve set these up in my pipelines before… It’s always nice to see, and it’s good if you’re launching net new, but for a general PR, it’s just more noise. It’s always kind of needed tools like these.”
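The variance-tolerance idea the hosts land on later in the discussion can be sketched as a tiny CI gate; the function name and threshold here are illustrative, not from Apptio or any specific tool:

```python
# Hypothetical CI gate: compare a PR's estimated monthly cost against the
# baseline and only flag the PR when the increase exceeds a tolerance,
# so routine changes don't add noise to every review.
def cost_gate(baseline_monthly: float, proposed_monthly: float,
              tolerance_pct: float = 10.0) -> tuple[bool, float]:
    """Return (should_flag, percent_change)."""
    if baseline_monthly == 0:
        # Net-new infrastructure: always surface it for review.
        return True, float("inf")
    change_pct = (proposed_monthly - baseline_monthly) / baseline_monthly * 100
    return change_pct > tolerance_pct, round(change_pct, 1)

# A small drift stays quiet; an instance-size bump gets flagged.
print(cost_gate(1000.0, 1040.0))
print(cost_gate(1000.0, 1500.0))
```

The design point is that the gate answers "is this the kind of change I care about?" rather than printing a cost estimate on every PR.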

AWS 

28:44 AWS rolls out Fastnet subsea cable connecting the U.S. and Ireland

29:24 Matt – “The speed of this is ridiculous. 320-plus terabits per second – that is a lot of data to go at once!”

30:20 Introducing AWS Capabilities by Region for easier Regional planning and faster global deployments | AWS News Blog

30:36 Justin – “Thank you. I’ve wanted this for a long time. You put it in a really weird UI choice, but I do appreciate that it’s there.” 

32:10 Secure EKS clusters with the new support for Amazon EKS in AWS Backup | AWS News Blog

33:07 Matt – “It’s the namespace level that they can deploy or backup and restore to that, to me, is great. I could see this being a SaaS company that runs their application in Kubernetes, and they have a namespace per customer, and having that ability to have that single customer backed up and be able to restore that is fantastic. So while it sounds like a minor release, if you’re in the Kubernetes ecosystem, it will just make your life better.”
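Matt’s per-customer-namespace scenario could be sketched with boto3’s AWS Backup client. `start_backup_job` and the top-level parameters shown are real AWS Backup API surface, but the `"Namespace"` BackupOptions key is a hypothetical placeholder for however namespace scoping is actually expressed; check the EKS backup docs before relying on it:

```python
# Sketch of kicking off a per-namespace EKS backup with AWS Backup.
# start_backup_job and its BackupVaultName/ResourceArn/IamRoleArn parameters
# exist in the boto3 "backup" client; the "Namespace" BackupOptions key is
# HYPOTHETICAL, shown only to illustrate per-customer (per-namespace) scoping.
def build_eks_backup_request(cluster_arn: str, namespace: str,
                             vault: str, role_arn: str) -> dict:
    """Assemble kwargs for backup_client.start_backup_job(**request)."""
    return {
        "BackupVaultName": vault,
        "ResourceArn": cluster_arn,                 # the EKS cluster to protect
        "IamRoleArn": role_arn,                     # role AWS Backup assumes
        "BackupOptions": {"Namespace": namespace},  # hypothetical key
        "RecoveryPointTags": {"customer": namespace},
    }

request = build_eks_backup_request(
    "arn:aws:eks:us-east-1:111122223333:cluster/prod",   # example ARN
    "customer-a",
    "eks-vault",
    "arn:aws:iam::111122223333:role/backup-role",
)
# In a real environment: boto3.client("backup").start_backup_job(**request)
print(request["ResourceArn"], request["BackupOptions"])
```

Tagging the recovery point with the customer namespace is what makes the "restore just this one customer" workflow findable later.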

33:53 Jupyter Deploy: Create a JupyterLab application with real-time collaboration in the cloud in minutes | AWS Open Source Blog

34:51 Justin – “A lot of people, especially in their AI workloads, they don’t want to use SageMaker for that necessarily; they want their own deployment of a cluster. And so there was just some undifferentiated heavy lifting that was happening, and so I think this helps address some of that.”

GCP

35:09 Agentic AI on Kubernetes and GKE | Google Cloud Blog

36:49 Matt – “Anything that can make these environments, especially if they are ephemeral, scale up and down better so you’re not burning time and capacity on your GPUs – that are not cheap – is definitely useful. So it’d be a nice little money saver along the way.”

37:09 Ironwood TPUs and new Axion-based VMs for your AI workloads | Google Cloud Blog

38:57 Automate financial governance policies using Workload Manager | Google Cloud Blog

40:06 Matt – “Having that very quick, rapid response to know that something changed and you need to go look at it before you get a 10 million dollar bill is critical.” 

Azure

41:50 Generally Available: Azure MCP Server

42:50 Matt – “So I like the idea of this, and I like it for troubleshooting and stuff like this, but the idea of using it to provision resources terrifies me. Maybe in development environments, ‘Hey, I’m setting up a three-tier web application, spin me up what I need.’ But if you’re doing this for a company, I really worry about speaking in natural language, and consistently getting the same result to spin up resources.”

45:50 A new era and new features in Azure Ultra Disk

47:38 Matt – “There wasn’t any encryption at the host level? Clearly I make bad life choices being in Azure, but not THAT bad of choices.” 

48:21 Announcing General Availability of Larger Container Sizes on Azure Container Instances | Microsoft Community Hub

49:40 Generally Available: Geo/Object Priority Replication for Azure Blob

Closing

And that is the week in the cloud! Visit our website, the home of the Cloud Pod, where you can join our newsletter, Slack team, send feedback, or ask questions at theCloudPod.net or tweet at us with the hashtag #theCloudPod

Episode Transcript

[00:00:00] Speaker A: Foreign. [00:00:06] Speaker B: Welcome to The Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP and Azure. [00:00:14] Speaker A: We are your hosts, Justin, Jonathan, Ryan and Matthew. [00:00:18] Speaker C: Good evening, Matt. It's amazing that when you tell us to do Ignite predictions, the other hosts all suddenly came down with migraines. [00:00:27] Speaker A: Migraines, overslept their alarms. You know, choose a reason. I'm surprised you weren't immediately like throwing up in the bathroom or something. [00:00:36] Speaker C: I mean, if I hadn't already done the research multiple days ago, because I knew it was going to suck, I probably would have also faked the flu or something. But I used the power of AI, so we'll see how it goes. Because I needed. I have a very good prompt for it too. I'll share it with you later because I don't want you to underwhelm me at this moment. [00:00:56] Speaker A: I never thought about what our tiebreaker is. Just number of AI announcements, you know. [00:01:04] Speaker C: We typically go with that, or how many announcements they had on stage. I don't know. I never watched Ignite. So you had to tell me, like, what do they normally talk about? [00:01:10] Speaker A: Copilot. Copilot. Copilot. Copilot. Are you sure? I think Copilot is the answer to the question. Copilot. [00:01:20] Speaker C: Okay, I'm down with that. Jonathan said he did the homework, but he didn't provide it to us in advance. So, you know, do we take his word that he did the homework? I don't think he did. I know Jonathan well enough, so we'll see. All right, we'll get to that section here after we get through some other news. Nice job last week with you and Elise. I have not listened to the full episode. I am halfway through. But it was good. Good job, you, Jonathan and Elise last week. Always good to have her on the show. Sorry I missed it, but I was enjoying myself. So I was drunk somewhere in the Pacific.
I'm pretty sure at the time. [00:01:52] Speaker A: I'm pretty sure you were messaging us while you were drunk in the Pacific. [00:01:58] Speaker C: It varied. Most definitely. Could have happened. All right, follow up. Microsoft has avoided a potentially substantial EU penalty over their bundling of Teams with Office 365 and Microsoft 365. The settlement follows a 2020 complaint from Salesforce-owned Slack alleging anti-competitive bundling practices that harmed their rival collaboration tool. The commitment is for seven years, and it requires Microsoft to offer Office and Microsoft 365 suites without Teams at reduced prices, with a 50% larger price difference between bundled and unbundled versions. Customers with long-term licensing can switch to Teams-free suites, addressing concerns about forced adoption of the collaboration platform. Now to get, you know, Copilot-free suites, because that'd be good. Microsoft must provide interoperability between competing collaboration tools and its products, plus enable data portability from Teams to rival services. These technical requirements aim to level the playing field for competitors like Slack and Zoom. Does Zoom actually have a chat feature? News to me. [00:03:00] Speaker A: Zoom does Zoom Chat. I feel like some client I had at one point had it. That was like the main way they communicated. And I was always confused, because I thought Zoom was always just like a meeting tool, but they have like 15 other tools that nobody uses heavily, I feel like. Well, I guess there's their webinar and stuff like that. [00:03:19] Speaker C: That's just an extension of their main product, but with more hardware underneath so you can handle the concurrency and bandwidth requirements, at a higher cost. So. Yeah, but yeah, I have just opened the Zoom app, which makes me laugh because of course it needs to be updated first. [00:03:36] Speaker A: Uh, does it ever not need to be?
[00:03:38] Speaker C: Uh, I don't know. I feel like every time I open it, because I only open it once in a while, it has to be updated. So it's kind of my joke when I'm late to meetings, I'm like, oh, Zoom had to update. Sorry. [00:03:48] Speaker A: I just blame Teams, but that's fine. [00:03:50] Speaker C: Yeah, no, I do see, I do see Team Chat in here. And I had no idea. And Zoom Docs. What's Zoom Docs? [00:03:57] Speaker A: Never heard of that. [00:03:58] Speaker C: Okay, me neither. All right, well, I don't need to go through the Zoom product line. I guess that's how they're still growing, because I assumed they would have already gotten through all of that. Matt, because I'm being kind to you because you showed up today, you should probably plug your ears right now. You were too slow. [00:04:18] Speaker A: I had them about a foot and a half away from me and I could still hear it overly loud. [00:04:23] Speaker C: I don't know if I'm a listener. Someday I'll be like, I blew out the speakers in my car because of your stupid air horn. [00:04:27] Speaker A: I don't know. [00:04:27] Speaker C: It could happen. I could see it. [00:04:29] Speaker A: I was trying to figure out last week, because I was logged in as the host, where that is. You have to show me one of these times, for when you're not here and I have to round up the show. [00:04:37] Speaker C: Is that why you guys didn't cover this last week? Because you couldn't find the button. Makes sense, it would check out. [00:04:42] Speaker A: I mean, it wasn't. Not the reason, but, you know. [00:04:47] Speaker C: All right, well, we had to cover it today because this happened two weeks ago, so sorry for the delay on this. But, you know, now everyone cares about earnings like I do. First up was AWS, who grew revenue 20% year over year to $33 billion in Q3, delivering $11.4 billion in operating income, which represents two-thirds of Amazon's total operating profit. While this growth has trailed Google Cloud and Azure recently,
AWS maintains its position as the leading cloud infrastructure provider. Amazon did announce they are increasing their 2025 capital expenditure forecast to $125 billion, up from $118 billion, with CFO Brian Olsavsky indicating further increases expected in 2026. That spending exceeds Google's, Meta's and Microsoft's CapEx guidance and signals Amazon's commitment to AI infrastructure despite concerns about missing out on high-profile AI cloud deals. I mean, we did hear, was it last week, about OpenAI? I think you guys covered it last week, maybe, that they signed a deal with Amazon as well. So yeah, they definitely need to get all that hardware. [00:05:47] Speaker A: Yeah, I mean, they're about to, I feel like, have a massive capital expenditure. I mean, all these cloud providers are. They have to be able to support all these AI workloads, and then, you know, hope to God the AI bubble bursts and all of a sudden they have lots of capacity, which would be great for me on the spot market. I'm not going to. [00:06:02] Speaker C: Yeah, the spot market would be great. I mean, it definitely is. There's lots of investors starting to question some of the dollars being spent on AI. It's feeling very dot-com boomy. And I was like, oh yeah, let's not do that again. That was bad last time. [00:06:16] Speaker A: Well, I feel like, I'm not sure if it's because I clicked on an article, but in the last two weeks I swear I've seen like 15 articles pop up in my news feeds and various things about that. And it's one of those things, once you start to click. Yeah, once. How much is it that, or how much is it. [00:06:31] Speaker C: Just that, you know? Yeah. Alphabet increased their infrastructure spending guidance to $91 to $93 billion for the year, up from the $85 billion previously talked about. This is all driven by strong cloud demand, and CEO Sundar Pichai reported a $155 billion backlog for Google Cloud at quarter end.
So that's people who have commitments who haven't spent their commitment yet; it's not anything more than that. With the CFO signaling significant CapEx increases expected for next year and 2026, Google Cloud contributed to Alphabet's first-ever $100 billion revenue quarter, with total Q3 revenue reaching $102.35 billion and beating analysts' expectations by two and a half billion dollars. This meant they earned $3.10 per share, significantly exceeding the $2.33 those pesky analysts predicted. Google Search revenue overall grew 15% year over year. And Wall Street analysts raised price targets substantially following the results, with Goldman Sachs increasing from $288 to $330 for the stock. Whew. Should have got in. I regret it now. [00:07:34] Speaker A: There's a lot of things. I feel like with Microsoft, when ChatGPT first came out, I was like, I should invest a little bit of money. And I never did. And then it just kept going up and up and up, and sometimes I just wish I had a lot more money years ago to make all that work be worthwhile. The 15% revenue growth for Google Search year over year feels like massive growth, but I still don't really understand how they track that. It's not like there's 15% more people, I mean maybe there is, using Google than before, but that's the piece I don't really understand still. [00:08:11] Speaker C: That's great. Microsoft Azure revenue grew 40% year over year in Q1 fiscal 2026, beating analysts' expectations of 38.2% growth and driving the Intelligent Cloud segment to $30.9 billion in total revenue. The company's AI infrastructure investments continued to pay off as Azure cloud services reached over $75 billion in annual revenue for fiscal 2025. Microsoft took a $3.1 billion accounting hit in their divorce with OpenAI, equivalent to a $0.41 per share impact on earnings. Despite this, the company still beat earnings expectations at $3.72 a share versus $3.67. So without that write-down, their shares would have been $4 and.
[00:08:48] Speaker A: Something I was gonna say does that mean that the analysts were off by. [00:08:52] Speaker C: Like by a very large. Yes. [00:08:55] Speaker A: Like that feels like a lot to be off. I still, I kind of want the job of the analysts that I can be that wrong and everyone still is okay with me doing my job. [00:09:03] Speaker C: Yeah, exactly. The quarter's results that were overshadowed by a significant Azure Microsoft 365 outage that you guys talked about last week with Azure front door which I have to ask Matt, can you update your front door yet or is it still locked? And sealed. [00:09:18] Speaker A: Oh I never dropped those that in the follow up section because we were talking about it. We'll talk about that after. Yes, but Front door now takes 45 minutes to an hour to update but. [00:09:28] Speaker C: Oh, that's not bad. It was like Cloudfront then. [00:09:31] Speaker A: Well Cloudfront back in the day before they dropped it down to like seven minutes but we'll talk about that after. [00:09:35] Speaker C: It still takes a thousand years in my mind. [00:09:40] Speaker A: It's like rds. It just never, never finishes provisioning. Sorry. [00:09:45] Speaker C: All right, we've reached the part of the show that I am dreading. Predictions for Microsoft ignite. [00:09:50] Speaker A: Where do you want to do the Azure Front Door follow up? We just did it before or after this. [00:09:55] Speaker C: That was good enough for you? [00:09:56] Speaker A: Oh, there's more. They posted their entire RCA online and to their credit they were also just following AWS's. They did a good job on it. I'll find the link to and I'll drop it in our show notes. But they talk about they kind of walk through a lot of it. I wish they went a little bit more technical in it in spots and in depth kind of way like AWS did where they talked about hey there's this service that monitors the DNS for the Zone or the AZ and kind of went from there. 
But you know, it sounds like they ended up having two versions of known good and they had to, like, slowly revert everything forward. So I'm just sitting here thinking, like, a SQL database in the back of my head reapplying every transaction log. [00:10:41] Speaker C: Forward stored procedure went wrong. [00:10:42] Speaker A: Yeah, yeah. I mean, I know that's not at all what it was, you know, and when you're dealing with any sort of CDN, like, it's just scale and endpoints and everything else that take time to update. But they talked about how, like, Front Door wasn't able to up. Sorry, the portal should have had better failback in where it was supposed to be, and they tried to do it and it didn't work, and there was another service or two where it didn't. So even as a premier customer that was reliant on Front Door, you couldn't actually open a support ticket. So even if you were their highest-tier customer, they took down a lot of it. The fun part about it was they got it back up on the 5th, I think, where you could actually make changes. But, you know, if you go into their FAQ and everything else after that, they are pretty much saying that for all changes they added additional delays in order to make sure that doesn't break everything again, where it will now take 45 minutes. Like, at minimum, like, you have to expect that to do anything. You want to purge your Front Door, you want to add a route or anything else. [00:11:47] Speaker C: 45 minutes for purging is a long time. I mean, configuration updates, they shouldn't change that often. But yeah, for invalidation, that's purging, that's pretty bad. [00:11:54] Speaker A: They said SSL updates can take over an hour is what they're expecting. [00:11:59] Speaker C: And is that definite? Or is it temporary? [00:12:02] Speaker A: It sounds like it's going to be till like early next year until they kind of fix a bunch of the problems. So let's say. Oh yeah.
They say they're making improvements to reduce data plane recovery time further, to restore customer configurations within approximately 10 minutes, estimated completion date March 2026. So end of Q1 next year. I mean, for such a critical service to have that type of delay is still insane. And it might have broken some of our pipelines at my day job, because we didn't have a timeout set for an hour for Front Door to purge a cache; that didn't feel like something we would ever need to adjust the default of, like 30 minutes or whatever it was. So it broke some of my day job pipelines at one point. But they walk through it. I mean, I still wish they went a little bit more technical publicly on a lot of this. I'm sure if you push your account rep they'll give you more and more, but, you know, there's a lot more still coming. I haven't watched the video. It's kind of quirky, but they do a YouTube video where they talk about the postmortems. [00:13:05] Speaker C: It's actually a live webinar, it says. I'm just looking at it. [00:13:08] Speaker A: Oh, I think I had one of the recordings open. So maybe it's a live webinar that they then record. [00:13:15] Speaker C: I'm not sure exactly, because it isn't really clear, other than Azure incident retrospective, option two of two, at 9:30 tonight. So you get to catch it live very shortly, if you want to, after we record the show tonight, which I'm sure you're going to want to do after so much. Yeah, well, I did put the link to the RCA in the show notes if people want to follow up on what happened there. But I will read it. I have not read it, so I appreciate your update, as always. [00:13:40] Speaker A: I'll say, I enjoyed AWS's a little bit more than Azure's; that's kind of par for the course. [00:13:48] Speaker C: I enjoy AWS more than I enjoy Azure all the time, so it makes sense. [00:13:53] Speaker A: So on to Azure predictions.
[00:13:54] Speaker C: Azure predictions for Microsoft Ignite. Yes, this is our annual game, which we normally do for re:Invent, and we did it for Google Next, and now Matt says, no, no, you also need to do it for Azure. So he has made us suffer through figuring out what our predictions are going to be for next week's Ignite conference, which is a terrible week to do a conference, right before Thanksgiving. Only followed up by right after Thanksgiving. [00:14:16] Speaker A: I was like, which one's worse? [00:14:18] Speaker C: So yeah, that's great. Matt, in the pre-show rolling of the dice, rolled a 10, I rolled a four, and Jonathan isn't here. He also rolled a four, and so he by default loses. So he's last. Although I don't have his predictions, so I don't know what he's going to do. [00:14:32] Speaker A: But he says he has them. [00:14:33] Speaker C: He says he has them. So if he does, I'll put them into the show notes, and if you're listening to this in the car or on your computer, you can check out what those are in the show notes, and then hopefully next week we can make fun of him for how badly he did, as I'm sure Matt and I will do. [00:14:47] Speaker A: Watch him win. I'm just saying. [00:14:48] Speaker C: Oh, that would just annoy the crap out of me. So anyways, all right, Matt, you are on the clock for your first Microsoft Ignite prediction. [00:15:00] Speaker A: So I'm going to add one which I just really want, and I don't think it's actually going to come out, but this is like, come on. I want an ACM competitor. So I want true SSL to be managed, you know, to be able to build a certificate and manage it in all the services where you can put SSL certs: app gateways, Front Door. They've kind of piecemealed it together right now, but it should be a single thing, and they kind of have most of it, but they're missing the overarching piece still. [00:15:34] Speaker C: Okay. I mean, I did not know they still didn't have one.
So there you have that part of it. Yeah, I mean, especially with Microsoft being one of the main people, you know, pushing for the reduction of the validity of certificates to 45 days, you would think they would definitely want. [00:15:49] Speaker A: To have some product on that. Front Door has it. App gateways don't. And you know how much I love my app gateways. They recently have a preview of Trusted Code Signing, which is like a code signing certificate service, but it's a preview still. So, like, they have pieces of it, and I don't know if they'll ever actually put it together into a service, or they're going to let every team manage it themselves. But I want something overarching, because it would make my life easier, because I really hate SSL certs. [00:16:18] Speaker C: Me too. Me too. Especially when they expire, because that's always bad. And you forgot they were going to. [00:16:23] Speaker A: So it'll be 47 days. [00:16:25] Speaker C: Yep. Only 47 days. I can have an outage every 47 days. All right, so, like I said earlier, I used the research mode of Claude, and I gave it some really good prompting about what I wanted it to go do. And so it did a bunch of research, and it gave me evidence of why it thinks these. And this is the most educated set of research predictions I've ever done. So we'll see how this goes. I'm going to go with my first one as next-generation Azure custom silicon for AI training and inference. This would be a follow-up to the Maia AI accelerator, which has been successful, as well as the potential expansion of the Cobalt 100 ARM processors. And the reason why the AI said this was super important is because Microsoft must match competitor silicon investments or face structural cost disadvantages. And I said, aha, that is a great reason to do this, other than, you know, me too. So I definitely like this one. [00:17:15] Speaker A: I was debating whether to do that one first or second. I said Cobalt 2000 was my note to myself. Nice. Or 200.
[00:17:23] Speaker C: Sorry. So I'm going with Maia Gen 2 or Cobalt Gen 2, basically, or similar. All right, Jonathan, what's your first prediction? Stunning. Stunning. Okay, Matt, your second prediction. [00:17:46] Speaker A: Well, you stole my second one. [00:17:48] Speaker C: You're welcome. [00:17:49] Speaker A: I'm gonna go with some sort of update, some sort of AI announcement in the security space. So not just like Copilot for security tools, but kind of that SIEM tool. [00:18:00] Speaker C: They have Sentinel, but AI agent for Sentinel. [00:18:04] Speaker A: For Sentinel, kind of, yeah. It's kind of where I'm going with it, because they have Copilot AI for Security, which kind of can touch on it. And maybe it does touch on it and I'm just straight up wrong because it already exists. But I don't think it does. But that's kind of where I'm targeting. Makes sense. [00:18:23] Speaker C: Mine is dramatic price reduction or caching capability for OpenAI. [00:18:32] Speaker A: They have some caching. [00:18:34] Speaker C: Yeah. [00:18:34] Speaker A: But I don't know enough about it though. [00:18:36] Speaker C: The Bedrock announcement just recently, about a 90% cost reduction and intelligent prompt routing for serving cost savings, I think is something they're going to want to do as well, in an aggressive way. And, you know, it also goes back to what they've been saying about, you know, inference costs going down, et cetera, et cetera. [00:18:53] Speaker A: So they have prompt caching right now, just so you know. Yeah. [00:18:56] Speaker C: But it's not very robust. That's my idea. So I'm looking for something aggressive. [00:19:02] Speaker A: Yep. [00:19:03] Speaker C: Yeah. All right. Jonathan, your second. Oh, nothing again. Okay. [00:19:08] Speaker A: Oh, damn. That was a good one, Jonathan. [00:19:10] Speaker C: Yeah, it was great. Matt, your third and final.
[00:19:16] Speaker A: So, based on a conversation we had when AWS went down, because we were like, they're updating for the conference. Azure DevOps had an outage today. This morning. Oh. So I'm going to say there's going to be an Azure DevOps announcement, which I would still think is surprising, given that they've pushed so much towards GitHub Actions. [00:19:38] Speaker C: I could see them announcing potentially that Azure DevOps is being replaced by GitHub Copilot. And yeah, I can see that, given. [00:19:44] Speaker A: That they had an outage today. And I was on with one of my employees and he said, you should say this. And I was like, great. [00:19:51] Speaker C: All right. Well, I mean, I got nothing better. I have nothing better. That rule of thumb works quite well for myself, you know, when I think about Amazon stuff. So, yeah, no, I think it's good. I have a final for my third. I am going to expect a Microsoft foundational LLM. I've predicted it. I've asked for it many times. I think it's coming to compete with OpenAI. [00:20:15] Speaker A: I think I had that prediction last time too. [00:20:20] Speaker B: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one or three year AWS savings plan with a commitment as short as 30 days. If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask, will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace. [00:20:59] Speaker C: All right, then our final thing is how many times will they say Copilot in this presentation? And since Jon's not here to be first, it's me, because it's reverse order. [00:21:09] Speaker A: Is it Price Is Right style?
I always forget it. [00:21:11] Speaker C: Is it Price Is Right style? Yes. I'm gonna go with 35 times. [00:21:17] Speaker A: I really want to go with 36. [00:21:19] Speaker C: Of course you do. I'm not gonna say you can't do that, because that's just fun. That's the way the game works. But if I hit 35 on the head and I win that point, I. [00:21:29] Speaker A: am going to. You. I think it's actually double, by the way. If you hit it on the head, you get two points. [00:21:35] Speaker C: That would be fair. [00:21:36] Speaker A: Okay. [00:21:37] Speaker C: It's already a tiebreaker to begin with, so it's fine. [00:21:42] Speaker A: It's rubbing it in our face for a year is what it is. [00:21:46] Speaker C: Yes. [00:21:46] Speaker A: True. I'm going to go with like 40. I'll give you a few buffer. All right. [00:21:53] Speaker C: Sounds reasonable. I did have some honorable mentions from my Claude research, but I didn't like them as much. [00:21:59] Speaker A: Oh, the Claude one. The fourth one, if somebody took any of the other ones, was Claude for Azure OpenAI. [00:22:07] Speaker C: Yep. Azure platform. [00:22:09] Speaker A: It drives me crazy. It's not there right now. [00:22:11] Speaker C: Yeah, that is annoying. So, other ones: an autonomous agent platform, multi-agent orchestration capabilities. I felt like this was already out there, and I didn't have time to do the research, so I didn't even go with this one, because I was pretty confident Copilot Studio already basically does this, but. [00:22:26] Speaker A: I think it does, but I haven't used it enough to know. [00:22:30] Speaker C: Yeah, basically the distinction that the chat AI made was that it's a production-ready platform for building and deploying autonomous agents that can act independently on events, coordinate across multiple specialized agents, and integrate deeply with Microsoft 365 and Azure services.
And this would include pre-built agent templates, orchestration APIs and event triggers. And it bases this on Satya Nadella being on CNBC and a couple other things he talked about in earnings calls, about why it feels like this is a strong recommendation, and basically says the language is too specific and repeated across multiple executive statements to be coincidental. Copilot Studio already has a hundred thousand organizations and 2x record quarterly growth, with Microsoft providing the foundation. Makes sense. So I don't know. I didn't buy that one. So that was an honorable mention from it: an autonomous agent platform with deep links. That's a key word there. See what else? They had a bunch of stuff about the EU and regulated industries and sovereign cloud, and I was like, no one cares about that. [00:23:30] Speaker A: No. [00:23:30] Speaker C: So that means this one makes sense to me: Microsoft Fabric unified AI and data platform enhancements with real-time agent integration. I just hate Fabric. I mean, Fabric is its own hellscape, but if you think about Power BI inside of Fabric, if you added AI capabilities to the Fabric version of it, which is lacking compared to the desktop version of it, I could see that being interesting. So definitely potentially good ones. Then it had some bold predictions. Azure foundation model marketplace expansion with a hundred-plus models; I guess it's kind of your Claude thing, so Claude showing up there. Breakthrough extended context window for Azure OpenAI, 512,000 to 1 million tokens, to match Google Gemini. Doubt that's going to happen. No, I don't think OpenAI has said that that's necessary. AI-native developer platform with automated code generation and debugging at repository scale; I mean, GitHub Copilot, you know, basically this is an evolution of GitHub Copilot. This may be kind of tied to your DevOps thing. Maybe. I didn't, didn't feel confident in it.
Carbon-aware workload scheduling with the same SLAs and energy-optimized infrastructure; maybe. I've had that one. [00:24:35] Speaker A: I've had green on the main stage a few times. One of these days I may hit that. [00:24:39] Speaker C: Yeah. And then multimodal AI capabilities with native image and video generation integrated into the Azure OpenAI service, which would make sense, because I do believe all of that exists currently, but I think it's a separate model. So this would be a multimodal version of OpenAI that would support that, but I don't think GPT-5 is multimodal like that yet. [00:24:56] Speaker A: I don't think it is either. [00:24:58] Speaker C: So either it has to be their own model that they built, which goes back to mine, or something else is going on there, like ChatGPT as part of their divorce said, yeah, we'll announce this cool thing at your conference, which I doubt will be the case. So anyway, that's honorable mentions for you. [00:25:14] Speaker A: The other one I had, which I didn't really like, but if I was pigeonholed, was going to be like a VMware-to-cloud competitor, you know, someone to kind of offset what VMware and Broadcom are doing with their licensing. They've had Hyper-V, but can you do live migrations between Hyper-V on-prem and Azure in the same way where, with VMware, you can kind of move the workload back and forth? [00:25:44] Speaker C: I don't know. [00:25:46] Speaker A: That was kind of my last one. [00:25:50] Speaker C: Migrating Hyper-V VMs to Azure, but it doesn't say live migration, so you can definitely migrate them. You just can't do it live. But I mean, to do live you have to have enough speed to get your memory to sync, and that's difficult in a high-volume workload. So I don't think VMware can actually do it as much as they said they could. But anyways, we'll see. All right, moving on to cloud tools and getting out of Azure for a little bit.
IBM's Apptio launches Cloudability Governance with Terraform integration to provide real-time cost estimation and policy compliance at deployment time. Platform engineers can now see cost impacts before deploying infrastructure through version control systems like GitHub, addressing the problem where 55% of business leaders lack adequate visibility into technology spending ROI. Kubecost 3.0 adds GPU-specific monitoring capabilities through Nvidia's Data Center GPU Manager exporter, providing utilization and memory metrics critical for AI workloads. And the platform addresses common tagging blind spots by automatically identifying resource initiators and applying ownership tags when teams forget. Oh, thank you. AI workload acceleration has increased the velocity of cloud spending rather than creating new blind spots, and with GPU costs potentially reaching thousands of dollars per hour, real-time visibility becomes essential. So yeah, that's nice. [00:27:04] Speaker A: Yeah, I've set these up in my pipelines before, and I guess it's been probably like four years, where it does the cost estimate. It's always nice to see; it's good if you're launching net new, but for like a general PR it's just more noise. So these tools always needed a way to be like, okay, here it is, or, oh, you're updating the instance size, so that's something I care about. [00:27:28] Speaker C: Well, you need to have like a variance tolerance in your report. Like if the price increases more than this percentage, then flag it in the build. Right, yeah, exactly. [00:27:40] Speaker A: And in the past, I mean, there were open source tools I've used. So, you know, not an Apptio, not an IBM-owned tool, which I'm sure costs more money than I'm willing to spend on something like this. Something along those lines would be great.
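The variance-tolerance check described just above is simple enough to sketch. This is an illustrative example, not code from Apptio or any other tool, and the threshold and dollar figures are made up:

```python
def cost_delta_pct(baseline: float, proposed: float) -> float:
    """Percent change of the proposed cost estimate versus the baseline."""
    if baseline == 0:
        return float("inf") if proposed > 0 else 0.0
    return (proposed - baseline) / baseline * 100.0


def should_flag(baseline: float, proposed: float, tolerance_pct: float = 10.0) -> bool:
    """Flag the change only when the cost increase exceeds the tolerance."""
    return cost_delta_pct(baseline, proposed) > tolerance_pct


# e.g. an instance resize pushes the monthly estimate from $500 to $620
print(should_flag(500.0, 620.0))  # 24% increase, above tolerance -> True
print(should_flag(500.0, 520.0))  # 4% increase, stays quiet -> False
```

In a real pipeline the two numbers would come from a cost-estimation step run against the base branch and the PR branch, and a flagged result would fail the build or post a PR comment.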
So, you know, it's just something I've never been willing to spend the money on. [00:27:59] Speaker C: That makes sense. AWS this week is rolling out a new Fastnet cable. The Fastnet subsea cable connects the US and Ireland with 320 terabits per second of capacity when operational in 2028. The system uses unique landing points away from traditional cable corridors to provide route diversity and network resilience for Amazon customers running cloud and AI workloads. The cable features advanced optical switching branching unit technology that allows future topology changes and can redirect data to new landing points as network demands evolve. Fastnet specifically targets growing AI traffic loads and integrates directly with AWS services like CloudFront and Global Accelerator for rapid data rerouting. [00:28:44] Speaker A: The speed of this is ridiculous, that they're able to kind of get like. [00:28:49] Speaker C: I can move my Blu-ray rips so fast on this network. [00:28:52] Speaker A: I know, like 320-plus terabits per second. [00:28:58] Speaker C: Holy. [00:28:59] Speaker A: Like, that is a lot of data to go at once. Like, I understand we use a lot of data and data is growing at a massive amount, but holy crap. And then the other metric in here that I was shocked by was that AWS's global fiber network now spans over 9 billion kilometers. Sorry, 9 million kilometers. I was like, that is a lot of miles of fiber that just AWS has. [00:29:25] Speaker C: That is a lot of fiber. A lot of goodness there. But yeah, crazy. All right, moving on to our next story. AWS is launching Capabilities by Region, a new planning tool that lets you compare service availability, API operations, CloudFormation resources, and EC2 instance types across multiple AWS regions simultaneously. The tool addresses a common customer pain point by providing visibility into which AWS features are available in different regions. Thank you. I've wanted this for a long time.
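As a rough sketch of what a comparison like this does under the hood, here is the core set logic in Python. The availability data is a hand-written sample; in practice it could be pulled from AWS's public SSM parameters under /aws/service/global-infrastructure/services/<service>/regions, which this sketch deliberately stubs out:

```python
def missing_services(availability, regions):
    """For each region of interest, list the services not available there.

    availability: dict of service name -> set of regions it is offered in.
    """
    return {
        region: sorted(svc for svc, locs in availability.items() if region not in locs)
        for region in regions
    }


# Hand-written sample data; real data could come from the SSM public parameters.
sample = {
    "eks": {"us-east-1", "eu-west-1"},
    "braket": {"us-east-1"},
}
print(missing_services(sample, ["eu-west-1"]))  # -> {'eu-west-1': ['braket']}
```

The new tool does this comparison for you in one screen, which is exactly the appeal.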
They put it in a really weird UI choice, but I do appreciate that it's there. This is in the Builder Center, which is the UI I was mentioning. And literally you can just pick the regions you want to compare and it'll show you what APIs are available or not available, or when they're coming, which is super nice. And as well, if it's not expanding into a region, it'll tell you that too. So that's very helpful information to have, and it does save me from going to multiple product blogs and product articles to figure that information out. [00:30:23] Speaker A: I mean, it sounds like one of those dumb features, but as a person that runs a SaaS product, you know, hey, a customer wants to spin up here, and I gotta go look to see if things are available in every different aspect of our platform and whether all the different managed services are there. Having a much easier way to do this will be fantastic. Especially because it's not just, hey, is EC2 available here? It's sub-features of the product line too. So it's, hey, are these instance types or these app gateway features or API features available there, that really makes this a valuable tool. So while it sounds like something that's just, eh, for the average consumer, anyone that's had to do these comparisons to figure out if they can launch in a region, this will make your life significantly better. [00:31:15] Speaker C: You can now back up your EKS cluster, providing a centralized backup and restore capability for both your Kubernetes configuration and the persistent data stored on EBS, EFS, and S3. This eliminates the need for custom scripts or third-party tools that previously required complex maintenance across multiple clusters. The service includes policy-based automation for protecting single or multiple EKS clusters, with immutable backups to meet compliance requirements. And during restore operations,
AWS Backup can now provision a new EKS cluster automatically based on previous configuration settings, removing the requirement to pre-provision target infrastructure, which is super nice. Restore operations are now non-destructive, meaning they apply only the delta between backup and source rather than overwriting existing data or Kubernetes versions, and customers can restore full clusters, individual namespaces into existing clusters, or specific persistent storage resources if a partial backup failure occurs. This is available in all commercial regions except for China and AWS GovCloud. [00:32:08] Speaker A: It's down to the namespace level that they can back up and restore to. That to me is great. I could see this being a SaaS company that runs their application on Kubernetes, and they have a namespace per customer, and having the ability to have that single customer backed up and being able to restore it is fantastic. While it sounds like a minor release, if you're in the Kubernetes ecosystem it will just make your life better. [00:32:37] Speaker C: So being able to start from backup when you screw up is very helpful. [00:32:42] Speaker A: I'm just saying I've had to do it a few times in my career. [00:32:45] Speaker C: I have had to do it as well. [00:32:46] Speaker A: And if you haven't done it, you're not trying hard enough. [00:32:49] Speaker C: In your job, you're not trying hard enough, for sure. Jupyter Deploy is an open source CLI tool from AWS that lets small teams and startups deploy a fully configured JupyterLab environment to the cloud in minutes, solving the problem of expensive enterprise deployment frameworks. The tool automatically sets up EC2 instances with HTTPS encryption, GitHub OAuth authentication, real-time collaboration features, and a custom domain without requiring manual console configuration.
This is great to have, and the reason it exists is because many teams who don't know how to do things with infrastructure would like to use Jupyter. And so who provides Jupyter? Oh, Snowflake and Databricks and others. So now Amazon can just say, don't pay those people that money, just use our tool and keep the data in our cloud. So well done. [00:33:35] Speaker A: That makes a lot of sense. I was trying to figure out this announcement beforehand, and I realized that they didn't have a full Jupyter, because I feel like they had Jupyter Notebooks inside of SageMaker back in the day, and that's been there for a while. [00:33:49] Speaker C: But a lot of people, especially in their AI workloads, don't necessarily want to use SageMaker for that; they want their own deployment of a cluster. And so there was just some undifferentiated heavy lifting that was happening, and I think this helps address some of that. [00:34:02] Speaker A: Got it. [00:34:04] Speaker C: Moving on to GCP. Agent Sandbox is a new Kubernetes primitive designed specifically for running AI agents that need to execute code or use computer interfaces, providing kernel-level isolation through gVisor and Kata Containers. This addresses the security challenges of AI agents making autonomous decisions about tool usage, where traditional application security models may fall short. On GKE, Agent Sandbox delivers sub-second latency for isolated agent workloads through pre-warmed sandbox pools, representing up to a 90% improvement over cold starts. The managed implementation leverages GKE Sandbox and container-optimized compute for horizontal scaling of thousands of ephemeral sandbox environments. Pod Snapshots is a GKE-exclusive feature in preview that enables checkpointing and restoring running pods, reducing startup times from minutes to seconds for both CPU and GPU workloads.
And the project includes a Python SDK designed for AI engineers to manage sandbox lifecycles without requiring deep infrastructure expertise, while still providing Kubernetes administrators with all of the control. Primary use cases include agentic AI systems that need to execute generated code safely, reinforcement learning environments requiring rapid provisioning of isolated compute, and computer-use scenarios where agents interact with terminals or your browser. [00:35:11] Speaker A: I got nothing on this one. Not even gonna lie. [00:35:16] Speaker C: It's cool. I definitely have seen the sandboxing capabilities, and being able to block certain commands from being run is important. So I get the gist of why it's important, and that's a good thing. [00:35:25] Speaker A: Yeah, I mean, the 90% improvement over cold starts sounds great too, especially for developers, and if you're as impatient as I am in life, having to wait that time. So anything that can make these environments, especially if they are ephemeral, scale up and down better, so you're not burning time and capacity on GPUs that are not cheap, is definitely useful. A nice little money saver along the way. [00:35:52] Speaker C: Yep. Google is giving us a bunch of hardware this week, everyone. These were things you talked about at conferences. Not anymore, but it used to be a thing. Google is announcing Ironwood, its seventh-generation TPU, delivering 10x peak performance improvements over the TPU v5p and 4x better performance per chip than the TPU v6e for both training and inference workloads. The system scales up to 9,216 chips in a superpod with 9.6 terabits per second of interconnect speed and 1.77 petabytes of shared HBM, featuring optical circuit switching for automated failover. Anthropic apparently plans access to 1 million TPUs and reports that the performance gains will help scale Claude more efficiently in the future.
There's a new Axion-based N4A instance entering preview, offering up to 2x better price performance than comparable x86 VMs for general purpose workloads like microservices, databases, and data prep. The C4A metal, Google's first Arm-based bare metal instance, will launch in preview soon for specialized workloads requiring dedicated physical servers, and early customers have reported 30% performance improvements for video transcoding at Vimeo and 60% better price performance for data processing at ZoomInfo. Google is positioning Ironwood and Axion as complementary solutions for the age of inference, where agentic workflows require coordination between ML acceleration and general purpose compute. Google emphasizes system-level co-design across hardware, networking, and software, built on its custom silicon history, including the TPUs that enabled the transformer architecture eight years ago. So thanks, nice new hardware. [00:37:17] Speaker A: Yeah, I mean, new chips are always great, and like we just said about Microsoft, it's going to be important to have your own, because you're going to be able to control and optimize these chips for exactly what's needed in your environment and everything along those lines. So having that ability, specifically with the Axion chips, is going to be critical for their success in the future. [00:37:38] Speaker C: And FinOps tooling again showing up on The Cloud Pod today. Google has enhanced Workload Manager to automate FinOps cost governance policies across GCP organizations, allowing teams to codify financial rules using Open Policy Agent (OPA) and run continuous compliance scans. The tool includes predefined rules for common cost management scenarios, like enforcing resource labels, lifecycle policies on Cloud Storage buckets, and data retention settings, with results exportable to BigQuery for analysis and visualization.
Results are viewable in Looker Studio. The pricing update is significant, with Google reducing Workload Manager costs by up to 95% for certain scenarios and introducing a small free tier for testing. This makes large-scale automated policy scanning more economical compared to manual auditing processes that can take weeks or months while costs accumulate. [00:38:22] Speaker A: As somebody that's dealt with the unexpected costs of, hey, let's run a scale test, or anything else along these lines, having automated alerts and everything to make sure that your environment doesn't blow out of the water. Especially with all these new workloads that are extremely expensive. You know, GPUs, like we said, can burn thousands of dollars an hour very quickly. Having that very quick, rapid response to know when something changed, so someone can go look at it before you get a $10 million bill, is critical. So anything we can do to automate these, these tools are definitely useful, so your CFO doesn't yell at you at the end of the day. [00:39:09] Speaker C: Thanks. All right, moving on to Azure. Once again we've come full circle, and ironically, Jonathan has now submitted his three. [00:39:20] Speaker A: I just saw that message. [00:39:22] Speaker C: So first of all, he had the general availability of new, smaller, and more power-efficient Azure Local hardware form factors. Okay, sure. Declarative AI in Fabric: this represents a move towards a declarative model where users state the desired outcome and the agent determines the steps needed to achieve it within the Fabric ecosystem. I mean, that would be cool. Way ambitious for Azure. And the final one, advanced cost management: granular dashboards to track token consumption per agent or per transaction, enabling businesses to forecast costs and set budgets for their agent workloads. That's a good one. I definitely think the cloud vendors need to work on their cost visibility for AI.
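The per-agent cost dashboard being predicted here boils down to aggregation like the following sketch; the agent names and per-1K-token rates are invented placeholders, not any vendor's actual pricing:

```python
from collections import defaultdict

# Hypothetical per-1K-token rates; real pricing varies by model and vendor.
RATE_PER_1K = {"input": 0.005, "output": 0.015}


def cost_by_agent(events):
    """Aggregate spend per agent from (agent, input_tokens, output_tokens) events."""
    totals = defaultdict(float)
    for agent, in_tok, out_tok in events:
        totals[agent] += in_tok / 1000 * RATE_PER_1K["input"]
        totals[agent] += out_tok / 1000 * RATE_PER_1K["output"]
    return dict(totals)


usage = [
    ("triage-bot", 2000, 1000),
    ("triage-bot", 1000, 0),
    ("billing-bot", 500, 500),
]
print(cost_by_agent(usage))
```

The hard part the vendors would need to solve is emitting those usage events with agent and transaction attribution in the first place; the rollup itself is trivial.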
So yeah, those would all be good ones for them to deliver. So nice job, Jonathan, from afar. [00:40:05] Speaker A: I'm pretty sure part of the reason they don't want to do such visibility is that right now every company can get away with pretty much saying, we're doing this for AI. And once they can give you all that granular detail of how and why you're spending that money, they're not going to want to do that for free. [00:40:24] Speaker C: Probably not. That's wishful thinking. [00:40:26] Speaker A: Yeah. [00:40:28] Speaker C: Azure MCP Server is providing a standardized way for AI agents and developers to interact with Azure services through the MCP protocol. This creates a consistent interface layer across services like AKS, Azure Container Apps, App Service, Cosmos DB, SQL Database, and AI Foundry, reducing the need to learn individual service APIs. The MCP implementation allows developers to build AI agents that can programmatically manage and query Azure resources using natural language or structured commands. This bridges the gap between conversational AI interfaces and cloud infrastructure management, enabling scenarios like automated resource provisioning or intelligent troubleshooting assistance. The server architecture provides secure, authenticated access to Azure services while maintaining standard Azure RBAC controls. This means the AI agent operates within existing security boundaries and permission frameworks rather than requiring separate authentication mechanisms. Or you'll learn all the ways that your security controls are limited, because this thing tries to do things you never thought about. So yeah, nice. [00:41:23] Speaker A: So I like the idea of this, and I like it for troubleshooting and stuff like that, but the idea of using it to provision resources terrifies me. You know, maybe in development environments: hey, I'm setting up a three-tier web application.
Spin me up what I need. But if you're doing this for a company, I really worry about speaking in natural language and consistently getting the same result to spin up resources. That's what kind of terrifies me. For troubleshooting and everything else like that, oh my God, that's going to be great. Except, you know, the Copilot built into the Azure console sucks already, so maybe this will be better, but you gotta use this in the right way. Maybe I'm too cynical, but I really don't want my team spinning up production workloads in production environments with natural language queries to an MCP server. [00:42:23] Speaker C: I mean, I don't have a problem with it per se, but I definitely want to make sure it has guardrails around it. [00:42:29] Speaker A: Well, how are you being consistent? I mean, let's go back to AWS. How many ways are there to run Docker in AWS? [00:42:37] Speaker C: Only 100, right? [00:42:39] Speaker A: So there are six that I could think of off the top of my head in Azure right now to do it. And if I say, spin me up a Docker container behind an app gateway, there are at least three or four ways you could do that. You could spin up a Kubernetes cluster, you could do it behind App Service, you could run a VM with Docker. [00:42:58] Speaker C: I mean, the MCP is just providing you the capability to do that. The AI itself is the one that made the decision on which tool to use. So in your prompting you should be giving it some guidance. [00:43:07] Speaker A: You have to be very specific. Yeah, I'll still take Terraform. [00:43:13] Speaker C: I mean, really, the MCP is not the magic sauce. MCP is just giving you a different way to talk to it versus the API. I think the part you're concerned about is really more the person making the prompts and what they're doing versus the MCP itself. Yes, and I agree with you on that point. I think the MCP is good though.
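For the guardrails point: MCP is a JSON-RPC protocol, so a tools/call request is just structured data, and one natural place for a guardrail is a policy check on that request before it executes. The tool names and the allowlist here are hypothetical, a minimal sketch rather than how Azure MCP Server actually enforces anything:

```python
ALLOWED_TOOLS = {"query_resources", "get_diagnostics"}  # read-only allowlist


def make_tool_call(request_id, tool, arguments):
    """Build an MCP tools/call request (MCP messages are JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }


def permitted(request):
    """A toy guardrail: only pass requests for tools on the allowlist."""
    return request["params"]["name"] in ALLOWED_TOOLS


req = make_tool_call(1, "create_vm", {"size": "Standard_D2s_v3"})
print(permitted(req))  # a provisioning call is blocked -> False
```

In practice you would get much of the same effect for free by scoping the agent's identity with RBAC, which is the boundary the announcement leans on.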
[00:43:34] Speaker A: Yes, I think the MCP overall is good, and I think it will help in debugging and learning new services and playing with new stuff. And for developers it will be great; for a cloud engineering team that's launching and managing infrastructure, great for debugging and things along those lines. Not great for spin-me-up stuff; that terrifies me. I'll stick with Kubernetes. Maybe I'll be the old man yelling at the cloud in the future, get off my lawn and give me back my Terraform. But I'm not there yet. Or I am and I don't know it. [00:44:10] Speaker C: I mean, you might be there. We're all aging rapidly in cloud; it's just one of those things. I remember when it was new, it was trendy. Now we're old men saying no, that's not how you do it. There you go. Azure Ultra Disk is receiving substantial performance and cost optimization updates focused on mission-critical workloads. It's about time. The service now delivers an 80% reduction in P99.9 and outlier latency, plus a 30% improvement in average latency, making it finally suitable for transaction logs and IO-intensive applications that previously required local SSDs or the Write Accelerator. The new flexible provisioning model enables significant cost savings, with workloads on small disks saving up to 50% and large disks up to 25%. Customers can now independently adjust capacity, IOPS, and throughput with more granular controls, allowing a financial database, for example, to reduce Ultra Disk spending by 22% while maintaining required performance levels. You know what I'd really like? If you could use that MCP from earlier to just automate that for me, so I don't have to think about it, because that's just more capacity planning for me to do. [00:45:08] Speaker A: Well, it's like GP3, was it GP3, where you kind of have the provisioned IOPS and throughput? But the feature they finally released two weeks ago was great.
That pain they still deal with. [00:45:20] Speaker C: No, Azure has it. You also get the Instant Access Snapshot feature entering public preview for Ultra Disk and Premium SSD v2, eliminating traditional wait times for snapshot readiness, which, good, you're copying Amazon there. Ultra Disk now supports Azure Boost VMs, including the Ebdsv5 series with 400,000 IOPS and 10 gigabits per second of throughput, and memory-optimized Mbv3 standard instances with 550,000 IOPS and 10 Gbps of throughput. Additional Azure Boost VM announcements are planned for Ignite. Oh, we should have read that before we did the predictions. You could have picked a new instance. Recent feature additions include live resize capability, encryption-at-host support, Azure Site Recovery and VM backup integration, and shared disk capability for SCSI persistent reservations. So, nice. [00:46:06] Speaker A: Wait, there wasn't encryption at the host level? I mean, clearly I don't use Ultra Disk, because I make bad life choices while being in Azure, but not that bad of life choices. But you couldn't encrypt at the host level? That seems crazy. Oh, recent feature additions? Never mind. [00:46:22] Speaker C: Yeah, recent ones. [00:46:24] Speaker A: That seems like a weird thing to not have. [00:46:27] Speaker C: But you know, it is interesting. It wasn't there before, and now it's there. It's good. [00:46:31] Speaker A: So this is where your SAP HANA workload goes. Let's be honest here. [00:46:35] Speaker C: Yeah. [00:46:35] Speaker A: Or if you're running your own dedicated SQL, you know, Microsoft SQL Server, and not leveraging one of the three ways to run SQL Server on Azure. [00:46:45] Speaker C: Exactly, you nailed it. And next, the general availability of larger container sizes on Azure Container Instances: they now support container sizes up to 31 vCPUs and 240 gigabytes of memory for standard containers, expanding from the previous 4 vCPU and 16 gig limits.
This applies across standard containers, confidential containers, virtual network enabled containers, and AKS virtual nodes. I guess Windows needs more power. [00:47:12] Speaker A: Yeah, I will say 16 feels low for like a high limit, but 240 feels really high. Like, I feel like there should have been a middle ground, like 32 or 64, things along those lines. It feels weird to jump all the way from 16 to 240 gigabytes. And even like the CPUs, the 31, it still irks me a little bit. It's not a multiple of two. It bothers me. Just does. [00:47:43] Speaker C: I mean, I don't blame you, but it also doesn't keep me up. [00:47:48] Speaker A: Awake at night. It just bothers me while we talk about it. [00:47:52] Speaker C: I mean, if I was doing hardware still and I cared, I would definitely have an issue with it. But yeah, maybe they're using that one vCPU. [00:47:59] Speaker A: So it's like the 32nd one is reserved for management. [00:48:01] Speaker C: Yeah, yeah, makes sense. And our final Azure story: Geo Priority Replication is now generally available for Azure Blob Storage, providing accelerated data replication between primary and secondary regions for GRS and GZRS storage accounts with an SLA-backed guarantee. This addresses a long-standing customer request for predictable replication timing in geo-redundant storage scenarios, and the feature specifically targets customers with compliance requirements or business continuity needs that demand fast recovery point objectives for their geo-replicated data. [00:48:32] Speaker A: So this feature was released on AWS on November 20, 2019. [00:48:38] Speaker C: Well, shh, don't. You know, Azure is brand new to them. Just because it's a used car to the rest of us doesn't mean they don't love it in their own way. [00:48:48] Speaker A: Oh, sorry. They then also updated it again; there's a different update later on. I think that's what it was from my quick search.
I mean, I put this in here just because it's amazing some of the basic features that Azure just doesn't have. And if you do have an SLA that is a 15-minute SLA, or even less, then you need to have these RPOs in there. And for Blob Storage, I mean, I guess it's also dependent on size, but it should be something that's there. I guess they just didn't have enough backend support. [00:49:22] Speaker C: All right, well, that is it for us here this week at The Cloud Pod. Another fantastic episode. Let's see how our predictions do next week. Horribly, I imagine. Poorly. But I look forward to maybe getting one right. Maybe one of us will get one, and then that person will be the champion. [00:49:36] Speaker A: I think the main keynote's Tuesday at noon East Coast time, so we should know by the time we do the show, if we actually manage to do it on Tuesday. [00:49:44] Speaker C: Yeah, we should know. All right, well, I'll see you next week to learn how we did. See you, Matt. [00:49:50] Speaker A: See ya. [00:49:54] Speaker B: And that's all for this week in cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services. While you're at it, head over to our website, where you can subscribe to our newsletter, join our Slack community, send us your feedback, and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.
