[00:00:08] Speaker B: Where the forecast is always cloudy. We talk weekly about all things AWS, GCP, and Azure.
[00:00:14] Speaker A: We are your hosts, Justin, Jonathan, Ryan and Matthew.
Episode 311 recorded for the week of July 1, 2025.
The crawlers are running the asylum.
[00:00:26] Speaker C: Loser won. Yeah, so you had to say it.
[00:00:28] Speaker A: Yeah, I did.
We rolled and we're splitting up responsibilities just between Matt and I, so good luck.
[00:00:37] Speaker C: Turn it off now. One of the two.
[00:00:39] Speaker A: I don't know. Maybe it's. Maybe it's good.
See if hopefully it's short.
[00:00:44] Speaker C: Couldn't.
[00:00:45] Speaker A: Unlike our latest episodes, it was only like two hours.
[00:00:50] Speaker C: I don't even know what we talked about for two hours.
[00:00:52] Speaker A: I don't either.
In theory, Cloud News, but I don't think we have... all right, I'm going to go ahead and kick us off, getting to the follow up, which is: Microsoft is changing Windows in order to prevent the next CrowdStrike-style catastrophe. Microsoft is creating a new Windows endpoint security platform that allows antivirus vendors (this is already getting too hard to read) to operate outside the kernel, preventing catastrophic system-wide failures like the CrowdStrike incident that grounded flights and disrupted global services back in 2024.
The CrowdStrike incident highlighted a fundamental Windows architecture problem where security software with kernel access can crash the entire system during boot, forcing IT teams to manually fix millions of machines one by one. It sucked; I do not recommend it. This architectural change represents Microsoft's attempt to balance security vendor needs and system stability, potentially ending decades of kernel-level access that has been both a security necessity and a reliability nightmare.
Cloud and enterprise IT officials should care because this could dramatically reduce the blast radius of security software failures, or just random updates to your signature file that arrive without any notification or prompting, preventing a single bad update from taking down an entire fleet of servers and workstations.
The move signals a broader industry shift towards isolation and resilience in system design, where critical security functions can operate effectively without having the power to bring down the entire operating system.
[00:02:22] Speaker C: I feel like this is also just a fundamental change in the way that, you know, we run infrastructure nowadays.
You know, back in the day you had these mainframes that were massive, and you didn't really care because you protected them and you were very careful about them and what was on them. And now it's thousands of small systems that you care about, because when Ryan has to go log into 1,000 systems, he gets very angry at life. Yes, and starts muttering things under his breath.
[00:02:47] Speaker A: No, it's not under my breath anymore.
Now I'm old and jaded. I just yell it out loud like crazy, man.
[00:02:53] Speaker C: Stop yelling at the cloud, Ryan.
[00:02:55] Speaker A: Nope. Cannot make me.
[00:02:57] Speaker C: So, you know, I think that this is a good change that they're going to make, and I think it's going to take them a while, longer than they want to admit, to fully implement, because it's the kernel, it's been around forever, and changing these things is going to take a lot of time. And then all the security vendors have to update their software, and people that have outdated security software that don't actually update it all the time are going to have problems there. So it's going to be a long-ish road to get there.
[00:03:26] Speaker A: Well, yeah, if those security vendors don't keep up to date, they will lose all access. Right. Because it's going to be outside. It's not like Windows is going to wait.
[00:03:34] Speaker C: Yeah.
[00:03:34] Speaker A: But I do think the major vendors will be quicker to adopt.
[00:03:37] Speaker C: How's that end-of-life support and the extended support for Windows 10 going? They wait for money, and that's what's going to happen.
[00:03:44] Speaker A: Yeah, but that's the consumer end. You're going to wait for a third party vendor? No way. No.
[00:03:49] Speaker C: Anyway, I've seen dumber things happen. True. How's that Windows XP box that's running, you know, milling software in a factory somewhere? Definitely isn't still there. You are correct, sir. And I'm hoping it's only Windows XP.
So, in this week's AI Is How ML Makes Money: they've actually figured out how to make it make money. Introducing Pay per Crawl, enabling content owners to charge AI crawlers for access. Cloudflare has actually figured out how to make money on crawlers by introducing Pay per Crawl, a private beta feature that implements HTTP 402 Payment Required to enable content owners to charge AI crawlers for access.
Content owners can set a flat per-request price across the domain and configure three levels of access per crawler: free (allow access), charge (payment at the configured price), or block (deny access with no payment option). Cloudflare acts as the merchant of record, handling billing aggregation and payment distribution. Crawlers can discover pricing by receiving a 402 response with the crawler-price header, or proactively by including a crawler-max-price header in the initial request. Successful payments return HTTP 200, because what else are you going to return?
The implementation integrates with existing infrastructure; WAF and bot management policies still apply, and it requires minimal changes to the security configuration, because security departments are always going to love this. Publishers retain the flexibility to bypass charges for specific crawlers to accommodate existing content partnerships. Always good to not piss off your partners. This approach enables future programmatic negotiation between AI agents and content providers, potentially supporting dynamic pricing based on content type, usage patterns, and application scale. The framework could extend beyond simple pay-per-request pricing to include more granular licensing for training, inference, and search applications.
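To make the handshake concrete, here's a rough Python sketch of the crawler side of that flow. The header names follow the ones described above (crawler-max-price on the request, crawler-price on the 402 response), but treat the exact spellings, units, and settlement details as assumptions.

```python
import requests

# Hypothetical crawler that declares the most it will pay per request.
resp = requests.get(
    "https://example.com/article",
    headers={
        "User-Agent": "ExampleBot/1.0",
        "crawler-max-price": "0.05",  # assumed: max USD per request
    },
)

if resp.status_code == 402:
    # Publisher's price exceeds our cap; the quoted price comes back
    # in a response header, and we can decide whether to retry higher.
    print("Payment required at:", resp.headers.get("crawler-price"))
elif resp.status_code == 200:
    # Either the content is free for this crawler, or Cloudflare (as
    # merchant of record) accepted the offer and settles the charge.
    print(f"Fetched {len(resp.content)} bytes")
```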
[00:05:51] Speaker A: So it's interesting to see how the introduction of one technology drives the advancement of other technology. And this is a pretty good example. You know, while this complicates things greatly, I love this because...
[00:06:09] Speaker A: ...it's the compromise I've been waiting for. You've been able to block AI crawling for a while, but then what a pain in the ass for the end user, which is me when I'm trying to go do a deep research thing to answer some sort of question. And so I'm sure this will get passed on to the consumer sort of indirectly, which I think I'm okay with, because it is content, content you're getting for free, and when you bundle that all up in an AI model, it's a pretty big impact. So I think it's great.
Very curious to see how this shakes out in the end, and how the billing all works and how the negotiation works. And, you know, it's pretty cool. I mean, we can do way too much processing at the edge is all I've learned here. But that's great.
[00:07:04] Speaker C: No, I think this is interesting and seeing also how the bots kind of negotiate pricing, I'm picturing like a spot market in the future.
Like I only want to pay for X quantity of data, you know, So I tell Claude, hey, I want to spend up to $5 on research.
You know, it knows to collect 100 websites, so it's $0.05 per website, and it'll kind of decide, okay, I can get more based on that or less based on that, and these are more credible resources. So I feel like it's almost going to end up being this whole algorithm behind the scenes of: here's the amount of money I want to spend on this, and it figures out which sources are going to be the best content for you. So it's going to be like a hybrid of agents and Google kind of figuring out where to get that data for you. I mean, it's great to see a whole new actual income stream pop up from this. But it's also terrifying that you now have new ways to burn your capital by accidentally telling things, oh yeah, it's 5 cents, when really it meant 5 cents per website, not 5 cents total. So while one person makes money, someone's gonna lose money.
[00:08:09] Speaker A: Yeah. And, you know, there's just the risk of introducing hallucinations because it doesn't have the full data set, right, as well. Because the first way to get AI to give you a wrong answer is to give it some sort of ambiguous void to fill. And it will, very gladly.
[00:08:26] Speaker C: I've done that a few times, it's fine.
That's why there's still a human in the loop sometimes.
[00:08:31] Speaker A: Indeed.
For now. For now.
Moving on to cloud tools. Introducing an open-source OpenAI Terraform provider: mkdev released an open-source Terraform provider for OpenAI that enables infrastructure-as-code management of OpenAI resources, eliminating the need for manual ClickOps configuration (yay) and ensuring consistent security and productivity across projects.
[00:08:59] Speaker C: Yeah, sorry, I had to fill that in for you.
[00:09:01] Speaker A: Thank you. The new provider supports both the OpenAI administration APIs for managing projects, service accounts, and user permissions, as well as the platform APIs that allow developers to integrate generative AI capabilities directly into their infrastructure deployments.
A unique capability demonstrated is vibe coding, where developers can use Terraform to generate application code via GPT-4, create images with DALL-E, and automatically deploy the results to AWS Lambda, essentially building and deploying AI-generated applications in a single Terraform run.
The provider requires two separate API keys, admin and standard, and handles OpenAI's API limitations cleverly, such as tracking and restoring rate limits to default states, since there's no API endpoint for deletion. This tool enables platform engineering teams to create self-service modules where non-developers can go from an idea to a deployed application using prompts, all while maintaining compliance and security through existing Terraform infrastructure.
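For a sense of what the provider wraps, here's a minimal Python sketch of the two API planes it talks to. The organization endpoint follows OpenAI's administration API, but treat the exact paths and request shapes as assumptions and check the current API reference.

```python
import os
import requests

ADMIN_KEY = os.environ["OPENAI_ADMIN_KEY"]  # org-level admin key
PROJECT_KEY = os.environ["OPENAI_API_KEY"]  # standard project key

# Admin plane: create a project, roughly what the provider's project
# resource would manage under the hood.
resp = requests.post(
    "https://api.openai.com/v1/organization/projects",
    headers={"Authorization": f"Bearer {ADMIN_KEY}"},
    json={"name": "terraform-managed-demo"},
)
resp.raise_for_status()
print("created project:", resp.json()["id"])

# Data plane: an ordinary completion call with the standard key, the
# kind of step the vibe-coding workflow chains into a deployment.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {PROJECT_KEY}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Say hello."}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```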
[00:10:04] Speaker C: I really love how this has slowly grown over the years. Having this be a true Terraform provider means a platform engineering team, or anyone, can now actually manage these systems. Where before it was clicking, copying, giving people access, uploading that secret to a secret vault in whatever system you want; now that end-to-end workflow can be automated.
It also, to me, shows that these platforms, whether it's Claude, OpenAI, et cetera, are now really selling to the enterprises. They've had their enterprise SSO, their let's-set-the-policies and everything else. But this to me is really that next-level piece to say: hey, we're there, we are an enterprise tool, we are going to let your security, your compliance, your infrastructure, your platform engineering team, whatever you want to call them, set things up securely and logically so you're able to continue to scale and leverage these services.
[00:11:01] Speaker A: It's, I mean it's true.
It's just, the funny thing is, when I try to imagine the run-through of this, the whole end-to-end flow of resources, you're right, this is enterprise targeting, and it's definitely meant to keep in line with other compliance and procedure steps. But it's also funny to me, because anyone who's doing vibe coding, I just don't think they're going to go through this whole process to get the resources deployed. Right? I could be wrong. And it is interesting to see the infrastructure deployed alongside the specific prompt engagements and other things that you put into AI-powered apps. And so this is one of those things, you know, this old man in the cloud doesn't quite understand it.
So but maybe I'll use it.
[00:11:51] Speaker C: I think it's less for the vibe coding in my opinion and more for let's actually get this into enterprise world.
I think the vibe coding is nice to have, and maybe you set up some sort of system as part of your onboarding, where when somebody starts, it generates their user and puts the password in 1Password, Key Vault, wherever you're storing your secrets for people. So you can build that out for the end user, because, as we've talked about here, you can vibe code a full bot.
So at this point you can get to the point where developers can vibe code stuff they've never done, and it's probably pretty good quality, especially if you're actually going through a peer review process and following your proper SDLC. Clearly you can tell I'm in compliance hell right now at my day job. But you can get to that point, and it kind of enables that end-to-end experience of onboarding, offboarding, et cetera, even though, as they said, they don't have the delete API.
So you almost get there. Maybe you delete the keys or you do a short lived key and you put auto rotation. I don't know what you do at that point, but I feel like this is more of the enterprise play than Justin does a thing.
[00:13:02] Speaker A: Yeah, for sure. I don't know, in any kind of AI POC experiment that I've done, even deploying the app-level infrastructure, I don't think I've gone this far. Right? It's an extra development step you go into when you get into deploying stuff into the cloud versus just playing with it locally.
So it, you know, like this is definitely something that I'll have to try out.
[00:13:28] Speaker C: Yeah, I'm curious. I assume you have to have, like, the enterprise tier, or probably above the teams tier, in order to integrate.
[00:13:35] Speaker A: So I mean, I think you can just give it OpenAI money.
[00:13:40] Speaker C: Well, that's also one way to think about it.
[00:13:44] Speaker A: Because the two separate API keys are tied to your account, and so I assume that there's just separate infrastructure. But since I don't actually have an OpenAI account, I don't know if the free account allows the same management for projects and service accounts that you would need to make this happen.
[00:14:00] Speaker C: Yeah.
Into the world of AWS. Amazon FSx for OpenZFS now supports Amazon S3 access without any data movement.
I'm not going to try to reset that again without messing it up.
[00:14:14] Speaker A: I was just about to say, like, Amazon FSx for OpenZFS has got to be one of the hardest terms that we have to pronounce on this whole Cloud Pod thing.
[00:14:22] Speaker C: What was the one you guys used to screw Peter with? It was Amazon DocumentDB (with MongoDB compatibility).
[00:14:29] Speaker A: Yes, absolutely.
Yeah.
And funnily enough, we've made fun of them enough where we can all do it seamlessly now, which is, I know, from memory. Yeah.
[00:14:39] Speaker C: Amazon FSx for OpenZFS now allows direct S3 API access to files through S3 access points, without copying or moving data, enabling AWS AI and ML services (because what else would you use these things for), like Bedrock and SageMaker, that expect S3 as their data source. Organizations can attach hundreds of S3 access points to a single FSx file system, with granular IAM permissions per access point to meet your compliance and security needs, while maintaining the existing NFS access and file system capabilities.
This feature delivers first-byte latency in the tens of milliseconds, which is definitely needed when you're training models, with performance scaling based on FSx provisioned throughput. Because you want to burn money: customers pay for both FSx plus the S3 requests and data transfer costs, because you thought anything in the cloud was going to be free?
Real-world applications include RAG workflows with Bedrock knowledge bases, training ML models with SageMaker, and running analytics with Athena and Glue directly against FSx-stored enterprise file data. Currently available in nine regions.
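In practice, the attach step happens on the FSx/S3 side, and then the plain S3 API works against the file system. Here's a boto3 sketch, assuming an access point has already been attached; the alias below is a made-up placeholder.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Hypothetical alias of an S3 access point attached to the FSx for
# OpenZFS file system; usable wherever a bucket name is expected.
alias = "fsx-demo-ap-xxxxxxxx-s3alias"

# Files on the FSx volume appear as S3 objects; nothing was copied.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=alias):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])

# Read one file through the S3 API, e.g. to feed Bedrock or SageMaker.
body = s3.get_object(Bucket=alias, Key="datasets/train.csv")["Body"]
print(body.read()[:100])
```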
[00:15:56] Speaker A: I love this, but I hate the write-up, right? I totally get the need for this, because Amazon long ago made S3 sort of the cornerstone of everything data within their infrastructure, and then everything in the last 20 years is just really an exposure of that, to some level, to consumers.
And then you know, newer technologies like Amazon FSX are sort of like we would like to get our data into the machine.
[00:16:27] Speaker C: It's slowly becoming the "if you support S3, you've made it," like SQL databases and NoSQL databases before it. I feel like that's where S3 is at this point. If any service wants to say they've made it, they somehow interact directly with S3.
[00:16:41] Speaker A: They have to be S3 API compatible, for sure. Yeah, it is kind of cool, and it is funny, they're definitely touting up the compliance features of this. I noticed how heavy this was on access points and the IAM restrictions, which in practice is really difficult to support, but it's good.
I like the idea: you grant API access with a certain level of permissions, but then you can tailor that down via individual permissions per access point. And especially with AI and machine learning workloads, it makes a lot of sense, because you can control access to different areas of the data set.
But yeah, I mean, I think this will be useful. I think people are going to love it, and I love that they call out enterprise systems for needing the file system.
It's almost like a dig for not supporting object storage.
[00:17:34] Speaker C: Yeah, and every time they release one of these, I'm always like (and we briefly talked about this in the pre-show), you know how a lot of these things are like the AWS Storage Gateway type features? Which is: how do we just expose this data in a consumable way that current systems can use, to help onboarding.
So if you already have a system that knows how to read ZFS, here's an easy way to get to that S3 data, versus rearchitecting your system and rewriting part of it.
[00:18:08] Speaker A: I swear, every time we talk about S3 Storage Gateway, I think you're the only one who used it.
[00:18:14] Speaker C: Fun fact: I've only used it once. I just wrote some exam questions about it; it was a good tool to help people learn how to do AWS. I don't think they've actually updated it. I wonder if this is like CloudSearch, where it technically still exists but nobody actually uses it, or...
[00:18:31] Speaker A: Or it's this: they just rebranded it, you know? Like it's the same thing, they're just calling it something else.
[00:18:36] Speaker C: I wonder if they deprecated it. Hold on, I'm curious. What's new?
[00:18:43] Speaker A: I mean, it's definitely... I know it was definitely a lot more useful when people were trying to figure out how to get their data into the cloud, how to start using cloud-native principles and access. It definitely had its time and place.
[00:18:58] Speaker C: Yeah, it's definitely.
[00:18:59] Speaker A: Yeah.
[00:19:00] Speaker C: Okay, so the last one, two, three, four, five What's New posts from 2023 through 2025 for AWS Storage Gateway are just GA announcements in regions: Israel (Tel Aviv), Canada West, Mexico (Central), and Asia Pacific (Thailand).
[00:19:19] Speaker A: But if they're rolling it out to new regions, they are not deprecating that thing.
[00:19:23] Speaker C: So here's definitely the last real update of it.
2022-12-16: introduced Terraform modules for Amazon S3 File Gateway. 2022-12-27: the AWS Storage Gateway management console simplifies file share creation for Amazon S3 File Gateway.
[00:19:48] Speaker A: To be fair, name a feature that's on your wish list for this service.
Exactly.
[00:19:55] Speaker C: Snowmobile capability. Even though they deprecated Snowmobile.
Fine. Yeah.
[00:20:01] Speaker A: Well, I mean, it just.
[00:20:02] Speaker C: It.
[00:20:02] Speaker A: It's one of those things. It does what you need right there.
[00:20:05] Speaker C: Right.
[00:20:05] Speaker A: There's no bells and whistles to it. Like if you need it, you need it and you can use it, it's great.
[00:20:10] Speaker C: But so to be fair, I say that about S3 and then they release a new feature.
[00:20:14] Speaker A: I'm like, that is a fair point. Yeah.
[00:20:16] Speaker C: I never really thought I needed single-AZ, you know; I didn't know I needed that, but now I know I need it. Thank you, Amazon, for telling me how I want to burn more money and how I want to play with this.
[00:20:28] Speaker A: Yeah, definitely.
All right. My favorite: Amazon EC2 new instance time.
The C8gn instances, powered by AWS Graviton4, offer up to 600 gigabits per second of network bandwidth.
That's a lot.
The C8gn instances powered by Graviton4 deliver the highest network bandwidth among EC2 network-optimized instances, and they offer over 30% better compute performance than the previous C7gn instances, with up to 192 vCPUs and 384 gigabytes of memory.
The new 6th generation AWS Nitro Card enables the huge amount of bandwidth, making C8gn ideal for network-intensive workloads like virtual firewalls, load balancers, DDoS appliances, and other tightly coupled cluster computing.
This positions AWS ahead of competitors in the network performance for specialized workloads.
The new instances maintain similar vCPU-to-memory ratios to the C7gn instances, simplifying migration for existing customers. Available initially in only the US East and US West regions, with the standard purchasing options including On-Demand, Savings Plans, and Spot instances. The timing aligns with growing demand for high-bandwidth applications in security, analytics, and distributed computing; organizations running network appliances or data-intensive workloads can consolidate infrastructure onto fewer, more powerful instances. How cloud-friendly. Cost considerations remain important: while AWS hasn't disclosed pricing, the 3x bandwidth increase over the C7gn suggests a premium pricing tier. Customers should evaluate whether their workloads can fully utilize the 600 Gbps, not just be told that by the team who says they need it, and make sure they can justify the potential cost increases.
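Before anyone commits to the premium tier, it's easy to check what the family actually offers where you run. A quick boto3 sketch (assuming c8gn has rolled out to your region; otherwise the filter simply returns nothing):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Wildcard filter over the whole c8gn family.
resp = ec2.describe_instance_types(
    Filters=[{"Name": "instance-type", "Values": ["c8gn.*"]}]
)
for it in sorted(resp["InstanceTypes"],
                 key=lambda t: t["VCpuInfo"]["DefaultVCpus"]):
    print(f'{it["InstanceType"]:>16}  '
          f'{it["VCpuInfo"]["DefaultVCpus"]:>3} vCPU  '
          f'{it["MemoryInfo"]["SizeInMiB"] // 1024:>4} GiB  '
          f'{it["NetworkInfo"]["NetworkPerformance"]}')
```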
I feel like you know, with the inmates running the asylum, there's a lot more snarky reading throughout.
[00:22:31] Speaker C: There's a lot more comments that we throw in there.
No, I mean, look, it's really cool. They're getting the bandwidth higher, and it's directly exposed to the end consumer.
If you are running at this bandwidth, one, I would love to understand what you're doing besides inference and training models.
But two, I'm just jealous.
I feel like Azure doesn't have a good Graviton equivalent yet, and even when they do, if you're running a Windows-based workload, you can't even leverage them yet.
So we were talking in the pre-show about whether we should keep this story, and I was like, I really am just jealous of this announcement.
[00:23:11] Speaker A: So there's just no ARM processing options in Azure?
[00:23:15] Speaker C: They have some, the Cobalt series, but they're not 1000% there, and I don't know if they're fully GA. I think they are at this point, but I can't leverage them in a lot of places, because the Windows workloads don't work; you can't run Windows Server on Arm on Azure, right? Really?
Or we're not on a supported version. There's a nuance there I haven't dug into deeply.
[00:23:48] Speaker A: Never tried to. Yeah, running Windows on ARM seems like a poor plan.
[00:23:52] Speaker C: I mean, I've done it before. I have an old laptop, what is it, 10 years old, that has Arm.
I believe so. You know, it works, but I haven't run a server on it. So look, it's great that they're still making the improvements, and I like the fact that they're trying to specialize some of these things out. Clearly this is a way for them to test their new Nitro card that you mentioned in a specific workload; they'll probably work out the kinks of it first before they GA it across everywhere.
[00:24:28] Speaker A: I mean, I think this is one of the advantages that Amazon has over the other cloud providers: how strong their Nitro layer is. The only reason they can offer this is because of how powerful Nitro is and just how much R&D they've put into it.
It's almost like R&D on the running of the customer workloads, right? It's the service on the service on the service.
It's very impressive.
[00:24:56] Speaker C: It's the platform. Their platform's just stronger than a lot of the others, and the Nitro card setup and the Gravitons that they've custom built are all things that are just making things better and easier for their customers.
Speaking of things that, if you're on a niche use case, you...
[00:25:16] Speaker A: Need, this is not niche.
[00:25:19] Speaker C: Strongly consistent reads. Really?
[00:25:21] Speaker A: How dare you.
[00:25:23] Speaker C: We should probably tell everyone what we're talking about.
DynamoDB now supports multi-region strongly consistent reads, enabling a zero RPO (recovery point objective) for mission-critical apps like payment processing and financial services that need guaranteed access to the latest data across all regions.
MRSC (multi-region strongly consistent; made up the expansion on the fly, so I'm really hoping it's correct) requires three AWS regions, configured with either three full replicas or two replicas plus a witness node (because that's the way clustering works) that stores the change data, reducing costs while maintaining resilience. Available in nine regions currently, so you have to be in at least a third of them. Applications can enable strong consistency by setting ConsistentRead = true in their API calls, allowing developers to choose between eventually consistent (which is better for performance) and strongly consistent (which is better for mission-critical apps) on a per-request basis. Kind of like going to your read replica or your write replica.
Pricing follows the existing global tables structure, as this is just at the API level for customers. The feature addresses the gap between DynamoDB's existing multi-region architecture and the needs of financial services and payment processing that require immediate consistency across regions during a rare regional failure.
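Since the whole point is that the choice is per call, here's a minimal boto3 sketch of what that looks like; the table and key names are made up.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Strongly consistent read: the payment-processing path.
item = dynamodb.get_item(
    TableName="payments",
    Key={"txn_id": {"S": "txn-12345"}},
    ConsistentRead=True,  # the same flag described above
)

# Eventually consistent read (the default): fine for display metadata
# where slightly stale data is acceptable and latency matters more.
meta = dynamodb.get_item(
    TableName="payments",
    Key={"txn_id": {"S": "txn-12345"}},
)
```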
[00:26:50] Speaker A: So maybe I've just worked in financial services for too long, but this has always been a ginormous problem that keeps a truly active-active workload from being a thing.
Because there's always that one naysayer. It's like, but, but, but what happens when this... And it's just like...
But when it's financial data, like if it was my $4 billion transaction or whatever it is, because that's the size of my transactions, at least on average, I would be mad if that got lost. So I get it.
[00:27:19] Speaker C: I need to switch jobs to come work for you guys. That's what I just learned, if your average transaction size is 4 million, or...
[00:27:26] Speaker A: We've all learned that, or I'm a liar. One of those two.
[00:27:30] Speaker C: But I look at it on the other side, where, yes, this is definitely a useful feature; definitely something that I can see many use cases for: healthcare data, financial services, that high criticality of consistency.
But also, S3 only became strongly consistent, I feel like, a couple of years ago, like during COVID. Prior to that it was eventually consistent.
[00:27:55] Speaker A: I thought it was more recent than that.
[00:27:57] Speaker C: Sorry, post Covid. Yeah, that's what I meant.
[00:28:00] Speaker A: Yeah.
[00:28:01] Speaker C: Because that's the way I timeline stuff at this point in my life.
[00:28:04] Speaker A: I think that. Yeah, me too.
[00:28:06] Speaker C: Oh yeah.
Deep dive on S3 consistency: All Things Distributed, 2021.
[00:28:13] Speaker A: And you know the funny part about this? It's somehow related to that, because it's DynamoDB and global tables, which, somehow, under the covers, deep down, is just S3, because that's how Amazon works.
[00:28:26] Speaker C: See prior conversations.
But look, I get the need for it. It's nice that they did it not at the deployment of the infrastructure, but at the API call, so as a developer you can say these calls are more important than those calls. And hopefully you just don't set everything to ConsistentRead = true, because it can cause delays.
[00:28:52] Speaker A: Well, hopefully this is priced, you know, where that's...
[00:28:56] Speaker C: It's not priced any different because you're just paying for the replicas at that point. And it's just how it responds.
[00:29:01] Speaker A: Yeah.
[00:29:02] Speaker C: So you have a delay on the response, was my understanding from reading the blog post.
[00:29:07] Speaker A: I do. Yeah. No, I do think this is definitely something that, for performance reasons if not cost reasons, you should tailor. And it's true, not every request needs to be strongly consistent. So the fact that you can enable this per call, I love this.
I hadn't really thought of that concept before reading this. And so it's kind of neat that, on your same DynamoDB endpoint, you can say this one needs to be strongly consistent, and this other one, who cares, it's just metadata for a name or something.
[00:29:36] Speaker C: Yeah, super cool because we always trust developers to properly identify where stuff goes.
[00:29:43] Speaker A: Hey, I've made a career out of blaming developers, and now I've turned to developing more software than running it.
Yeah, we should put the control in the hands of the developers. It's a good plan. I see nothing wrong.
[00:29:59] Speaker C: Did you just have, like, an involuntary shiver thinking of the SRE on-calls that you have to deal with?
[00:30:04] Speaker A: I think it might have been like a personality split for a quick second.
[00:30:07] Speaker C: Okay. Yeah, that makes more sense.
On to the world of GCP. Google released their 2025 environmental report. Google achieved a 12% reduction in data center energy emissions despite a 27% increase in electricity demand, demonstrating successful decoupling of operational growth from carbon emissions through their 25 clean energy projects that added 2.5 gigawatts to their grid capacity. The company's data centers now operate at 84% less overhead energy than the industry standard, while their seventh-generation Ironwood TPUs are nearly 30x more energy efficient than their first Cloud TPUs from 2018 (I really hope so), positioning GCP as a leader in energy-efficient AI infrastructure. Google's AI-powered products, including Nest thermostats, Solar API, and fuel-efficient routing in Maps, helped customers reduce their own emissions by nearly 26 million metric tons of CO2 equivalent in 2024, equivalent to removing the energy use of 3.5 million homes for the year. The company is investing in next-generation energy solutions, including advanced nuclear partnerships (as we talked about, who doesn't want a nuclear reactor in their backyard?)
and geothermal projects with Fervo, to address the energy demands of AI workloads and ensure clean power for all future data centers. While data center emissions decreased, total supply chain emissions increased 11% year over year. So, if you're looking at tier 1, 2, or 3 of your AI... sorry, of your green program, this highlights challenges in regions like Asia Pacific, where clean energy infrastructure remains limited, and the need for broader ecosystem transformation beyond Google's direct operations.
[00:32:07] Speaker A: I always really love these environmental reports and stuff. I know it's a whole bunch of how-to-lie-with-numbers and how to make them look good in a sales piece. But the reality is that we have to keep attention on these things, and any decrease is better than no decrease. And so while I don't believe any of these numbers, I do like that it's a selling point and that people can continue looking at this. I do think it's an important thing that all the cloud providers are going to have to do, which is figure out how to do more with less power, because density is going to be a problem. There's always going to be a problem building new nuclear plants, and there's only so much existing power infrastructure that we can tax. So I do think this is great. I do think focusing on features that tout environmental concerns, even if you're offsetting one business unit with another, is worthwhile. Like, I use the Google Maps environmental route, because I live in California and I was raised by hippies.
So cool. I like it.
[00:33:16] Speaker C: Yeah. And you're also seeing more and more, particularly in the UK and the EU, where you are required, even as a sub-processor or a vendor, to provide some of your data to companies to show what your green initiative is.
And leveraging things like Graviton (come on, Azure, pick up your game) can help reduce it, because most people don't need that high-end level for a lot of workloads. A simple Graviton processor to handle web requests or backend processing, where it takes an extra two minutes but is 20% more efficient and saves 5 or 10% of the money, is going to be more beneficial for companies.
What to me was interesting about this was when they talked about the overall supply chain increase, that tier 1, 2, and 3 level of the supply chain, which means they're losing on transportation and a lot of the other things needed to build their data centers. And maybe this is just the initial upfront cost of building all these data centers in the regions.
So it's going to depend also on how you kind of, you know, look at your numbers as you said, how you play your numbers.
[00:34:28] Speaker B: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one- or three-year AWS savings plan with a commitment as short as 30 days.
If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace.
[00:35:07] Speaker A: Google announces Gemini CLI, your open-source AI agent. Google open-sourced an AI agent that brings Gemini 2.5 Pro directly to your terminal, with 60 requests per minute and 1,000 daily requests free for developers using a personal Google account.
The tool integrates with Gemini Code Assist across free, Standard, and Enterprise plans, providing AI-powered coding assistance in both VS Code and the command line, with a 1 million token context window built in. Capabilities include Google Search grounding for real-time context, Model Context Protocol support for extensibility, and automation features for script integration, positioning it as a versatile utility beyond just coding tasks.
The Apache 2.0 open-source license allows developers to inspect, modify, and contribute to the code base, while supporting custom prompts and team configurations through GEMINI.md system prompts.
Professional developers requiring multiple simultaneous agents or specific models can use Google AI Studio or Vertex AI for usage based billing, offering flexibility between free and enterprise development options.
[00:36:18] Speaker C: I love the video that they put in the actual blog post. I don't know if you've seen it.
[00:36:22] Speaker A: Yeah, I haven't.
[00:36:23] Speaker C: Oh, it's like a cat in various places. It starts with it on a plane, and then over Sydney, and then it's "make me a video," and now it's in various places. I'm definitely not rewatching it as we're talking to get the gist of it. But look, I like CLI tools. Somewhere, when I'm not dealing with security, compliance, and other things in my life, I do still live in the shell. My wife calls it the black box. And "why are things red all the time?" And my response is: because I'm bad at my job.
So I like that we're adding things here.
You know, Claude Code, same kind of feature, same thing. I still like the concept of it. Sadly, I haven't played enough with them, but it's nice to see that they are going towards these people, the people that don't want to live in the UI, and write these things every day.
[00:37:15] Speaker A: Yeah, I agree. These aren't quite in the terminal, which is what always bothers me, right? Neither Claude Code nor Gemini CLI. I've played around with both now. They sort of take over a terminal, and then you're interacting with it a lot like a desktop app or the browser from that point.
And so it's kind of good, but it's not quite what I want. And I found that the IDE integration for both of those tools is way more powerful than the actual CLI tool. But I think it's a matter of preference. I know there's people that are really into the terminal thing; they don't want to integrate, the same people that live and die by vi.
So they're already subject to abuse and like it that way.
So, you know, this is great. I do like how much they're offering, and that it's something you can get across every payment plan, which is nice. It keeps it available to everyone.
[00:38:18] Speaker C: Yeah, I mean, I think it's a good feature. I didn't realize how much it takes over your console, but now that, like, I'm looking at the doc, I'm like, oh, yeah, I see it now. Yeah, it pretty much just takes over the entire thing.
[00:38:28] Speaker A: It's just a... yeah, it's just an application window at that point.
[00:38:32] Speaker C: Yeah, I feel like it's just like a good old Lynx web browser: we've just taken over, and we run your thing.
[00:38:37] Speaker A: And it's such new technology. I don't know if it's just because I started using it one way, in the IDE, that I can't adapt to this new way. I know Justin has definitely used the Claude Code CLI and gone kind of the other way with it. And so we both kind of think we're doing AI wrong, which is funny. But I think it's just a matter of personal preference.
[00:39:00] Speaker C: I weirdly say I don't understand how people use Linux as a desktop. I learned it as servers, I learned it as command line, I learned it with no UI. Every time I see, like, Ubuntu or anything else, I'm like, I don't understand.
I have to click to do something? Just give me the shell, let me go do it there. I don't understand.
[00:39:19] Speaker A: I mean, maybe I'm a masochist, but I had a Linux desktop forever that I did not use; I think it was a GNOME BSD box or whatever. I just SSHed to the thing forever, and that was it. I didn't even know it had a UI forever.
[00:39:36] Speaker C: I still remember configuring X just to get your screens working. And I was like, no, I'm good, just give me the command line. I'll learn how to do this.
[00:39:44] Speaker A: Yes, Way better.
[00:39:47] Speaker C: Well, when you're at that time of the year when you have to deal with audits, you should audit smarter: introducing recommended AI controls frameworks. We really should do this more often. Yeah, the snarkiness level goes up.
[00:39:59] Speaker A: It really does.
[00:40:00] Speaker C: But we have to kind of prep a little bit more so you know.
[00:40:03] Speaker A: That will never happen. As we've. As we've demonstrated over lots of history.
[00:40:07] Speaker C: Yes.
Google Cloud launches the recommended AI Controls framework in Audit Manager, providing automated compliance assessment for generative AI workloads based on the NIST AI Risk Management Framework and Cyber Risk Institute standards. If you are in the world of compliance, you've definitely heard about these.
This addresses the growing challenge of proving AI systems comply with internal policies and regulations as organizations continue to grow their AI agents and automation. The framework automatically collects evidence across Vertex AI and supporting services like Cloud Storage, IAM, and VPC networks, replacing manual audit checks (because everybody loves doing things manually) with continuous monitoring, which you technically have to do as part of your audit frameworks anyway. Organizations can schedule regular assessments and generate one-click compliance reports (that definitely are always correct) with direct links to the collected evidence. Key controls include disabling root accounts on Vertex AI (so, doing your job correctly), enforcing customer-managed encryption keys for data protection (again, hopefully doing your job correctly), implementing vulnerability scanning through artifact analysis, and restricting resource and service usage based on environment sensitivity. The framework clearly (I don't know why the word "clearly" is in there) delineates control responsibility between customers and the platform under the Google Shared Fate model. Didn't know it was called the Shared Fate model.
[00:41:44] Speaker A: Oh, it's my favorite part about Google, the Shared Fate.
[00:41:49] Speaker C: Is it the same thing as the shared responsibility model of Azure and AWS?
[00:41:52] Speaker A: It is the same thing except for the way they take responsibility, yeah.
[00:41:56] Speaker C: Oh, is that like the insurance policy Google will let you get? Hence Shared Fate. Got it.
[00:42:01] Speaker A: Yeah.
[00:42:02] Speaker C: This positions Google Cloud competitively against AWS and Azure by offering AI-specific compliance automation while their solutions remain more generic. The integration with Security Command Center (the tool that Ryan likes) provides a unified view of AI security posture alongside traditional cloud workloads, and is now available in the console.
Because you like Click Ops?
[00:42:30] Speaker A: Well, for compliance reporting. I do, yeah.
[00:42:33] Speaker C: I mean I'm not gonna, I'm not gonna deny that I'm not gonna try to like CLI my compliance report no, that feels like a bad life choice even for us.
[00:42:40] Speaker A: Yeah, exactly.
Yeah. I mean, I don't know how they left the two security guys alone; that's ridiculous. But this is one of my more favorite releases lately, just because AI is this hotbed item and no one knows how to secure it. It's all just open-ended questions, and really just a whole lot of movement to try to look good and not have egg on your face, because you don't really know what the AI workloads are across your business.
And so I do like that this is rolled into the compliance manager and Security Command Center, because that means it's centralized. It means it's hooked up with the org layer, which means I can turn it on and get the glaring red reports, or, magically, it's all green somehow.
[00:43:26] Speaker C: Which means you haven't configured it properly.
[00:43:27] Speaker A: It just. No, it just means I have so many rules that they can't actually do any work.
[00:43:33] Speaker C: This is why people don't like security departments.
[00:43:36] Speaker A: Well, yeah. And so I do think these are great, because it does allow sort of that visibility into what the AI workloads are, and it even got some areas I hadn't thought of, like the root access to the Vertex AI notebooks. I'm like, I've never thought of that; that is sort of a thing. So I'm like, oh cool, I'll turn this on.
[00:43:59] Speaker C: Your day job hates that you do the podcast. You learn about all these new tools.
[00:44:03] Speaker A: I mean, the reason why I do this podcast is so that I'm more effective at my day job. It is that way around, and always has been.
Yeah.
[00:44:13] Speaker C: I mean I love the fact that they're integrating it. Like you said, it's a great thing.
It's interesting to me; I understand why they chose NIST, but I've definitely had people ask me if we're complying with ISO 42001, I think it was. I think that's the AI framework standard.
So I'm curious as to why they didn't include that on day one, because I feel like that's been a little bit more prevalent from a customer perspective, versus NIST from a risk management perspective. So it depends on which side of compliance you're on.
[00:44:47] Speaker A: Yeah, I mean, I can tell you. Sorry, audience. It's mostly because of the way that ISO 42-something-something-something maps its compliance controls: it would be very difficult to put it in a tool. It's a lot of process, right? And so I think the reason why they're using the NIST and CRI standards is because you can sort of apply against them and be like: yes, no, got it.
[00:45:15] Speaker C: Yeah. I mean it's a great feature. I'm waiting for the other clouds to add these things because it's inevitably going to happen and honestly I'm a little surprised it's taking this long.
[00:45:24] Speaker A: Yeah, I mean, Google continues to be the most user-friendly on AI, both on the security side and, in my viewpoint (maybe I'm biased, because this is my daily cloud driver today), on the offering itself. Vertex AI just feels intuitive and easier for me to adopt, versus trying to log into Bedrock, where I feel like a lost babe in the woods.
[00:45:48] Speaker C: That's because it doesn't feel like AWS, still, to me. It's like its own thing.
It's like Macie when it first was released: it was like a tool they bought and never set up properly. That's kind of how I feel when I go in there. It just doesn't have that same feel as everything else.
[00:46:03] Speaker A: Well, moving on to Azure, your favorite nut.
[00:46:06] Speaker C: Love it. Makes me happy.
[00:46:08] Speaker A: So, now in public preview: Azure Monitor ingestion monitoring with Azure Monitor workspace.
Azure Monitor workspace now provides visibility into Prometheus metrics ingestion, helping customers identify and troubleshoot issues when Azure Managed Prometheus sends metrics to their workspace, or doesn't.
This feature addresses a common operational blind spot where metrics fail to ingest but customers lack any visibility into why. Similar to AWS CloudWatch Metrics Insights, but specifically for Prometheus workloads.
The platform's metrics integration means ingestion errors appear alongside other Azure Monitor metrics, enabling unified monitoring and alerting without additional tooling or configuration.
Target customers include organizations running Kubernetes workloads (so, everyone) who need enterprise-grade observability and troubleshooting capabilities for their metrics pipeline. The feature comes at no additional cost beyond standard Azure Monitor workspace charges, making it accessible for teams already invested in Azure's Prometheus ecosystem.
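If you'd rather pull those ingestion health numbers programmatically than eyeball them in the portal, a sketch like the following should be close; the metric names are placeholders (check the workspace's metric picker for the real ones), and the resource ID shape assumes the Azure Monitor workspace resource type.

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Azure Monitor workspace resource ID (placeholders throughout).
workspace_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Monitor/accounts/<amw-name>"
)

# Hypothetical ingestion-health metric names; substitute whatever
# the preview actually exposes on the workspace.
result = client.query_resource(
    workspace_id,
    metric_names=["ActiveTimeSeries", "EventsPerMinuteIngested"],
)
for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.total)
```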
So, I was trying to explain this feature in the pre-show, and I don't think I did a good job.
[00:47:17] Speaker C: So we're going to make you do it live.
[00:47:18] Speaker A: So I'm going to do it again live, and I'm going to fail twice, and somehow be worse the second time; we'll see. So I like these things just because it's so difficult to troubleshoot log ingestion or metric ingestion when you have a lack of data. A lot of these things are insights into the ability to parse data, the ability to ingest data.
If you have field conflicts that are making things get rejected, this is the way you would see that problem highlighted. Usually you find it because your numbers don't make any sense, and then you have to go dig into why, and it's a complete black box; you have no idea why it isn't working. And eventually you try enough things and you stumble across it. Hopefully. Or not.
And so this is definitely something I think is a really nice thing to have, because it is pretty easy to mess up some of these things. Prometheus makes it a lot easier, from what I hear, by standardizing a lot of this. But if you're incorrectly formatting something like a custom metric, I imagine it's just as easy to blow up, just like any other monitoring platform.
[00:48:22] Speaker C: So, okay, well, let me make sure I got this. So if you've messed up your ingestion, Prometheus, CloudWatch, whatever, will just drop that data and won't process it, potentially. And this is really a feature showing: hey, there's a bunch of dropped stuff, potentially go look over there to see why you're dropping stuff.
[00:48:42] Speaker A: Yeah. I mean, I've run into it more with log parsing than I have with metrics. But it'll tell you that it received a thing, and then that insight won't match the metrics that you're viewing. Right? And that could be for any number of reasons: ingest failure, which can happen from too much load, or format problems, or all kinds of things.
This really sort of helps highlight that. So at least you have some area to go look at to figure out what's going on.
[00:49:17] Speaker C: I now get the feature and why it's so important, because I've definitely been in the situation where I'm missing a log because CloudWatch Logs just isn't set up, or Datadog or whatever your tool is isn't set up, and it's not getting that data. And then you're sitting there trying to debug, and you're like, I'm blindsided by this thing, but you don't even know what that thing is yet.
So I definitely see how and why this is useful and it will definitely solve outages quicker.
[00:49:46] Speaker A: Yeah. How to troubleshoot the absence of data is always fun.
[00:49:49] Speaker C: Yeah.
In a tool where data is the primary thing: the Microsoft Fabric extension is now in VS Code. The Microsoft Fabric extension for VS Code now allows developers to create, delete, and rename Fabric items directly in their IDE, which is definitely what you want to do as an enterprise, eliminating context switching between VS Code and the Fabric portal (because they don't match) for basic workspace management tasks. The new tenant-switching capability enables users to manage their workspaces and items across multiple Microsoft tenants from a single VS Code instance, definitely a useful feature (please don't do that), addressing a common pain point for consultants and developers working in multiple organizations. This positions Fabric as a more developer-friendly analytics platform compared to AWS and GCP (because we have to get the data somehow), which typically require separate web management consoles or CLI tools for similar workspace management operations. Segregation of duties? Definitely nothing we care about. The integration targets data engineers and analysts who prefer working in VS Code for their development workflows, particularly those managing multiple Fabric tenants for different customers or projects. Targeted at consultants.
While the extension itself is free as part of VS Code extensions, users should note Fabric is not cheap, and you will still incur Fabric... sorry, Fabric capacity costs based on compute and storage resource consumption.
[00:51:16] Speaker A: I mean, at the risk of repeating myself, it feels like the part of the show where someone has to explain to me what Fabric is again.
But yeah, like, I don't understand.
Like, I understand now, through enough...
[00:51:30] Speaker C: It's your data lake.
[00:51:32] Speaker A: Yeah.
So, but in your IDE, what are you renaming? I don't understand what you would possibly be... like, I'm so confused by all of this, and I'm sure it's useful for Fabric.
[00:51:43] Speaker C: It's creating your workflows and your ETL jobs and things like that. So if you can do that within there, and build your workflows and your data lakes and everything else, and have it processed in different ways and pipe the data, that's what it's used for. This to me is a consultant feature, if you need that ability to switch.
As I was a consultant in a past life: having the ability to do these things is great for consultants. For the average consumer that works for a single company, unless you are a large organization which has multiple tenants and things along those lines, the odds are you're not going to use this, because you would be switching users and roles and things like that, which you just don't do in Azure.
[00:52:29] Speaker A: Yeah, confused as usual with fabric.
[00:52:33] Speaker C: I need a deep dive on it though. I need to get better at it.
[00:52:35] Speaker A: I will say that I'm sure it's useful. I just haven't found a use yet.
[00:52:41] Speaker C: That's because it's not cheap.
[00:52:43] Speaker A: Yeah, that's for sure.
All right, moving on. Now in public preview: organizational templates in Azure Logic Apps. Logic Apps now lets organizations create and share private workflow templates within their tenant, addressing the gap where teams previously had to use public templates or build everything from scratch. This brings Logic Apps closer to AWS Step Functions' reusable workflow patterns, while maintaining enterprise control through Azure RBAC integration.
The new UI eliminates manual packaging by automatically extracting connection parameters and documentation from existing workflows, making template creation accessible to non-developers; a notable improvement over competitors, where creating reusable automation patterns often requires significant technical expertise.
Templates support both test and production publishing modes with full lifecycle management, allowing enterprises to safely experiment with automation patterns before a wider deployment. This is particularly useful for organizations standardizing on specific patterns and enforcing architectural guidelines across teams.
These are first-class Azure resources (but it's not called premium, so how do I know it's first class?). The templates integrate with the existing subscription and role-based access controls, ensuring teams only see the templates they're authorized to use. This addresses a common enterprise concern about sharing internal APIs and business logic without exposing them publicly.
The feature targets enterprises looking to scale their automation efforts by packaging common patterns, like API integrations, data processing workflows, or approval chains, into reusable components, reducing development time from hours to minutes for repetitive integration scenarios.
[00:54:24] Speaker C: I love this. I've used Logic Apps only a few times in my day job, but building step functions, being able to share them across the organization, and having people do a simple (I don't know) Function App to Teams integration (because it's not simple, because it's Microsoft Teams), or anything along those lines. These reusable patterns, connections to Jira, connections to other internal systems, your SRE notification system, and just being able to say, grab this, run it, and be done with it, is so much better than saying, hey, try to grab this Terraform module, and then having people maintain it and update it, because you all know that no one's going to actually do that. So having that ability to put that out there is great. And I feel like they also solved a big problem, which is they have a test and production workflow, where I feel like a lot of these tools just share, and you're like, here you go, I hope I have a good test suite to manage it, which is hard when you're in an internal platform engineering team methodology. So I feel like this is a great feature.
I don't have a good use case for it today, but I really like the concept that they did.
[00:55:44] Speaker A: I mean, I do, you know. Coming from a cloud center of excellence sort of point of view, you have to have reusable components, if nothing else just to provide an example of how to do a thing. If you can reuse them, great. If they can actually be rolled out in versions separately, that's fantastic. But that's always where these things sort of let me down. They turn into the service dashboard, or service desk rather, because if they update the template, is it version controlled? And what if it's integrating with other workflows within mine and they change the format, those types of things? That's where I think Terraform really nailed it with their providers and modules: they built a lot of that dependency management into the ecosystem.
I haven't used Logic Apps in Azure, and I've never used reusable templates in Step Functions.
But I do think it's important. It's just that I always find myself taking the existing resources and making my own version of them.
[00:56:48] Speaker C: That's because you and I like to tweak and modify stuff, but for the average consumer.
[00:56:53] Speaker A: Yeah, I know.
[00:56:53] Speaker C: Yeah, it's good enough.
Now generally available: Azure WAF integrates with Microsoft Security Copilot, to make Ryan happy.
The Azure WAF integration with Microsoft Security Copilot is generally available, covering the two places where WAFs are used: Front Door and Application Gateway. This allows security teams to investigate and respond to web application threats using natural language queries, so they really don't have to know what's going on, all from within the Security Copilot interface. The integration enables security analysts to query WAF logs, analyze attack patterns, and generate responses instantly without switching between multiple tools or writing complex KQL queries. Trust me, you don't want to do that.
[00:57:37] Speaker A: Yes sir.
[00:57:38] Speaker C: This reduces the time needed to investigate a web application security incident from hours to minutes. Microsoft continues to expand Security Copilot's reach across the security portfolio, positioning it as the central hub for security operations. AWS offers similar WAF capabilities but lacks the AI-powered natural language queries, while GCP Cloud Armor requires more manual log analysis.
Target customers include enterprises with complex web applications that need streamlined security operations and reduced alert fatigue. Bingo.
The integration is particularly valuable for organizations already invested in the Security Copilot ecosystem, because it is not cheap.
Speaking of the not-cheap part: pricing follows the Copilot consumption model at $4 per Security Compute Unit (SCU), with no additional charge for the WAF integration itself. Tip: if you're going to use the WAF, also look at Azure DDoS Protection, and keep an eye on SCU consumption when enabling this to make sure your CFO doesn't yell at you.
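For a taste of the hand-written KQL the integration spares you from, here's a rough sketch using the azure-monitor-query SDK. It assumes your WAF diagnostics are routed to a Log Analytics workspace under the AzureDiagnostics table with the Front Door WAF category; the workspace ID is a placeholder, and the column names depend on your diagnostic settings.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Sketch of the manual path Security Copilot replaces: hand-written KQL
# against WAF diagnostic logs in a Log Analytics workspace.
# WORKSPACE_ID is a placeholder; the table and category assume Front Door
# WAF diagnostics routed to AzureDiagnostics -- adjust for your setup.
WORKSPACE_ID = "<log-analytics-workspace-guid>"

KQL = """
AzureDiagnostics
| where Category == "FrontDoorWebApplicationFirewallLog"
| where action_s == "Block"
| summarize blocked = count() by ruleName_s, clientIP_s
| top 20 by blocked desc
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(hours=24))

# Print the top blocking rules and source IPs from the last 24 hours.
for table in result.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```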
[00:58:52] Speaker A: The fact that they put in that last bullet point, I'm like, ooh, yeah. I mean, anything that allows me to query things with natural language and not some specific DSL, I do appreciate, and it's been useful in so many other tools. WAF seems like the best use case, really, because there's so much noise in trying.
[00:59:17] Speaker C: VPC flow logs, anything raw networking related. Look, I made fun of it along the way, but this is phenomenal. If you're already invested, this is a no-brainer to integrate.
[00:59:29] Speaker A: And if you're in the Azure ecosystem, you're already in Security Copilot. There's no way you can avoid it; they force you into it. So if you're going to be sort of pigeonholed into this tool, at least it's all in one place and it makes it easier to use.
So I do really like this. And I like the dig at Amazon and GCP, and I couldn't agree more, because both of those tools are very lacking in the amount of visibility you have to look through your traffic, deduce patterns, and be able to tell what's going on.
I really look forward to more AI in every WAF, just because I think it's the only way we're going to be able to keep up with the amount of noise when you've got seven terabits per second of DDoS attacks going on at any one time. I don't know how I'm supposed to find the one failed request that the developer team assures me is a WAF problem and not some other problem.
[01:00:35] Speaker C: Yeah. What I do find interesting, as I've dealt with this at my day job, is that the Azure Front Door WAF and the App Gateway WAF are two different things, which is why they actually had to call out here that they support both: they're two different setups and configurations. So you can't apply the same rule set, at least as far as I know, from the App Gateway WAF to the Front Door WAF; you essentially need two different rule sets. And then if you're on Premium you can do bot protection; obviously you can't do that if you're on Standard, because why would you want that for Front Door? So it's nice that when they GA'd it, day one, they included everything, rather than piecemeal adding it in the future.
[01:01:16] Speaker A: Yeah, I mean it's definitely something that you want across both.
That is interesting, that they're two different WAFs across your application gateway and your sort of load-balancing front end.
[01:01:27] Speaker C: Yeah, it's a weird nuance we've kind of run into, and it's both good and bad, but it makes it more complicated to set up.
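That nuance shows up at the resource level too: the two WAF flavors are entirely separate Azure resource types. Here's a small sketch, assuming Reader access on a subscription (the subscription ID is a placeholder), that inventories both so you can see how many rule sets you're maintaining twice.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Sketch: count both WAF policy flavors in one subscription. These are the
# two distinct resource types behind Front Door WAF and App Gateway WAF,
# which is why a rule set defined for one can't simply be applied to the other.
SUBSCRIPTION_ID = "<subscription-guid>"  # placeholder

WAF_TYPES = [
    "Microsoft.Network/frontdoorWebApplicationFirewallPolicies",
    "Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies",
]

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for resource_type in WAF_TYPES:
    policies = list(
        client.resources.list(filter=f"resourceType eq '{resource_type}'")
    )
    print(f"{resource_type}: {len(policies)} policies")
    for policy in policies:
        print(f"  {policy.name}")
```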
[01:01:37] Speaker A: Well, continuing... sharp edge, yeah. Continuing on with Azure Front Door enhanced capabilities: Front Door now supports managed certificates for wildcard domains. Azure Front Door now automatically provisions and manages SSL certificates for wildcard domains, eliminating the need to manually update and maintain your own certificates for securing multiple subdomains under a single domain.
This feature brings Azure Front Door to parity with AWS CloudFront and Google Cloud CDN, both of which have had managed wildcard certificates for years, making multi-subdomain deployments simpler for enterprises.
The managed certificate service is available for both Standard and Premium tiers at no additional cost beyond standard Azure Front Door pricing, reducing operational overhead for DevOps teams managing multiple staging, regional, and customer-specific subdomains.
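One subtlety worth remembering with wildcard certs: under standard hostname matching (RFC 6125), the asterisk covers exactly one DNS label, so a wildcard covers direct subdomains but not deeper ones or the apex. A stdlib sketch of that rule, using cloudpod.com purely as an example:

```python
# Sketch of single-label wildcard matching (RFC 6125 style), the rule that
# governs which hostnames a managed wildcard certificate actually covers.
def wildcard_covers(cert_name: str, hostname: str) -> bool:
    cert_labels = cert_name.lower().split(".")
    host_labels = hostname.lower().split(".")
    if len(cert_labels) != len(host_labels):
        return False  # "*" never spans multiple labels
    return all(
        cert_label in ("*", host_label)
        for cert_label, host_label in zip(cert_labels, host_labels)
    )

assert wildcard_covers("*.cloudpod.com", "matt.cloudpod.com")
assert not wildcard_covers("*.cloudpod.com", "a.b.cloudpod.com")  # too deep
assert not wildcard_covers("*.cloudpod.com", "cloudpod.com")  # apex needs its own entry
```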
[01:02:31] Speaker C: This is a great feature. It baffles me that they didn't support this before, and again, I ran into this one, you know, ran full speed into the corner of that table over there, in the past, because I just assumed this was something that was already there. So it's great that they support it, and it's great that they actually support it for both Standard and Premium, because security is something Azure normally charges more for, and in this case it doesn't feel like it was a big lift. I'm trying to understand what the technical limitation was to go from, hey, I support ryan.cloudpod.com, to matt.cloudpod.com, where they both had to be their own different SSL certs and different domains, and it definitely caused issues along the way. So it's great that they got rid of it. I just don't understand why it took so long. Curious, honestly. Probably some dumb limitation they ran into with the way they architected it.
[01:03:28] Speaker A: Yeah, no, I mean it's. This is one of those, you know, announcements where I'm like, how did people live without this?
[01:03:35] Speaker C: Like, because I have, you know, unhappily. Yeah, let me add to that for you.
[01:03:41] Speaker A: I mean, managing SSL certs is a chore, right?
[01:03:45] Speaker C: It's just.
[01:03:45] Speaker A: And you know, to enhance security over time, they're reducing the windows in which they're valid and all kinds of things.
So it's just making it, I mean.
[01:03:56] Speaker C: The App Gateway supports automated SSL certs now, yet there's no Amazon Certificate Manager, ACM, equivalent in Azure, and it upsets me. I had to buy a certificate, and it's a redirect through the Azure portal, I think to GoDaddy, if I remember correctly, and I'm like, why?
So, like, it just upsets me. Yeah.
[01:04:19] Speaker A: And I think it should. I think it should be something that's built in, at least now that everything is sort of forced into TLS encryption these days. I think it's commonplace and should be table stakes for any one of these services.
[01:04:32] Speaker C: Does Google have acm?
[01:04:33] Speaker A: They do.
[01:04:34] Speaker C: Okay. Is it literally, like, GCP ACM?
[01:04:37] Speaker A: Certificate Manager is what it's called. Yeah.
[01:04:43] Speaker C: Just rub it in more guys.
[01:04:44] Speaker A: Got it.
[01:04:50] Speaker C: The Azure Virtual Network Manager IP address management solution has now launched. The Azure Virtual Network Manager IP address management feature, definitely can't say that three times fast, terribly worded, brings centralized IP planning and allocation to complex network environments, addressing a common pain point for network people and for enterprises managing multiple VNets across subnets, domains, and regions.
The feature provides automatic IP address allocation, conflict detection and visual network topology mapping because that's what you always need.
It's similar to AWS VPC IP Address Manager, again, not easy to say, but integrates directly with the Azure Virtual Network Manager service.
This targets large enterprises and managed service providers who struggle with IP address sprawl across hybrid and multi-region deployments, reducing manual tracking and getting rid of all those fun-filled Excel spreadsheets everyone has.
Unlike AWS IPAM, which requires separate configuration, and yes, Microsoft is really going after the other clouds today, Azure's implementation is built into Virtual Network Manager, potentially (keyword) simplifying adoption for Azure customers already using Virtual Network Manager for network governance. Pricing follows the Azure Virtual Network Manager model at $0.02 per managed resource per hour, making it cost effective for organizations already invested in the technology.
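At its core, the conflict detection being announced here is overlap checking across CIDR allocations. A toy sketch with Python's stdlib ipaddress module, using made-up VNet names and ranges, of the check an IPAM runs so your Excel spreadsheet doesn't have to:

```python
from ipaddress import ip_network
from itertools import combinations

# Toy sketch of IPAM-style conflict detection: flag any pair of CIDR
# allocations that overlap. The names and ranges are made-up examples.
allocations = {
    "vnet-prod-eastus": ip_network("10.0.0.0/16"),
    "vnet-prod-westus": ip_network("10.1.0.0/16"),
    "vnet-dev-eastus": ip_network("10.0.128.0/17"),  # oops: sits inside prod-eastus
}

for (name_a, net_a), (name_b, net_b) in combinations(allocations.items(), 2):
    if net_a.overlaps(net_b):
        print(f"CONFLICT: {name_a} ({net_a}) overlaps {name_b} ({net_b})")
```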
[01:06:23] Speaker A: I don't know if it's just because I'm an old sysadmin who grew up in a data center, but I love IPAM solutions.
It's one of those things where IPv4 addresses were just so cheap for so long that people completely abused them, and now that they've become more of a finite resource, and systems have become so distributed, you really need solutions like this to understand how you've allocated your subnets, what kind of usage you have, whether you can repurpose a range without breaking everyone, the whole thing.
And so I really like this. I love that it's built into the existing service, so it's just part of Network Manager. And I assume, never having used Azure, that it would just be a tab or visualization I could see directly in the console, which is cool.
Every other IPAM solution I've used is kind of a chore to set up and maintain over time.
So it's one of those things. There are discovery options, but they're always a little bit lackluster for me.
[01:07:41] Speaker C: It has to be a system that is maintained, that's the other piece, because otherwise it's garbage in, garbage out. And that's the other thing I've seen with a lot of these IPAM systems, whether it's the old on-prem one, the big one...
[01:07:54] Speaker A: Info blocks, Infoblox. Yeah, yeah.
[01:07:56] Speaker C: That I've seen used at a couple places, or AWS's, or now Azure's. If they're not maintained, it just becomes another source of data where you're like, cool, do I trust it or not?
So it's great that they do it. It's not terribly priced but it's not going to be cheap. So you need to do this if you're at scale and you really need it.
[01:08:15] Speaker A: Yeah, yeah. Two cents per resource per hour is not nothing in a large ecosystem.
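To put that in numbers, a quick back-of-envelope at the rate quoted above; the fleet size here is hypothetical:

```python
# Back-of-envelope at the quoted $0.02 per managed resource per hour.
# 500 managed resources is a hypothetical fleet size.
rate_per_resource_hour = 0.02
resources = 500
hours_per_month = 730
monthly_cost = rate_per_resource_hour * resources * hours_per_month
print(f"${monthly_cost:,.0f}/month")  # -> $7,300/month
```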
[01:08:23] Speaker C: And with that we made it Ryan.
[01:08:24] Speaker A: We made it through. Like, the level of...
[01:08:27] Speaker C: Snarkiness is definitely up today, definitely higher.
[01:08:30] Speaker A: Yeah, we should add sentiment analysis to our...
[01:08:34] Speaker C: I don't think so. Yeah, two angry old men yelling at cloud.
[01:08:42] Speaker A: Yeah, well, I look forward to, you know, Justin hearing the show and tut-tutting.
[01:08:50] Speaker C: I don't think I want Justin and Jonathan to listen to it. I think we should just, like, not publish it and just see what they say.
[01:08:55] Speaker A: Yeah, we'll see.
I don't know. Me and Jonathan did that for several years, where we just didn't record when Justin wasn't around. I think it let our fan base down. And by fan base, I mean, I think, my podcast client.
But yeah, no. Congrats, Matt.
[01:09:14] Speaker C: Good job, sir.
[01:09:16] Speaker A: We've proven there's continuity in The Cloud Pod, so now we can survive with 50% of us out of commission.
[01:09:23] Speaker C: So that's why there's four of us and the goal is just three. We still can't achieve that.
[01:09:28] Speaker A: Yep.
Nope.
[01:09:29] Speaker C: All right, I'll talk to you later, Ryan.
[01:09:31] Speaker A: All right, bye, everybody.
[01:09:33] Speaker C: Bye, everyone.
[01:09:36] Speaker B: And that's all for this week in Cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services.
While you're at it, head over to our website, where you can subscribe to our newsletter, join our Slack community, send us your feedback, and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.
[01:09:59] Speaker A: Sa.