[00:00:06] Speaker B: Welcome to the Cloud Pod where the forecast is always cloudy. We talk weekly about all things AWS, GCP and Azure.
[00:00:14] Speaker C: We are your hosts, Justin, Jonathan, Ryan and Matthew.
[00:00:18] Speaker D: Episode 321 recorded for September 9, 2025. The Cloud Pod is in tiers. Trying to understand Azure tiers.
Good evening Matt and Ryan. How you doing? Doing well.
[00:00:30] Speaker A: Ryan's got jokes today.
[00:00:31] Speaker D: I know that was Ryan's title. I think that's one of the few that we've ever chosen that Ryan wrote. So yeah, congrats.
[00:00:37] Speaker C: It might be the second ever.
[00:00:39] Speaker A: Does he go to two hands now or is he still on one hand of the number?
[00:00:42] Speaker C: Oh no, like definitely one hand.
Usually whenever I try to contribute, it just ends up in weird looks because my brain doesn't work right. And sometimes, sometimes my jokes don't land.
[00:00:56] Speaker D: I mean, they land for us. They just don't land for people who don't know you.
That's the problem.
A couple of general news items here to cover this week. First up, Gregor Garcia of FinOps Weekly reached out to me and asked us to share the news about the FinOps Weekly Summit coming up on October 23, 4pm to 8pm Central European Standard Time, I think is what that is. Or 10:00am to 2:00pm Eastern Time for those who speak normal time zones. Which is unfortunate, though, for Ryan, and I mean 7:00am Pacific Time, which is...
[00:01:32] Speaker A: I got you covered, guys.
[00:01:33] Speaker D: Yeah, yeah, I got you guys covered. So this is a great little FinOps conference, a virtual summit. You can join the event. Lots of great speakers here. I saw Corey Quinn's one of the speakers.
There's Peter Crenshaw from, you know, he's in manufacturing. I've seen him talk before, I believe, as well. Flexera is talking, as well as Microsoft, AWS and others. So it looks like a pretty good event, so check that one out if you're in the FinOps space and looking for something to do on October 23. The next one up: Ignite is now open for you to attend in beautiful San Francisco at Moscone Center, November 18th through the 21st. Which I guess means Amazon re:Invent is probably opening for registration any moment.
A little bit before that, there is an optional pre-day on November 17th. They even have a convince-your-manager letter if you're looking for something to do there.
[00:02:26] Speaker A: They all have that.
[00:02:27] Speaker D: They all have that now.
You know, lots of reasons that were silly. I would rewrite this and say you just sent me here because you're replacing me with AI and I can teach you how to do that if you take me to the class at Ignite.
[00:02:40] Speaker A: Or you can teach the AI how to do it if you can send.
[00:02:44] Speaker D: The AI to the conference on our behalf. That could be really helpful.
So that's it for upcoming events that you should be aware of. Other interesting news: there is an issue going around with Cloudflare. Apparently 12 unauthorized TLS certificates for the Cloudflare 1.1.1.1 DNS resolver were issued between February 2024 and August 2025, violating domain control validation requirements and potentially allowing man-in-the-middle attacks on DNS-over-TLS and DNS-over-HTTPS connections.
This highlights vulnerabilities in the certificate authority trust model, where any trusted CA can issue certificates for any domain or IP without proper validation.
Cloudflare failed to detect these certificates for months, despite operating their own certificate transparency monitoring service, because their system wasn't configured to alert on IP address certificates, only domain names. Oops. The certificates have been revoked and no evidence of malicious use was found, but the incident demonstrates why certificate transparency logs are critical infrastructure. Without Fina CA voluntarily logging these test certificates, they might never have been discovered at all.
Organizations should review their root certificate stores and consider removing or distrusting CAs with poor validation practices. DNS client developers should implement certificate transparency validation requirements, similar to modern browsers, to prevent future incidents.
Yeah, the whole certificates-on-IP-addresses thing, I think we talked about when that first came out, like, that's ripe for abuse. And here we go, first incident of it.
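For the curious, the gap being described is easy to sketch: a CT monitor that only pattern-matches DNS names will never flag a certificate whose subject alternative name is an IP address. A minimal illustration in Python; the watchlists and SAN entries here are made up:

```python
import ipaddress

def classify_san(entry: str) -> str:
    """Classify a certificate SAN entry as an IP address or a DNS name."""
    try:
        ipaddress.ip_address(entry)
        return "ip"
    except ValueError:
        return "dns"

def matches_watchlist(sans: list[str], watched_names: set[str],
                      watched_ips: set[str]) -> bool:
    """A monitor that only consults watched_names silently misses IP SANs;
    checking both sets closes the kind of gap that bit Cloudflare here."""
    for entry in sans:
        if classify_san(entry) == "ip":
            if entry in watched_ips:
                return True
        elif entry in watched_names:
            return True
    return False
```

With only a domain watchlist, a cert for `1.1.1.1` sails through; add the IP watchlist and it gets flagged.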
[00:04:14] Speaker A: I just really like how in this they go through and say, look, we messed up, but also you should go review everyone that you don't trust and only keep ours, because we are trusted. Because look at what we just found and look how we fixed it. Don't worry about the issue that just occurred. They just kind of split this blog post a little bit.
[00:04:30] Speaker D: Yeah you know our certificate transparency service, you know that we forgot about IP address certificates, you know, when we designed it, you know, should have protected you but also did not.
[00:04:38] Speaker A: So yeah, don't worry about those details.
[00:04:39] Speaker D: Yeah, minor, minor problem.
[00:04:41] Speaker C: I got stuck trying to figure out how their transparency monitoring service would work, because, like, I guess...
How would you detect the CA without actually trying to validate requests against one of these certs?
[00:04:55] Speaker D: But all right.
I mean, I also did. I also learned today that there's a, you know, certificate...
What do they call this? A DL? All right, let me find again what was the name of it. There's a mailing list that I could join: the certificate transparency mailing list.
[00:05:11] Speaker C: Oh wow.
[00:05:12] Speaker D: Sure. I'm sure that's a riveting mailing list, full of many, many emails that will help me fall asleep. Just in my...
[00:05:21] Speaker C: Head, it's just a whole bunch of, like, domain validation requests.
[00:05:27] Speaker A: Like we used to get from Amazon when you had to do it with the email addresses to validate, before they moved to DNS?
[00:05:33] Speaker D: Yeah.
[00:05:35] Speaker A: My after-show reading tonight, because I had some family stuff come up before, is going to be really fully understanding how the validation works. Now I'm curious, so I'm going to go down a rabbit hole tonight, and there'll be a lot less sleep because of this.
[00:05:48] Speaker D: Yeah, actually, checking out the Radar site on certificate transparency, that's actually fascinating: how many certs are generated, the distributions by TLD. Actually a lot of really interesting data there. It's no executive dashboard 3D, but it's kind of cool. There's definitely some interesting stuff there to check out.
All right, well, I'll go on to AI Is How ML Makes Money. But in some cases you can also lose a lot of money, which a lot of investors found out when Builder.ai collapsed from their $1.5 billion valuation to bankruptcy after the board discovered sales were overstated by 75%. The reported $217 million in revenue in 2024 was actually $51 million, highlighting the risks of AI startup valuations during the current investment boom. The company spent 80% of revenue on marketing rather than product development, using terms like "AI powered" and "machine learning" without substantial AI technology. Their Natasha AI product manager was reportedly assisted by 700 programmers in India rather than being autonomous AI.
Microsoft had invested $30 million and partnered with Builder for cloud storage integration, while other investors included Qatar Investment Authority, SoftBank's DeepCore and Jeffrey Katzenberg, with over $450 million raised before the collapse. The SEC has charged multiple AI startups with fraud this year, including GameOn and Nate,
with Builder now under investigation by Southern District of New York prosecutors. Meanwhile, .ai domain registrations are approaching 1 million, with 1,500 new ones daily, compared to an estimated 10,000 total ventures during the dotcom era, which demonstrates the scale of the current AI investment frenzy, where companies rebrand to attract funding.
[00:07:20] Speaker A: I was going to say I'm pretty sure every company I've seen now has their name that was, you know, company name.com and now has company name AI. We're now powered by AI. Really? It's just a chatbot.
[00:07:31] Speaker D: Yeah.
[00:07:32] Speaker C: If that.
[00:07:32] Speaker D: Right.
[00:07:33] Speaker A: Like, or apparently 700 programmers in India.
[00:07:36] Speaker C: I mean, I've definitely seen this before, and, you know, this sort of model of like, oh, we've got machine learning, we got this, and now it's with AI too. It's the same sort of thing, but, you know, fake it till you make it only goes so far.
[00:07:49] Speaker D: You know, you gotta prove it's real. But I mean, overstating revenue, that's. Oh yeah. Really, really bad move. Do not.
[00:07:58] Speaker C: This looks like they did everything wrong.
[00:08:02] Speaker A: They got caught. That's why they did everything wrong. Otherwise they're doing everything right.
[00:08:06] Speaker C: Yeah, I guess I don't want to become dotcom rich that bad. Where I would take that kind of.
[00:08:13] Speaker A: Risk to be fair. Back to the show title conversation at the beginning of Ryan Gaggywater, he did name a full section that we talk about every week. So we got to give him credit for that because it was Ryan's idea of to make it. AI is how ML makes money, so we got to give it a little bit of credit.
[00:08:28] Speaker D: That's true.
[00:08:28] Speaker A: Sorry, the random train of thought just came to my head.
[00:08:32] Speaker D: I mean, yeah, we definitely. Yeah, Ryan. Lots.
[00:08:35] Speaker C: When my moments of brilliance happen, which is rare, they do get highlighted on the show, which I.
I love because.
[00:08:41] Speaker A: You holiday that day.
[00:08:43] Speaker D: Yeah.
[00:08:44] Speaker A: How much do.
[00:08:46] Speaker D: How much do AI domains cost?
[00:08:49] Speaker C: I was thinking that too. I bet you it's expensive.
[00:08:52] Speaker D: It should be, right? I mean, if I owned that TLD, I would definitely be capitalizing. 7,200, according to Gemini.
All right, well, maybe I should buy thecloudpod.ai if it's not bajillions of dollars. Right.
We can rebrand as an AI podcast.
Nice.
[00:09:12] Speaker A: Our readers, our viewers or listeners have skyrocketed.
[00:09:16] Speaker D: They've already been impacted by a lot more AI chat over the years, despite our best wishes not to do that. But all the cloud providers stopped announcing anything but AI, so it kind of became a must do.
Visual Studio has its August update, which includes smarter AI, better debugging and more control. This integrates GPT-5 and includes MCP support, enabling developers to grant AI agents direct access to databases, code search and deployment systems without custom integration for their tools.
MCP functions as the HTTP of tool connectivity, with OAuth support for any provider, one-click server installation from web repositories, and governance controls via GitHub policy settings for enterprise compliance. I hope that's the HTTPS of tool connectivity, not HTTP, but yeah, we like to make the same mistakes in engineering all the time. The enhanced Copilot chat now uses improved semantic code search to automatically retrieve relevant code snippets from natural language queries across entire solutions, reducing manual navigation time. Developers can now bring their own AI models using API keys from OpenAI, Google or Anthropic, providing flexibility for teams with specific performance, privacy or cost requirements in their cloud development workflow. New features include partial code completion acceptance, word by word or line by line, Git history context in chat, and unified debugging for Unreal Engine that combines Blueprint and native C++ code in a single session.
So overall, I'm glad Visual Studio is finally catching up with everyone who's been building on top of Visual Studio to make Cursor and Windsurf and all the other AI vibe coding tools. Yeah, I mean, I don't know.
[00:10:43] Speaker C: I've been using Copilot almost exclusively for a little while in VS Code, just because it's better than some of the sort of add-ons. Like, I forget the... there's a couple other, you know, integrations you can use, with, you know, AWS Q and Gemini, and you can sort of tack them on. But with Copilot you can use multiple models, and it has just sort of built-in hooks into the client itself. And so, like, I don't know if it's a matter of, like, it's the first one I used so I'm biased, or what, but I really like it.
[00:11:16] Speaker D: All right, moving on to AWS. AWS Global View now provides you centralized management of region and Local Zone access through a single console page, eliminating the need to check opt-in status across multiple locations individually. The Regions and Zones page displays infrastructure location details, opt-in status and parent region relationships, giving administrators a comprehensive view of their global AWS footprint for compliance and governance purposes.
This feature addresses a common pain point for enterprises managing multi-region deployments, as well as multi-account, multi-region deployments, who previously had to navigate to each region separately to verify access and opt-in status. The capability integrates with existing AWS Global View functionality that allows viewing resources across multiple regions, extending the service's utility for global infrastructure management. And this is available to you for free. So thanks, Amazon.
[00:12:02] Speaker C: I will take features that should have existed a decade ago for a thousand bucks.
[00:12:08] Speaker D: Yeah, I mean I thought they saw this as organizations, but apparently not.
[00:12:11] Speaker C: No, no, this wasn't, this wasn't anything. You still had to log into each individual account and disable the automatically enabled zones or regions.
[00:12:23] Speaker A: But you could have done an SCP that blocked launching stuff in that region, couldn't you? Yeah, that's why I think Control Tower has a built-in in order to do that. Like, 85% sure and definitely not.
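For reference, the kind of SCP being described here typically uses the aws:RequestedRegion global condition key; a minimal sketch, where the allowed region list and the exempted global services are just examples:

```python
import json

# Sketch of a region-restriction SCP: deny everything outside an
# allow-listed set of regions. aws:RequestedRegion is a real global
# condition key; the specific regions here are illustrative.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "NotAction": [
            "iam:*", "organizations:*", "sts:*",  # global services exempted
        ],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["us-west-2", "us-east-1"]
            }
        },
    }],
}
print(json.dumps(scp, indent=2))
```

Attached at an OU, this blocks launches in any non-listed region, though, as noted below, the resources still show up in the console rather than being hidden.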
[00:12:38] Speaker D: By the way, I'd like to point out that they listed no cost in this UI. It is literally a table of how many regions, instances, VPCs, subnets, security groups, volumes, auto scaling groups, route tables.
It's just a long table of stuff that then tells you if it needs to be opted in or not opted in, etc. But it is sort of annoying to me, all the regions that are on by default. Back in the day they all got a default VPC, so each of them says one, and I have multiple subnets, yep, in my account. Even though the only region I use is Oregon, I have resources in all the other ones because that was just the default. So it's kind of annoying.
[00:13:15] Speaker C: And it's still like that today if you get a new org, and the organizations UI is kind of in the same sort of boat. So it is still sort of this weird tack-on at the edge. But I do... you know, these features are great from a, you know, cloud service provider to the rest of your company kind of view. It's being able to provide that sort of user experience where it's not even displayed in the console as an inactive zone, versus if you do an SCP or something, it's just a permission denied. Like, you can control it, but, you know, I don't know how many times I've got a ticket request saying, oh, I don't have permissions to do a thing, when it's really, you know, something that they shouldn't be doing, and they just assume that it's, you know, messed-up permissions, because everything's messed-up permissions, because IAM. Sorry.
[00:14:04] Speaker D: I mean, this global search window, which has every asset I have across everything, this can be useful for CMDB purposes. And the service, well, it's come a long way since its first launch. Good to see it.
All right, let's move on to CloudWatch. They're giving you the ability to query metrics data up to two weeks old instead of just three hours. So if you're using CloudWatch Metrics Insights, you get two weeks of data instead of three hours, enabling longer-term trend analysis and post-incident investigations using SQL-based queries. This extension addresses a significant limitation for teams monitoring dynamic resource groups, who previously couldn't visualize historical data beyond three hours when using Metrics Insights queries. The feature is automatically available at no additional cost in all commercial AWS regions, although you are already paying for CloudWatch Metrics Insights, so don't let that fool you.
Operations teams can now investigate incidents days after they occur and identify patterns across their infrastructure without switching between different query methods or data sources. And this positions CloudWatch Metrics Insights as a more viable alternative to third-party monitoring solutions that had already provided this.
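For reference, a Metrics Insights query is just SQL submitted through GetMetricData as an expression; a sketch of what the new retention changes, where the query, period and window are illustrative and the actual boto3 call is left commented out:

```python
from datetime import datetime, timedelta, timezone

# Example Metrics Insights SQL: top 10 busiest EC2 instances by average CPU.
query = (
    'SELECT AVG(CPUUtilization) FROM SCHEMA("AWS/EC2", InstanceId) '
    "GROUP BY InstanceId ORDER BY AVG() DESC LIMIT 10"
)

end = datetime.now(timezone.utc)
request = {
    "MetricDataQueries": [{"Id": "q1", "Expression": query, "Period": 300}],
    "StartTime": end - timedelta(days=14),  # previously capped at 3 hours
    "EndTime": end,
}
# boto3.client("cloudwatch").get_metric_data(**request) would run it.
```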
[00:15:08] Speaker A: Because three hours is definitely not. Oh, hold on.
[00:15:12] Speaker C: No, it's, I mean yeah, three hours is, is nowhere near enough, right? It's so many workloads are cyclical across a day or we'll even have different traffic patterns across a week, you know, days of the week.
So it's kind of crazy to me, like, three hours. And, you know, I've never used CloudWatch Metrics Insights, and I now understand why.
[00:15:34] Speaker A: I think I played with it.
Got stuck too at, like, the hey, this doesn't have the data I need, and, like, rolled back to what I did know. Because, I mean, I remember when AWS, and this is when I feel like I've been on AWS too long, you know, only had, what was it originally, two weeks' worth of metrics, and they upped it to like 18 months, and that was like, oh my God, there's so much data now. So I felt like the same thing here, where there just wasn't enough, and you just don't use the tool because it isn't useful at that point.
[00:16:06] Speaker D: I have so many insights. I mean, they're adding a lot of these. Like, they're trying to add Container Insights, Database Insights, all the Insights. These things have been a bit pricey. That's been my kind of experience with it, for what it gives you.
And in other CloudWatch news, you're also getting query alarms that can now monitor multiple individual metrics through a single alarm using Metrics Insights SQL queries with GROUP BY and ORDER BY conditions, automatically adjusting as resources are created or deleted. This solves the operational burden of managing separate alarms for dynamic resource fleets like auto scaling groups, where teams previously had to choose between aggregated monitoring or maintaining individual alarms for each resource. This also goes into the category of Ryan's should-have-been-there-10-years-ago features, so glad to see this one as well.
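A sketch of what one of these query alarms looks like: a single PutMetricAlarm request whose metric is a GROUP BY query, so instances that come and go are picked up automatically. Names, thresholds and periods here are illustrative, and the boto3 call is left commented out:

```python
# One alarm watching every instance in a fleet via a Metrics Insights
# GROUP BY query; the alarm evaluates each returned time series.
alarm = {
    "AlarmName": "fleet-cpu-high",  # hypothetical name
    "Metrics": [{
        "Id": "q1",
        "Expression": (
            'SELECT MAX(CPUUtilization) FROM SCHEMA("AWS/EC2", InstanceId) '
            "GROUP BY InstanceId"
        ),
        "Period": 60,
    }],
    "ComparisonOperator": "GreaterThanThreshold",
    "Threshold": 90.0,
    "EvaluationPeriods": 3,
}
# boto3.client("cloudwatch").put_metric_alarm(**alarm) would create it.
```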
[00:16:49] Speaker C: It's kind of funny, because a while back they announced sort of, like, I don't know how they phrased it, but almost like conditional alarms, and I kind of figured you would be able to do this with that feature. But I guess not. So, like, this is... yeah, I can't believe this took this long. But I'm glad it's there now.
[00:17:09] Speaker D: Well, this is where you get into, like, well, that's high-cardinality monitoring, and you need, you know, per-server plus per-ASG-group, and, like, you know, it always got kind of tricky when you're talking to CloudWatch people, and I'm glad they finally got this. But yeah, it's nice to have one way to monitor my ASG that covers the nodes and the group without having to do both. Now, if I can combine it into dashboard views, that'd be even better, so I'm hoping that's a future enhancement to this. AWS User Notifications now supports centralized notification management across organizations, allowing management accounts or up to five delegated administrators to configure and view notifications for specific OUs or entire organizations from a single location. The feature integrates with Amazon EventBridge events, enabling organizations to create notification rules for security events like console sign-ins without MFA, with alerts delivered to the AWS Console Mobile Application and Console Notification Center. This addresses a key operational challenge for multi-account organizations by eliminating the need to configure notifications individually in each member account, significantly reducing administrative overhead for security and compliance monitoring. Organizations can now implement consistent notification policies across hundreds or thousands of accounts, improving incident response time and ensuring critical events don't go unnoticed in sprawling AWS environments.
[00:18:24] Speaker C: I think I figured it out. I think they're finally getting around to all the requests that we made 10 years ago. 10 years ago. Yeah, yeah, I mean they did.
[00:18:33] Speaker D: They did turn off the, they did remove the ability to have marketing email sent to all of them a while ago. And I was like oh that's a step in the right direction. And now this is fine.
[00:18:40] Speaker C: But you had to log into each individual account and go disable that.
[00:18:45] Speaker D: They added it as an SCP so you could disable it at the SCP level. Okay, yes, but originally it was every account you had to log into. But then they fixed that problem.
[00:18:54] Speaker A: I mean, this is key. Even... I was working on an old client account that's slowly becoming a new client, where we're setting them up a Control Tower from scratch. Nice and clean. Feels great to start the world in a nice clean greenfield environment, where I was like, ooh, Terraforming Control Tower everywhere. And then I was working through, like, Amazon's... going into the marketplace, you essentially do a Well-Architected Framework review and a bunch of other stuff in there, and one of the things was setting the organization notifications for security and everything else. And I feel like they're finally starting to, like you guys said, knock out, hey, let's actually help people manage their multi-account strategy, not have them build their own automation for every single account.
[00:19:41] Speaker D: I mean, the theme of the Amazon section today is just everything that Ryan and I asked for 10 years ago, in general, as the next story is also in that category. This one is AMI usage, which provides free visibility into which AWS accounts are consuming your AMIs across your EC2 instances and launch templates, eliminating the need for custom tracking scripts that previously created operational overhead. There is literally a script and... Ryan, wait, did you steal my shitty...
[00:20:09] Speaker A: Script and put it in your repo?
[00:20:11] Speaker C: So I... no. I had one that I ran by hand.
[00:20:14] Speaker D: A fork of yours. Yeah.
[00:20:16] Speaker A: Okay.
[00:20:17] Speaker D: I think.
[00:20:17] Speaker C: I don't know if it's a fork or if it was sort of like the predecessor of like this stupid thing shouldn't exist. So ask Matt to build a real version.
Matt's actually had like a data source underneath it. Like.
[00:20:32] Speaker A: Yeah, we built a whole system. That was terrifying.
And I hope that somebody still uses it.
[00:20:39] Speaker D: Yeah, I'm sure they do.
But now you don't have to do that. You can just figure out where your AMIs are deployed, and then you can go figure out where else your vulnerabilities are. If this AMI has been marked as no longer, you know, secure, you can now know where those instances are and who gets tickets, which is kind of nice, without having to script it out. That's very nice. So in general, thank you. It's about time.
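For context, the hand-rolled tracking this replaces was usually just a loop grouping instances by the AMI they launched from; a toy version with made-up instance data:

```python
from collections import defaultdict

def ami_usage(instances):
    """The kind of custom tracking script the new AMI usage view replaces:
    group instance IDs by the AMI they were launched from."""
    usage = defaultdict(list)
    for inst in instances:
        usage[inst["ImageId"]].append(inst["InstanceId"])
    return dict(usage)

# In real life the input would come from paginating
# ec2.describe_instances() across every account and region.
sample = [
    {"InstanceId": "i-111", "ImageId": "ami-aaa"},
    {"InstanceId": "i-222", "ImageId": "ami-aaa"},
    {"InstanceId": "i-333", "ImageId": "ami-bbb"},
]
print(ami_usage(sample))
```

The painful part was never the grouping; it was fanning the describe calls out across every account and region, which is exactly what the new console view does for you.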
[00:20:59] Speaker A: Did you guys look at the documentation for this at all? I clearly did not prepare today from.
[00:21:03] Speaker D: Because.
[00:21:03] Speaker A: Because of some family stuff. But is this like actually easy to do or is it.
[00:21:08] Speaker C: Oh, I just read the announcement. I haven't played around with it at all. I hope it's easy.
[00:21:11] Speaker D: I haven't had time either. But I mean, it looks...
Yeah, yeah.
[00:21:17] Speaker A: Except you have to give it your account IDs to see what it's in. So that could be fun.
So you'd definitely have to do this as some part of...
[00:21:23] Speaker D: Well, but if you're part of an organization, it already has all your account IDs, so it'd only be if you're distributing your AMI outside of your organization that that would be a problem. But I don't have any AMIs in this account that I'm logged into right now, so I can't answer this question. But I bet it's not too complicated. No, I'm pretty sure I had this as a re:Invent prediction one year and lost. I think multiple years.
ECS Exec now provides direct console access to running containers without SSH keys or inbound ports, eliminating the need to switch between console and CLI for debugging tasks. The feature integrates with CloudShell to open interactive sessions directly from task details pages, while displaying the underlying CLI command for local terminal use. Console configurations include encryption and logging settings at the cluster level, with ECS Exec enablement available during service and task creation or updates. This addresses a common debugging workflow where developers need quick container access for troubleshooting applications and examining running processes in production environments. And this is available to you in all AWS commercial regions with no additional charges beyond standard ECS and CloudShell usage costs.
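For reference, there are two pieces involved, which is the gotcha the hosts hit below: the enableExecuteCommand flag that has to be set on the task or service, and the execute-command CLI the console button wraps. Cluster, task and container names here are hypothetical:

```python
# ECS Exec must be enabled on the task first; in boto3 that's the
# enableExecuteCommand flag on run_task (or update_service).
run_task_params = {
    "cluster": "my-cluster",        # hypothetical cluster name
    "taskDefinition": "my-task:1",  # hypothetical task definition
    "enableExecuteCommand": True,   # without this, exec sessions are refused
}

# The new console button wraps the same CLI that the task details
# page now displays for local use:
cmd = (
    "aws ecs execute-command --cluster my-cluster --task abc123 "
    "--container app --interactive --command /bin/sh"
)
```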
[00:22:26] Speaker A: Wasn't this already there, where you could ECS exec into it? You just had to have an access...
[00:22:32] Speaker D: Key, or even, you get to EC2, you know, through SSM, and then you could access ECS tasks from there. But now you can just go right from the console, which is kind of nice.
[00:22:42] Speaker A: I thought you could do it direct, but clearly I'm wrong.
[00:22:48] Speaker D: I mean, it's been a little bit.
I don't even know where. I don't even know where this option is. Like, I'm looking at a container I have running. Like, where is this? How do you even get to this? Oh, there it is. Connect. There's a connect button. I see it.
Click the container.
Oh, it's not enabled for the task. I enabled in the task. Damn it.
[00:23:06] Speaker A: Well, that's why... I remember a couple years ago, I did a project where this feature was released and we had to go in and add this to the task definition.
[00:23:19] Speaker D: It got added to it. There was something. You're right. What was that?
There was something they added into it that you could connect to something, but I don't think it was exactly connecting into the container like this is supposed to do. So, but I'm gonna play with this one. I'll get back to you guys on how it works because I run a lot of ECS tasks for personal stuff.
[00:23:39] Speaker A: So yeah, I'm trying to look at a ECS terraform code that I think I haven't looked at in about five years to see what I'm talking about.
And now I'm terrified by my own terraform code. So great times.
[00:23:53] Speaker D: I don't look at my old code. I just, I. I just have AI look at it now and go like, hey, this, this chump wrote this five years ago. Could you figure out what he was smoking and make it better? Like that's kind of my new thing now because I can't.
[00:24:05] Speaker A: So it's called.
So the parameter was called enable Execute command that you had to do in.
[00:24:12] Speaker C: The Terraform code. ECS Exec, released in 2021, according to this blog post.
[00:24:20] Speaker D: Yeah, that was execute-command. I think this new thing allows you to do an interactive session, actually get into the container. Yeah, yeah.
[00:24:28] Speaker A: Feels like that should have been an easy jump from one to the other, not a four year gap. But what do I know that's.
[00:24:33] Speaker D: Yeah, they. Yeah, busy on AI and so they didn't do it.
[00:24:39] Speaker A: They should have just had AI do it.
[00:24:41] Speaker D: Come on.
[00:24:42] Speaker C: Well that's probably what they did.
[00:24:43] Speaker D: That's probably what they did. Oh, we can make AI do this project, because no one else wants to do it.
[00:24:46] Speaker C: That's why it's the 10-year-old features. They're setting AI at, like, the 10-year-old backlog.
[00:24:50] Speaker D: All these features, everyone... like, this has been carried on some poor schmuck's spreadsheet or PowerPoint of, like, feature requests from customers, and I'm like, yeah, it's really hard, or it's not hard, but it's really easy and no one wants to do this. They probably just created the AI bot that just goes and creates easy things from our user requests. You know, it's very simple. And these are the features they're doing. They're like, these were easy, we just had AI do it for you. That's why they now exist.
All right. AWS IAM introduces three new global condition keys, aws:VpceAccount, aws:VpceOrgPaths and aws:VpceOrgID, that enable organizations to enforce network perimeter controls by ensuring requests to AWS resources only come through their VPC endpoints. These condition keys automatically scale with VPC usage and eliminate the need to manually enumerate VPC endpoints or update policies when adding or removing endpoints, working across SCPs, RCPs, resource-based policies and identity-based policies. The feature addresses a common security requirement for enterprises that need to restrict access to AWS resources to specific network boundaries, particularly useful for organizations with strict compliance requirements around data locality and network isolation.
This is currently limited to a select set of AWS services that support AWS PrivateLink, which may require careful planning for organizations looking to implement comprehensive network perimeter controls across their entire AWS footprint. The enhancement simplifies zero-trust network architectures by providing granular control at the account, organizational path, or entire organization level, without the operational overhead of maintaining extensive VPC endpoint lists in policies.
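A sketch of one of the new keys in a policy: deny access unless the request arrived through a VPC endpoint owned by your own organization. The org ID is a placeholder, and note the IfExists variant treats a missing key as a match, so traffic that doesn't come through any VPC endpoint is denied too, which is the full-perimeter behavior being described:

```python
import json

# Illustrative resource control policy using the new aws:VpceOrgID key.
rcp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireOrgOwnedVpce",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "*",
        "Condition": {
            # Deny when the endpoint's org isn't ours; a request with no
            # VPC endpoint at all (key absent) is also denied by IfExists.
            "StringNotEqualsIfExists": {"aws:VpceOrgID": "o-example123"}
        },
    }],
}
print(json.dumps(rcp, indent=2))
```

Previously you had to enumerate every vpce-xxxx ID in aws:SourceVpce and keep the list current; the org-level key makes the policy static.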
[00:26:18] Speaker C: Yeah, this is another feature that... it's good. It's frustrating, because supporting this is a nightmare, but the previous way you did this was even worse. Yeah, exactly. Trying to define policies in multiple different places and still only getting, you know, partial coverage and not being able to sort of dictate that. So hopefully this helps, because it is a really common security requirement, although not one I particularly like, where it's just restricting access to call AWS APIs from specific known networks.
But yeah, I think, you know, like it's a good thing to have. It's definitely on a lot of control frameworks, so it's nice to have that sort of easier button to check that compliance box.
[00:27:04] Speaker A: I've definitely been at places where there's no internet gateways allowed, and all traffic for everything has to be done inside of VPCs and endpoints. And there were definitely some lower-level environments where, you know, you're running your same infrastructure as code, but you still have to have the endpoints to at least get the app up. And it costs you more in private endpoints than your entire app did running, because you needed 12 services. So 12 endpoints times, what was it, $0.01 an hour, times... you know, all this stuff really adds up. And these are the places that don't really care about what their AWS spend is.
[00:27:42] Speaker C: Yeah, their number is too large for them to care about this.
[00:27:47] Speaker A: Your $400 or $4,000 doesn't really matter on their bill. It's not even a rounding error, it's a decimal place change.
[00:27:56] Speaker D: Well, a product manager got past finance at AWS to give us something free this week, with AWS WAF now providing 500 megabytes of free CloudWatch vended log ingestion for every 1 million WAF requests processed, helping customers reduce logging costs while maintaining security visibility. The free allocation applies automatically to your AWS bill at month end and covers both CloudWatch and S3 destinations, with usage beyond the included amount charged standard WAF vended log pricing. The change addresses a common customer pain point where WAF logging costs could become substantial for high traffic apps, making comprehensive security monitoring more accessible for cost conscious organizations. Customers can leverage CloudWatch's analytical capabilities, including Logs Insights queries, anomaly detection and dashboards, to analyze web traffic patterns and security events without worrying about base logging costs. The pricing model scales with usage, meaning customers who process more requests through WAF automatically receive more free log storage.
It shows you the margin on WAF is pretty darn high that they're able to give you free storage.
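The allotment scales linearly with traffic, so it's easy to estimate. A quick sketch, assuming the 500 MB per million requests figure from the announcement:

```python
def free_log_mb(waf_requests: int) -> float:
    """Included vended-log ingestion: 500 MB per 1M WAF requests."""
    return waf_requests / 1_000_000 * 500

# e.g. 20M requests/month gets about 10 GB of logging at no extra
# charge; anything beyond that bills at standard vended-log pricing.
included = free_log_mb(20_000_000)
print(f"{included:.0f} MB included")
```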
[00:28:56] Speaker C: Well, I mean, it's funny, I remember evaluating AWS WAF for years ago. You know, the security team I was working with at the time rejected it because they didn't have access to the logs. And I imagine the minute they did get access to the logs, like, no, turn it off.
[00:29:10] Speaker D: It's very expensive.
[00:29:10] Speaker A: No, they're a security team. They just don't care about this cost.
That's the finops person's problem. You have not been indoctrinated into the security team yet, Ryan.
[00:29:20] Speaker C: Oh, you know, if you got a.
[00:29:22] Speaker D: Good showback model, slowly moving into it.
[00:29:25] Speaker A: Yeah, if this came out in six months from now, he would have been like, it's fine, it's great.
[00:29:32] Speaker D: A cost that causes some problems. Security is, you know, worth every penny. Every bit.
My WAF doesn't even show up on my bill. It's in the other section. Look, it's such a small portion of my bill, I don't really worry about it.
[00:29:45] Speaker A: I mean, as long as you're not using the managed rules or anything else like that. That's where they get you.
[00:29:51] Speaker C: Managed rules are expensive.
[00:29:52] Speaker D: I mean, I use the Amazon ones, I don't use the third party ones.
[00:29:55] Speaker A: Yeah, that's what I was thinking about. I thought it's like 15 bucks though, just to turn it on.
[00:30:00] Speaker D: Am I wrong?
[00:30:02] Speaker C: Well, the managed rules, you can turn on the managed rules, but then you can stack them, and then it depends, you know, within the limit.
[00:30:09] Speaker D: And I think it's also like 15 per million requests or something. I mean, it's one of those where you get a certain amount of traffic before it really starts to add up, and I'm just not at those levels.
[00:30:19] Speaker A: So it's the web ACL that starts out at $5 per month, which is what I was thinking of.
[00:30:27] Speaker D: Yes, that one.
[00:30:31] Speaker B: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one or three year AWS savings plan with a commitment as short as 30 days.
If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace.
[00:31:10] Speaker D: AWS Config is now adding resource tag tracking for IAM policies, enabling teams to filter and evaluate IAM policy configurations based on tags for improved governance and compliance monitoring. This enhancement allows Config rules to evaluate IAM policies selectively using tags, making it easier to enforce different compliance standards across development, staging and production policies without creating separate rules for each environment.
Multi account organizations can now use Config aggregators to collect IAM policy data across accounts filtered by tags, streamlining centralized governance for policies that match specific tag criteria like department or compliance scope. The feature arrives at no additional cost in all supported regions and automatically populates tags when recording IAM policy resource types, requiring only config recorder configuration to enable, which you do pay for, so don't be fooled. Once again, this addresses a common pain point where teams struggled to apply granular Config rules to subsets of IAM policies, previously requiring custom Lambda functions or manual processes to achieve tag based policy governance. Amazon just giving us the gifts of 10 years ago this week.
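The tag-scoped setup described here can be sketched as a Config rule definition. The rule name and tag values below are made up; in practice you would hand this dict to boto3's config client via `client.put_config_rule(ConfigRule=rule)`.

```python
import json

# Sketch of a Config rule scoped by tag rather than by resource name.
# IAM_POLICY_NO_STATEMENTS_WITH_ADMIN_ACCESS is an AWS managed rule;
# the rule name and tag key/value here are hypothetical examples.
rule = {
    "ConfigRuleName": "prod-iam-policy-no-admin",   # hypothetical name
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "IAM_POLICY_NO_STATEMENTS_WITH_ADMIN_ACCESS",
    },
    "Scope": {
        "TagKey": "environment",   # only evaluate resources tagged
        "TagValue": "prod",        # environment=prod
    },
}
print(json.dumps(rule, indent=2))
```

The same rule without the `Scope` block would evaluate every IAM policy, which is the "separate Lambda spackle" situation this feature replaces.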
Yeah.
[00:32:13] Speaker C: Oh taking away all that lambda spackle, you know, making that no longer necessary. It's fantastic. I haven't added config rules specifically for parsing policies, but I've definitely built my own sort of like library auditor for policies and it was definitely a lot of really complicated and probably non functional like parsing based off of the name of the policy and sort of the layout. So this is.
[00:32:37] Speaker D: I like to see this. And you're parsing an account name prefix to figure out dev or prod, because that's not a tag you can set at the organization level. You know, all these things we had to do in the past.
[00:32:49] Speaker C: So this is nice.
[00:32:51] Speaker D: Yeah, thanks Amazon. Appreciate it. It would have been cool, you know, 10 years ago, but here we are. Moving on to GCP, who's still trying to get you to use IPv6 with their new DNS64 and NAT64, enabling IPv6-only workloads to communicate with IPv4 services, addressing the critical gap as enterprises transition away from increasingly scarce IPv4 addresses while maintaining access to legacy IPv4 apps. This feature allows organizations to build pure IPv6 environments without dual stack complexity, using DNS64 to synthesize IPv6 addresses from IPv4 DNS records and NAT64 gateways to translate the actual traffic between protocols. This is like when NAT came out and I didn't understand it initially and then I figured it out. That's sort of how I feel about these NAT64 and DNS64 translation layers, how it dynamically figures out the IPv4 side. I'm like, that's black magic to me.
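The "black magic" part is actually simple arithmetic: DNS64 embeds the IPv4 address inside the well-known NAT64 prefix `64:ff9b::/96` (RFC 6052), and the NAT64 gateway reverses the mapping on the wire. A runnable sketch:

```python
import ipaddress

# DNS64 synthesizes an IPv6 address for an IPv4-only host by adding the
# 32-bit IPv4 address into the well-known prefix 64:ff9b::/96.
NAT64_PREFIX = ipaddress.IPv6Address("64:ff9b::")

def synthesize(ipv4: str) -> ipaddress.IPv6Address:
    return ipaddress.IPv6Address(
        int(NAT64_PREFIX) + int(ipaddress.IPv4Address(ipv4))
    )

print(synthesize("203.0.113.5"))   # -> 64:ff9b::cb00:7105
```

The last 32 bits are just the IPv4 address in hex (203 = 0xcb, 0 = 0x00, 113 = 0x71, 5 = 0x05), which is why the gateway can translate back without keeping any per-name state.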
[00:33:48] Speaker C: But I, well that's the way I want it.
[00:33:50] Speaker D: Right.
[00:33:50] Speaker C: I don't want to really understand this, I want it to be handled. But I'm quickly like trying to research like what's the pricing model?
Because just like, you know, managed NATs.
[00:34:01] Speaker D: It can get expensive.
[00:34:01] Speaker A: Oh, I think of the N64. Every time I see NAT64, my brain just goes there.
[00:34:06] Speaker C: Yeah, totally.
[00:34:07] Speaker A: It's like, nah, you're good.
Mario Kart and 007.
[00:34:12] Speaker D: Yeah, yeah, that's like one of the good Nintendo consoles.
[00:34:17] Speaker A: That's pretty much how it works. It just starts, you know, pistol whipping people and it converts it for you, Justin. That's the black magic.
[00:34:24] Speaker D: That's how it works. Perfect. Got it. Fantastic.
Well, for those of you who have to DR BigQuery, they're now offering a soft failover capability which waits for complete data replication before promoting the secondary region, eliminating the risk of data loss during planned failovers compared to traditional hard failover that could lose up to 15 minutes of data within the RPO window. This addresses a key enterprise concern where companies previously had to choose between immediate failover with potential data loss or delayed recovery while waiting for a primary region that might never recover, making DR testing particularly challenging for compliance driven industries like financial services. The feature provides multiple failover options through the BigQuery UI, DDL and CLI, giving administrators granular control over disaster recovery transitions while maintaining the required RTO and RPO objectives without the operational complexity of manual verification.
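For reference, the DDL side of a planned failover drill looks roughly like the cross-region replication statements below. Treat this as a sketch: the dataset and region names are placeholders, and the exact option surface for choosing soft versus hard failover may differ from what is shown here.

```python
# Rough shape of the BigQuery DDL flow for a planned failover drill.
# Dataset and replica region are placeholders; run each statement via
# the BigQuery client library or the bq CLI in practice.
dataset = "my_project.sales"

ddl_steps = [
    # 1. Replicate the dataset to a secondary region.
    f"ALTER SCHEMA `{dataset}` ADD REPLICA `us-west1`",
    # 2. Soft failover: promote the replica only once replication has
    #    fully caught up, so nothing inside the RPO window is lost.
    f"ALTER SCHEMA `{dataset}` SET OPTIONS (primary_replica = 'us-west1')",
]

for step in ddl_steps:
    print(step)
```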
[00:35:12] Speaker C: There's nothing worse than trying to DR a giant data set, especially if you have big data querying or job based things that you're fronting into your application with those insights or something like that.
It just can be so nightmarish. And so I haven't done that directly over the database.
[00:35:36] Speaker D: Then you have to turn it in reverse and you have to do all these things. And so the fact that you can do this soft failover, where you're making sure the data is fully coalesced before you do the failover so you don't lose anything in your test, that's great. And the fact that you can do a drill without actually impacting production with this capability is super nice, especially with BigQuery.
[00:35:55] Speaker C: Really?
Yeah, that's really sweet.
[00:35:58] Speaker D: Google is expanding their Compute Flex CUDs to cover memory optimized VMs (the M1 through M4), HPC instances (H3, H4D) and serverless offerings like Cloud Run and Cloud Functions, allowing customers to apply spend commitments across more services. The new billing model charges discounted rates directly instead of using credits, simplifying cost tracking while expanding coverage beyond traditional compute instances to specialized workloads. CUDs compete against AWS Reserved Instances and Azure Reserved VM Instances.
The key beneficiaries include SAP HANA and scientific computing workloads, and organizations with mixed traditional and serverless architectures can now optimize their costs across the entire stack. You can opt in immediately for this.
All customers will automatically transition to it in January of 2026.
[00:36:44] Speaker A: Is there a reason you wouldn't opt in? Because I feel like, now there are CUDs for Cloud Run and Functions and stuff like that.
[00:36:52] Speaker D: So you've got to remember there's CUDs and there's Flex CUDs. Flex CUDs, you know, were only on certain instance types, and basically it's more like a savings plan, right?
Where the CUD is more like an RI. You get a better discount using non-flex CUDs, but you get less flexibility. So if your workload is pretty static and you know you're always going to have this particular instance type, then a CUD is actually a better use case. But then when you do want to upgrade, you're kind of hosed. And so this ability allows you to, you know, basically move between the different versions, like going from an M3 to an M4, without losing that CUD benefit, which allows you to move to newer hardware platforms faster but with less discount.
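The trade-off described here is easy to put in numbers. The percentages below are illustrative assumptions only; actual CUD rates vary by machine family, region, and term length.

```python
# Illustrative flex-vs-resource CUD comparison. Discount percentages
# and the spend figure are assumptions, not published GCP rates.
monthly_on_demand = 10_000          # USD of eligible compute (assumed)
FLEX_CUD_DISCOUNT = 0.28            # assumed 1-year flex rate
RESOURCE_CUD_DISCOUNT = 0.37        # assumed 1-year resource-based rate

flex_cost = monthly_on_demand * (1 - FLEX_CUD_DISCOUNT)
resource_cost = monthly_on_demand * (1 - RESOURCE_CUD_DISCOUNT)
print(f"flex: ${flex_cost:,.0f}/mo, resource: ${resource_cost:,.0f}/mo, "
      f"premium paid for flexibility: ${flex_cost - resource_cost:,.0f}/mo")
```

The "premium" line is the price of being able to hop from M3 to M4 without abandoning the commitment.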
[00:37:36] Speaker A: But why would you not opt in? Because at that point you wouldn't be buying Flex CUDs, you would just be buying CUDs.
[00:37:42] Speaker D: I mean, again, I don't remember if you can still buy CUDs for instances that are covered by Flex CUDs. I think once they go to Flex CUDs, you can't buy normal CUDs anymore. I think that's why it's opt in.
[00:37:54] Speaker C: Oh, it's not like an option like that. You have to buy a Flex CUD or a regular CUD for specific instances.
[00:37:59] Speaker D: Yeah, I don't believe it's an option. I think once they're under Flex CUDs, your only option is Flex CUDs.
[00:38:03] Speaker C: Yeah, interesting.
[00:38:04] Speaker D: But I have to be able. I don't quote me on that. I double check it. It's kind of it since I've every time I try to learn cuds I then forget it.
Also, there's another discount level, which is if you're not in a Google enterprise agreement, you get a discount just for running it for the full month. Right?
[00:38:19] Speaker A: That's what I always liked about GCP was like if as you run it for longer they give you more of a discount too.
[00:38:24] Speaker D: It's a marketing scam, let me tell you. It is.
It is not for any enterprise that's out there, because as soon as you get into an enterprise agreement, the very first thing they pull out is that ability. You can't do that anymore as soon as you're on an enterprise account. So it's all marketing to get you. It's like, oh, that's a better deal than Amazon. And then you come and get onto the Google thing, you drink the Kool-Aid, you get onto GKE, and then you're like, oh, I want better discounts. And they're like, oh yeah, well, that thing we gave you for free, we're taking it away.
[00:38:52] Speaker A: And today Justin does the rant, because Matt is not talking about App Gateways today.
[00:39:03] Speaker D: It's annoying. Google Cloud is launching an agentic SOC workshop (Security Operations Center workshop), a free half day training series for security professionals to learn practical AI applications in security operations centers, starting in Los Angeles and Chicago this month. The workshops focus on teaching security teams how to use AI agents to automate routine security tasks and reduce alert fatigue, furthering Google's vision of every customer having a virtual security assistant trained by leading security experts. I mean, you are also paying an arm and a leg for Google SecOps to get this agentic capability, but it is nice to see that this is available, and the workshops, I believe, will probably be pretty good if you're interested in this and can afford it.
[00:39:46] Speaker C: Unfortunately, when this is released, the SOC workshop will be today.
[00:39:52] Speaker D: In Los Angeles.
[00:39:54] Speaker C: The one in Los Angeles I think the one in Chicago is a little.
[00:39:57] Speaker D: Later can't remember it's on Friday the 19th.
[00:40:02] Speaker C: So yeah, if you guys are going, I'll be there in person.
[00:40:06] Speaker D: Know you'll be able to tell us how great I will what's worth going to.
[00:40:10] Speaker C: I will report back.
[00:40:11] Speaker D: Let us know.
Google Dataproc now supports multi-tenant clusters, allowing multiple data scientists to share compute resources while maintaining per user authorization to data resources through service account mappings. This addresses the traditional trade off between resource efficiency and workload isolation in shared environments. The feature enables dynamic user to service account mapping updates on running clusters and supports YAML based configuration for managing large user bases. Each user's workload runs with dedicated OS users, Kerberos principals, and access restricted to only their mapped service account credentials. So you can now argue with yourselves and your ML team about using up all the capacity instead of blaming the ops guys.
[00:40:50] Speaker C: Yeah, 18 months ago they announced Serverless Dataproc, and I thought that would basically mean you wouldn't need this anymore, because this means you're going to host a giant Dataproc cluster and just pay for it all the time.
[00:41:04] Speaker A: That's the point.
[00:41:04] Speaker C: In order to use this they want.
[00:41:05] Speaker D: You to burn money.
[00:41:06] Speaker C: Like I don't know if it's. I guess you know, I don't know like from a a data scientist perspective, I'm not sure.
Unless, you know... I don't know if it's moving the ops complexity of managing the data set and connecting to the data set, maybe being able to use your own tools, because there are some limitations with Serverless Dataproc on that. I don't know, but I guess this is cool.
[00:41:28] Speaker A: I don't know.
[00:41:29] Speaker D: No one knows. I mean that's the. No one knows.
You got to talk to machine learning guys to know more.
It's a tough one.
Google has beaten both Azure and AWS, I believe, to an official GA version of a Rust SDK.
Basically the first official Rust SDK supporting over 140 APIs, including Vertex AI, Cloud KMS and IAM, addressing the gap where developers previously relied on unofficial or community maintained libraries that lacked consistent support and security updates. The SDK includes built in authentication with application default credentials, OAuth 2, API keys and service accounts, with workload identity federation coming very soon. Basically, if you're using Rust and you want to use SDK stuff, this is for you, and I'm happy to see it. It is memory safe, which is one of the big benefits of Rust in general, and good for secure data processing systems and performance critical apps with Rust's zero-cost abstractions. Like I said, there is one for AWS; it's in active development. There's also one for Azure. Both of them have comments in their code that say to expect breaking changes before they reach 1.0. So appreciate that we're here with Google.
[00:42:39] Speaker C: I knew the Azure one was still in development, but I thought the AWS one was more, more mature. I guess not.
[00:42:46] Speaker D: It might be a little bit more mature on Amazon, because I'm looking at the roadmap and they don't have anything in progress anymore, so maybe it is. Or they haven't updated the SDK roadmap. But yeah, good to see. Happy more Rust is happening to hopefully replace legacy C apps that are not thread safe.
[00:43:07] Speaker A: There's no yeah.
[00:43:10] Speaker D: All right Matt, your time to shine. We're on to Azure.
[00:43:13] Speaker A: Don't think you're using that word correctly. Definitely not a time to shine. Azure and shine? It's not going to happen.
[00:43:22] Speaker D: Well, you can now upgrade your existing Gen1 VMs to Gen2 with Trusted Launch enabled, addressing security gaps in legacy infrastructure without requiring VM recreation or data migration. Trusted Launch provides foundational security features, including Secure Boot and vTPM (Virtual Trusted Platform Module), protecting VMs against bootkits, rootkits, and kernel level malware, capabilities that were previously unavailable to Gen1 VM users. This positions Azure competitively with AWS Nitro and GCP Shielded VMs, and the upgrade path targets enterprises running legacy Windows Server 2012 and 2016 and older Linux distributions on Gen1 hardware. While the upgrade process requires a VM restart and temporary downtime, it preserves existing configs, network settings and data disks, making it practical for production workloads during maintenance windows. I'm sure this doesn't break the kernel on Windows at all and is totally safe and you should do this immediately, right Matt?
[00:44:12] Speaker A: Can't say we've immediately jumped on this, but we definitely are running this. At my day job, you know, it's same thing as nitros here, but like you need these things in place and if you don't have them like you're just opening yourself up to potential other attack vectors that just aren't necessary.
[00:44:27] Speaker D: Remember when they came out with Gen2? It has taken this long for them to give you a migration path.
It was always destroy and relaunch as Gen2 as the only option. Now at least giving you some capability where it just takes a reboot is nice.
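For the curious, the security profile being flipped on looks roughly like the ARM fragment below. This is a sketch: the Gen1-to-Gen2 conversion itself has its own prerequisites (GPT boot partition, supported OS), and the CLI invocation in the comment is how the update is commonly applied, not a guaranteed exact recipe.

```python
import json

# ARM-style securityProfile for a Gen2 VM with Trusted Launch enabled.
# Applied during a maintenance window, e.g. with something like:
#   az vm update -g <rg> -n <vm> --security-type TrustedLaunch \
#       --enable-secure-boot true --enable-vtpm true
security_profile = {
    "securityType": "TrustedLaunch",
    "uefiSettings": {
        "secureBootEnabled": True,   # blocks unsigned boot components
        "vTpmEnabled": True,         # virtual TPM for measured boot
    },
}
print(json.dumps(security_profile, indent=2))
```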
[00:44:42] Speaker A: Unlike Windows, Azure sometimes takes a scorch earth technique, kind of like Apple does. We don't need to support that stuff. We're done, you know, and that's kind of where I feel like Azure does sometimes. Or they release a lot of features and it takes them a while to get that migration path in there and I kind of think some of it's because they want that time to, you know, test it out and get the scale. So the same way aws says, oh, this thing costs you an ARM and a leg to run. And then six months later they're like, we're going to cut that by 90%. You're like, okay, now I'll actually use it. I feel like that's kind of what Azure does here is like they set it up, it's great, but then no one really uses it because you don't have a good migration path. So if you're greenfield, great, you start to use it, they get their data and then over time they build that migration path for people.
It could also just be blatantly lying.
[00:45:31] Speaker C: But that's what I feel like Moving.
[00:45:34] Speaker D: On to our next amazing feature Gateway level metrics and native auto scaling for the azure API management v2 tiers.
In addition to not only do you get this amazing API gateway feature, you also are getting workspaces and workspace gateways in the premium V2 tier of Azure API Management. And if that wasn't enough, you also get expanded support for MCP in the Azure API Management. And so if you love Azure API gateway products, they've got those three new features for you this week, especially on.
[00:46:05] Speaker A: The, was it Premier tier? Premium tier. It's an arm and a leg, so be careful what you're doing, and by default it's not cheap.
So, you know, it adds up real fast when you have $2,700 times two running. So it's a really nice feature that they have this, where you can at least set up auto scaling and the workspaces for data isolation, things along those lines. But the APIM service is expensive, so be careful using it.
[00:46:33] Speaker D: I'm trying to get past all the Azure, just all of Azure.
[00:46:37] Speaker A: We're moving on to you.
[00:46:39] Speaker C: Yeah.
[00:46:40] Speaker D: Microsoft's launching GPT Realtime on Azure AI Foundry, a speech-to-speech model that combines voice synthesis improvements into a single API with 20% lower pricing than the preview version, positioning Azure to compete with Google's voice AI capabilities and Amazon's Polly service. I haven't heard about Polly in forever.
The model introduces two new natural voices, Marin and Cedar, enhanced instruction following, and image input support that allows users to discuss visual content through voice without requiring video, expanding beyond traditional text to speech limitations. Pricing starts at $40 per million input tokens and $160 per million output tokens for the standard tier, with function calling capabilities that let developers integrate custom code directly into voice interactions for building conversational AI applications. The enterprise use cases for this include customer service automation, accessibility tools and real time translation services, with the Realtime API enabling developers to build interactive voice applications that process speech input and generate natural responses in a single pass.
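At those quoted rates, the "pricey" comment that follows is easy to quantify. The token counts below are made up for illustration; only the per-million rates come from the announcement.

```python
# Cost sketch at the quoted standard-tier rates: $40 per million input
# tokens, $160 per million output tokens. Token volumes are assumptions.
INPUT_RATE = 40 / 1_000_000     # USD per input token
OUTPUT_RATE = 160 / 1_000_000   # USD per output token

input_tokens = 2_000_000        # assumed monthly voice-session input
output_tokens = 500_000         # assumed monthly synthesized output

cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"${cost:.2f}/month")
```

Even a modest voice workload lands in the hundreds of dollars a month, before any other Foundry charges.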
[00:47:38] Speaker A: It's pricey.
[00:47:40] Speaker D: So now AI can call you.
[00:47:41] Speaker A: Basically it's the same thing we talked about last week with the GPT5 real time here. They're just leveraging it in a platform now.
[00:47:50] Speaker D: Correct.
Azure's Responses API is simplifying building AI agents by handling multi turn conversations, tool orchestration and state management in a single API call, eliminating the need for complex orchestration code that developers typically wrote themselves. The API includes six built in tools: file search for unstructured content, function calling for custom APIs, code interpreter for Python, computer use for UI automation, image generation, and remote MCP server connectivity, allowing agents to decide which tools to use without manual intervention. Early adopters like UiPath are using it for enterprise automation where agents interpret natural language and execute actions across SaaS applications and legacy desktop software.
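The single-call shape described here can be sketched as a request payload. The model name and vector store ID are placeholders; in practice this dict would be sent through the OpenAI or Azure OpenAI Python client as `client.responses.create(**request)`.

```python
import json

# Sketch of a Responses API request where the service orchestrates the
# built-in tools itself. Model and vector store ID are hypothetical.
request = {
    "model": "gpt-4.1",                              # assumed deployment
    "input": "Summarize last quarter's incident reports.",
    "tools": [
        {"type": "file_search",
         "vector_store_ids": ["vs_example123"]},     # hypothetical store
        {"type": "code_interpreter",
         "container": {"type": "auto"}},
    ],
}
print(json.dumps(request, indent=2))
```

The point of the design is that the caller lists the tools once and the model decides per turn whether to search files, run Python, or just answer, with no orchestration loop on the client side.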
[00:48:27] Speaker C: I like these things, but I've been burnt by the O365 graph API so many times that I'm like, this sounds really rational, but it's also probably how they justified that abomination of a service. So like, this seems right, seems cool and I would use it, but I don't trust them.
[00:48:46] Speaker D: Well, you can use the Response API to write your graph.
There you go, you'll be set. That's how I solve the Google Docs problem, right?
Azure is implementing mandatory MFA for all resource management operations starting October 1, 2025, expanding beyond the portal enforcement that completed in March 2025. The phase two enhancement covers Azure CLI, PowerShell, REST APIs, SDKs and infrastructure as code tools, addressing the fact that MFA blocks 99.2% of account compromise attacks. Enforcement uses Azure Policy for gradual rollout and allows global administrators to postpone implementation if needed. Workload identities like managed identities and service principals remain unaffected, maintaining automation capabilities while securing human access. I mean, it went so well the first phase of this; I can't imagine phase two is going to go any better than the first phase did, especially since the use.
[00:49:41] Speaker C: Cases are so much less MFA friendly like the CLI and PowerShell APIs that's going to be.
Those are going to be very disruptive when it when those authors get caught off guard. So it's not easy to add MFA to some of those flows.
[00:49:55] Speaker D: Well, in the first phase, you know, you don't quite realize it. At least when it came out, they announced it was happening, like, oh okay, this makes sense if you're using Microsoft 365, different thing. And then it broke things in the day job and the SaaS application where we were using certain things, and then we had to go retrofit all this stuff and do all these heroics to make it work with our model. And it was just like, oh my God, this was way worse than I ever expected. So yeah, I'm not looking forward to phase two.
[00:50:24] Speaker C: Yeah, this is, I mean this is, this phase would address some of the problems that broke things in phase one, but you know, it is sort of just this weird sort of mix like being able to use the rest APIs instead of some of the older APIs. And I don't know, like MFA is definitely required in this day and age with the, the amount of data compromises that are out there. But man does it suck.
[00:50:47] Speaker D: And our final story for tonight: Azure AI Foundry addresses the challenge of rapidly moving AI agents from prototype to production by providing a unified development experience across VS Code, GitHub and enterprise deployment channels. The platform supports both Microsoft frameworks like Semantic Kernel and AutoGen alongside open source options including LangGraph, LlamaIndex and CrewAI, letting developers use their preferred tools while maintaining enterprise grade capabilities. The platform implements open protocols including MCP and Agent-to-Agent for cross platform agent collaboration, positioning Azure as protocol agnostic compared to more proprietary approaches from competitors. Azure AI Foundry integrates directly with Microsoft 365 and Copilot through the Microsoft 365 Agents SDK, allowing developers to deploy agents to Teams, BizChat and other productivity services where business users already work. The VS Code extension enables local agent development with integrated tracing, evaluation and one click deployment to the Foundry Agent Service, while the unified model inference API allows model swapping without code changes. Built-in observability, continuous evaluation through CI/CD integration, and enterprise guardrails for identity, networking and compliance are integrated into the development workflow. And you know, this is great if you're looking at doing agentic AI, but one thing that strikes me about this is how far behind Amazon is in agentic.
Because Google's had this for a while and now Microsoft's got it, and where are you?
[00:52:11] Speaker C: Yeah, I mean the Agent to Agent protocol is developed by Google.
[00:52:14] Speaker D: Yeah.
[00:52:15] Speaker C: And you know, open sourced. So it's like Absolutely.
And yeah, there's a little bit of agentic capabilities in Bedrock, but.
[00:52:23] Speaker A: Not they're just not there yet and.
[00:52:26] Speaker C: I wouldn't even know where to be.
[00:52:27] Speaker A: How to build it.
[00:52:30] Speaker C: Yeah. Of how to start on Bedrock just.
[00:52:31] Speaker D: Because it's not doesn't really seem it's not as clean.
And their branding is confused. I think as well, the Q versus Nova versus Bedrock, you know, there's just too much complexity there in general. But I'm glad to see this AI foundry. I think it's cool.
But I do, you know, Azure is looking a little bit confusing with foundries and factories, which, you know, I think. I think a foundry is just the same thing as a factory just for metals. But, you know, here we are, we're trying to make them different things. So, yeah, I don't know.
[00:53:04] Speaker C: And, you know, I hope to use this maybe, you know, I'm trying to figure out what. Because I don't really understand why you'd want a unified experience across VS Code and GitHub. But, you know, I guess using editing software among those tools, similar.
[00:53:20] Speaker D: We'll see.
[00:53:21] Speaker C: Hopefully. Hopefully it works out.
[00:53:23] Speaker D: All right, gentlemen, it's been another fantastic week here in the Cloud. I'll see you all next week.
[00:53:29] Speaker A: All right, bye, everyone.
[00:53:30] Speaker C: Bye, everybody.
[00:53:34] Speaker B: And that's all for this week in Cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services.
While you're at it, head over to our website at thecloudpod.net, where you can subscribe to our newsletter, join our Slack community, send us your feedback, and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.