[00:00:00] Speaker A: Welcome to the Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP and Azure.
[00:00:14] Speaker B: We are your hosts, Justin, Jonathan, Ryan and Matthew.
[00:00:18] Speaker C: Episode 331 recorded for November 18, 2025.
Claude gets a $30 billion Azure wardrobe and two new best friends.
Good evening, Jonathan, Ryan and Matt. How you guys doing?
[00:00:32] Speaker A: Good. How's it going?
[00:00:36] Speaker D: It's always nice when it's a full house.
[00:00:38] Speaker C: It is nice when it's a full house. We were a little lonely last week when Matt and I tried to do Azure predictions, which went poorly, as you can imagine. But I will tell you guys right now, I have to bail out in about 30 minutes, so you guys are on your own after that. But I want to make sure I can start out with you guys, and then I have obligations. And we had to record a previous episode, which is in a future episode, because again, we're confusing ourselves and timelines. But it makes sense to all of us. But here's where we're at.
[00:01:04] Speaker B: So.
[00:01:05] Speaker C: Well, first of all, I guess let's just jump into ignite. So the keynote was today. Matt, you watched it earlier and you took down my GitHub and I was very cranky with you because GitHub had an outage. So yeah, we do knock that off. This time we don't need the hand draw.
[00:01:19] Speaker D: That was the way I decided one of my predictions, because Azure DevOps had an outage.
[00:01:24] Speaker B: Right? Oh, there you go.
[00:01:26] Speaker C: But then they of course didn't announce anything.
[00:01:27] Speaker D: Yeah, of course. Why not?
[00:01:31] Speaker C: So you gave yourself a half point. Now, I didn't judge this. I let you judge it because you know Azure best.
So you missed on an ACM competitor and you missed on Azure DevOps. But you said AI announcement and security AI agent, and you said presumably Copilot for Sentinel.
[00:01:47] Speaker D: Did I say Copilot for Sentinel? That's the part I don't remember. If I said that, then I did not get it.
[00:01:54] Speaker C: I think you used it as a clarification point, but I don't think your prediction was that it would only be Copilot presented. It was just an example for clarification, but I do not think it was a limiting item on that.
[00:02:07] Speaker D: Yeah, as I think about it more and more, I was thinking of Azure Defender, which is their CSPM solution; it integrates with GitHub in order to be able to.
Was it the GitHub security plugin or whatever they call it, Security Plus? You know, it's their automatic SCA tool and static code analysis tool, but it integrates with that in order to auto-create your tickets and let your security team kind of set targets for everyone. And it seems like a nice tool, but the more I think about it, the less I'm giving myself half a point. So I think I will cross this off as I think out loud here.
[00:02:48] Speaker C: I mean, I'll also keep a half point because it doesn't really matter. Yeah. So I predicted incorrectly a price reduction on OpenAI or a significant prompt caching improvement, and I missed again on a foundational LLM from Microsoft to compete with OpenAI. I'm still expecting it's going to come, I just don't know when. But I did nail Cobalt 200, which is the new generation of their custom silicon. This is a partnership with the Arm Neoverse Compute Subsystem V3. The Cobalt 200 exemplifies Microsoft's vision of a converged AI data center where general purpose CPUs and specialized accelerators work side by side with bespoke networking, storage and security offload. And so with that, that's a point. So I will take it and I'm very happy about that. And then Jonathan, who wasn't actually here, but then sent in his predictions halfway through. Oh yeah, thank you. Nice.
Matt's in charge of the sound now, at my behest.
So he's, he's learning all the tricks of how to make the noises happen.
[00:03:47] Speaker D: Just trying to see if that would make you deaf too.
[00:03:49] Speaker C: Probably trying to blow me out with the, with the horn at some point. I'm expecting it at any moment, but I'll encourage him.
Jonathan, he wasn't here, so he sent his predictions in ahead of the show, and we just read them out in the middle of the show, and he got none of them. So good job, Jonathan. Doing quite well.
[00:04:06] Speaker A: Microsoft's lost, not mine.
[00:04:09] Speaker C: Yeah, but, but it doesn't matter. But you did win the tiebreaker for how many times they'd say Copilot. You said 45 times. And we're having a little bit of a challenge with YouTube, where it's either 46 times or 71 times, depending on if you have a subscription to Gemini or not, because it looks like it only indexed the first hour of the transcript. But regardless, even if it's 46, which was Matt's number, or my number of 71, you win with 45. So congratulations on the tiebreaker that you didn't need, but still, congratulations on that. So I will take this win and I will cherish it greatly. This is the only thing I ever want to win with Azure.
Thank you. Thank you very much.
[00:04:52] Speaker D: No problem. I'm done now.
You should not have showed me how to do this.
[00:04:59] Speaker C: This is why we don't give you a login very often.
[00:05:01] Speaker B: Yeah.
[00:05:04] Speaker D: All right.
Security personnel, they call.
[00:05:07] Speaker C: That's right.
[00:05:09] Speaker B: They don't give me the credentials at all because they know it'd be so much worse.
[00:05:12] Speaker C: I mean, we have given them to you, you just lost them, as a good security person would do. So this morning I woke up to find out that Cloudflare had a massive outage, and they were so upset about their outage they've already released the RCA. So yeah, they messed that one up and owned it pretty quickly, but apparently this is their worst outage since 2019, lasting approximately three hours and affecting core traffic routing across their entire global network. The incident was apparently triggered by a database permissions change that caused a bot management feature file to double in size, exceeding hard-coded limits in their proxy software and causing system panics that resulted in 500 errors for their customers. The root cause demonstrates a cascading failure pattern where a ClickHouse database query began returning duplicate column metadata after the permission change, doubling the feature file from around 60 features to over 200, which exceeded the pre-allocated memory limit of 200 features in their Rust-based FL2 proxy code. The team initially suspected a DDoS attack due to fluctuating symptoms caused by the bad configuration file being regenerated every five minutes as the database cluster was gradually updated. The outage impacted multiple Cloudflare services, including their CDN, Workers KV (which is a key-value store), Access, and even their own dashboard login through Turnstile dependencies.
Customers on the older FL proxy engine did not see errors but received incorrect bot scores of 0, potentially causing false positives for those using bot-blocking rules. And Cloudflare plans to treat internal configuration files with the same validation rigor as user input, including more global kill switches for features and preventing error reporting systems from consuming excessive resources during incidents.
They acknowledge this is unacceptable given their position in the Internet ecosystem and committed to architectural improvements to prevent similar failures. Now, definitely a bad outage. I appreciate that they owned it, owned it hard, and that was nice, especially considering they were front page news, versus Amazon, who, you know, just a few weeks ago had front page news. You know, they did do a good RCA as well, but I don't feel like they really owned their importance to the Internet ecosystem that they actually have.
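To make the failure mode concrete, here's a minimal Python sketch (not Cloudflare's actual Rust code) of the pattern the RCA describes: a proxy pre-allocates room for a fixed number of bot management features, and duplicated query results push the generated file past that limit.

```python
# Hypothetical sketch of the failure pattern described in the RCA, not real Cloudflare code.
FEATURE_LIMIT = 200  # pre-allocated capacity baked into the proxy

def load_feature_file(rows):
    """Build the in-memory feature table; errors out past the hard limit."""
    features = []
    for name, value in rows:
        features.append((name, value))
        if len(features) > FEATURE_LIMIT:
            # In the real incident this was an unhandled error path -> 5xx responses.
            raise RuntimeError("feature file exceeds pre-allocated limit")
    return features

healthy = [(f"feature_{i}", 0.5) for i in range(60)]  # ~60 features: fine
duplicated = healthy * 4                              # duplicate metadata inflates the file past 200

load_feature_file(healthy)                            # works
try:
    load_feature_file(duplicated)                     # blows past the limit
except RuntimeError as exc:
    print(f"panic: {exc}")                            # analogous to the proxy panic / 500s
```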
[00:07:08] Speaker D: Could have been like Microsoft, that just didn't let you make changes for a week and a half after theirs.
[00:07:13] Speaker C: I mean you still can't make changes, can you? It's still slow.
[00:07:18] Speaker D: Yeah, it takes 45 minutes to do a front door update, don't worry about that.
[00:07:20] Speaker C: Do an invalidation, right?
[00:07:22] Speaker D: Yeah, yeah, it's great. I love it.
[00:07:24] Speaker B: Not with my attention span like no change.
[00:07:27] Speaker C: Yeah.
So great.
Thanks very much. Yeah, ChatGPT decided to rain on Ignite's parade by releasing GPT, well, actually, I guess they're still their partner. They released GPT-5.1 in their API platform with adaptive reasoning that dynamically adjusts thinking time based on task complexity, resulting in 2-3x faster performance on simple tasks while maintaining frontier intelligence. The model includes a new no-reasoning mode that delivers 1% better low-latency tool-calling performance compared to GPT-5 with minimal reasoning, making it suitable for latency-sensitive applications, while supporting web search and improved parallel tool calling. I mean, Ryan's no reasoning all the time. That makes all the sense.
We have a GPT version of Ryan now which is great.
GPT-5.1 also introduces extended prompt caching with 24-hour retention, up from minutes, maintaining the existing 90% cost reduction for cached tokens with no additional storage charges. Early adopters report the model uses approximately half the tokens of competitors at similar quality levels.
They also have two new developer tools, including the Responses API apply_patch tool for structured code editing using diffs without JSON escaping, and a shell tool that allows the model to propose and execute command line operations in controlled plan-execute loops.
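Here's a hedged sketch of what wiring those tools up might look like with the OpenAI Python SDK's Responses API. The tool type strings ("apply_patch", "shell") follow the announcement wording and may differ in the shipped docs, so treat them as assumptions.

```python
# Hedged sketch: enabling the new structured-editing and shell tools on a Responses API call.
# Tool type names follow the announcement and may differ in the final SDK/docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5.1",
    input="Fix the off-by-one bug in utils/pagination.py and run the tests.",
    tools=[
        {"type": "apply_patch"},  # model proposes structured diffs instead of JSON-escaped blobs
        {"type": "shell"},        # model proposes shell commands your harness runs and reports back
    ],
)

# Tool calls come back as output items; your plan-execute loop applies the patch or runs
# the command, then feeds the result into a follow-up request.
for item in response.output:
    print(item.type)
```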
OpenAI is also releasing specialized GPT-5.1 Codex and GPT-5.1 Codex Mini models for their Codex programming tool, which I actually tried to use for the first time really seriously, and I fucking hate it. Just flat out don't like it; it's so limited in what it can do from a tools perspective.
It couldn't even do a git commit or run a Python test on the command line because it was sandboxed, and so the only choice was you either had to figure out how to fix the sandboxing, which you can't do because it's not allowed to change its own configuration, for safety reasons, which I appreciate. But it's also not a very easy process to go through, and it's not something intuitive like I see in Claude Code and others. So congrats. Glad to see you, GPT-5.1, and guess what? It's available in Azure as well, so you're welcome.
[00:09:28] Speaker B: Yeah, I could play with it. I mean I didn't really like GPT5 so I don't have high Expectations, but it is, you know, as these things enhance, you know, I found that, you know, using different models for different use cases and have some advantages. So maybe I'll find a sweet spot or something.
That's actually pretty impressive though for 24 hours if that's built in.
[00:09:50] Speaker D: Like that's sweet.
[00:09:52] Speaker C: That's pretty sweet.
[00:09:53] Speaker A: Well, it's just a cost-benefit thing, isn't it? They figured that the cost of swallowing the storage cost is probably less than the cost of having to recompute, you know, the attention across all these queries that people are sending in.
[00:10:08] Speaker D: But the.
[00:10:09] Speaker A: I think something interesting that hasn't really been reported on terribly much is a new tokenizer that they built for GPT-5, which is why it's so efficient. You know, the vocabulary for LLMs is fixed before training, essentially. And so I suspect for GPT-5 they've kind of extended the vocabulary so that there are more tokens, but more tokens means you can capture more of the essence of what the user is saying in their prompt with fewer individual tokens overall. So I don't know if that maps exactly to the actual compute cost, but it's a great sales thing to do because now you're using fewer tokens. But they'll probably just up the price anyway, because the prices were based on compute, not on tokens. So it's like.
[00:10:56] Speaker D: Guarantee you they're still making money even with that, even with that being cached. They're charging you, you know, more than the cache is costing them.
[00:11:06] Speaker B: Yeah, I'm not sure, like it's not really a profitable business yet, is it?
[00:11:10] Speaker A: I think inference is a profitable business, you know, but I don't, I don't think building the models is profitable.
[00:11:19] Speaker C: I mean, it's profitable though. All right, moving on to our next story. OpenAI is apparently piloting group chat functionality in ChatGPT, starting with users in Japan, New Zealand, South Korea and Taiwan across all subscription tiers. The feature allows up to 20 people to collaborate in a shared conversation with ChatGPT, with responses powered by GPT-5.1 Auto, which selects the optimal model based on the prompt and the user's subscription level. ChatGPT has been trained with new social behaviors for group context, including deciding when to respond or stay quiet based on conversational flow, reacting with emojis, and referencing profile photos for personalized image generation. Users can mention ChatGPT explicitly to trigger a response, and custom instructions can be set per group chat to control tone and personality.
Privacy controls separate group chats from personal conversations, with personal ChatGPT memory not shared or used in the group context. The feature includes safeguards for users under 18, automatically reducing sensitive content exposure for all group members when a minor is present. And parents can disable group chats entirely through parental controls, providing additional oversight for their younger children who are using ChatGPT subscriptions.
[00:12:20] Speaker A: Cool.
[00:12:21] Speaker C: Rate limits apply only to ChatGPT responses, not user-to-user messages, and count against the subscription tier of the person ChatGPT is responding to.
[00:12:30] Speaker A: That's interesting. I'd rather actually have group chats enabled if kids are going to use it because at least you have witnesses to the conversation.
[00:12:38] Speaker B: Yeah, fair point.
[00:12:40] Speaker A: Yeah, that's an interesting feature. You can only imagine where that's going to go.
[00:12:44] Speaker C: I was thinking like can I get it added to my Slack channel and can it be, you know, just live there? That would be cool. Yeah, yeah, I'm sure it's coming.
[00:12:51] Speaker A: Discord, Slack, Teams, corporate management of the system prompt so it can perform certain functions in groups. Yeah, that's a neat feature.
[00:13:00] Speaker D: So that's also Copilot really suck.
[00:13:03] Speaker A: I'm surprised they launched it in those regions and in the way they did, rather than it being a business feature first. I guess they want to test it out on people who really don't care if it works properly before they start selling it, perhaps.
[00:13:17] Speaker C: I mean you're saying, are you saying that Kiwis don't care if it works properly? Is that what you're saying? Is, is that a dig on New Zealand? I don't know.
No.
[00:13:29] Speaker B: For our New Zealand listeners, you can email Jonathan at the Cloud Pod with your complaints.
[00:13:36] Speaker A: No, I think a business is much more likely to ask for a refund if it doesn't do what they want than an individual, who may be more tolerant.
[00:13:44] Speaker C: Well, it sounds like they really want this to be a chat for all kinds of different use cases. So if you limit it to business, the context they would get would just be, you know, talking about the sales that I'm trying to do, or this issue I'm trying to fix, or this order we need to ship out to a client. So it's a much smaller scale, versus, you know, you have a bunch of kids in a chat room talking about their homework, or you have a group of friends trying to plan a trip. You get much more context and ability to retrain your models. So I imagine that's why they went with consumer users. And then by choosing those regions, you get different languages in Japan and South Korea and Taiwan, which are typically double-byte character languages in their chats. Maybe they want to do something there. And then with New Zealand you get enough English, but a small user population, so you don't have to worry about everyone overrunning the feature. Yeah, that's my, that's my take.
[00:14:32] Speaker A: My mind's just going off into all these cool use cases now. Like, I really want a ChatGPT based, you know, game master for a D and D game or something like that.
[00:14:38] Speaker C: You know, that'd be cool.
[00:14:40] Speaker B: It would be cool. I'm glad it's not rolling out to business because then it would just be an audience to my terrible prompts. So like, really? He doesn't know how to do that.
[00:14:50] Speaker A: Geez.
[00:14:53] Speaker D: So Copilot, I don't know if it was released today or what it was, but they definitely were talking about on the keynote was they have Copilot now in teams. So if you're stuck in teams and you're using Copilot, there is a way that you can like talk directly to Copilot in teams. And they were talking about their whole workflow and everything along those lines that they can, that you can start to.
[00:15:12] Speaker C: When you talk, when you tell me that you're going to build a workflow as part of your Teams thing, that's how you're going to make that work, that just tells me that feature is.
[00:15:18] Speaker D: Not ideal. It's you as a person's workflow, not a workflow process.
[00:15:25] Speaker B: I've been very underwhelmed with Copilot and its use in Office 365 products. I tried to use it earlier today, like, rationalize the formatting of this document. It's like, well, you didn't upload a document, so I can't do that. And I'm like, you're in Word, next to the document, right?
[00:15:44] Speaker D: You're in the tool.
[00:15:47] Speaker B: It's like, okay.
[00:15:48] Speaker C: That was sort of, that was sort of the problem I was having with Codex. So it was like, well, this code is broken. You need to do this change. I'm like, yeah, change the code. And it's like, well, I can't do that. I'm like, yes, you can. He's like, I can?
Yeah, you can.
Like, what is, what is happening? I don't understand.
[00:16:05] Speaker A: I think Gemini is just as bad in Google Docs as Copilot is, right?
[00:16:11] Speaker C: Oh, it's not great. Yeah, no, I'm still trying to use Gemini and Google Docs as much as possible.
Speaking of Gemini, nice segue. Nice. They have launched Gemini 3 Pro in preview across their product suite, including the Gemini app, AI Studio, Vertex AI, and the new AI Mode in Search with generative UI capabilities.
The model achieves a 1501 Elo score on the LMArena leaderboard and demonstrates 91.9% on GPQA Diamond, with a 1 million token context window for processing multimodal inputs including text, images, video, audio and code. Gemini 3 Deep Think mode offers enhanced reasoning performance, scoring 41% on Humanity's Last Exam and 45.1% on ARC-AGI-2 with code execution. Google is providing early access to safety testers before rolling out to Google AI Ultra subscribers in the coming weeks, following comprehensive safety evaluations per their Frontier Safety Framework. Google also introduced Antigravity, a new agentic development platform that integrates Gemini 3 Pro with Gemini 2.5 Computer Use for browser control and Gemini 2.5 Image for editing. The platform enables autonomous agent workflows with direct access to the editor, terminal and browser, scoring 54.2% on Terminal-Bench 2.0 and 76.2% on SWE-bench Verified for coding agent capabilities.
The model shows improved long-horizon planning by topping the Vending-Bench 2 leaderboard, and I don't know what these benchmarks are, so I'm just gonna stop talking at this point. But cool. Gemini 3. Jonathan and I live in the Twittersphere sometimes, and so I'm sure he's also seen people saying Gemini 3 is the end of everybody and it's gonna do all the AGI things.
And that was what everyone was saying all weekend long, that Gemini 3 was the end of all things that we know. And then I saw it today and I was like, we'll see if it's that much more impressive. But glad to see it's rolled out, and they got to take some steam away from GPT-5.1.
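If you want to poke at the preview yourself, here's a minimal sketch using the google-genai Python SDK. The model ID string is an assumption based on the preview naming and may differ between AI Studio and Vertex AI, so check what's actually enabled for your project.

```python
# Minimal sketch: calling the Gemini 3 Pro preview through the google-genai SDK.
# The model ID below is an assumption; verify the exact identifier in AI Studio / Vertex AI.
from google import genai

client = genai.Client()  # uses GEMINI_API_KEY (or Vertex AI project config) from the environment

resp = client.models.generate_content(
    model="gemini-3-pro-preview",
    contents="Summarize the trade-offs of a 1M-token context window in two sentences.",
)
print(resp.text)
```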
[00:17:56] Speaker B: I welcome our new robot overlords, I think, is the statement that you have to make every time there's a new model.
I look forward to trying this. I found my initial Attempts with Gemini 2.5 did not go well, but I found sort of a sweet spot in using it for planning and documentation that was sort of, you know, the large context window allowed just a little bit more flexibility in creating like architecture documentation and whole project based sort of definitions and then using Claude to actually do the execution because it's still much better at coding than any other model that I've used.
So cool. I look forward to using this yeah.
[00:18:37] Speaker C: I'm excited to see if it's better in like agent spaces and things where if sort of got annoyed with it and its limitations. So excited to see as it rolls out here hopefully in a few weeks in the rest of the Google ecosystem.
Well, Azure broke up with OpenAI, and so now they've committed to Anthropic. $30 billion, in fact. Right. Well, actually, Anthropic committed $30 billion to Azure compute and up to 1 gigawatt of additional capacity, making this one of the largest cloud commitments in AI history.
And this positions Azure as Anthropic's primary scaling platform for Claude models, which I don't actually think is the case, but yeah, that's how the AI read it. Nvidia and Anthropic are establishing their first deep technology partnership focused on co-design and engineering optimization. And Anthropic will optimize Claude models for Nvidia Grace Blackwell and Vera Rubin systems, while Nvidia will fine-tune architectures specifically for Anthropic workloads to improve performance, efficiency, and your total cost of ownership. The Claude models include Sonnet 4.5, Opus 4.1 and Haiku 4.5, all already available through Microsoft Foundry on Azure, making Claude the only frontier model accessible across all three major cloud platforms: AWS, Azure and GCP. This also addresses the issue where all of the GitHub Copilot work was going to AWS, so I assume that'll get switched over as well. And Microsoft is committing to maintaining Claude integration across its entire Copilot family, including GitHub Copilot, Microsoft 365 Copilot and Copilot Studio. And Nvidia and Microsoft are investing up to 10 billion and 5 billion respectively in Anthropic as part of this partnership. So we pay you 30, you give us 10 and 5. So there you go. Half of it was covered by the companies they just bought it from.
[00:20:12] Speaker B: Wow. I'm surprised because when, like I think it was two weeks ago they announced, you know, the breakup officially, I was always thinking that OpenAI was the sort of originator there, the one that's leaving the relationship. But if Microsoft can open up and get $30 billion, like, you know, maybe, maybe it's the other way around, which would be a very interesting switch.
[00:20:34] Speaker D: I know that a lot of people were complaining that Claude was not on Azure, and the fact that it's in GitHub, it is what most developers, and maybe I'm making a general statement here, but most people I know, it's their default, you know, AI coding tool. So I feel like with it being the main one with GitHub, and GitHub Copilot being so prevalent and it already being integrated with Copilot, it doesn't surprise me that this was kind of Microsoft's primary next step, you know, getting it onto their platform. So they also get to stop paying AWS for running the models for them.
[00:21:21] Speaker B: Yeah. Hopefully this means that my premium request budget gets some relaxation, which would be nice.
[00:21:28] Speaker D: No.
[00:21:30] Speaker A: I really wonder what Anthropic's plan is, what they're working on in the background, because they've just taken a huge amount of capacity from AWS in their new data center in Northern Indiana, and now another 30 billion in Azure compute. I guess they're still building models every day and they're still providing support for customers, but that's a lot of money flying around.
I'd love to know what they're going to do with it. Like, what's the plan? It's not practical or reasonable to keep building bigger and bigger LLMs. It just doesn't scale cost effectively or performance wise. So I'm really excited to see what's going to be coming next year.
Something we didn't put in the news here is that Yann LeCun is leaving Meta, and that's a huge thing as well. Should have mentioned that.
[00:22:21] Speaker C: Yeah, I can't keep track of all of the AI people getting paid bajillions of dollars moving between companies to make more bajillions of dollars in stock equity. So it's just hard for me to keep track of all of them. So yes, I saw it and I was like, that's a cool thing, but.
[00:22:35] Speaker B: I'm, but I don't.
[00:22:36] Speaker C: Yeah, okay, congrats. You're going to make a bunch of billions of dollars somewhere else.
[00:22:39] Speaker B: Yeah.
[00:22:42] Speaker C: All right, let's move on to cloud tools.
Ingress NGINX, one of the most popular Kubernetes ingress controllers, which has powered billions of requests worldwide, is apparently being retired as of March 2026 due to unsustainable maintenance burden and mounting technical debt. The project has struggled for years with only one or two volunteer maintainers working after hours, and despite its widespread use in hosted platforms and enterprise clusters, efforts to find additional support have failed.
The retirement stems from security concerns around features that were once considered flexible but are now viewed as vulnerabilities, particularly the snippets annotations that allowed arbitrary NGINX configuration. The Kubernetes Security Response Committee and SIG Network team exhausted all options to make the project sustainable before making this difficult decision to prioritize user safety over continuing an under-maintained critical infrastructure component. Users should immediately begin migrating to Gateway API, the modern replacement for Ingress that addresses many of the architectural issues that plagued Ingress NGINX. Existing deployments will continue to function and installation artifacts remain available, but after March 2026 there will be zero security patches, bug fixes or updates of any kind. Alternative ingress controllers are plentiful and listed in the Kubernetes documentation, including cloud provider specific options and vendor supported solutions. Users can check if they are affected by running a simple kubectl command to look for pods with the Ingress NGINX selector across all their namespaces; there's a sketch of that check below. The retirement highlights a critical open source sustainability problem where massively popular infrastructure projects can fail despite widespread adoption.
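For the affected-cluster check mentioned above, here's a sketch using the official Kubernetes Python client rather than raw kubectl. The label selector is the one the project conventionally applies, so verify it against how your install was deployed.

```python
# Sketch: find ingress-nginx pods across all namespaces (the check mentioned above).
# The label selector assumes the project's usual app.kubernetes.io/name label;
# adjust it if your install uses custom labels or a Helm release name.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
v1 = client.CoreV1Api()

pods = v1.list_pod_for_all_namespaces(
    label_selector="app.kubernetes.io/name=ingress-nginx"
)
for pod in pods.items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}")

if pods.items:
    print("ingress-nginx found: plan a Gateway API (or other controller) migration before March 2026.")
```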
Actually surprised nginx didn't want to pick this up. It seems like an obvious move for F5 to pick up and own and maintain the Ingress NGINX controller, but what do I know?
[00:24:14] Speaker A: Well, they only care for people who pay them money and if this is presumably based on the open source version they're not getting paid for, then I don't think they have much interest in working on it.
[00:24:24] Speaker C: I mean that's why you take it over, then you figure out how to basically make it happen over time.
[00:24:24] Speaker B: There's also an F5 BIG-IP ingress controller for Kubernetes, so it's a competitor. Makes some sense then, I guess.
[00:24:39] Speaker C: So yeah, if you're using the Ingress NGINX controller time to get to work because March is not that far away.
So definitely you're not going to want to run an unsecured front door to your infrastructure, especially on Kubernetes.
[00:24:52] Speaker A: Three months isn't much time to deprecate something as critical as your ingress controller. So, okay, thanks.
[00:24:58] Speaker C: That's my thought. I was like wow, that's not great.
Luckily AI can help you rewrite your kubectl configs, I guess. But yeah. Well, during the Cloudflare outage this morning, they also announced that they're acquiring Replicate, bringing its 50,000-plus model catalog and fine-tuning capabilities to Workers AI. This consolidates model discovery, deployment and inference into a single platform backed by Cloudflare's global network. The acquisition addresses the operational complexity of running AI models by combining Replicate's Cog containerization tool with Cloudflare's serverless infrastructure. Developers can now deploy custom models and fine-tunes without managing GPU hardware or dependencies. Existing Replicate APIs will continue to function without interruption while gaining Cloudflare's networking performance, and Workers AI users get access to proprietary models like GPT-5 and Claude Sonnet through Replicate's unified API, alongside open source options. The integration extends beyond inference to include AI Gateway for observability and cost analytics, plus native connections to Cloudflare's data stack, including R2 storage and the Vectorize database. Replicate community features for sharing models, publishing fine-tunes and experimentation remain central to the platform, and the acquisition positions Cloudflare to compete more directly with hyperscaler AI offerings by combining model variety with edge deployment.
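Since the announcement says existing Replicate client code keeps working, here's a minimal sketch with the replicate Python client. The model slug is just an example from the public catalog; substitute whatever model you actually use.

```python
# Sketch: existing Replicate API usage keeps working post-acquisition.
# Requires REPLICATE_API_TOKEN in the environment; the model slug is only an example.
import replicate

output = replicate.run(
    "black-forest-labs/flux-schnell",                   # example model slug from the public catalog
    input={"prompt": "a podcast studio floating in the clouds"},
)
print(output)
```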
[00:26:08] Speaker A: That's exciting. Actually, I think AI at the edge is something I thought about predicting for aws, but I didn't think they'd have the compute to do it.
[00:26:15] Speaker D: So.
[00:26:17] Speaker A: At least Replicate sort of farms out the the request to people who do have capacity. So that's, that's neat.
[00:26:25] Speaker B: Yeah, I think Cloudflare has been doing kind of amazing things at the edge, which is kind of neat. You know, we've had serverless and functions for a while and you know, definitely options out there that provide, you know, much better performance. It's kind of neat. They're well positioned to do that.
[00:26:41] Speaker A: Yep.
[00:26:42] Speaker D: Cool.
I feel like the one I would guess that on would be next year's Google conference with Android.
Whenever there are like more Android specific conferences that's what I would predict like the next generation of the phones to have more AI.
[00:26:55] Speaker C: Well, I mean, I think on-device AI is different than edge AI. The problem with on-device AI is how do you update it frequently, and so edge gets you kind of the middle ground of, okay, well, now I can use the edge for more up-to-date models for things that are new. So if you can only update the model on the phone every year or six months or monthly, you don't have options to provide some types of updates.
KubeCon has just wrapped up, marking the industry shift from cloud native to AI native, with the CNCF launching the Kubernetes AI Conformance program to standardize how AI and ML workloads run across clouds and hardware accelerators like GPUs and TPUs. A live demo showed dynamic resource allocation making accelerators first-class citizens in Kubernetes, signaling that AI standardization is now a community priority. This doesn't surprise me at all, because the only way to actually make Kubernetes work is to use AI. So yeah, makes sense that they would adopt it. Harness showcased agentic AI capabilities that transform traditional CI/CD pipelines into intelligent, adaptive systems that learn and optimize delivery automatically.
Their booth demonstrated 17 integrated products spanning CI/CD, IDP, IaCM, security testing and FinOps, with a particular emphasis on AI-powered pipeline creation and visual workflow design. Security apparently emerged as a critical theme, with demonstrations of zero-CVE malware attacks that bypass traditional security vulnerability scanners by compromising the build chain itself, and the solution path involves supply chain attestation using SLSA, policy-as-code enforcement, and artifact signing with Sigstore, which Harness demonstrated as native capabilities in their platform. Apple introduced Apple Containerization, a framework running Linux containers directly on macOS using lightweight micro VMs that boot a minimal Linux kernel in under a second. Thank God. And the conference emphasized that AI-native infrastructure requires intelligent scheduling, deeper observability and verified agent identity using SPIFFE and SPIRE, with multiple sessions showing practical adoption at scale for companies like Yahoo managing 8,000 nodes and Spotify handling a million infrastructure resources.
Kind of a boring kubecon to be honest.
[00:28:56] Speaker B: Yeah, I was very underwhelmed by a lot of the announcements and so it's like kind of like I was expecting more for some reason I don't really.
[00:29:04] Speaker C: Know because everyone's moved on from Kubernetes as the hotness. Now it's all AI. So like, you know, what are people working on in the AI space? I guess.
[00:29:15] Speaker D: I guess.
[00:29:17] Speaker C: All right, well I think it's a good segue for me to drop off guys, but I will turn it over to you all to finish up with aws, but I'll catch you guys after Thanksgiving. Have a great one and see you guys on our side.
[00:29:28] Speaker A: All right, have fun.
[00:29:29] Speaker B: Have fun.
[00:29:30] Speaker C: Thanks.
[00:29:31] Speaker A: Okay, AWS Lambda enhances event processing with provisioned mode for SQS event source mappings, providing 3 times faster scaling and 16 times higher concurrency, up to 20,000 concurrent executions, compared to the standard polling mode. This addresses customer demands for better control over event processing during traffic spikes, particularly for financial services and gaming companies requiring sub-second latency. The new provisioned mode uses dedicated event pollers that customers can configure with minimum and maximum values, where each poller handles up to 1 megabyte per second of throughput, 10 concurrent invokes, or 10 SQS API calls per second. Setting a minimum number of pollers maintains baseline capacity for immediate response to traffic surges, while the maximum prevents downstream system overload.
Pricing is based on event poller units charged for the number of pollers provisioned and their duration with a minimum of two event pollers required per event source mapping and each EPU supports up to 1 megabyte a second throughput capacity, though AWS has not published specific per EPU pricing on the announcement.
The feature is now available in all commercial regions and can be configured through the AWS console, CLI or SDKs.
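Here's a boto3 sketch of what turning this on might look like. The ProvisionedPollerConfig shape mirrors what Lambda already uses for Kafka event source mappings, so treat the exact field names as an assumption until you've checked the updated API docs.

```python
# Hedged sketch: enabling provisioned mode on an existing SQS event source mapping.
# ProvisionedPollerConfig mirrors the shape used for Kafka ESMs; verify field names
# against the updated Lambda API documentation before relying on this.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_event_source_mapping(
    UUID="your-event-source-mapping-uuid",  # placeholder: the ESM to update
    ProvisionedPollerConfig={
        "MinimumPollers": 2,   # baseline capacity held warm for spikes (minimum of 2 per ESM)
        "MaximumPollers": 50,  # cap to avoid overwhelming downstream systems
    },
)
```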
[00:30:41] Speaker B: Where was this like five years ago, when we were maintaining a logging platform from hell? This would have been very nice, because that was a big performance bottleneck. So I imagine that this will definitely help out people that have very high traffic and temperamental queues.
[00:31:00] Speaker A: So yeah, yeah, this would have been great. And I kind of wonder if they hadn't built this before because they didn't expect people to route millions of log events a day through Lambda. I think they discover the ways customers are going to use their products during their life cycles, and it probably changes the trajectory they had imagined.
Yeah, that's.
That would be. I still wouldn't use Lambda again I think if I had a redo but if I had to this, this would be a really nice feature to have.
[00:31:30] Speaker D: Yeah I just remember Concurrent Lambdas are ridiculously expensive too.
Adds up real fast.
So you have to be very careful.
[00:31:40] Speaker B: With that compared to compute, I mean because that's always the like it's com compared to Lambda. It's expensive but compared to like having compute workloads that are just sitting there burning money like that's always. That's always the cost comparison I try to do.
[00:31:55] Speaker C: But.
[00:31:57] Speaker D: Yeah, I mean general compute overhead and everything else. But still I feel like I've definitely seen people get burned real fast with concurrent Lambda's running uncontrolled.
[00:32:08] Speaker A: Concurrent Lambdas running for sure.
[00:32:10] Speaker C: Correct.
[00:32:10] Speaker A: Oh yeah, things. Things that run away.
[00:32:12] Speaker B: Yeah, everyone's got to get burned by lambda runaways at some point, right?
[00:32:17] Speaker D: I have caused an infinite loop with Lambdas and SNS fanning out and fanning out and fanning out and fanning out before.
[00:32:24] Speaker B: Yep, same here.
[00:32:26] Speaker D: Definitely been there, done that.
[00:32:31] Speaker B: Amazon EventBridge introduces an enhanced visual rule builder. EventBridge launches a new rule builder that integrates the Schema Registry with a drag-and-drop canvas, allowing developers to discover and subscribe to events from over 200 AWS services and custom applications without referencing individual service documentation. The schema-aware interface helps reduce syntax errors while creating event filter patterns and rules.
The enhanced builder includes a comprehensive event catalog with readily available sample payloads and schemas for your copypasta needs and it eliminates the need to hunt through all that documentation for event structures. It's a common pain point where developers previously had to manually locate, ask AI and understand event formats for different AWS services.
This is available in all regions where Schema Registry has already been launched at no additional cost beyond the standard EventBridge usage charges.
The visual builder particularly benefits teams building complex event-driven applications that need to filter and route events from multiple sources, by providing schema validation upfront, and it helps catch configuration errors before deployment rather than during runtime.
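Under the hood, the canvas is just helping you author the same JSON event patterns you could also create with boto3. Here's a small sketch of the kind of schema-validated filter it produces; the rule name and pattern contents are placeholders.

```python
# Sketch: the kind of event pattern the visual rule builder helps you author,
# created here with boto3. Rule name and pattern contents are placeholders.
import json
import boto3

events = boto3.client("events")

event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["stopped", "terminated"]},
}

events.put_rule(
    Name="notify-on-instance-shutdown",       # placeholder rule name
    EventPattern=json.dumps(event_pattern),   # the part the schema-aware canvas validates up front
    State="ENABLED",
)
```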
Chickens.
[00:33:43] Speaker D: Wow.
I mean, I definitely back in the day had lots of fun with EventBridge and trying to make sure I got the schemas right for everything, you know, when you're trying to trigger one thing from another. So not having to deal with that mess is exponentially better.
You know, at this point though, I feel like I would just tell AI to tell me what the schema was and solve the problem that way.
[00:34:06] Speaker B: Yeah, my most publicly painful GitHub contribution, or open source contribution actually, was I got burned and broke a whole bunch of stuff because I didn't understand the schema differences between the AWS notification service, SNS, and SQS. They were just slightly different in terms of format, and so I basically broke half the people on this very widely used notification.
[00:34:30] Speaker D: Hold on, time to go to GitHub and find this.
[00:34:33] Speaker B: No, that's all right, we don't need to do that.
[00:34:37] Speaker D: Application Load Balancers support client credentials flow with JWT verification. ALBs now support JWT token verification natively at the load balancer, eliminating all that fun application code that you've had to develop for backend applications. It offloads OAuth token validation, including signature verification, expiration and claims validation, directly to the load balancer, therefore reducing a lot of extra cost that you have. The feature supports client credentials flows and other OAuth 2.0 flows, making it particularly useful for machine-to-machine and service-to-service authentication scenarios. Organizations can now centralize token verification at the edge rather than having to implement it in all your backend services.
It is immediately available for all ALBs with no additional cost beyond the standard load balancing.
The implementation reads JWTs from the request headers and validates them against configured JSON Web Key Set (JWKS) endpoints, supporting integrations with identity providers like Auth0, Okta, and of course AWS Cognito.
Failed validation results in a configurable HTTP response code before reaching the backend targets.
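For a sense of what's being offloaded, here's a sketch of the validation work the ALB now performs before requests ever reach your targets, written with PyJWT purely for illustration. The issuer, audience and JWKS URL are placeholders for whatever identity provider you configure on the listener.

```python
# Illustration only: the JWT validation the ALB now does at the edge, sketched with PyJWT.
# Issuer, audience and JWKS URL are placeholders for your configured identity provider.
import jwt
from jwt import PyJWKClient

JWKS_URL = "https://example-tenant.auth0.com/.well-known/jwks.json"  # placeholder JWKS endpoint

def validate(token: str) -> dict:
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="https://api.example.com",          # expected audience claim (placeholder)
        issuer="https://example-tenant.auth0.com/",  # expected issuer claim (placeholder)
        options={"require": ["exp", "iss", "aud"]},  # expiration and claims checks
    )

# Failed validation raises (e.g. jwt.ExpiredSignatureError); with the new feature the ALB
# instead returns your configured HTTP status code before the request hits a target.
```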
[00:35:56] Speaker A: Yeah, I can see this would be super useful, especially if you use Lambda as the target for the ALB, because, you know, downloading the latest signing keys, especially if they've been rotated periodically like they're supposed to be, and verifying the token just adds so much extra time. Whereas if you can keep all that cached on the load balancer, it's going to save a lot of compute and make things a little more performant. I think it's probably most useful for machine-to-machine use cases where, you know, once you've proven yourself as the machine, you're always going to be the machine. It's not like there's a different user sitting in front of the machine logging into the same browser like it was a real person at the same time. Slightly concerned, of course, that a misconfiguration in the load balancer just lets everybody right through, and if you're not also performing some kind of validation or verification of the user's token in the app, I'd be slightly concerned about that risk. But you know, these are just things we take into consideration when we design the architecture, I guess.
[00:36:58] Speaker B: I mean, I like this just because I primarily work with internal services, and so, you know, I've been using IAM-based authentication at the load balancer for a while, so that's easy enough.
But then the minute you have to integrate with something external like an identity provider or something outside of that, the AWS ecosystem, you're sort of like, oh, then I have to create all this stuff. And now, now I wouldn't have to. Which is nice.
[00:37:22] Speaker A: Yeah, maybe. Maybe this is kind of a sign that Cognito is not getting the popularity they wanted.
Effectively you could re-spin this announcement as, you know, Auth0 and Okta are now first-class citizens when it comes to authentication through API Gateway and ALB.
[00:37:40] Speaker D: And good news, Cognito is supported at day one.
[00:37:50] Speaker A: All right, onto GCP. How Protective ReRoute improves network resilience, from the Google Cloud blog. Google Cloud's Protective ReRoute (PRR) shifts network failure recovery from centralized routers to distributed endpoints, allowing hosts to detect packet loss and immediately reroute traffic to alternate paths. This host-based approach has reduced inter-datacenter outages from slow network convergence by up to 84% since deployment five years ago, with recovery times measured in single-digit multiples of round trip time rather than seconds or minutes.
PRR works by having hosts continuously monitor path health using TCP retransmission timeouts, then modifying IPv6 flow label headers to signal the network to use alternate paths when failures occur.
Google contributed this IPv6 flow label modification mechanism to Linux kernel version 4.20 and later, making it available as open source for the broader community.
The feature is particularly critical for AI and ML training workloads, where even brief network interruptions can cause expensive job failures and restarts costing millions in compute time. Now, I think that's perhaps an exaggeration, because you checkpoint things regularly when you're doing big AI jobs. And, you know, if you weren't architected for failure in the first place, you're not using the cloud right. But I guess we'll let that slide, Google. It is a sales pitch after all.
Large scale distributed Training across multiple GPUs and TPUs requires this ultra reliable data distribution that PRR provides to prevent communication pattern disruptions.
You can use the feature in two modes: hypervisor mode, which automatically protects cross-data-center traffic without guest OS changes, or guest mode for the fastest recovery, requiring Linux kernel 4.20 and above.
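Here's a conceptual Python sketch of the host-side decision loop PRR describes: count retransmission timeouts on a path, then pick a new IPv6 flow label so the network hashes the flow onto a different path. This is purely illustrative; it doesn't touch the real kernel mechanism, which lives in the Linux TCP stack.

```python
# Conceptual sketch of the PRR idea, not the kernel implementation: after consecutive
# retransmission timeouts, the sender changes the IPv6 flow label (used in path hashing),
# so the flow lands on an alternate network path.
import random

RTO_THRESHOLD = 2  # consecutive timeouts before rerouting (illustrative value)

class Flow:
    def __init__(self):
        self.flow_label = random.getrandbits(20)  # the IPv6 flow label field is 20 bits
        self.consecutive_rtos = 0

    def on_ack(self):
        self.consecutive_rtos = 0  # path looks healthy again

    def on_retransmission_timeout(self):
        self.consecutive_rtos += 1
        if self.consecutive_rtos >= RTO_THRESHOLD:
            self.flow_label = random.getrandbits(20)  # signal the fabric to hash onto a new path
            self.consecutive_rtos = 0
            print(f"rerouting: new flow label {self.flow_label:#07x}")

flow = Flow()
flow.on_retransmission_timeout()
flow.on_retransmission_timeout()  # second RTO in a row triggers the reroute
```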
[00:39:37] Speaker B: I was trying to think like how I would even implement something like this, like in guest mode, because it kind of breaks my head. Seems pretty cool, and I'm sure from an underlying technology standpoint, at like, you know, the infrastructure level, the Google network, it sounds pretty neat. But also the coordination of that failover seems very complex, and I would worry, I know if this idiot, you know, enabled it, I would have a very difficult time going through and figuring out why stuff wasn't behaving as I expected, or that kind of thing. But pretty cool.
[00:40:11] Speaker A: Yeah, the self-healing network. It sounds like something Oracle should have published.
[00:40:17] Speaker B: Yeah, Oracle said they had it four years ago.
[00:40:19] Speaker D: Was it the unbreakable Linux?
The unbreakable network, yep.
Introducing the Emerging Threat Center in Google Security Operations. Google Security Operations launches the Emerging Threat Center, a Gemini-powered detection engineering system that automatically generates security rules when threat campaigns emerge, drawing from Google Threat Intelligence, Mandiant and VirusTotal. The system addresses a key pain point where 59% of security leaders, which feels like a low number for a key pain point, report difficulties deriving actionable intelligence from threat data, typically requiring days or weeks of manual work to assess the organization's exposure. The platform provides two critical capabilities for security teams during a major threat event: it automatically searches the previous 12 months of security telemetry data for campaign-related indicators of compromise and detection rule matches, while also confirming active protection through campaign-specific detections. This eliminates the manual cross-referencing that traditionally occurs when zero-day vulnerabilities emerge. Under the hood, the system uses agentic workflows where Gemini ingests threat intelligence from Mandiant and Google's global visibility, generates synthetic event data mimicking adversary attacks, tests existing detection rules for coverage gaps, and automatically drafts new rules when gaps are found. A human security analyst is the final approval, because you still want a human in the loop before deployment, transforming detection engineering from a best-effort manual process to a systematic automated workflow, which still could have issues, but we're going to bypass that point. The Emerging Threat Center is available today for licensed Google Security Operations customers, though specific pricing details were not disclosed in the announcement, which means expensive, expensive, expensive, very expensive. Organizations with high-volume security operations are already using this and have found it to be very useful.
All I know is something dynamically ingesting 12 months of telemetry data without publicly releasing the price. I just thank God I'm not the CFO of that company. Because if you're talking telemetry data, you're talking WAF rules, load balancer logs. Like if you're really ingesting all that, that's like terabytes of data, if not petabytes depending on your traffic flow that it's going to just decide to ingest on the fly. That's terrifying to me.
[00:42:57] Speaker B: So it's not deciding to ingest on the fly. It's taking what it's ingesting and automatically putting detections in place that generate sort of cases in the SIEM.
[00:43:07] Speaker A: I see this very much as like a CrowdStrike type AI solution for, for Google Cloud in a way. They say you're looking at the data, you're, you're, you're identifying emerging threats which is what CrowdStrike's sales point really is and then implementing controls to help quench that.
[00:43:25] Speaker B: So full disclosure, I've been playing with.
[00:43:27] Speaker A: This.
[00:43:29] Speaker B: So I know exactly what this is.
[00:43:31] Speaker D: Enlighten us sir.
[00:43:33] Speaker B: Yeah, so I mean, it is. So with their Mandiant acquisition, they have their own sort of threat intelligence feeds, as well as, you know, they run across all of Google Cloud and have a lot of internally built trending and stuff that they have there. So it's a little bit of the CrowdStrike in terms of, like, CrowdStrike can react to, you know, its own feeds, and then will analyze, like, the signatures of running processes, and the agent can automatically halt a running process based off of that. This is much more of, because Google Security Operations is the same, that's all it is, is a giant data lake and the ability to put rules in and manage sort of security incidents and cases and investigations through Google Security Operations. So this is, it is basically automated monitoring of all of those ingest sources.
And so typically you would have a SOC that would go through and it would look at all these things and it would study threat intelligence and then it would generate its own detections for automatically detection of security incidents and things. So sees there and then can action cases that come from that. This is just automating that first part of the SOC process which is pretty cool.
[00:44:42] Speaker A: So it's not automating any kind of remediation or action, it's just literally opening a ticket for you.
[00:44:48] Speaker B: Yeah, I mean, so technically one can automate through case execution. You can automate remediation through these. I would be very surprised to see AI providing that in their detections and case management. I imagine it would be much more automated detection, very much not automated remediation.
Not without a human in a loop.
[00:45:14] Speaker D: But it's talking about ingesting, it searches the previous 12 months.
[00:45:19] Speaker B: Yeah, so it's using your previous 12 months of ingestion. It's ingestion of data sources that you're configuring. It's not dynamic.
[00:45:31] Speaker D: Right. But then querying that, you normally pay per query, per gigabyte of processed logs.
[00:45:38] Speaker B: Oh yeah.
[00:45:38] Speaker D: No.
[00:45:38] Speaker B: Google SecOps is not a cheap product. You are not wrong. There it is. That's where I was like incredibly expensive. Yeah.
And, you know, it's getting more expensive over time as they add more functionality as well. But it's also, you know, a pretty good tool for these things. And I'm having fun automating it, because while AI won't do automated remediation, this security guy will, which is fun.
[00:46:08] Speaker A: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one or three year AWS savings plan with a commitment as short as 30 days.
If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask, will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace.
[00:46:48] Speaker B: Introducing Devaru, I hope I didn't butcher that, and two new connectivity hubs. Google is investing in Devaru, a new trans-Indian Ocean subsea cable connecting the Maldives, Christmas Island and Oman, extending the Australia Connect initiative to improve regional connectivity. The cable system aims to support growing AI service demand, Gemini 2.5 Flash and now 3.0 and Vertex AI, by providing resilient infrastructure across the Indian Ocean region.
The announcement includes two new connectivity hubs that will provide three core capabilities: cable switching for automatic traffic rerouting during faults, content caching to reduce latency by storing popular content, and colocation services offering rack space to carriers and local companies. The hubs are positioned to serve Africa, the Middle East, South Asia and Oceania with improved reliability. Google emphasizes the energy efficiency of subsea cables compared to traditional data centers, noting that connectivity hubs require significantly less power since they focus on networking and localized storage rather than compute-intensive AI and cloud workloads. The company is exploring ways to use power demand from these hubs to accelerate local investment in sustainable energy generation in smaller locations.
The hubs will provide strategic benefits by minimizing distance data travels before switching paths and improves resilience and reduces downtime for services across the entire region.
Of course, the investment in infrastructure strengthens local economies while supporting Google's objective of serving content from locations closer to the users and customers.
[00:48:24] Speaker D: So I had to look up what a connectivity hub is, which is literally, I guess, just a small little data center that just kind of handles basic networking and storage and nothing fancy, which is interesting. They're putting the two connectivity hubs, they're dropping these hubs, it sounds like, where all their cables terminate.
So they are able to kind of cache stuff more at each location, which is always interesting.
[00:48:51] Speaker A: Yeah, I guess by adding hubs and cross connecting the hubs, then it gives them more potential paths through the network if there's been outage. And with the increase, perhaps not so much in this area, but the increase in deliberate attacks on subsea cables that we're seeing, especially around Europe.
I think it makes a lot of sense to kind of diversify the paths that your network can take.
[00:49:15] Speaker D: It's interesting they're doing a connectivity hub on Christmas island though.
[00:49:20] Speaker A: I mean it's, it's in a very, very useful place right there in the middle of the ocean.
I think we're just grateful that there's some land there to actually build a hub.
[00:49:28] Speaker B: I say there's not a lot of choice.
[00:49:30] Speaker D: Well, I'm just thinking like power and everything and reliability, you know, I mean.
[00:49:35] Speaker B: It's already a concern in the region.
[00:49:38] Speaker D: Right.
[00:49:38] Speaker B: Like it's, there's not a lot of paths through there because there's not a lot of land. So this is another one. In addition to their existing routes from.
[00:49:45] Speaker D: Australia, I guess subsea cables can only be so long. They have to land in order to re power themselves. Yeah, yeah.
[00:49:54] Speaker A: I mean they are powered cables and they do have, they do have repeaters.
[00:49:57] Speaker D: Along, but they only last for so long.
[00:49:59] Speaker A: Right.
[00:49:59] Speaker D: Like at one point you need a. A big jolt.
[00:50:02] Speaker A: Yeah. I mean, transmission lines. The electronics of transmission lines is way more complex than just making a longer cable.
[00:50:08] Speaker C: Yeah.
[00:50:10] Speaker A: It's physics. Physics likes to fight against you doing things like that. Yeah.
All right.
Infinite scale: the architecture behind the Azure AI superfactory, from the official Microsoft blog. So Microsoft announced their second Fairwater data center in Atlanta, connecting it to the Wisconsin site and existing Azure infrastructure to create a planet-scale AI superfactory. The facility integrates hundreds of thousands of Nvidia GB200 and GB300 GPUs into a unified supercomputer for training frontier AI models, using a flat network architecture. The data center operates at 140 kilowatts a rack, which is just insane. Every time I read numbers like that I think it's just insane. It uses closed-loop liquid cooling that consumes water equivalent to about 20 homes annually, which is a huge saving compared with what they've reported in the past couple of years in their sustainability reports. The systems are designed for six-plus years of operation without replacement, and the two-story building design minimizes cable lengths between GPUs to reduce latency. The site achieves four-nines availability at three-nines cost by using resilient grid power instead of traditional backup systems. Each rack contains up to 72 Nvidia Blackwell GPUs connected via NVLink, with 1.8 TB per second of GPU-to-GPU bandwidth and 14 TB of pooled memory per rack. The facility uses a two-tier Ethernet-based backbone network, backend network, sorry, with 800 gigabits per second of GPU-to-GPU connectivity running on SONiC to avoid vendor lock-in and reduce costs compared to proprietary solutions. The architecture addresses the fact that large training jobs now exceed single-facility power and space constraints by creating fungibility across sites. Customers can segment traffic across scale-up networks within sites and scale-out networks between sites, maximizing utilization of GPUs across the combined system rather than being limited to a single data center.
[00:52:13] Speaker D: It is a big freaking data center with a lot of power in it.
I mean, the liquid cooling was an interesting one. They brought it up during the keynote: they filled it once and it should last for many years without needing any more water. You know, until there's a water leak in a server rack and they have a worse day. But it shows that they are trying to build these data centers, especially as they're becoming more and more power dense and more and more heat and energy intensive.
Like, they need to figure out how to make these things more efficient and more green, because while we can do it in some places, not everywhere in the world is willing to, you know, burn money and resources to support these data centers.
So they really have to make these things become more and more resilient over time, and while they still pull power, at least make a lot of the other things they use a lot less intense.
[00:53:13] Speaker A: Yeah, I mean, Microsoft deployed 12, sorry, 120,000 miles of fiber across the US last year to hook these data centers up to the rest of the network. So they obviously saw the need coming for this type of use case. I mean, we used to talk about how many customers you could fit in a data center, how many racks could this customer use, will this customer use? And now it's how many data centers per customer for some of these large AI.
AI clients.
[00:53:42] Speaker C: Yeah.
[00:53:43] Speaker D: So now, from the keynote today: Azure HorizonDB. Azure HorizonDB for Postgres enters private preview as Microsoft's performance-focused database offering, featuring auto-scaling storage up to 128 TB and compute scaling up to 3,072 vCores. While I understand that is a multiple of two, it still bothers me.
The service claims up to 3 times faster performance. Sorry, not a multiple, divisible by two. The service claims up to 3 times the performance compared to open source Postgres, positioning it as a competitor against Aurora and AlloyDB in the managed Postgres space. The 128 terabyte storage ceiling represents a substantial increase over the existing Azure Postgres offering.
Microsoft appears to be building HorizonDB as a separate service line rather than extending the existing Azure Database for PostgreSQL Flexible Server, suggesting a different architecture or pricing model. Yes, they moved the storage out, just like they did for Hyperscale. I think it's more interesting that they didn't put it in the Hyperscale family like Aurora is with Postgres and MySQL. The storage capacity combined with the high vCore count targets large-scale OLTP and analytics workloads that need both horizontal and vertical scaling options.
It's just interesting to me that they split it out instead of putting it into that Hyperscale family. They already kind of had that same family name; they've bifurcated it out so Hyperscale is dedicated to Microsoft SQL, but I don't think it would have been confusing. It would just be another option, I feel like. Like, in the same family.
[00:55:26] Speaker A: Yeah.
Maybe they don't want to imply that this is Hyperscale like SQL Server is. I don't know. Probably.
[00:55:33] Speaker D: I think that that's.
I think they're still trying to save Hyperscale for the SQL family, because, yeah, they make a lot of money on that licensing still. So if you put it into the other one...
[00:55:46] Speaker A: Yeah, it sounds like they pretty much built what Amazon did with Aurora; separating the storage from the compute lets them scale.
[00:55:55] Speaker D: That's what Hyperscale is too. Yeah, yeah, that's what Hyperscale is too. Okay, so I think they're slowly going to expand. I would say maybe for next year's conference, if we remember one year from now, and I should probably write this down, they're going to do a private preview of MySQL next. But I feel like they have less MySQL on their platform than Postgres. Honestly, they kind of just left MySQL.
[00:56:19] Speaker A: Behind. MySQL is something I really don't hear mentioned as much anymore as I used to. I think with Oracle taking possession of it, and then of course MariaDB coming along as the open source replacement, it just seems to not be the go-to platform like it used to be. You know, in the early web days, PHP applications, things like that, it was always MySQL, MySQL all the time. But I think people are growing up a little bit now. It's, you know, let's use a graph database, or let's use sort of.
[00:56:50] Speaker B: I mean Postgres has become the de facto.
[00:56:52] Speaker D: Yeah, Postgres has a lot of plugins like the PGvector and all those other things that you can add in to make it be that vector database or whatever else that you need it to be.
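A minimal sketch of that "Postgres plus pgvector as your vector database" idea, using the psycopg driver. The connection string, table, and column names are hypothetical, and it assumes the pgvector extension is available on the server.

```python
# Hypothetical example: Postgres as a vector store via the pgvector extension.
import psycopg

with psycopg.connect("postgresql://user:pass@localhost:5432/demo") as conn:
    with conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
        cur.execute("""
            CREATE TABLE IF NOT EXISTS docs (
                id bigserial PRIMARY KEY,
                body text,
                embedding vector(3)  -- tiny dimension purely for illustration
            );
        """)
        cur.execute(
            "INSERT INTO docs (body, embedding) VALUES (%s, %s::vector);",
            ("hello cloud pod", "[0.1, 0.2, 0.3]"),
        )
        # Nearest-neighbour search by L2 distance via pgvector's <-> operator.
        cur.execute(
            "SELECT body FROM docs ORDER BY embedding <-> %s::vector LIMIT 5;",
            ("[0.1, 0.2, 0.25]",),
        )
        print(cur.fetchall())
```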
[00:57:02] Speaker A: Yeah, not cool.
[00:57:04] Speaker B: Now in public preview, Microsoft Defender for Cloud+ GitHub Advanced Security.
Microsoft Defender for Cloud now integrates natively with GitHub Advanced Security, now in public preview, creating a unified security workflow that spans from source code repositories through production cloud environments. The integration allows security teams and developers to work within a single platform rather than just yelling at each other through Jira tickets.
The solution addresses the full application lifecycle security challenge by connecting code-level vulnerability detection with Defender for Cloud's runtime protection capabilities. Organizations using both GitHub and Azure can now correlate security findings from development through deployment, reducing the gap between DevOps and SecOps teams. Hooray.
This preview targets cloud-native application teams who need consistent security policies across their entire CI/CD pipeline and production workloads.
This is particularly relevant for organizations that are already invested in the Microsoft and GitHub ecosystem, as it leverages existing tooling rather than requiring additional third-party solutions.
The announcement doesn't provide very much detail on the pricing structure, though organizations should expect costs to align with existing Defender and GitHub Advanced Security pricing. Cha-ching. Specific regional availability and rollout timelines were not included in the very brief announcement.
[00:58:25] Speaker D: Yeah, this was discussed during the keynote. It seems like it has a lot of potential, but without the pricing, and with Defender for Cloud as a CSPM lacking some features for me from what I've tried to use, it seems like they're going in the right direction. I just don't think they're at the end product yet, if that makes sense.
[00:58:47] Speaker B: Yeah I mean my my experience with Defender is pretty limited.
It's used a lot with like IT workloads but I haven't really used it in in Azure cloud.
[00:58:58] Speaker D: So Microsoft Defender for Cloud is their CSPM. It has plugins for storage, so it will do things like storage scanning, like antivirus checks, secure configurations.
[00:59:13] Speaker B: You know I've definitely used other tools to do you know similar things just.
[00:59:17] Speaker D: No, I don't understand the specific pieces of it. Like, there's Defender for Servers and Defender for Storage and Defender for App Services. I don't understand why I need 17 Defenders that each have a cost associated with them. So it's one of those things that, for $15 a month, I've never cared enough to dive deep into. Yet one of these days when I'm annoyed, I will. But I like that it's integrating with GitHub, where they're trying to, you know, link all those correlations. Hopefully, if you are using ARM or Bicep or Terraform, it will be able to say, hey, your App Services don't have the HTTP to HTTPS redirect set up, maybe we should tie that in and submit the pull request to fix that for you. That's where I kind of would hope to see them go with it. I don't know if that's where it is, because it's still kind of in preview and I don't have those tools to play with.
Public Preview: Smart Tiering, account-level tiering for Azure Blob Storage and ADLS. Azure introduces Smart Tiering for Blob Storage, intelligent tiering one might call it, as AWS did when they released this five years ago, which automatically moves data between hot, cold and archive tiers based on access patterns without manual intervention. This eliminates the need for lifecycle management policies, reducing the operational overhead of storage costs across large datasets. The feature works at the account level rather than requiring per-container or per-blob configurations, making it simpler to deploy.
Smart Tiering monitors blob access patterns, automatically transitions objects to a lower tier where appropriate, then moves them back to the hot tier when they're accessed frequently. This differentiates it from traditional lifecycle policies, which rely on age-based rules and cannot adapt dynamically. The public preview allows customers to test automated tiering without committing production workloads.
This capability, this feature, sorry, targets customers with large amounts of data with variable access patterns, particularly those in analytics, backup, and archival scenarios. The integration with ADLS Gen2 makes it relevant for big data and analytics workloads. Gen2 is their data lake feature, too.
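For context, this is roughly what you do by hand today with the azure-storage-blob SDK, and what account-level smart tiering would automate; the account, container, and blob names below are placeholders.

```python
# Manual tier management today: a rough sketch of the static decisions that
# access-pattern-driven smart tiering is meant to replace. All names below
# are placeholders.
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential="<account-key-or-sas-token>",
)
blob = service.get_blob_client(container="logs", blob="2025/11/18/app.log")

props = blob.get_blob_properties()
print("current tier:", props.blob_tier, "| last modified:", props.last_modified)

# Manual, age/guess-based demotion -- smart tiering would instead move blobs
# down (and back up to hot) based on observed access patterns.
blob.set_standard_blob_tier(StandardBlobTier.COOL)
```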
I mean I'm just still dumbfounded that this wasn't here before.
Like, when I moved to Azure three years ago, I was like, oh, can we just turn this on? My team was like, what, you want to turn on what feature? I was like, there's no automatic way to tier between stuff?
So I was just confused that this wasn't here, and, if I did some real-time lookup, I feel like intelligent tiering was released four or five years ago at this point.
[01:02:10] Speaker A: Yeah, I mean how much slower are the different tiers though? Like is. Is it.
[01:02:14] Speaker D: It's not.
Not that much slower. Okay, so intelligent tiering was released in 2018 so I don't want to do that math but seven years ago at.
[01:02:24] Speaker A: Reinvent. So they've always had the tiering, but now they're providing some, some automation, an easy button for you, to automatically tier based on access patterns.
[01:02:33] Speaker D: Yeah, it's the same thing as intelligent tiering, where you pay less per gig, but you pay more per access, which they already have.
[01:02:40] Speaker A: Then they're not going to sort of move it out to a very cold Tier that's going to take half an hour to get my file back. Are they?
[01:02:46] Speaker D: No, I think, I think that's what archive is. There's two archive tiers, I don't remember offhand. I think there's hot, cold and, like, archive, and then glacier, whatever, if I remember correctly.
I did. I mean, I think you had said not to go down to that level, but this was released, I think, today or yesterday, and I was like, oh God, seriously? I will use this shortly in my day job. My team doesn't know what's coming for them yet.
[01:03:13] Speaker A: Yeah, I'm surprised people didn't just build their own kind of caching layer, like keep it in the cheapest storage and then build their own caching layer in front. And I haven't really looked into this; I don't know if Microsoft charges a fee per transition between tiers like Amazon does or not.
[01:03:30] Speaker D: Yeah, I think they do.
[01:03:32] Speaker A: Yeah.
[01:03:33] Speaker D: It's still, in my brain, always worth it, because, you know, it's always quicker and simpler to do that.
[01:03:39] Speaker A: Yeah. Okay, well, automated cost savings are great as long as they actually save you money. Because I can see how this wouldn't work out so well for some people in some use cases.
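A rough break-even sketch of that tradeoff. Every price below is an invented placeholder purely for illustration; plug in real Azure Blob rates for your region, plus any per-transition fees, before trusting any of it.

```python
# Break-even sketch for "cheaper per GB stored, pricier per access" tiering.
# All prices are hypothetical placeholders, not real Azure rates.

HOT_PER_GB = 0.020       # hypothetical $/GB-month in the hot tier
COOL_PER_GB = 0.010      # hypothetical $/GB-month in the cool tier
COOL_READ_PER_GB = 0.01  # hypothetical extra $/GB charged to read cool data

DATA_GB = 10_000         # how much data sits in the account


def monthly_cost(reads_gb: float) -> tuple[float, float]:
    """Return (hot_cost, cool_cost) for a month with reads_gb of reads."""
    hot = DATA_GB * HOT_PER_GB
    cool = DATA_GB * COOL_PER_GB + reads_gb * COOL_READ_PER_GB
    return hot, cool


for reads in (0, 2_000, 10_000, 20_000):
    hot, cool = monthly_cost(reads)
    winner = "cool" if cool < hot else "hot"
    print(f"{reads:>6} GB read/month -> hot ${hot:,.0f} vs cool ${cool:,.0f} ({winner} wins)")
```

With these made-up numbers, cool storage wins when the data is rarely read and loses once monthly reads approach the size of the dataset, which is exactly the "as long as they actually save you money" caveat.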
[01:03:51] Speaker D: What's interesting about it is the preview doesn't support LRS, which is like a single AZ.
It only supports it for regional.
Regional with backup to a single zone in the DR region, or regional with backup to regional in the DR region. So.
[01:04:10] Speaker A: But yeah, okay, yeah, big data analytics workloads, I guess that makes sense. Certainly AI workloads. Maybe they're just trying to push people to use the cheaper storage so they can put more useful data in the fast storage they've got for AI training.
[01:04:29] Speaker D: Running out, running out of hard drive space to try and force people to move. It's not a bad idea.
[01:04:33] Speaker A: Yeah, possibly. Well, I mean, I think it seems pretty obvious that GPU cost has gone through the roof, SSD cost has gone through the roof, and now the price of hard drives is going up for the same reason, because we just need places to put data to feed model training. And I hadn't realized, until last weekend when I actually started using my shiny new Nvidia GPU for a serious training workload, and I was like, why is this performing so poorly? And it was the bottleneck, the storage bottleneck, that was killing me. I could not feed tokens in fast enough to train the model as fast as the GPU could go.
And I imagine at the scale of training a trillion parameter model, you're going to need an immense amount of storage that is incredibly fast and very close. So it starts to make sense why a lot of these prices are going up for the things they are.
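One common way to keep local storage from starving a GPU during training, along the lines of what's described above, is to overlap data loading with compute. A generic PyTorch DataLoader sketch, with a made-up dataset standing in for real tokenised shards on disk:

```python
# Generic sketch: hide storage latency behind parallel, prefetched loading.
# The dataset here is synthetic; a real pipeline would read shards from disk.
import torch
from torch.utils.data import DataLoader, Dataset

class TokenShards(Dataset):
    """Stand-in dataset; __getitem__ would normally be a disk read."""
    def __init__(self, n_samples: int = 10_000, seq_len: int = 1_024):
        self.n_samples, self.seq_len = n_samples, seq_len

    def __len__(self):
        return self.n_samples

    def __getitem__(self, idx):
        return torch.randint(0, 50_000, (self.seq_len,))

loader = DataLoader(
    TokenShards(),
    batch_size=8,
    num_workers=4,          # parallel reader processes hide storage latency
    pin_memory=True,        # page-locked host memory speeds up host-to-GPU copies
    prefetch_factor=2,      # each worker keeps 2 batches queued ahead
    persistent_workers=True,
)

for batch in loader:
    if torch.cuda.is_available():
        batch = batch.to("cuda", non_blocking=True)
    # ... forward/backward pass would go here ...
    break
```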
[01:05:29] Speaker D: Yeah, I mean, you're just moving the bottleneck, you know, forever. It was, you know, you know, GPUs, and now it's the storage. Because we've solved the GPU problem. Solved in quotes, you know, and you just keep moving whatever the limiting factor is down the line.
[01:05:43] Speaker A: Yep.
Yeah. Eventually it'll become power, I'm sure.
[01:05:48] Speaker D: I think you're already kind of approaching that, but maybe I'm wrong.
[01:05:53] Speaker A: Oh, well, I mean, if they're having to distribute workloads across data centers because one data center doesn't have enough power to supply for the training work, then yeah, I think we're definitely already there.
[01:06:04] Speaker D: I mean, I would assume cloud providers would have preferred to keep it all in one region or one zone. Because, I mean, I'm just thinking, like, government regulations in the area, like everything else along those lines, you already have relationships versus, you know, having to do one in Ohio, one in.
I don't know, Atlanta was just the other one and one then on the west coast, you know, you would think that it would make their life easier, too, and saves them deploying. Why they. What was that last article? 100 million miles of fiber or whatever the number was.
[01:06:32] Speaker A: 120,000 miles of fiber.
[01:06:35] Speaker D: Fiber, yeah, yeah.
[01:06:38] Speaker A: Now, I expect that's kind of an exaggeration, because there's probably, you know, 80 or 100 fibers per bundle, per cable they actually lay. So it's probably not 120,000 miles of cable.
It's 120,000 miles of fiber.
[01:06:55] Speaker D: I mean, I'm just sitting here thinking, like, even, like convincing your local city to let you run a fiber cable across. Across the city into another city. Now do that from, like Atlanta to Ohio.
How many different cities and towns and everything else you have to cross in order to get approval to do that?
[01:07:14] Speaker A: Yeah, I assume they hang most of it from, like, power lines and things like that, where there's already infrastructure that's easy to just bolt an extra wire onto. Like, I'm just frustrated: my house here is only like 15 years old, and I'm like, why don't they put easy cable runs in the wall so I can easily pull a new wire to this room from the garage or something like that? I can't believe they make it so difficult in 2025, or 2010 when the place was built. Like, how not forward thinking is that? But just the planning and the legal work and everything that must go into laying cables between cities for things like this must be just phenomenal.
[01:07:53] Speaker D: Yeah. I have a house that's much, much older than yours. And I'm trying to run a cable and I'm like, how do I go from down to up? Because I'm not in a ranch, I'm in a multi-story. And I'm like, hey, why don't you go up? How do I go up? And the easy answer is outside and back in.
[01:08:10] Speaker B: And I'm like, just get really good at Sheetrock. And then bigger holes.
That's what I've done. I can't be bothered to like fish, tape, wire through anything anymore. I'm just.
[01:08:20] Speaker C: Nope.
[01:08:21] Speaker B: Just tear the wall out.
[01:08:22] Speaker D: Yeah. I don't really want to deal with going up though. And my house is so old, I'm afraid what's in the walls.
[01:08:27] Speaker B: So.
[01:08:29] Speaker A: Yeah, yeah. I'm toying with the idea of just going out through the garage wall, at least up into the attic space externally. That's what I want to do instead of fishing up. And then once I've got across the attic, I can, I can drop it down either on the outside of the house or probably fairly easily on. On the external wall.
[01:08:43] Speaker D: That's exactly what I need to do. And I just, I've. I've gone out through the. I found another cable, so I followed that one out and ran that it back in. But I really should go up into the attic, into the overhang, you know, on the side, and then from there have everything in the attic that I want and then drop down.
[01:09:01] Speaker A: But yeah, it's just.
[01:09:02] Speaker B: And you gotta do conduit and protect it from...
[01:09:04] Speaker A: It makes me want to just design it out. Like, I swear, the next house I move into, I'm gonna sit down with an architect, I'm gonna design it myself with things in the right places, light switches in the right places, and cable runs that I can just pull extra stuff through with a piece of string whenever I want to.
[01:09:21] Speaker B: Telling you, get good at Sheetrock. You could do all this stuff in your current house, but you do have to demo your entire house.
[01:09:26] Speaker A: I should invite you up, Ryan. You can, you can tell me how the hell I'm supposed to work.
[01:09:29] Speaker B: Oh, no, I will not do Sheetrock.
[01:09:31] Speaker A: Oh no, I don't want you to do this.
[01:09:32] Speaker B: I'm just really good.
[01:09:33] Speaker A: I want you to look at it. Look, look at it and sympathize with me. All I want is an Ethernet cable to my front door for a doorbell.
[01:09:41] Speaker D: That's what I want too. But my front door is brick, that's the problem. So I'm like, I don't want to drill through brick. I don't know what's on the other side of the brick either, because it's just, I think it's a pillar, I think it's a supporting beam of the house. So I'm like, oh no, it's probably.
[01:09:55] Speaker A: It's probably an actual termite nest that's holding the entire building up. I wouldn't touch it.
[01:09:59] Speaker D: I mean, it's old enough. It wouldn't surprise me.
[01:10:02] Speaker A: All right, let's do the last story.
Official Microsoft blog post: From Ideas to Deployment, the complete lifecycle of AI on display at Ignite 2025. Sounds interesting.
[01:10:17] Speaker D: He clearly read it in advance.
He said, you clearly do your homework and read it in advance. Yeah.
[01:10:24] Speaker A: Microsoft's Ignite 2025 introduces three intelligence layers for AI development. Work IQ connects Microsoft 365 data and user patterns, Fabric IQ unifies analytical and operational data under a shared business model, and Foundry IQ provides a managed knowledge system routing across multiple data sources. These layers work together to give AI agents business context rather than requiring custom integrations for each data source. I mean, they still are custom integrations for each data source, you're just calling them something else, but okay, marketing. Marketing people. I love to hate them. Microsoft Agent Factory offers a single metered plan for building and deploying agents across Microsoft 365 Copilot and Copilot Studio without upfront licensing requirements.
The program includes access to AI forward-deployed engineers and role-based training, targeting organizations that want to build custom agents but lack internal AI expertise or want to avoid complex provisioning processes. Microsoft Agent 365 provides centralized observability, management, and security for AI agents regardless of whether they were built with Microsoft platforms, open source frameworks, or third-party tools. With IDC projecting 1.3 billion AI agents by 2028, this addresses the governance gap where unmanaged agents become shadow IT, integrating Defender, Entra, Purview, and the Microsoft 365 admin center for agent lifecycle management. And that is actually really cool. This is going to be a huge problem, and I'm glad to see them actually being ahead of the curve on this, because I don't see anybody else taking the proliferation of agents that anybody can build and anybody can run very seriously. So, you know, kudos; as much as I joke around about Microsoft and Azure, they do do some stuff right.
[01:12:10] Speaker D: A big part of their keynote was the security of the AI bots. And, you know, they're expanding security into, I think it was E5, insert some license here that I don't understand nor want to, on the O365 or M365 side. But they also said the term and I was like, yeah, that's where we're going, which is shadow AI. You had shadow IT for years, of people running their own servers and switches and everything else back in the day, and then moving into their own SaaS platforms that they were running on John Smith's credit card and pissing off everyone else. And now you have companies that don't want to buy or spend on AI rather than supplying it, so people say, hey, I may use my homegrown AI that I have, or my Copilot, or my, you know, Claude I have at home for personal stuff. And it was a key section. So the idea of not just shadow IT but shadow AI is going to become a much more prevalent thing, I think, over the next couple of years.
[01:13:09] Speaker A: Yeah.
Work IQ now exposes APIs for developers to build custom agents that leverage the intelligence layer's understanding of user workflows. Now we know why they were collecting screenshots from everybody's Windows 11 PCs for the, for the past 18 months with that useful feature.
Yeah, blah blah blah. This allows organizations to extend Microsoft 365 copilot capabilities into their own applications while maintaining the native integration advantages rather than relying on third party connectors.
The announcements position Microsoft as providing, and I quote this from Microsoft, this is not my personal belief, the announcements position Microsoft as providing end-to-end AI infrastructure from data center to application layer, with particular emphasis on making agent development accessible to frontline workers rather than limiting it to specialized AI teams. No specific pricing details were provided for the new services beyond the mention of metered plans for Agent Factory.
[01:14:06] Speaker D: If it's not mentioned, it's going to be expensive.
[01:14:09] Speaker A: I think really what they want to do is, is, is just detect, detect whether a person has used AI to automate their entire job.
You know, as somebody who does a lot of coding or has done a lot of his coding historically, I, I have to say that AI tools have shrunk the actual time to deliver an internal service or a product or a feature by probably 75%.
[01:14:32] Speaker B: Oh magnitudes.
[01:14:33] Speaker A: Oh yeah, yeah. And so what do you do with your spare time? Do you.
Personal development? Do you go sit on the beach? You know, what's. What do you do? And how. How are businesses going to adapt to that change in a person's workday?
Like, being given this a lot of thought because I think I've got a lot of projects in mind all the time, and I want to work on them, but because with AI code generation and sort of project planning and things like this, because it works so quickly, I'm finding myself actually quite overwhelmed with the list of projects that I could be working on.
And, you know, while I kick off one thing, it's like, well, now I could kick off this other thing and another thing, and it's. It's actually quite.
I find it quite stressful in a way to have these tools that make things go faster now. I need.
[01:15:27] Speaker D: Welcome to being a manager.
[01:15:29] Speaker A: Yeah.
[01:15:30] Speaker B: I've discovered the exact same thing, which is, like, I can context shift, but I can't context shift between major projects and sort of coordinate a whole bunch of agentic coding agents and track all the things; I have to keep that sort of dedicated. But that said, there's other things that you can do. Like, I generate agents for.
Like I generated an agent for providing peer feedback today.
You know, kind of thing, those shortcuts.
[01:15:54] Speaker C: For that type of thing.
[01:15:55] Speaker D: And I'm just picturing something that just swears at the person, this person.
[01:16:00] Speaker B: You've misunderstood. I swear at the bot. It turns it into useful feedback.
[01:16:05] Speaker D: Oh, I know.
[01:16:07] Speaker B: So that's the entire point of the whole thing.
[01:16:10] Speaker A: Ryan as a service.
[01:16:12] Speaker B: Yeah, exactly. No, no, we don't want that.
[01:16:14] Speaker A: Not Ryan as a service.
[01:16:15] Speaker B: Okay. Yeah, not Ryan as a service at the moment. So.
[01:16:18] Speaker D: But it is HR appropriate, right? As a service.
[01:16:21] Speaker A: Yep.
[01:16:22] Speaker B: And on my list is, you know, a bot to manage the ideas. Because I do the same thing. I sit there now while I'm, you know, yelling at AI to do it better, I'm. I'm also developing ideas and thinking about things I should do, and then I lose them. So.
[01:16:35] Speaker A: Yeah, exactly.
That's what I'm struggling with, is I get halfway through discussing or planning or something else, a particular project or feature out. I'm like, oh, actually I could do this.
And I realize that you don't want to now mention that new idea in the middle of a conversation with an AI because that will completely throw it for a loop.
[01:16:55] Speaker D: Oh, yeah.
[01:16:55] Speaker A: And it'll be like, great, let's implement that feature like 10 minutes later.
It's built a whole bunch of stuff you didn't want. So I think, actually, I think controlling the scope of the things that AI coding assistants work on is really important.
[01:17:10] Speaker D: So I'm also having a problem where I'm like, oh, we have this thing now. I want to add this thing and add this thing and add this thing and just keep, like, my brain just keeps going down the line and I don't have time to even finish the core thing first to build everything else off of.
For my day job, I have this, like, one idea that I think will help a lot, but then just expanding, expanding, expanding that onto other things. I'm like, okay, great. But now where do I start and where do I get the 15 minutes, even get that off the ground in order to keep going with it? Which is, you know, a different problem. So I also need to get better at like, having it run stuff in the background because I. I'm so interested. Every time I talk to it, I'm like, ooh, how are you thinking through this? And reading all the outputs and the, the reasoning and everything else? And I should probably not do that.
[01:17:57] Speaker B: Yeah, I still am not comfortable letting it run on unsupervised. Like, and so, like, I'm still reviewing.
[01:18:04] Speaker D: I would review.
[01:18:05] Speaker B: It's gone.
[01:18:06] Speaker D: No, I mean, I don't need to review the logic of how it got there until it completely goes bonkers.
[01:18:11] Speaker B: And then, well, I would say about 50% of the output is code and then 50% is also documentation in terms of, like, architectural decisions, user documentation, you know, like design choices and that kind of thing. Look at. So I'm using it for both of those things. So it is a lot of reading, but it's, you know, project management is a big one that I've added. Right. So it's like being able to describe a thing out before any work starts. Like, you know, generate. I have it generate Jira tickets and Sprint plans and all of those things. And even though I'm going to be the only person sort of doing it, like, it allows, you know, it allows you to sort of, you know, finish tasks and then pick it up in a new chat, you don't have to maintain that. But also learning how to sort of get, you know, the context of the whole project sort of communicated across different development agents is key. But it's very easy for it to misunderstand me and do something crazy. So I don't like going back.
[01:19:05] Speaker A: It is. I played with Claude Code Web, which is.
It's a web UI. They have a secure container infrastructure set up. You can hook your GitHub credentials up to it, and then you basically start a chat in the web browser, but all the work is done in a container somewhere, running in a Docker container or whatever, a secure environment with a sandbox. Yeah, sandbox failures.
[01:19:30] Speaker B: I played a little bit.
[01:19:30] Speaker A: Yeah, I started playing with. Got like a thousand dollars of free credits to. To test drive the thing, which suddenly expires in five hours. So I've got a lot of work to do this evening.
[01:19:40] Speaker D: So we know what you're doing in the next.
[01:19:42] Speaker A: Yeah, but the thing is, I'm like, great. So I moved some of the workflows to there. I've updated some open source stuff that I work on using it, and I built some of the new things out. I spent probably $50 on having it investigate and explore its own sandbox. I had it figure out how the root process works, how the websocket connection works from the website, how it manages memory and CPU usage inside the container to prevent the whole thing from crashing. It's really quite well thought out.
And you know, I want that for us now, like, I want that in our own business environment so that we have much more control over it. But, you know, now I've got like 10 different Claude Code Web sessions going all at the same time.
I'm like, but now I need an AI to manage these things for me and keep its eye on these 10 things and tell me when I need to input some information or make a decision about one of them. I feel like we're just moving ourselves as people kind of further up the stack. We used to be coding, then we went to planning, and we just keep incrementally moving ourselves higher and higher up, and I'm not sure where it ends, really, but.
[01:20:52] Speaker B: Oh, I think we fall off the.
[01:20:53] Speaker D: Top and then we're.
[01:20:54] Speaker B: We're completely irrelevant.
[01:20:55] Speaker D: Yeah, we're batteries.
[01:20:57] Speaker C: That's how it ends.
[01:20:58] Speaker A: Pretty sure that one day the AI will say, actually, why don't you go take a break?
[01:21:02] Speaker B: That's a dumb idea.
[01:21:04] Speaker A: You slow me down.
Yeah, yeah, yeah.
[01:21:09] Speaker B: I mean, it's, you know, it is. I do find myself, like, not doing that, because, you know, I don't trust it. And I still do find unattended coding sessions just to be a little too error prone, and some of that might be, you know, bad prompts on my part. I'm not blaming AI for everything, but, you know, that kind of thing. So I did do the same thing where I realized I had, like, 10 different sessions going on. I was trying to spin all those plates, and all of them just sort of came crashing down in a hard way, where now it's like, okay, no, I'm going to do one at a time.
[01:21:40] Speaker D: I'm going to focus.
[01:21:41] Speaker B: I'm going to, you know, I might, you know, start one and stop one and start another, you know.
[01:21:45] Speaker D: Yeah.
[01:21:45] Speaker B: In the same day. But I definitely do one at a time.
[01:21:49] Speaker A: Yeah, I've been really happy with it. I mean, Claude Code I've used since it's been available, and it's had its ups and downs. Claude Code Web is obsessive about git commits. Every bit of work it does is a new git commit, and you have to go in and approve the PR and merge it to wherever you want it. And so I feel comfortable and safe leaving it to do things, because I know it's not going to walk all over things that can't be recovered easily, or they just don't get merged in, I suppose.
But I should send you, I don't know if I sent you the link to my PM in the box repo.
[01:22:23] Speaker B: Yeah.
And I've used it.
[01:22:25] Speaker D: Yeah.
[01:22:25] Speaker A: But that's, I've sort of evolved that a little bit as well. I have some more, some more work to put in there, but I think something like that is great.
I think the best feature I built into that set of prompts is, it's not the architecture diagram, it's not the narrative. The one most important document I think should feature in every AI coding project plan is the shit that you're not going to build. It's like, yes, I acknowledge that this feature could exist. We are not building it, do not build it, do not plan for it. Put it in the list of things to not do. And I think by providing examples of what not to do, as well as examples of things that you want done, it really helps constrain the output to what you want.
[01:23:09] Speaker B: That's funny, because I've always used the project narrative for exactly that. Like, the facts specifically: I want this, blah, blah, blah, we're not doing that for these reasons, you know. And directly because it's very easy to add scope creep, and it's very easy, when you're getting feedback from others, for them to contribute things and not fully understand sort of the delivery of those things, and you get past an MVP too quickly. And so it is definitely something that's super powerful.
[01:23:35] Speaker D: Yeah.
[01:23:38] Speaker A: All right. And that's the Week in Cloud. Check out our website, the home of the Cloud Pod, where you can join our newsletter and Slack team, send feedback, or ask a [email protected], or tweet us with the hashtag #TheCloudPod.
[01:23:49] Speaker B: See you later guys, everybody.
[01:23:51] Speaker D: Bye everyone.
[01:23:55] Speaker A: And that's all for this week in Cloud. We'd like to thank our sponsor Archera. Be sure to click the link in our show notes to learn more about their services.
While you're at it, head over to our [email protected], where you can subscribe to our newsletter, join our Slack community, send us your feedback, and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.