296: Google Forces AI Protection

Episode 296 March 21, 2025 00:59:26

Show Notes

Welcome to episode 296 of The Cloud Pod – where the forecast is always cloudy! Today is a twofer – Justin and Ryan are in the house to make sure you don’t miss out on any of today’s important cloud and AI news. From AI Protection, to Google Next, to Amazon Q Developer, we’ve got it all, this week on TCP! 

Titles we almost went with this week:

A big thanks to this week’s sponsor:

We’re sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You’ve come to the right place! Send us an email or hit us up on our slack channel for more info. 

General News 

01:02 HashiCorp and Red Hat, better together 

01:48 Justin – “That’s a lot of promise for Ansible there, that I’m not sure it completely lives up to…”

07:09 Justice Department Reiterates Demand to Break Up Google 

08:12 Ryan – “The Chrome browser, if they have to sell it off, it’s going to be just a nightmare for them. They’ve put a lot into Chrome that’s not just browser-based. A lot of their zero trust for BeyondCorp has moved into that, into Chrome Enterprise, and a whole bunch more… that’s gonna sting. But it also speaks to what the DOJ is trying to accomplish, which is: those things are very tied together and you have to use them.”

AI Is Going Great, Or How ML Makes Money 

09:07 Google Is Still Behind in AI. Why?

11:18 Justin – “I think it’s good. Copilot, I feel, is behind in some other areas, but for code completion and scaffolding, I think it’s still doing a pretty good job. But there’s one area where it’s still pretty weak: agentic coding, like being able to give it a prompt and have it write code pieces. That’s why people are doing a lot with Cursor these days, and a lot with Claude CLI and these things where they can do a lot more interesting things. So I suspect that that’s going to have to change this year for GitHub.”

13:35 Google’s AI Unit Reorganizes Product Work, Announces Changes to Gemini App Team 

14:58 New tools for building agents 

16:57 Justin – “You know those Pinterest fails – those memes? I feel like I’ve done that with agentic AIs left and right. Where I’m like, I have this cool idea, or I’ll watch a YouTube video on how to automate this daily task. And then by the time I get through it, I’ve got this three-quarters-of-the-way-created monstrosity of things strung together with string, and it’s never going to run reliably or repeatably.”
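The segment above discusses OpenAI's new Agents SDK and its triage/support/shopping handoff example. As a rough illustration of that handoff pattern in plain Python: the `Agent` dataclass, tool list, and keyword-based router below are hypothetical stand-ins for the sketch, not OpenAI's actual API (where an LLM, not a keyword match, makes the routing decision).

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str
    instructions: str                          # system prompt for this agent
    tools: list[Callable] = field(default_factory=list)
    handoffs: list["Agent"] = field(default_factory=list)

def submit_refund(order_id: str) -> str:
    """Hypothetical tool the support agent is allowed to call."""
    return f"refund issued for {order_id}"

support = Agent("support", "You are a support agent who can submit refunds.",
                tools=[submit_refund])
shopping = Agent("shopping", "You are a shopping assistant who can research the web.")
triage = Agent("triage", "Route the user to the correct agent.",
               handoffs=[support, shopping])

def route(triage_agent: Agent, user_message: str) -> Agent:
    """Stand-in for the model's routing decision: a keyword match here,
    an LLM call in the real SDK."""
    if "refund" in user_message.lower():
        return next(a for a in triage_agent.handoffs if a.name == "support")
    return next(a for a in triage_agent.handoffs if a.name == "shopping")

agent = route(triage, "I want a refund for order 42")
print(agent.name)  # support
```

The point of the pattern is that each agent carries its own instructions and tool allowlist, and the triage agent only decides who handles the request.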

18:11 Microsoft’s Relationship With OpenAI Is Not Looking Good 

19:37 Justin – “Microsoft needs an Office assistant. Those are different needs and potentially different models. And so I think that’s maybe where you’re seeing the divergence of interest: they want to make AGI at OpenAI, and really, that’s not what Microsoft wants. They would like to sell more Office licenses at higher prices, and that helps them with revenue. So they have different goals, perhaps, between the two of them.”

Cloud Tools

20:55 Vault Enterprise 1.19 reduces risk with encryption updates and automated root rotation 

21:24 Justin – “So not quite production ready yet, but they’re getting ready for quantum as well.”

23:24 Terraform migrate now generally available

AWS

25:06 Application Load Balancer announces integration with Amazon VPC IPAM

26:01 Ryan – “That’s cool. I didn’t quite catch that this was contiguous Amazon blocks… You can provide a smaller range without actually having to go through and, you know, sacrifice your firstborn and sell your liver for IP space. Like, that’s pretty rad.”

28:00 Announcing AWS Step Functions Workflow Studio for the VS Code IDE

28:33 Ryan – “I think it was two or three years ago I was an old man yelling at cloud. ‘You can just switch over.’ But now I am so addicted to everything being in my IDE. This is great. I won’t use Workflow Studio to create a whole bunch of Step Functions, but debugging them? Oh yeah. It’s super helpful there. That’s pretty cool. I like it.”

29:12 AWS Lambda adds support for Amazon CloudWatch Logs Live Tail in VS Code IDE

30:26 Amazon Q Developer announces a new CLI agent within the command line

31:10 Ryan – “Well, I mean, it would be nice to be able to natural language query your ginormous AWS infrastructure and have it just figure it out. Right. Like that would be fantastic if they can get there, but I don’t know if it’s there yet.”

31:56 DeepSeek-R1 now available as a fully managed serverless model in Amazon Bedrock 

32:30 Justin – “You’ll be able to tune these and do all kinds of other things as you go in the future, and use RAG, et cetera, with DeepSeek. So if you’re okay with the ramifications (they may have stolen all their data from OpenAI), you can use DeepSeek in your product. Good luck to you.”

33:18 Accelerate AWS Well-Architected reviews with Generative AI  

34:51 Ryan – “This has the potential of being really amazing. I have very mixed feelings about the Well-Architected framework process. I’ve done both the self-serve version many times and even the walkthrough with technical account support, and I always feel like it lacks the ability to find any real problems. Once you get past the regional-distribution and being-able-to-rehydrate-data sorts of problems, it falls down very quickly and doesn’t help solve complex issues that may arise due to conditions. So I’m hoping that introducing AI into this mix might give it the ability to have a lot more context into your deployment as it’s asking you questions.”

39:21 Amazon Bedrock now supports multi-agent collaboration  

39:38 Ryan – “Do you think that supervisor agent just stands around, doesn’t really do anything, and then takes credit for all the other agents’ work?”

GCP

40:51 Google Next is coming up in a few short weeks!

43:08 Meet Kubernetes History Inspector, a log visualization tool for Kubernetes clusters

46:19 Justin – “Because even in ECS, I’ve had this problem before, where I’ve had multiple containers that talk to each other, and then: my God, why do we get this error? If I could see the state, I would have known that the other container crashed, which is why this error occurred in my container that has a dependency on it. So there’s definitely value in this visualization, but it’s not exactly how I would have visualized it. When I was reading through the article I was very excited, and then I saw the screenshots and I was like, huh, it’s not bad, but it’s definitely not how I thought it was going to look.”

47:16 Hej Sverige! Google Cloud launches new region in Sweden

(hey-j sver-ee-geh)

49:04 Announcing AI Protection: Security for the AI era 

50:28 Justin – “It pulls in Model Armor, SDP discovery, AI-related toxic combinations, posture management for AI, threat detection for AI, the notebook security scanner, and data security posture management, all into SCC for this. Yeah. It’s pretty full featured out of the box, which I’m pretty impressed with for a Google product.”

50:54 Introducing tiered storage for Spanner 

51:31 Ryan – “This looks great. You know, the ability to have data stored cold and pay a lower price for it.”

Azure

51:57 What’s new in Azure Elastic SAN

52:55 Ryan – “So if you’re using a shared storage model, running your database in the container… yeah, I don’t know. I mean, these types of things are what I want. If I’m going to have to manage infrastructure at this level, I want it to be auto-scaling and fairly automatic.”

53:30 Microsoft completes landmark EU Data Boundary, offering enhanced data residency and transparency 

54:46 Ryan – “Hopefully it’s not just all duct tape and baling wire in the backend.”

55:04 Azure Load Testing Celebrates Two Years with Two Exciting Announcements! 

57:04 Announcing the Responses API and Computer-Using Agent in Azure AI Foundry

Oracle

57:40 Oracle Announces Fiscal 2025 Third Quarter Financial Results 

Oracle won some big cloud contracts. Here’s why its stock is falling 

Closing

And that is the week in the cloud! Visit our website, the home of The Cloud Pod, where you can join our newsletter or Slack team, send feedback, or ask questions at theCloudPod.net, or tweet at us with the hashtag #theCloudPod

View Full Transcript

Episode Transcript

[00:00:07] Speaker B: Welcome to The Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP and Azure.

[00:00:14] Speaker C: We are your hosts, Justin, Jonathan, Ryan and Matthew.

[00:00:19] Speaker A: Episode 296, recorded for the week of March 11, 2025: Google forces AI to use protection. Good evening, Ryan. How are you doing?

[00:00:28] Speaker C: I'm doing well. Just the two of us.

[00:00:31] Speaker A: I know, it's going to be a fun show. I wish I had a bunch of container stories for you just because you're here, but I don't, unfortunately. I have a lot of AI stories for you, which I know how much you love, but...

[00:00:41] Speaker C: Oh, you know, I mean, I continue to use it more and more, and whether I want to or not, it's going to be forced upon me.

[00:00:50] Speaker A: Yeah. One of our other co-hosts is busy taking his wife on a lovely romantic weekend, and another one is still a little bit under the weather. So we're letting everyone have a great week while Ryan and I hold it down, because we're good at that. All right, well, we talked about HashiCorp being bought by IBM, and then HashiCorp had kind of a blog post from their CEO saying some things they're excited about. And now we have another article on their blog about how they're going to be better together, and they had some more details, so I thought we'd talk about it once again. They talk about the wide range of day-two operations problems, including things like drift detection, image management and patching, right-sizing, and configuration management. And as Red Hat Ansible is a purpose-built operational management platform, it makes it easier to properly configure resources after the initial creation, but also to evolve the configuration after setup and then execute ad hoc playbooks to keep things running reliably.
So if you combine your day-one operations of Terraform with your day-two operations in Ansible, it's a match made in heaven, apparently. Now, that's a lot of promise for Ansible there that I'm not sure it completely lives up to. Like patch management, I was like, that's a... I mean...

[00:02:02] Speaker C: Technically you can write a playbook to do anything.

[00:02:04] Speaker A: Yeah, I mean, batteries not included. Sure, yeah, it can do all these things. They did talk about some of the things they're exploring right now, like Red Hat Ansible inventory being generated dynamically by Terraform, which I think is a nice integration. There are going to be official Terraform modules for Red Hat Ansible, making it easier to trigger Terraform from Ansible playbooks; I like the other direction better, but I don't mind the idea that this exists for certain use cases. And then Red Hat and HashiCorp will officially support the Red Hat Ansible provider for Terraform, making it easier to trigger Ansible from Terraform. Yes, yes, that one I like. They're evolving Terraform provisioners to support a more comprehensive set of lifecycle integrations, which I think is a good move as well. And then improved mechanisms to invoke Ansible playbooks outside of the resource provisioning lifecycle, which maybe is okay, but drift might be a problem.

[00:02:59] Speaker C: No, it's absolutely necessary, because it's really solving the problem of: you don't want to redeploy this server, you just want to update it, and you can't really do that with the current providers, because it's not a change to the resource itself.

[00:03:17] Speaker A: That makes sense. Okay, that use case makes sense to me. I was thinking something else there, which I was sort of like, I don't know. They also talked about how customers are, not surprisingly, regularly integrating Vault and OpenShift for secrets management.
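As a rough illustration of the dynamic-inventory integration mentioned above, here is what generating an Ansible inventory from Terraform state could look like in plain Python. The state layout, the `role` tag, and the grouping scheme are simplified assumptions for this sketch, not HashiCorp's actual implementation.

```python
import json

def terraform_state_to_inventory(state: dict) -> dict:
    """Build an Ansible dynamic-inventory dict from a (simplified)
    Terraform state document. Hypothetical layout: each 'aws_instance'
    resource contributes one host, grouped by a 'role' tag."""
    inventory = {"_meta": {"hostvars": {}}}
    for res in state.get("resources", []):
        if res.get("type") != "aws_instance":
            continue
        for inst in res.get("instances", []):
            attrs = inst["attributes"]
            host = attrs["private_ip"]
            group = attrs.get("tags", {}).get("role", "ungrouped")
            inventory.setdefault(group, {"hosts": []})["hosts"].append(host)
            inventory["_meta"]["hostvars"][host] = {"instance_id": attrs["id"]}
    return inventory

if __name__ == "__main__":
    # Tiny fake state for illustration
    state = {
        "resources": [
            {"type": "aws_instance", "name": "web", "instances": [
                {"attributes": {"id": "i-123", "private_ip": "10.0.0.5",
                                "tags": {"role": "web"}}}]},
        ]
    }
    # Ansible invokes dynamic-inventory scripts with --list and reads JSON
    print(json.dumps(terraform_state_to_inventory(state), indent=2))
```

The appeal of the official integration is that nobody has to maintain glue like this by hand: the inventory stays in sync with whatever Terraform actually provisioned.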
And so they have identified dozens of connection points that can add value to Vault, including a Vault Secrets Operator for OpenShift to make it just native. And there's the data encryption capability, where they can use Vault keys to encrypt your etcd data, which... what could go wrong there, Ryan?

[00:03:46] Speaker C: Nothing at all.

[00:03:47] Speaker A: Right.

[00:03:47] Speaker C: Yeah.

[00:03:49] Speaker A: They said integrations into Argo CD for secrets management could be pretty nice, as well as an automatic Istio certificate issuance capability that can be built on this. So some less exciting moves for Vault there, but I see those being valuable as well if you're into the OpenShift world. But between the Red Hat Ansible and Terraform integration story and then the Vault stuff... yeah, I'm kind of excited for those.

[00:04:12] Speaker C: I mean, the Istio certificate automation... I don't know if it's just the day job, but certificates, you know, just operational stuff, it's so boring, and it's necessary, and it's ever present; it's never going to go away. So anything to make that just sort of automatic, while not just making every certificate expire in the year 2038 or whatever (it's not far enough away anymore). I like that.

[00:04:39] Speaker A: Yeah, they do talk about some other things we talked about last time too, like integration into the FinOps tooling, and integration into Guardium and IBM Z. And those might come too, but those are less exciting for me, I think.

[00:04:51] Speaker C: I think the Ansible integration could be really powerful, just because there are so many pipelines that I've seen across many companies where those two tools are how you manage things. Right?

[00:05:01] Speaker A: I mean, I know I'm not a big Ansible person, because I just don't know enough about it, but I have used Tower enough to know that I'm not wowed by it.

[00:05:09] Speaker C: Yeah.
[00:05:10] Speaker A: And I'm like, if you could take Tower and then combine it with Terraform Cloud, that'd be kind of nice, to have one common user interface for that. So I do hope that happens as part of this: maybe Ansible Tower gets integrated directly into Terraform Cloud Enterprise or the Terraform Cloud consoles, and then these things are tied really closely together. I think that could be really interesting.

[00:05:32] Speaker C: So, I mean, Tower for me was a solution to a problem that I don't think a lot of people had, you know? Ansible by itself is fantastic for just automating what you needed to do. But Tower being able to launch existing playbooks and have RBAC and track the status of that... sure. But there was a reason why it didn't succeed: because it's not going to make money. It's not that much of a need. Yeah.

[00:06:01] Speaker A: So you put it into HCP, and now it doesn't need to make money, but it becomes part of the ecosystem.

[00:06:07] Speaker C: And it just adds value to that tool, because those things do have value; I'm just not going to buy a separate contract. But if it's my orchestration now, as part of that same enterprise UI? Pretty slick. Agreed.

[00:06:21] Speaker A: Well, I look forward to seeing it. They've talked about it twice now, in two different blog posts.

[00:06:26] Speaker C: They're really trying to make us like this.

[00:06:28] Speaker A: Yeah. Show us that you won't screw it up, IBM, by making this awesome. I mean, they haven't really messed up Ansible, and Red Hat is still Red Hat.

[00:06:38] Speaker C: Yeah. I mean, I thought that they would do a lot worse to Red Hat, but, I mean, I know they did a lot of things to CentOS and open source. Sure.

[00:06:45] Speaker A: So yeah, I mean, there's definitely something there that they've screwed up. But CentOS was always kind of a weird thing anyways.
Like, we're going to take Red Hat, make it open source.

[00:06:53] Speaker C: Like, I'm sure they loved that. I mean, I loved it.

[00:06:57] Speaker A: It was great for us. That's why you can now use... what's the one that everyone moved to? Rocky Linux. Rocky Linux is the new CentOS, basically. All right, well, the new administration came in and replaced the head of the Department of Justice, and I'm sure Google was hoping for a break in their antitrust case, but apparently not. The Justice Department reiterated last week that many aspects of its proposed final judgment, including the prohibition of payments to Apple for preferential selection of Google as the search default, and to other companies for a share of search revenue or preferential treatment, will still stand. As does the mandate that they sell their Chrome web browser, which, again, I don't know who they sell it to or how that company survives. Just look at, case in point, Mozilla.

[00:07:42] Speaker C: Which hasn't gone away.

[00:07:44] Speaker A: Hasn't gone away, but is definitely not the company that they once were. They did, however, drop the request that Google be prohibited from making investments in AI companies like Anthropic. And this is definitely a sign that the Justice Department may continue the aggressive antitrust stance started by the Biden administration. So sorry, Google, you're not out of jail free on this one, no matter how many inaugurations Sundar goes to.

[00:08:06] Speaker C: Yeah. The Chrome web browser, if they have to sell that off, it's going to be just a nightmare for them. They've put a lot into Chrome that's not just browser based. You know, a lot of their zero trust for BeyondCorp has moved into that, into Chrome Enterprise, and a whole bunch more...

[00:08:27] Speaker A: A lot of their authorization stuff's in there too.

[00:08:29] Speaker C: Yeah. So, like, oof, that's going to sting. But I mean, that's...
It also speaks to what the DOJ is trying to accomplish, which is: those things are very tied together and you have to use them. Yep.

[00:08:42] Speaker A: And they're basically locking you into a monopolistic situation. So yeah, that'll be interesting to see. I suspect that'll continue through appeal, probably this year, and maybe eventually happen next year. Maybe after they finally sell TikTok, they can get on this one. All right, well, moving on to "AI Is Going Great, Or How ML Makes Money." The Information had an article here about why Google is still behind in AI. Basically, the article goes on about how it's not going well for Google, and it's not going well for Apple, who also had to delay several new features they announced for iOS last year for another year. And they're saying that Google Gemini is basically falling further and further behind OpenAI and even Grok. The example the author used was asking it to go pull a bunch of data from the SEC about different companies and then plot out different scenarios based on certain regulations or financial things that he was interested in. And he was mocking Gemini because it said, you know, we'd get more accurate data if we read all the data. He's like, yeah, that's what I wanted you to do. Basically, they said this is one of those areas where there's increasing disparity and struggle with AI. I think even here on the show, Jonathan and I are big Claude fans. I don't know which AI you're using mostly, Ryan, nowadays, but I've switched to Claude.

[00:10:00] Speaker C: You guys have converted me from Gemini, because I was struggling with it. Yeah.

[00:10:07] Speaker A: In general, I'm just kind of feeling like The Information has their opinion. But what's your opinion of AI these days? How are you feeling about the different models and the different things you've maybe played with out there?
[00:10:17] Speaker C: I feel like it's shifting a lot, so it's hard to keep it all on track, because even your clients are sort of shifting which model you use under the covers a lot of the time, and even if it's listed, it's doing it very casually. So I had switched my client to, you know, the new Anthropic thing, and then I noticed, as I was running questions, that it had switched to the other one behind the scenes. And I kind of felt that with Gemini: I didn't really see the updates when it went from 1.0 to 1.5, and now that it's gone to 2.0, maybe even 2.5, I didn't feel it was that bad, but I didn't feel it was that good. And Anthropic just continues to be a rock solid thing. It doesn't get everything right, but it gets most things right. So between that, and then using Copilot in the day job, which I think does a really great job code-wise...

[00:11:16] Speaker A: I think it's good. Copilot, I feel, is behind in some other areas, but for code completion and scaffolding, I think it's still doing a pretty good job. But there's one area where it's still pretty weak: agentic coding, like being able to give it a prompt and have it write code pieces. That's why people are doing a lot with Cursor these days, and they're doing a lot with Claude CLI and these things where they can do a lot more interesting things. So I suspect that that's going to have to change this year for GitHub, and I expect Gemini is going to have to do something at Google Next here in a few weeks as well; otherwise I think they're both at risk of falling further and further behind.

[00:11:56] Speaker C: Yeah, no, I agree. And, you know, I hate the Copilot branding, because GitHub Copilot for coding works pretty well and I like that.
But a lot of the, like, O365 Copilot things are useless. You ask it to do a prompt on some of those automations, and it will definitely design you this WYSIWYG from hell that doesn't work. Just trying to automate simple things, like joining and leaving Teams chats, and it's just like, these things shouldn't be that hard, but they are, apparently.

[00:12:33] Speaker A: Apparently it is. That's what we're learning. But yeah, it's definitely one of those areas where I'm hoping there are some big jumps in the reasoning and some of the other things. I'm seeing the code quality in Claude get better. I'm seeing it even in DeepSeek; it's not bad for some of the stuff that you're trying to do, and it really depends on the language and what you're trying to make it accomplish. But I'm impressed. And Gemini, I think, is on my list of: yeah, I'm not a big fan of it. I don't use it often enough to really see some of the things that make me say this is the future. I've seen some stuff coming at Next that's going to be really cool. But there's definitely a lot riding on Gemini getting upgraded a lot this next year to become the next big thing. Well, Google apparently has agreed with this article a little bit, as they've disbanded their product impact unit, whatever the hell that means, whose goal was to incorporate DeepMind research into Google products, as it attempted to streamline the process of creating AI products. DeepMind leader Demis Hassabis wrote in an email to employees that the move was designed to optimize and simplify their product work, model development work, and product area engagements. They also announced a change to the Gemini team, which has struggled to compete with OpenAI.
Google has hired former Meta VP of product Chris Strahar to lead product on Gemini, and is adding product teams from Google's more experimental multimodal assistant product, Astra, into the Gemini team. It will also be moving Gemini to use models developed by DeepMind's main post-training teams, rather than the chatbot-specific team, per the memo. So even Google is saying, we're not where we want to be.

[00:14:16] Speaker C: I mean, in my head I have this picture of these very astute researchers, who have been doing AI and neural network research for years, all in lab coats, and they tried to do software product design and it didn't work for some reason.

[00:14:34] Speaker A: So weird. Yeah, there's a reason why a lot of academics, although they have really brilliant ideas, don't move into commercial products: because it's hard to go from academia to commercial monetization. Well, OpenAI is releasing the first set of tools to help developers and enterprises build useful and reliable agents. Over the last year they have introduced new model capabilities, including reasoning, multimodal interactions, and new safety techniques. But customers have complained that turning these features into production-ready agents was challenging, requiring extensive prompt iteration and custom orchestration logic without sufficient visibility or built-in support. To address these challenges, OpenAI is launching a new set of APIs and tools to help build agentic applications. First is the new Responses API, which combines the simplicity of the Chat Completions API with the tool-use capabilities of the Assistants API for building agents. New built-in tools include web search, file search, and computer use, all in the same agent.
And the new Agents SDK, to orchestrate single-agent and multi-agent workflows, lets you quickly orchestrate multiple agents together, with integrated observability tooling to trace and inspect agent workflow execution and see which agent dropped the ball as the work moves through the process. So these are nice. This has definitely been an area where, as I've been getting more and more interested in agentic AI, I'm trying to think about it more: how do I build this thing, how do I make it work? And so I've seen a bunch of different models, and this looks pretty clean, looking at some of the code samples they had about how to create agents through the Agents SDK. You're basically importing the different things you want; it's a little bit of Python code. Then here's a support agent, and you give it the instructions: you're a support agent who can submit refunds. And then there's a shopping agent, a shopping assistant who can research the web. And then there's a triage agent who can route the user to the correct agent, and you have handoff instructions and tools, like the support agent can use this tool. It's well defined and specific, which I like. And so I think it's nice to be able to potentially use this to start thinking about how you string some of these SDK agents together, as well as just simplifying some of the APIs, which I think is nice, because I've seen some of the hoops I've had to jump through (well, I haven't done it, Claude's done it for me). So I've been playing with it and I'm like, oh, that's just ugly. And this kind of fixes some of those problems.

[00:16:50] Speaker C: Yeah, no, I mean, you know those Pinterest fails, those memes?
I feel like I have done that with agentic AIs left and right. Where I'm like, ooh, I have this cool idea, or I'll watch a YouTube video on how to automate this daily task, and then by the time I get through it, I've got this three-quarters-of-the-way-created monstrosity of things strung together with string, and it's never going to run reliably or repeatably. And so these types of things are a great way to make that a lot more simple. You know, built-in web search and file search: those two ring true for me, because I was definitely trying to get into categorizing music and upscaling bit rates and that kind of thing, and by the time I was done it was a disaster. So yeah, I can see using these sort of pre-built libraries and building blocks really making that much easier.

[00:17:56] Speaker A: Agreed. And then the final news in this direction: there's an article here about how Microsoft's relationship with OpenAI is maybe not looking so great, with the latest report that Microsoft is building its own in-house reasoning models to compete directly with OpenAI, and has been testing models from Elon Musk's xAI, Meta, and DeepSeek to replace ChatGPT in Copilot, its AI bot for the workplace. Microsoft Copilot has received a poor reception in enterprises due to the high cost and limited results, and Microsoft has even let OpenAI out of a contract that required it to use Azure for all its hosting needs. We talked about it when they did it. It may make sense in the long run, if both companies continue to see themselves as competitors versus partners, to move further and further away from each other, which would be disruptive, but also maybe unlocks a lot of value. So we'll see.

[00:18:39] Speaker C: I mean, I'm all for more competition in the marketplace, but it does feel odd to me. It feels like this is unnecessary friction that probably shouldn't exist.
And, you know, I guess if the integration isn't easy enough for Microsoft to adopt into their Copilot line of things, that's a very interesting sort of tell. Is OpenAI being stringent?

[00:19:01] Speaker A: I mean, I don't know if it's that it's not easy to integrate, as much as that the results just aren't good enough. I've tried to use Microsoft 365 Copilot in my personal subscription to Microsoft 365, and the results it produces for creating PowerPoint slides or creating documents are just not great. It feels like Clippy in a lot of those use cases. And so I think the reality is that they're not getting what they want from OpenAI, and OpenAI is trying to make a broad general-market product. Microsoft needs an Office assistant. Those are different needs and potentially different models. And so I think that's maybe where you're seeing the divergence of interest: they want to make AGI at OpenAI, and really, that's not what Microsoft wants. They would like to sell more Office licenses at higher prices, and that helps them with revenue. So they have different goals, perhaps, between the two of them. And that's kind of interesting.

[00:19:59] Speaker C: I want agentic reasoning in my email rule filter, so if this makes that happen sooner, awesome.

[00:20:07] Speaker A: We should get Abnormal to be a sponsor of ours, because yeah, that thing is amazing.

[00:20:12] Speaker C: Yeah.

[00:20:12] Speaker A: If you have a spam problem at your company, you should definitely check out Abnormal.

[00:20:16] Speaker C: Abnormal.

[00:20:16] Speaker A: Yeah, because its graymail filtering is amazing. It's amazing.

[00:20:21] Speaker C: And the fact that I can't get it, or am too cheap to pay for it, for my personal stuff...
[00:20:25] Speaker A: I know, I was wondering, how much would this cost me to deploy on, like, the Cloud Pod email server or these others? Because that would be great.

[00:20:34] Speaker C: Yeah. But maybe we should see if we can chip in and go in together.

[00:20:36] Speaker A: Yeah, maybe. Maybe we'll work a side deal here. Perfect. All right, cloud tools for this week. Vault got an upgrade this week, with 1.19 going generally available. Vault has enhanced security workflows, post-quantum computing features, and long-term support in the 1.19 version. First is Module-Lattice-Based Digital Signature Standard, or ML-DSA, post-quantum cryptography support: the Transit secrets engine added support for the ML-DSA PQC sign and verify functionality for experimental purposes. So not quite production ready yet, but they're getting ready for quantum as well. There's Vault Transit engine support for Ed25519 with pre-hashing: the Transit engine now supports Ed25519ph signing, a type of signing capability commonly used in remote and embedded devices. Constrained certificate authorities reduce risk by providing isolation for PKI workloads. There's extended automated root rotation: Vault extended the centralized rotation manager, which now provides a mechanism to automatically rotate root credentials for AWS, Azure and Google Cloud auth methods and secrets engines, along with the LDAP and database plugins, which is a big one. There's additional UI support for workload identity federation for both Google Cloud and Azure. And as 1.16 enters one year of extended support, 1.19 represents Vault Enterprise's second long-term support release. Seal wrapping for all data, for Federal Information Processing Standards (FIPS) compliant hardware security module deployments, is now available as well. So if you're looking for any of that, which is a lot of gobbledygook, you can get it from Vault.
[00:22:12] Speaker C: Yeah, you know, I just remember when we were having to do GPG and PGP encryption and just the mechanisms. Like, I'm so glad tools like this exist. [00:22:24] Speaker A: Well, especially a tool like Vault, because back in the day you had to have GPG and PGP if you wanted to do file-based encryption, and then if you wanted to do, you know, encryption at rest, you had to get something like Vormetric or other vendors to install these heavy agents, and that would only do this one thing, and then, oh, it's secrets now? You've got to get this other thing. So now it's all in Vault, which is really nice. [00:22:43] Speaker C: Yeah, and, God forbid, if you were ever trying to coordinate all those certificates and the signing certificates across multiple nodes, like, now at least you can put it in one place. [00:22:53] Speaker A: Yeah, I mean, unfortunately HashiCorp knew that, and that's why they charge you an arm and a leg for it. [00:22:57] Speaker C: They do. [00:22:58] Speaker A: But, you know, you're getting a lot of technology out of the box with it. So it's one of those necessary evils in some ways. Terraform Migrate is now generally available as well. We talked about this previously when they announced it in preview. This is generally available, making it easy to move from Terraform Community Edition to HCP Terraform and Terraform Enterprise. Designed to reduce manual effort and improve accuracy, it streamlines the migration process, helping teams adopt HCP Terraform and Terraform Enterprise with confidence. Key features include automated state transfer, state refactoring, validation, and verification.
In addition, there are expanded features such as variable management and migration, GitLab integration, security and validation for Git personal access tokens, refined directory skipping, new dry-run mode enhancements, improved target branch renaming, and optimizations for error handling, logging, and debugging, which might be important if the state goes wrong. [00:23:49] Speaker C: Yeah, I suspect this is where we're going to see Red Hat's thumb on the scales first, as I wonder if the Community Edition is long for this world. I think they'll always have some sort of open source component, because I think they've learned from changing the license, and that it'll spin off to be its own thing. But maintaining the sort of two separate, you know, code lines between the Community Edition and Terraform Enterprise and Terraform Cloud, like, I can see them not doing that. But it's interesting. [00:24:29] Speaker B: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one- or three-year AWS savings plan with a commitment as short as 30 days. If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace. [00:25:09] Speaker A: Well, there are two things in the AWS world that might drive you crazy, if you're new to it or you've used it for a while and you have issues.
One is you can't typically get contiguous IP space, and number two was that you weren't able to bring your own IPs. Amazon in the last year, or maybe even two years now, gave us IPAM, which allowed you to both take a block of IP addresses from Amazon, basically buy them from Amazon and say, I want this contiguous block, or bring your own IPs, which is great. And that worked fine for all the servers that you needed to run and put your own IP addresses on, but you were still limited in where you could apply those IP blocks. And so, you know, when you're dealing with customers who don't want to block the entire Amazon subnet from accessing their system with a, you know, with a webhook, this became somewhat of a problem. And so Amazon has now finally allowed us to set IPAM addresses against the ALB. So, thank you. You can now apply a pool of public IPv4 addresses from IPAM directly to the ALB, and that can either be BYOIP or one of those contiguous IPv4 address blocks. [00:26:24] Speaker C: That's cool. I didn't quite catch that this was contiguous Amazon blocks. [00:26:32] Speaker A: You can, yeah, you can either select those or you can bring your own, which is great. So. [00:26:37] Speaker C: Well, I mean, you can provide a smaller range without actually having to go through ICANN and sacrifice your firstborn and sell your liver for IP space. So that's pretty rad. [00:26:50] Speaker A: Yeah, it's a good feature. This one I was hoping they would get when they rolled out the IPAM stuff. When you use public IPs on your EC2 instances, which no one should do, but if you do, um, you know, those can be contiguous, and those kinds of things are great. But yeah, being able to choose a block not purchased by you at extortionate internet prices and just being able to use an Amazon one is quite nice. And that, I think, came out day one for IPAM. It came a little later with the ALB.
Because BYOIP was kind of the initial reason why you'd need this. Yeah, it's definitely gotten some features over the years now that have been nice. It's still a bit clunky to use IPAM, in my experience. [00:27:28] Speaker C: Yeah, I haven't really ever felt comfortable using it. It's a useful tool, but I've never actually seen it fully implemented in a way that really does all the things that I want. [00:27:39] Speaker A: Right. [00:27:39] Speaker C: Like, not just discovery of subnets and networks, but actually, you know, getting down to utilization numbers and all the metrics you need, but not with the amount of operational overhead of keeping it all in sync. So, like, can't it read, you know, the neighbor routing table and just, like, do it? Come on, man. [00:28:02] Speaker A: Trying to see, like, what size of IP addresses do you need, like, how small of a block can you get from Amazon? [00:28:10] Speaker C: I don't know, just trying to pull it up quickly. [00:28:14] Speaker A: But anyways, it's nice to have. AWS Step Functions Workflow Studio is now coming to your AWS Toolkit for Visual Studio Code, which basically means you can now visually create, edit, and debug state machine workflows directly from Visual Studio Code. Thank God. AWS Step Functions, for those of you who are unfamiliar, is a visual workflow service capable of orchestrating over 14,000 API actions from over 220 AWS services to build distributed applications and data processing workflows. Workflow Studio is a visual builder that allows you to compose workflows on a canvas while generating workflow definitions in the background. And if you're using a lot of Lambda functions and serverless, this is a great way to do a lot of scale-out-type work inside of a Step Functions state machine. And being able to do it all in my VS Code IDE was something I wanted many, many years ago.
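For context on what Workflow Studio is generating behind that visual canvas, here's a minimal sketch of an Amazon States Language definition. The state name and Lambda ARN are made-up placeholders, not real resources:

```python
import json

# Minimal sketch of an Amazon States Language (ASL) definition, the JSON
# that Workflow Studio generates behind its visual canvas. The state name
# and Lambda ARN below are placeholders for illustration only.
definition = {
    "Comment": "One Lambda task, then done",
    "StartAt": "ProcessItem",
    "States": {
        "ProcessItem": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-item",
            "Retry": [
                {"ErrorEquals": ["States.TaskFailed"], "IntervalSeconds": 2, "MaxAttempts": 2}
            ],
            "End": True,
        }
    },
}

# Serialize it the way you'd paste it into a state machine definition.
asl_json = json.dumps(definition, indent=2)
print(asl_json)
```

Editing these by hand is exactly the pain the visual builder removes; the JSON only grows from here as you add Choice, Map, and Parallel states.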
[00:29:00] Speaker C: Yeah, I think it was two or three years ago I was an old man yelling at cloud, like, you can just switch over. But now I am so addicted to everything being in my IDE. This is great. I won't use Studio to create a whole bunch of Step Functions, but debugging them? Oh yeah, it's super helpful there. And so that's pretty cool. I like it. [00:29:24] Speaker A: And I'm going to move a story up a little, because I realized they should be together. AWS Lambda is also now supporting Amazon CloudWatch Logs Live Tail in the VS Code IDE through the AWS Toolkit as well. Live Tail is an interactive log streaming and analytics capability which provides real-time visibility into logs, making it easier to develop and troubleshoot Lambda functions, which is a great feature as well. So combined with the Step Functions thing, your troubleshooting and IDE stuff can be all right there in a simple Visual Studio window. [00:29:50] Speaker C: Yeah, I mean, this has been one of the barriers to, you know, using serverless functions: it's a little clunky to test, and as you're developing, there's a lot of context shifting as you're going from task to task. And I really like this, because not only can you execute directly and debug directly in your IDE, but you can also see the execution logs on the Amazon side just right there. I don't know how everyone else develops Lambda functions, but I always, you know, create and publish, and then run, and then go scan through the thing, and so this is going to save me a ton of time. [00:30:32] Speaker A: Yeah, I agree. Which is. Yeah, that's rare. Rare that we agree with an Amazon announcement so cleanly. [00:30:40] Speaker C: Yeah.
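To make that debugging loop concrete, here's a toy Lambda-style handler invoked locally, the way the IDE integration lets you do before tailing the real logs. The event shape and function body are invented for illustration:

```python
import json

def handler(event, context=None):
    # Toy Lambda-style handler: the event shape here is made up for the
    # example; real events depend on whatever triggers your function.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoke locally, the way you would under an IDE debugger, instead of the
# publish-run-scan-the-logs loop described above.
response = handler({"name": "cloudpod"})
print(response["statusCode"])  # → 200
```

The win described in the episode is that this local invoke and the remote execution logs now live in the same editor window.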
[00:30:42] Speaker A: Amazon Q Developer has announced an enhanced CLI agent within the Amazon Q command line interface that allows you to have more dynamic conversations. With this update, Amazon Q Developer can now use the information in your CLI environment to help you read and write files locally, query AWS resources, or create code. This is a natural extension. We just saw it with Claude, we see it with Cursor, but this is just at the CLI level, which is how Claude implemented theirs. Again, I expect these all end up in VS Code as well soon enough. [00:31:10] Speaker C: Yeah, I mean, Q is integrated. I can't remember if the, like, sort of the resources aspect was really part of that. [00:31:19] Speaker A: Yeah, I know. [00:31:20] Speaker C: I haven't used it in a while. [00:31:21] Speaker A: I haven't used it in a while either. [00:31:23] Speaker C: It was sort of terrible. [00:31:24] Speaker A: Yeah, I mean, I think the. [00:31:26] Speaker C: It was. [00:31:26] Speaker A: It's more of a code assistant, you know, scaffolding assistance, is my understanding of Q so far, but maybe they've added some agentic capabilities. It's been a while since I've looked at it, because it was so bad. [00:31:37] Speaker C: Well, I mean, it would be nice to, like, you know, be able to natural-language query your ginormous AWS infrastructure and have it just figure it out. [00:31:45] Speaker A: Right. [00:31:45] Speaker C: Like, that would be fantastic if they can get there. But I don't know if it's there yet. [00:31:48] Speaker A: I mean, as long as it's not hallucinating servers I don't have. [00:31:51] Speaker C: Right. Well, and then the whole access thing, too, is a nightmare. [00:31:55] Speaker A: Right? [00:31:55] Speaker C: Like, is it going to spit back only the results that I have IAM access for? Like, yeah, no, I don't want to. [00:32:03] Speaker A: Be on the team that has to figure that one out. [00:32:04] Speaker C: That's.
[00:32:05] Speaker A: That's brutal. All right. In January, we talked about DeepSeek quite a bit, and we mentioned that you could get access to DeepSeek in Bedrock through the Marketplace or via the custom model import. But Amazon said, we can make it even easier for you. And they did that by making DeepSeek part of the Bedrock serverless models. This new service model allows you to spin resources up dynamically in the background as you use the Bedrock chat API interface, and these are fully managed DeepSeek-R1 models, all available to you via Bedrock without importing or using the third-party Marketplace one. So this is nice. You'll be able to then tune these and do all kinds of other things as you go in the future, and use RAG, etc., with DeepSeek. So if you're okay with the ramifications (they may have stolen all their data from OpenAI), you can use DeepSeek in your product. Good luck to you. [00:32:58] Speaker C: I mean, OpenAI stole all their data from everyone else. [00:33:01] Speaker A: So yeah, I mean, it's all turtles all the way down. [00:33:06] Speaker C: And that's the thing I like the most about these things, you know, the memes that came out for OpenAI. I still haven't used the model yet. [00:33:15] Speaker A: I've only played with it in some toy experiments, but I had to write a little bit of Python code one day because I was curious, and it wrote good Python code. Like, it worked. Cool. So, all right, for those of you who've been in the Amazon world for a long time, you know all about the Well-Architected review. The Well-Architected review is all about building cloud infrastructure based on proven best practices and promoting security, reliability, and cost efficiency. And to achieve these goals, the AWS Well-Architected Framework provides comprehensive guidance for building and improving your cloud architectures. That doesn't always meet what you need, but it's a generic guideline.
As your system scales, conducting Well-Architected Framework reviews becomes more crucial, offering deeper insights and strategic value to help organizations optimize their growing cloud environments. It's also particularly painful if you have hundreds of Amazon accounts, where for each one of them you now need to do Well-Architected Framework reviews. And typically you do them in a couple different ways. One was that they gave you a self-service model where you could go into the console, create a Well-Architected review, and answer the questions. The other way is that you get your solutions architect to do it with you, and they fill out the form. Or now they have a new way, with the Well-Architected Framework Accelerator solution, which uses generative AI to help streamline and expedite the Well-Architected Framework review process. By automating the initial assessment and documentation process, the solution significantly reduces time spent on evaluations while providing consistent architecture assessments against AWS principles. This allows teams to focus more on implementing improvements and optimizing AWS infrastructure. The solution works through the following features: RAG to create context-aware, detailed assessments; an interactive chat interface, so you can talk to it about it; and integration with the AWS Well-Architected Tool, which pre-populates workload information and initial assessment responses directly into the tool I just mentioned to you a couple. [00:34:57] Speaker C: Minutes ago. This has the potential of being really amazing. I have very mixed feelings about the Well-Architected Framework process. I've done both the self-serve many times and even the walkthrough from technical account support, and I always just feel like it lacks the ability to find any real problems. [00:35:22] Speaker A: Right.
[00:35:23] Speaker C: Like, once you get past the, you know, regional distribution and being-able-to-rehydrate-data sort of problems, it falls down very quickly and doesn't help solve complex issues that may arise due to conditions. And so I'm sort of hoping that, you know, introducing AI into this mix might give it that ability to have a lot more context into your deployment as it's asking you questions for the assessment. So it would be cool if that is how this actually shakes out. [00:35:59] Speaker A: Now I have to ask you, did you look at the diagram of the solution that they put into the blog post? [00:36:05] Speaker C: Oh no, I didn't. [00:36:09] Speaker A: Oh God. I'm not sure they had the Well-Architected Framework reviewer review the architecture of the Well-Architected Framework Accelerator, because it's a lot. [00:36:22] Speaker C: You could just put up all the logos and splash it with spaghetti, you'd be fine. [00:36:25] Speaker A: Yeah, it basically has CloudFront involved, it has an ALB, it has WAF, it's got Cognito, it's got Bedrock in here, it's got buckets, it's got SQS, Lambda functions. Textract is involved at one point. OpenSearch Serverless, and generated embeddings from knowledge bases. There's a lot to this. Yeah, it's not actually a lot of infrastructure. Like, it's one EC2 instance that you run, basically the server that runs all this other stuff. But yeah, I was like, I don't know that you used the Well-Architected Framework for this architecture, because this is a lot. And I do hope this becomes an actual product versus this kind of bolt-on. This is one of those solutions created by professional services that they're now offering to you as a solution, which are always kind of a bit onerous in their own approach. But I do like it. I just have doubts. [00:37:21] Speaker C: Yeah, I mean, they're being really thorough in this documentation.
Like, they're calling out the WAF and the Application Load Balancer in front of the EC2. So, you know, some of it is semi-intentional. But yeah, this is a lot. [00:37:35] Speaker A: I mean, someone even commented, because you can comment on these blog posts, and the person said: great blog, and a lot of effort invested for some valuable results. A few observations. It took an experienced engineer one to two days to deploy, depending on other priorities, to get the best from it. It relies on a detailed and lengthy architecture document that the customers who could potentially benefit from it the most are least likely to have created. Yep. It generated 20,122 words and 533 paragraphs. Wow. And that would take at least a day to read, digest, and extract benefits from. And the remediations need to be detailed, and this is not detailed, only recommendations, and there's no concept of high-risk issues versus others, and so on. So not quite there yet, but definitely a cool idea. And, you know, when they created some of the other Well-Architected things, it's always been kind of a question of, like, can't you automate a lot of this stuff? Like, don't you know from my Amazon configuration if I'm doing some of these things? It asks these dumb questions like, are you using a WAF? It's like, well, I have CloudFront enabled and I have Shield. So yes. It should have known that. [00:38:37] Speaker C: Yeah. Why do I have to answer this? You know, that is a lot. Is it paid by the word? Is that how it's billed? [00:38:45] Speaker A: Context windows, my friend. Yes. So I'll keep an eye on this one. I hope it gets better, because it's a cool idea. [00:38:55] Speaker C: Um, yeah, like, there's a lot of potential here, you know. And I do think that there's a lot of value you can get out of a Well-Architected Framework review just by it making you think critically about your app as you go through it. But the.
The process itself, other than that, doesn't add enough for me. So it'd be cool. Agreed. [00:39:22] Speaker A: In our final Amazon announcement for this week, AWS is announcing the general availability of multi-agent collaboration for Amazon Bedrock, allowing developers to create networks of specialized agents that communicate and coordinate under the guidance of a supervisor agent. This new capability allows you to tackle more intricate multi-step workflows and scale your AI-driven applications more effectively. Bedrock multi-agent collaboration introduces key enhancements designed to improve scalability, flexibility, and operational efficiency. Inline agents allow you to dynamically adjust agent roles and behaviors at runtime, making workflows more adaptable as your business needs evolve. [00:39:54] Speaker C: Do you think that supervisor agent just stands around, doesn't really do anything, and then takes credit for all the other agents' work? [00:39:59] Speaker A: Yes, because that would be funny. [00:40:03] Speaker C: Yeah. Yeah. I have yet to string enough together to make these things make sense to me. And, like I said, I've just had a whole bunch of failed experiments so far. [00:40:17] Speaker A: Yeah, I mean, a lot of us are in the just-failed-experiments stage of AI. It's kind of the nature of the beast at the moment. But yeah, I'm glad to see more and more capability coming. [00:40:29] Speaker C: And I do like the focus on, you know, it's more than just the models and training and the new fancy hardware. It's more about how to make this usable and easier to adopt. So I do like that focus, which seems to be across all the cloud providers. [00:40:44] Speaker A: Yeah, I mean, I think they have to. [00:40:46] Speaker C: That's the reality. [00:40:49] Speaker A: All right, Google is having Google Cloud Next. It's less than a month away, like three weeks.
I actually went through and picked sessions for myself to go to. But I'd like to point out two very important sessions that you should be aware of and potentially sign up for. First is BRK2024, which is Workload-Optimized Data Protection for Mission-Critical Enterprise Apps. And the other one is BRK1028, which is Unlock Value for Your Workloads: Microsoft, Oracle, OpenShift and More. Now, you might read those titles and say those don't sound very exciting, but they are exciting, and you really want to go to them, because that's where I'm going to be on stage talking, as part of both of those presentations. I have a brief customer testimonial portion that I am presenting. And if you come to those talks, you will see me, and I will have Cloud Pod stickers. This is a guaranteed location at Google Cloud Next where you will find me and you will find stickers. I mean, the rest of the time you have to hope you find Ryan or myself. There are really only two of us going this year, so finding one of the two of us wandering around the keynote with stickers and hoping is the best way. But if you want a sticker from the Cloud Pod, these are the two sessions to go to, and I can guarantee I will be at them, because I'm not being paid, but they'll be very mad if I don't show up. [00:42:10] Speaker C: Yeah, they're sort of banking on it. Yeah. And I think I'm legally obligated to go to these as well, you know. [00:42:15] Speaker A: Most likely you are, just as support for me, so that I can look at somebody who I know and not be totally terrified. But these are actually really good talks. One is around, you know, moving highly critical, mission-critical enterprise apps to cloud, what that means, and how you architect for those things, which is a pretty good talk. I've seen the content; I can say it's exciting. Then BRK1028, I've seen some of the previous stuff for that one as well, and I can definitely say that there's some very cool stuff for .NET developers.
And for that one, if you're a .NET developer, you should definitely go to that session. But anyways, you'll see me there, you'll get stickers, and you can enjoy me talking for eight minutes in each of the presentations. That's exactly the time I was given. And the rest of the time, Google product people will be talking, so that's probably more important to you anyways. [00:43:04] Speaker C: Cool. [00:43:05] Speaker A: All right. Google has been directly confronting Kubernetes troubleshooting challenges for years as they support large-scale, complex deployments. Google Cloud support teams have developed deep expertise in diagnosing issues in Kubernetes environments, routinely analyzing a vast number of customer support tickets, diving into their users' environments, and leveraging their collective knowledge to pinpoint the root cause of problems. To address all of this troubleshooting they've done, and to make it something repeatable and something they can provide to you, they're releasing the Kubernetes History Inspector, or KHI, as an open source tool to the community. Effective Kubernetes troubleshooting requires collecting, correlating, and analyzing disparate log streams, and manually configuring logging for each of these components can be a significant burden, requiring careful attention to detail and a thorough understanding of the Kubernetes ecosystem. But collecting the logs is the easy part. The real challenge lies in analyzing them, and many issues in Kubernetes are not revealed by a single obvious error message. Instead, they manifest as chains of events, requiring a deep understanding of the causal relationships between numerous log entries across multiple components. And so KHI is a powerful tool that analyzes logs collected by Cloud Logging, extracts state information from each component, and visualizes it into a chronological timeline.
And furthermore, KHI links this timeline back to the raw log data, allowing you to track how each element evolved over time. [00:44:22] Speaker C: This is pretty rad. I mean, I like to, you know, dunk on Kubernetes for being super complex, because it is. But I do like this type of tool, because it's really the only way you're going to ever make me use it, to be honest. Like, even as I'm redoing all my home stuff, I was going to do a Kubernetes lab and the whole thing, and I started going down the path and I'm like, I just don't need any of this. Like, I want to learn it, but it's just not the important bits. And because I don't have, you know, like a giant web app or something that's going to be very complex in terms of traffic routing, I just don't need it. And then every time I do have to look at a workload where it's a complex drainage workload, it's impossible, because all the different pods log to different places, and it's aggregated in different orders depending on how you're collecting. So it's been a problem. And in fact, every application development team that I've been a part of that used Kubernetes actually built into the application a way to collect and analyze the logs for specifically that application, to separate them out from the cluster-level logs that, you know, may be shared across many workloads. So this is cool, because it sort of builds all that in and makes it easier. But, you know, we were looking at the screenshots for this, right? It definitely is a testament to how complex it can be. Those screenshots are not simplistic at all. It looks more like, you know, like Wireshark, or, what was that tool it looked like? Kind of, oh, like the developer tools for the web. [00:46:06] Speaker A: Yeah, the Google Developer Tools in Chrome. Yeah, yeah, yeah.
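The correlation problem described here (per-component logs landing in different places, in different orders) is at its core a k-way merge into one timeline, which is the kind of thing a tool like KHI automates. A toy sketch, with invented log entries:

```python
import heapq

# Each Kubernetes component emits its own time-ordered log stream; the
# entries below are invented for illustration.
api_server = [("2025-03-21T10:00:01Z", "kube-apiserver", "pod/web-1 created")]
scheduler = [("2025-03-21T10:00:02Z", "kube-scheduler", "pod/web-1 bound to node-a")]
kubelet = [("2025-03-21T10:00:05Z", "kubelet", "pod/web-1 image pulled")]

def merged_timeline(*streams):
    # Streams are individually sorted, so a k-way heap merge on the
    # timestamp yields one chronological timeline across components.
    # ISO-8601 timestamps sort correctly as plain strings.
    return list(heapq.merge(*streams, key=lambda entry: entry[0]))

timeline = merged_timeline(api_server, scheduler, kubelet)
for timestamp, component, message in timeline:
    print(timestamp, component, message)
```

The merge is the easy half; KHI's value is the other half, extracting state from each entry and linking the timeline back to the raw logs.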
It's a cool visualization, though, because, like, even in ECS I've had this problem before, where I've had multiple containers that talk to each other, and then, like, oh my God, why are we in this error state? And it's like, if I could see the state, I would have known that the other container crashed, which is why this error occurred in my container; I just had a dependency on it. So there's definitely value in this visualization, but it's not exactly how I would have visualized it. So when I was reading through the article, I was very excited, and then I saw the screenshots, and I was like, huh. It's not bad, but it's definitely not how I thought it was going to look. So it's definitely worth checking out the screenshots, but it's very cool. And I assume that it'll get polished and cleaned up now that it's something in the community. I assume other people who do Kubernetes will get excited about this, I hope, and it'll get better and better. [00:46:55] Speaker C: Yeah. [00:46:57] Speaker A: All right. Okay, I'm going to attempt my best Swedish Chef impersonation and say, hej, Sverige! Which basically says that Google Cloud launched their new region in Sweden. Google's new cloud region is open. It represents an investment by Google into Sweden's future and Google's ongoing commitment to empowering businesses, individuals, and the power of the cloud. This new region is the 42nd globally for Google and the 13th in Europe, opening doors to opportunities for innovation, sustainability, and growth within Sweden and across the globe. [00:47:31] Speaker C: And as always with these new region announcements, I feel like, if anyone ever wants me to go inspect these personally, I would really appreciate the excuse to travel. [00:47:43] Speaker A: I was talking to someone the other day about data center tours, and I was like, I haven't seen a data center in, I can't tell you when, because I stopped going to them.
Even though I inherit data centers when I take on new jobs, typically because I take a job where I'm helping migrate a company from on-prem to cloud, you know. And I always insult the poor, you know, infrastructure guy, because he's like, do you want to go out to the data center? I'm like, where is it at? He's like, oh, you know, Switch or something in Las Vegas, or, you know, some other place. And I'm like, yeah, no. Like, does it have blinky lights? And he's like, yeah, it has blinky lights. And I'm like, yeah, I think I've seen it before. [00:48:13] Speaker C: Kind of loud in there. [00:48:14] Speaker A: Yeah, yeah. Cold. And part of the problem is that, you know, when I go to data centers, my throat immediately starts swelling up, because I've spent so many outages in data centers, and so I have, like, a Pavlovian response to them now. So I was like, I don't need to go. Doesn't matter. Yeah, I'm good. [00:48:33] Speaker C: I mean, I'll go just look at the outside of the building if someone will fly me to Sweden. [00:48:37] Speaker A: Yeah, sure. I'm good with that plan. But yeah, the whole idea of touring data centers is just kind of crazy to me in general. But yeah, going to Sweden just to make sure, you know, hey, there's a data center there, there's a building? Yeah, I'm down with that. [00:48:50] Speaker C: That works. [00:48:52] Speaker A: Yeah, right. And Google's announcing AI Protection this week. As AI use cases increase, security remains a top concern, and they often hear that organizations are worried about risks that can come with rapid adoption of AI. Google Cloud is committed to helping their customers confidently build and deploy AI in a secure, compliant, and private manner.
Google is making it easier to mitigate risk through the AI lifecycle with their new AI Protection, a set of capabilities designed to safeguard AI workloads and data across clouds and models, irrespective of the platforms you choose to use. AI Protection helps teams comprehensively manage AI risk by: one, discovering AI inventory in your environment and assessing it for potential vulnerabilities; two, securing AI assets with controls, policies, and guardrails; and three, managing threats against AI systems with detection, investigation, and response capabilities. And then it rolls all of that into Security Command Center, their multi-cloud risk management platform, so that security teams can get a centralized view of their AI posture and manage AI risks holistically, in context with their other cloud risks. [00:49:49] Speaker C: That first bullet, I didn't think it was going to be that difficult, like, discovering the AI inventory. And it is, it is very difficult. Like, it's crazy. You'd think with, you know, full access to read APIs, you could just list all the things you have, but it's complex, because it's data that's used for training in this environment, and so with these types of tools, you're really querying kind of a relationship between many different services to build that inventory up. [00:50:17] Speaker A: So this is great. [00:50:18] Speaker C: I love tools like this. [00:50:20] Speaker A: Yeah. And it also says it pulls in Model Armor, Sensitive Data Protection discovery, AI-related toxic combinations, posture management for AI, threat detection for AI, the Notebook Security Scanner, and data security posture management, all into Security Command Center. Yeah, it's pretty full-featured out of the box, which is, you know, pretty impressive for a Google product. [00:50:39] Speaker C: Yeah, right.
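As a toy illustration of why that discovery step is about relationships rather than a single list call, here's a sketch that flags both directly AI-typed resources and resources that merely reference one. The resource records, type names, and fields are entirely made up:

```python
# Invented resource records: a real discovery tool would pull these from
# cloud APIs. Names, types, and the "refs" field are made up for the sketch.
resources = [
    {"name": "web-frontend", "type": "vm", "refs": ["churn-model-v3"]},
    {"name": "churn-model-v3", "type": "model", "refs": ["s3://train-data"]},
    {"name": "notebook-7", "type": "notebook", "refs": ["churn-model-v3"]},
]

AI_TYPES = {"model", "notebook", "training-job"}

def ai_inventory(resources):
    # Directly AI-typed assets are the easy part...
    direct = {r["name"] for r in resources if r["type"] in AI_TYPES}
    # ...the hard part is resources that only touch AI via a relationship,
    # like a plain VM serving a model.
    linked = {
        r["name"]
        for r in resources
        if r["name"] not in direct and set(r["refs"]) & direct
    }
    return direct, linked

direct, linked = ai_inventory(resources)
print(sorted(direct), sorted(linked))
```

Even this toy needs a second pass over the relationship graph, which is the point made in the discussion: the inventory is a join across services, not one API call.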
[00:50:41] Speaker A: And then finally, Google is announcing fully managed tiered storage for Spanner, a new capability that lets you use larger datasets in Spanner by striking the right balance between cost and performance, while minimizing operational overhead through a simple, easy-to-use interface. Tiered storage with Spanner addresses the challenge of hot and cold data, and allows you to tier onto hard disks that are 80% cheaper than the SSDs Spanner normally runs on. In addition to the cost savings, you get ease of management, a unified and consistent experience, and flexibility and control. [00:51:07] Speaker C: So you think people got tired of burning money with large Spanner deployments, so they had to do something? [00:51:12] Speaker A: I mean, that seems like a logical scenario based on this. [00:51:15] Speaker C: Yeah. I mean, this looks great. The ability to have data stored cold and pay a lower price for it? I love it, so fantastic. [00:51:24] Speaker A: And that it's kind of automated is cool too. I don't have to do the work. [00:51:29] Speaker C: Yeah. I don't have to hydrate, I don't have to move these things around. [00:51:34] Speaker A: Yep. All right, well, we're going to Azure and doing our best without Matt here this week. And the least cloudy service gets more features this week, and that's of course Azure Elastic SAN. Some of the new features this week include auto-scale for capacity, now in public preview, which helps save you time by simplifying the management of your Elastic SAN: you can set a policy for auto-scaling your capacity when you're running out of storage, rather than needing to actively track whether your storage is reaching its limits. Which is a great feature if you have a runaway log file, because that can cost you a lot of money really quickly. [00:52:07] Speaker C: Yeah.
[00:52:08] Speaker A: They now have snapshot support generally available, so you can snapshot your SAN to your heart's content. CRC protection to maintain the integrity of your data is now available. It's fully validated and optimized for costs with SQL FCI workloads, so it's basically been certified for FCI. There's new reduced TCO for Azure VMware Solution on Elastic SAN, as well as full AKS support, all certified now on top of Azure Elastic SAN. If that was preventing you from using it, I'm not sure Kubernetes is the right solution for you, because certified Kubernetes is really hard to find. [00:52:38] Speaker C: Yeah, I mean, it's the storage layer underneath, I would suppose, so if you're using a shared storage model and running your database in the container. Yeah, I don't know. I mean, these types of things are what I want: if I'm going to have to manage infrastructure at this level, I want it to be auto-scaling and fairly automatic. [00:53:01] Speaker A: Yeah, I mean, if I have to make a decision to deploy a SAN in my cloud, I definitely want it to not be something I have to think about ever again. [00:53:07] Speaker C: Yeah, exactly. [00:53:12] Speaker A: All right. Microsoft has completed the EU Data Boundary for the Microsoft Cloud, an industry-leading solution that stores and processes public sector and commercial customer data in the EU and European Free Trade Association. With the Boundary complete, European commercial and public sector customers are now able to store and process their customer data and pseudonymized personal data for Microsoft core cloud services, including Microsoft 365, Dynamics 365, Power Platform and most Azure services, within the EU and EFTA. Interesting. They've worked on this for quite a while, with phase one coming out in the first half of 2023, which covered storage and processing of customer data, and phase two in the second half of 2023, which added storage and processing of pseudonymized personal data.
Now phase three, which took another two years, adds storage and processing of professional services data. So basically, if you request technical support... [00:54:03] Speaker C: They had to clean up all the tech debt from the first two phases. [00:54:06] Speaker A: Exactly, yeah. So basically, if you're requesting technical support for services such as Microsoft 365 or the others, the professional services data provided by customers, such as your logs, and generated by Microsoft, such as the support case notes, are now stored within the EU and EFTA regions for all capabilities. So now they fully meet the Microsoft EU Data Boundary requirements, at the highest level of that standard. [00:54:27] Speaker C: Yeah, hopefully it's not just all duct tape and baling wire on the back end. These things are hard to do, so I can only imagine at the scale of Microsoft, that must have been a nightmare. [00:54:41] Speaker A: 100%. Azure Load Testing is celebrating its two-year anniversary with a few announcements. Starting March 1, you'll benefit from significant pricing changes, including no monthly resource fee: eliminating the $10 monthly resource fee helps you save on overall costs. I mean, $10 is not a big problem; against my Amazon or Azure or Google bill it's a rounding error. There's also a 20% price reduction in the cost of a virtual user hour beyond 10,000 VUHs, reduced from seven and a half cents to six cents, and those cents add up over time. The consumption limit per resource has also been changed. They're also excited to announce Locust-based tests. This addition allows you to leverage the power, flexibility and developer-friendly nature of the Python-based Locust load testing framework, in addition to the already supported Apache JMeter load testing framework. I did not know about the Python-based Locust load testing framework, but it's well named. [00:55:31] Speaker C: Yeah, it is. So they charged $10 a month just for having one?
[00:55:36] Speaker A: They did. They don't anymore. [00:55:38] Speaker C: They don't anymore. Yeah, I can see why that would sort of bug me; it's not very cloudy. [00:55:45] Speaker A: Did it make you excited, though? Because the headline for this is "Azure Load Testing celebrates two years with two exciting announcements." And I was like, oh, exciting. And then I opened it up and I'm like, that's your announcements? [00:55:54] Speaker C: No, no. Basically you can scale. [00:55:58] Speaker A: You can simulate over a hundred thousand concurrent users. So yeah, a hundred thousand locusts makes sense. [00:56:03] Speaker C: Still. [00:56:04] Speaker A: Suck on that. I've got to learn more about this, because I've always wanted to. I hate JMeter tests, so if it's easier to write Locust-based tests... [00:56:12] Speaker C: There's an alternative. [00:56:13] Speaker A: I'm totally intrigued, so I'm going to have to check that out. Again, I don't do a lot of load testing in my day-to-day work anymore, because I hate it. [00:56:22] Speaker C: Yeah, but that's because JMeter sucks. [00:56:25] Speaker A: I know, that's why I don't do it. If this makes it better, I'll get back to you on that. [00:56:31] Speaker C: I'll just test in production, thank you. [00:56:34] Speaker A: Yeah, my load test is production load, thank you. All right. And then finally, talking about those OpenAI things from earlier: the Responses API and the Computer Using Agent were announced about a month or two back in private preview. Those are now available to you through Azure AI Foundry, via both the Responses API and the Computer Using Agent. So there you go, you now have them in Azure OpenAI. I don't have anything else to say about it. Like, cool. This is one of the lackluster parts about OpenAI and Microsoft being this closely partnered: OpenAI does it, and then Microsoft supports it. [00:57:09] Speaker C: Like, yeah. Hooray. [00:57:13] Speaker A: Yay.
And I have an Oracle story for us this week, because it was Oracle earnings, which apparently were a bit of a mixed bag. Oracle won some big cloud contracts, but their stock still fell, because they missed Wall Street's third-quarter earnings expectations. Oracle shares surged last year amid the artificial intelligence boom, but are down 14% overall in 2025. Oracle's guidance for the fiscal fourth quarter was also below Wall Street's expectations, implying fiscal 2025 revenue growth of only 7.5 to 8% on billions of dollars. [00:57:41] Speaker C: So, whatever. [00:57:45] Speaker A: Basically, free cash flow is also a bit of a challenge for them right now. They've been investing a lot of money in AI and data center expansion, and that has been a crimp on their free cash flow, which they're hoping to rebuild as they wrap up their capital investments for the year. So they might get better, they might get worse. It's all going to be about the economy, and the economy right now does not look great. So good luck to you, Oracle. [00:58:05] Speaker C: And even if the economy stabilizes, you still have the diminishing resources that it takes to host cloud and AI workloads. So, yeah. [00:58:13] Speaker A: Good luck. Right, well, that's it. We made it through, Ryan. [00:58:17] Speaker C: All right. [00:58:19] Speaker A: I'm definitely going to get my Elastic SAN going. [00:58:24] Speaker C: I might load test your Elastic SAN with the new Locust thing. [00:58:28] Speaker A: Yeah, maybe get some locusts on my Elastic SAN, see how that works out, see if it auto-scales right. [00:58:33] Speaker C: I can just do full directory scans over and over. I bet you I can wreak some havoc. [00:58:37] Speaker A: Yeah, and then you should log it to a file when you do a directory scan. That way we can, you know, just consume petabytes of data. Yeah, let's definitely do that. All right, well, have a great one.
I will see you next week here in the cloud. [00:58:49] Speaker C: Bye, everybody. [00:58:52] Speaker B: And that's all for this week in cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services. While you're at it, head over to our website at tcp.fm, where you can subscribe to our newsletter, join our Slack community, send us your feedback, and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.
