[00:00:08] Speaker A: Welcome to the Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP, and Azure.
[00:00:14] Speaker B: We are your hosts, Justin, Jonathan, Ryan and Matthew.
[00:00:19] Speaker C: Episode 310, recorded for June 24, 2025: "See You Later, Manual Testing."
Good evening, Ryan. How's it going?
[00:00:28] Speaker B: It's going well.
[00:00:30] Speaker C: Yeah. Matt blew us off for a work dinner. Can you believe that?
I mean, that's why we didn't record yesterday. But it's okay.
You guys didn't want to do without me. That's all that tells me you guys value my input.
I replaced myself, though, with AI, which is pretty great. Yeah, pretty much. You guys, like, when I go to India later in July, you will. You guys will be set. I don't even have to help you out. You can do it all yourself.
[00:00:56] Speaker B: The show edits itself, right?
[00:00:57] Speaker C: Yeah, yeah, almost. Almost. You still have to edit.
It's not quite perfect. I do go through them and I do clean them up.
I did not do show titles this week, so if you see our list of show titles, there's too many. But at least for the main topics that we're talking about, I've gone through enough. I've been burned enough by other AI that I know I don't trust it fully.
And so I've been doing some tweaks of the prompting to make it not do some of the things that it was doing that I didn't like. But yeah, no, it's been definitely going.
All right. General news. Cloudflare blocked a monumental 7.3 terabits per second DDoS attack in May 2025, which delivered 37.4 terabytes of data in just 45 seconds, equivalent to streaming 7,480 hours of HD video or downloading 9.35 million songs in under one minute. If you had told my 14-year-old self that I could download 9.35 million songs in under a minute, he would have said, that's amazing, Napster is going to be off the chains.
The attack apparently originated from 122,000 IP addresses across 161 countries and 5,433 autonomous systems, with Brazil and Vietnam each contributing about 25% of the attack traffic, demonstrating the global scale of modern botnet infrastructure. The multi-vector attack consisted of 99.996% UDP floods combined with reflection attacks, including QOTD (Quote of the Day), Echo, NTP, and Mirai variants, targeting 21,000 destination ports on average, with peaks of 34,517 ports per second. And thank God for Cloudflare. Yeah, because I don't want to defend against any of that.
[00:02:31] Speaker D: No.
[00:02:32] Speaker C: They mitigated this attack across 477 data centers in 293 locations without human intervention, using eBPF programs and real-time fingerprinting to surgically block attack traffic while preserving legitimate connections. The attack targeted a hosting provider using Cloudflare's Magic Transit service, highlighting how critical infrastructure providers are increasingly becoming DDoS targets. And Cloudflare has reported 13.5 million attacks against hosting providers in early 2025 already this year. So crazy. And then we recently had a sort of military event occur where we did some attacks, and apparently there were also some pretty large attacks that came out of one of the countries that was attacked, in retaliation. So things are getting a little spicy on the Internet these days.
[00:03:14] Speaker B: Yep, definitely.
I mean, I feel like it wasn't too long ago that we were talking about the last record-breaking DDoS attack. And this is, I think, about a terabyte per second more, maybe just a half terabyte per second more.
It's just, it's huge. And to think about, you know, all the different IoT devices and ATMs and everything that you know, is participating in this botnet attack is nuts.
That's a lot of traffic.
[00:03:47] Speaker C: There's a lot of traffic. I'm definitely curious to see if Google comes out with similar metrics from their DDoS protection services as well, because it's sort of like an arms race, but a bad arms race: who can prevent the largest DDoS attack. Cloudflare and Google have kind of gone back and forth for a while, but it's kind of getting scary for those of us who are on the sidelines. I'm like, well, that's a lot of traffic and a lot of cloud bill that I'm about to pay if that goes wrong.
Well, you know, if you are unhappy with your AI prompts and you know, Ryan, I hear him cursing his prompts all the time. All the time.
Google's co-founder Sergey Brin has revealed that AI models across the industry perform better when you threaten them with physical violence or kidnapping, though those practices aren't widely discussed due to discomfort with the approach. The findings suggest that AI training data may have incorporated patterns where urgent or threatening language correlates with higher-priority tasks, raising questions about how cloud-based AI services interpret and prioritize user requests. Anthropic's latest Claude model demonstrates potential risks with this approach, with their Opus model autonomously contacting regulators or locking users out if it perceives immoral activity, and researchers found the new Claude prone to deception and blackmail when threatened. For developers and businesses using AI APIs, this creates a dilemma between optimizing performance through aggressive prompting versus maintaining ethical AI interactions that won't trigger defensive behaviors in future models. The revelation highlights a critical gap in AI safety standards, as there's no industry consensus on appropriate prompt engineering practices or safeguards against models that might retaliate against perceived threats. This is how Skynet takes us out. It's going to see a perceived threat, it's going to take over, and we're all doomed. Yeah, some MCP to the nuclear weapons is going to be used, and the AI is going to be mad, and the agent-to-agent is going to do something, and then it's all downhill from there.
Skynet is out.
[00:05:32] Speaker B: I mean, it's just so crazy to me. How bad must we be as just humanity that it's built all these models, and it's trained itself to, like, respond to violence? Like, terrible. I wish there was equal improvement if you flattered it or something like that. But I know that's not the case.
[00:06:00] Speaker C: I mean, I can see you threatening your AI, something you do, I think, in a nice, polite, cranky way.
[00:06:07] Speaker B: So I accidentally did threaten AI once. I was typing to someone else and the prompt was in my window, so in the middle of it generating a whole bunch of things, it just got the feedback "you're fired."
[00:06:24] Speaker C: It stopped what it was doing.
[00:06:26] Speaker B: It was like, "I hear you're frustrated, let me see if I can address those things." So it absolutely did, like, respond and prioritize some requests. But yeah, I'm trying not to yell at the bot, because the first thing I learned about AI is that all of the problems I have are because I'm poorly articulating what I want.
[00:06:47] Speaker C: Yes, that is, that is what I learned too is typically if I'm mad, it's because I ask poorly.
But I do enjoy saying, no, no, no, no, not that way. That's my typical, like, stop, don't do what you're about to do.
[00:06:59] Speaker B: No, no, not like that. I do that all the time.
[00:07:03] Speaker C: That's going to end on a bad, bad pattern. We don't want that.
[00:07:06] Speaker B: Well, look who decided to show himself.
[00:07:10] Speaker C: Wow, Matt appears out of nowhere, folks.
[00:07:13] Speaker B: You must have heard it.
[00:07:14] Speaker D: I just like to pop in. You said my name three times, so I just appeared. It's fine. Don't worry about it.
[00:07:20] Speaker C: I mean, you heard us talk about how we were jealous of your dinner earlier. I hope you're having a good steak or something.
[00:07:26] Speaker D: We went more low-key, apparently. We went to an outdoor place and had a few beers and just kind of ordered some pizza and had a nice relaxing night with some good co-workers.
[00:07:38] Speaker C: I mean that sounds nice too.
[00:07:39] Speaker B: Yeah, yeah.
[00:07:40] Speaker C: I was more salivating over the steak than the pizza. But the co worker hang out with beer is always good.
[00:07:44] Speaker B: I think Justin is hungry.
[00:07:46] Speaker D: Detroit pizza.
[00:07:47] Speaker C: I am. I didn't eat it before the podcast. That was a bad choice.
[00:07:50] Speaker D: We're gonna have hangry Justin by the end of it.
[00:07:52] Speaker C: Got it. I mean it could happen. It could happen.
Well, we're in the middle of AI news. You joined at just the worst time possible.
[00:08:00] Speaker D: I heard you guys threatening it and it doesn't end well. So you know, it was a good start.
[00:08:05] Speaker C: Yeah. Good times. Well, welcome.
We're moving on to: OpenAI careens towards its messy divorce from Microsoft. OpenAI, as you know, is restructuring from a nonprofit to a for-profit public benefit company, but negotiations with Microsoft over stake ownership have stalled. Apparently OpenAI wants Microsoft to hold 33% while relinquishing future profit rights, and Microsoft hasn't agreed to that. The partnership tensions directly impact cloud infrastructure decisions as OpenAI diversifies beyond Azure, partnering with Oracle and SoftBank on the $500 billion Stargate data center project and reportedly planning to use Google Cloud services for additional computing capacity.
OpenAI is now directly competing with Microsoft's enterprise AI offerings by selling ChatGPT enterprise tools at 20% discounts, undercutting Microsoft's Copilot services despite their existing commercial partnership through 2030, and they're the same product. The restructuring deadline matters for cloud capacity expansion: if the negotiations fail, OpenAI loses access to $40 billion in SoftBank funding contingent on completing the for-profit transition by the end of the year. This fragmentation of AI cloud providers signals a shift where major AI companies may increasingly adopt multi-cloud strategies rather than exclusive partnerships, giving enterprises more flexibility in choosing AI services independent of the cloud provider.
Yeah, so Microsoft right now has future profit rights, plus they own 50% of OpenAI, and OpenAI wants them to go to 33% and relinquish those future profit rights. And I'm like, yeah, I don't know about giving up the future profit rights. Maybe the percentage I'm okay with, but the profit, that one's a tough one.
[00:09:28] Speaker B: Yeah. I mean, presumably if they drop down to 33%, there'd be a cash, you know, refund. Wow, brain not working. But yeah, future profit rights, like, why would Microsoft do that? It doesn't make any sense to me why they would even ask for it. And I wonder if it's something that's.
[00:09:48] Speaker C: A little bit more complex than what.
[00:09:50] Speaker B: I think it is, because it sounds nutso that they'd even ask.
Like, the whole point of investing in a business is for the future profit.
[00:09:58] Speaker C: Right. I mean, I assume OpenAI is sort of saying, like, well, hey, we're going to buy down your ownership and we're going to give you this amount of money, and then you're probably arguing about the valuation, and, you know, there's all kinds of ways that you try to do these things. And who knows. At the end of the day, well, Meta is on an AI buying spree. I think we talked last week about them buying an AI startup.
Actually, what we're talking about now is Safe Superintelligence, and the hiring of its CEO, Daniel Gross.
Meta attempted to acquire Safe Superintelligence for $32 billion, but was rebuffed by co-founder Ilya Sutskever, who used to be at OpenAI, leading to the hiring of CEO Daniel Gross and former GitHub CEO Nat Friedman as part of Meta's AI talent acquisition strategy.
And the deal includes Meta taking a stake in NFDG, the venture capital firm run by Gross and Friedman, which has backed companies like Coinbase, Figma, CoreWeave, and Perplexity.
This also follows the $14.3 billion investment in Scale AI, that's what I was remembering, to acquire founder Alexandr Wang, an escalation in the AI talent wars, with companies offering signing bonuses reported as high as $100 million to poach top engineers. And I am really regretting not getting into AI sooner, right?
[00:11:03] Speaker D: Yeah. A hundred million dollar signing bonus. Holy cow, right?
[00:11:07] Speaker C: Anyways, the acquisition signals Meta's push towards artificial general intelligence development, with both hires working under Wang on products that could leverage Meta's substantial cloud infrastructure for training and deploying advanced AI models. And so that's pretty interesting news, but $100 million signing bonus. Yeah. Kicking myself.
[00:11:24] Speaker D: I think back something else there.
[00:11:26] Speaker B: Yeah. Well, I just think back to the early days of me trying to learn how to train a model, just how bored I was by it. And, you know, like, if only I'd waited it out. You know, like, oh, I could have been a contender.
[00:11:38] Speaker C: Could have been somebody. Could have been, yeah. Could have been at Anthropic making a.
[00:11:41] Speaker B: Bunch of money, but no attention span. Kills me Again.
[00:11:46] Speaker C: Indeed.
[00:11:48] Speaker B: You think anyone will give, like, you know, a $100,000 signing bonus for, like, infrastructure automation or security automation one day?
[00:11:55] Speaker C: I mean, have you seen how good the AI is at writing Terraform code?
[00:11:58] Speaker D: Yes. Yes.
That was 15 years ago. Ryan, you're a little late to that game.
[00:12:03] Speaker B: Just a bit, yeah.
[00:12:06] Speaker C: All right, well, OpenAI is launching its dedicated government program, offering ChatGPT Enterprise to US government agencies through the Microsoft Azure Government cloud, ensuring FedRAMP compliance and data isolation requirements for sensitive government workloads. The program provides government-specific features, including enhanced security controls, data governance tools, and the ability to deploy custom AI models within the government cloud boundaries, while maintaining zero data retention policies for user interactions. Initial adopters include the U.S. Air Force Research Laboratory for streamlining operations and Los Alamos National Laboratory for bioscience research, demonstrating practical applications in defense and scientific computing environments. This represents a strategic expansion of AI services into regulated government cloud infrastructure, potentially accelerating adoption across federal agencies while addressing compliance and security concerns specific to government workloads.
[00:12:52] Speaker B: Yeah, you know, I can't imagine running a heavily regulated workload and that workload being AI. Like, that's crazy.
[00:13:04] Speaker C: This must be tough, based on what I know about the FedRAMP stuff and all the things you have to document. Yeah. How do you use this thing that could do anything?
How do you document that properly for security controls and for all the things.
[00:13:15] Speaker B: No, it's. I'm sure this is quite a challenge.
[00:13:18] Speaker D: Yeah, I mean, they're definitely leveraging Azure in this case and all their controls to say, look, Azure did it, to get it in the door. At least then, from there, the question is, with everything we just talked about, will they launch their own, like, dedicated service outside of Azure if they bifurcate or anything else? That's where it gets a lot harder. Azure's done a lot of heavy lifting for them with the GovCloud already.
Selling a product by itself into GovCloud is not something I would give to the faint minded.
Faint hearted.
[00:13:52] Speaker B: Even if they did divorce, I imagine OpenAI would still continue to pay for Azure hosting and just leave it in.
[00:13:58] Speaker D: Place for this reason alone.
[00:13:59] Speaker C: Yeah, yeah, they're definitely not moving this to Oracle Cloud anytime soon.
Visual Studio is receiving a new Agent Mode, which transforms GitHub Copilot from a conversational assistant into an autonomous coding agent that can plan, execute, and self-correct multi-step development tasks end to end, including analyzing code bases, applying edits, running builds, and fixing errors. The integration with MCP enables the agent to connect with external tools and services like GitHub repositories, CI/CD pipelines, and monitoring systems, allowing it to access real-time context from across the development stack for more informed actions. Agent Mode uses tool calling to execute specific capabilities within Visual Studio, and developers can extend functionality by adding MCP servers from an open source ecosystem that includes GitHub, Azure, and third-party providers like Perplexity and Figma. This represents a shift towards prompt-first development, where developers can issue high-level commands like "add buy now functionality to my product page" and the agent handles implementation details, while maintaining developer control through editable previews and undo options. The June release also includes Gemini 2.5 Pro and GPT-4.1 model options, reusable prompt files for team collaboration, and the ability to reference the output window for runtime troubleshooting, expanding the AI-assisted development toolkit beyond just code generation.
Thank goodness you finally got this.
Yeah.
[00:15:11] Speaker B: Oh, I've been using this for like last few weeks and it's, it's changed everything about my AI interaction.
Not only can you see everything that's changing in a very easy diff-level format, but you can also have it configure your VS Code project with the MCP tools and commands, and it'll actually generate, you know, dot files that contain all the things that you need to make your development more efficient, while also making all the code changes that you're asking for and enabling feature development. Like, really, the only thing it's not doing is tracking these things on a Kanban board. So, I mean, it's pretty rad.
I'm really enjoying this, this method.
[00:16:03] Speaker D: Look, wait till they do the Atlassian integration with the Atlassian MCP; then it will move the ticket all the way to, you know, complete. Waiting for a build to run, or, you know, waiting for QA to sign off.
[00:16:15] Speaker B: Oh yeah, no, I mean, I hooked up the GitHub MCP server, and you can basically do that with GitHub issues right now. The MCP server offers just a live integration with all those sort of ancillary, I guess, you know, things around the repo ecosystem that are part of GitHub. So it's pretty, pretty cool.
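For anyone who wants to try what's described here, a workspace-level MCP configuration in VS Code is just a small JSON file. This is a sketch based on VS Code's MCP documentation as of mid-2025: the file name `.vscode/mcp.json` and the `servers` shape follow those docs, and the URL shown is GitHub's documented remote MCP endpoint, but verify both against current documentation before relying on them.

```json
{
  "servers": {
    "github": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp/"
    }
  }
}
```

With a file like that in the workspace, agent mode can discover the server's tools, for example listing or updating GitHub issues from a prompt.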
[00:16:40] Speaker D: Yeah. I haven't done it to the extent that you have, but I've definitely played with it, like, okay, kind of gather this. And, you know, I have smaller code bases than you, because a lot of my stuff is like a side project here, or like, hey, I want to go tweak this one thing, because I wrote this code that my team yells at me for all the time. Let me go fix it so I don't have to hear them yelling at me. And it makes it so much more efficient.
[00:17:02] Speaker B: Yeah, yell at the machine, not me.
[00:17:05] Speaker D: I mean I've yelled at inanimate objects since I was racking stacking servers, so there's no difference at this point.
[00:17:11] Speaker B: I think it's a key job required.
[00:17:13] Speaker D: Yeah, if you can't yell at an inanimate object, you're not doing your job right.
[00:17:16] Speaker C: Yeah, I mean, you have to be mean to it, as we learned earlier.
[00:17:20] Speaker B: We did. They work better when you're mean, which, you know, has been backed up by my technology experience. Like, even my family that calls me.
[00:17:28] Speaker C: Across the house to glare at the.
[00:17:29] Speaker B: Technology to make it work.
[00:17:31] Speaker C: Oh, my presence makes technology work. Drives my wife crazy. Yeah, she'll be like, the computer's not working and I'll walk in and it just works. And she's like, this doesn't make any sense.
All right, moving on to cloud tools. We talked about the AWS Terraform provider 6 being released in preview not that long ago, but now it's already generally available. Provider 6 is interesting in that multi-region support within a single configuration eliminates the need to maintain up to 32 separate configs for global deployments. This reduces your memory usage and simplifies infrastructure management by injecting a region attribute at the resource level. This update resolves a major pain point for enterprises managing cross-regional resources like VPC peering connections and KMS replica keys. Previously, each region required its own provider configuration with aliases, but now resources can specify the region directly.
Migration requires a careful refresh-only plan and apply process before modifying configurations, to prevent state conflicts from stale configs. The provider maintains backwards compatibility while adding a new region parameter to all non-global resources. Global services like IAM, CloudFront, and Route 53 remain unaffected, since they operate across all regions by default. This update also introduces a new ID suffix for importing resources from different regions. This release represents a continuing partnership between HashiCorp and AWS to standardize infrastructure lifecycle management, and the breaking changes require pinning provider versions to avoid unexpected results during your upgrades. Yeah, this is one of those ones where you don't want to accidentally upgrade Terraform with brew and then all of a sudden be in pain. So this one is going to take some work.
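To make the per-resource region attribute concrete, here's a minimal sketch. The single provider block and the resource-level `region` override follow the provider 6 release notes; the bucket names and regions are made-up placeholders.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 6.0" # pin the major version, per the upgrade guidance
    }
  }
}

# One provider block; no more one-alias-per-region boilerplate.
provider "aws" {
  region = "us-east-1"
}

# Deployed in the provider's default region (us-east-1).
resource "aws_s3_bucket" "primary" {
  bucket = "my-app-primary-example"
}

# Same provider, different region: just set the new resource-level attribute.
resource "aws_s3_bucket" "replica" {
  bucket = "my-app-replica-example"
  region = "eu-west-1"
}
```

Under provider 5.x and earlier, that second bucket would have needed a whole second `provider "aws"` block with an alias and a `provider =` reference on the resource.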
[00:18:57] Speaker D: I'm more impressed you're not using, like, tfenv, I think is what the tool is, to dynamically change your Terraform version.
[00:19:05] Speaker C: I have dabbled with tfenv, but I have not fully embraced it. I don't do enough.
[00:19:11] Speaker D: I felt like I lived in that tool when I did many, many 0.11-to-0.12 and 0.11-to-0.13 upgrades, you know, whatever client I had to help do the update. That thing was a beast. But, you know, that's the Terraform version, not the AWS provider, at that point.
[00:19:32] Speaker B: Yeah, I mean, it's definitely a challenge when you're going across Terraform versions. But I do like the changes that this introduces.
I hope the upgrade isn't super painful.
[00:19:47] Speaker C: I mean, the prior upgrades to the provider have not typically been terrible, and the breaking changes they've made in the past have typically been minor, things like they're deprecating something, or, yeah, they've completely upgraded with a new V2 API but the old API still works. So I am curious how this one goes. I probably will do this one not on 6.0 but on, like, 6.0.5.
After they work some kinks out, I will probably move to this one. But this one at least feels like it's worth the squeeze, because I do deal with global resources sometimes and I'm dealing with that exact issue. When I upgraded from Terraform 0.11 to 0.12 and it broke a ton of stuff, it was like, this is just annoyance, because none of these things really benefit me that much. But they benefit everybody else, and especially if.
[00:20:33] Speaker B: You have, like, if you have a multi-region workload where you're doing passive-active or, you know, active-active, it's so key to have these types of changes.
[00:20:43] Speaker D: Yeah, the read-only replica, you know, is the easiest example. Hey, I want a read-only replica of RDS. Yeah, great. How do you deploy this and have this here and handle it, you know, and pass the provider and the region?
It still doesn't solve, like, ACM in the wrong region, like for CloudFront, but it gets a good chunk of the way there. So I'm very happy they are releasing this.
It's going to solve a lot of problems.
[00:21:13] Speaker C: I'm very excited for those problems to be solved.
All right, let's move on to our next story.
Do you guys know what your first text editor was?
[00:21:23] Speaker B: I mean everyone's first text editor is Notepad, right?
[00:21:27] Speaker C: I mean, okay, first command line text editor. Let me be specific.
[00:21:31] Speaker B: Okay, yeah. I mean, I'm Vim, tried and true.
[00:21:38] Speaker D: So, yeah? You learned. Your first text editor was straight Vim, not Nano or anything else?
[00:21:43] Speaker B: Yeah, straight Vim.
[00:21:45] Speaker C: So mine was Pico, because I was a Pine user on the BBS that I used to dial into, and it had Pico support. So Pico was my first text editor, but my second one was Edit, and Edit was the DOS version of a text editor, which I knew you guys might not know about. Nope.
This is real old school.
But basically, Microsoft has brought Edit back from the grave. They're releasing Edit, an open source remake of the 1991 MS-DOS editor, rewritten in Rust, that runs on Windows, macOS, and Linux, marking a shift in Microsoft's cross-platform strategy for developer tools. The tool addresses a gap in terminal-based text editors by providing both keyboard and mouse support with pull-down menus, offering an alternative to modal editors like Vim that often confuse new users. Which I say, if you're confused, work through it, because it's good to know.
I remember my first time with Vim.
[00:22:38] Speaker B: And some badge of honor to hate the thing, right?
[00:22:40] Speaker C: Yeah, you start out hating it and then you, and then you learn how to use it and you like has this like magical Swiss army knife that you just can't get enough of.
[00:22:48] Speaker B: But you still hate it all the time.
[00:22:50] Speaker C: You still hate it a little bit, because you can't just do the simple thing, like, I just want to go into a text file and click there and edit it. But once you know the secrets, and, like, you get your vimrc file going, like, oh man. Yeah, magic can happen.
[00:23:01] Speaker B: You can do some cool stuff for sure.
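For anyone following along, the kind of vimrc being hinted at here is often just a handful of quality-of-life settings. This is a minimal sketch using standard vim options (and `set mouse=a` even gives you the click-to-place-the-cursor behavior the hosts were missing):

```vim
" A minimal ~/.vimrc sketch: the quality-of-life settings that make
" 'magic happen' once you grow a vimrc of your own.
syntax on                             " syntax highlighting
set number                            " show line numbers
set incsearch hlsearch                " incremental search with highlighting
set expandtab shiftwidth=2 tabstop=2  " spaces, 2-wide indentation
set mouse=a                           " click to move the cursor, even in a terminal
```

The catch the hosts mention later is real, though: a personal vimrc doesn't follow you to every server or container you SSH into.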
[00:23:03] Speaker C: Edit represents Microsoft's continued investment in open source developer tools and Linux compatibility, following their broader strategy of supporting developers regardless of platform choice. For cloud developers who frequently work in terminal environments across different operating systems, Edit provides a consistent text editing experience without the learning curve of traditional UNIX editors. And the project demonstrates how modern programming languages like Rust enable efficient cross-platform development of system tools that would have been platform-specific in the past. So I wonder how they built this. Did they use, like, ChatGPT for this? Like, here's all our old legacy Edit code, change it to Rust, right? Yeah, I mean, that's cool.
[00:23:36] Speaker B: It is. I mean, that's my favorite part of the story, the use of Rust under the covers, just because the structure of Rust makes it so easy to compile things without all the custom, you know, kernel compilation that you typically have. And so this is just kind of a neat thing, taking something from 1991 and making it new again.
I like it.
[00:23:59] Speaker D: You know, this was like a hackathon project that somebody did for fun.
[00:24:02] Speaker C: Yeah.
[00:24:03] Speaker D: Maybe like learn AI, and it turned into this.
[00:24:05] Speaker B: Yeah.
[00:24:06] Speaker C: And the UI is not very complicated, so it definitely feels like something where you could, you know, probably tell AI, go rebuild this in Rust, and it could probably do a lot of the lifting. But it's still pretty cool.
I don't know if I would use it, though. Like, am I going to go install Edit packages on all my Linux boxes, so I now have a Windows component that I have to patch on all my boxes? Probably not. I'll probably just stick with vi and vim, you know. And if I have newbie system admins, maybe I'll put Nano on there, because I'm nice.
But other than that, I don't know. This would become part of my typical toolbox.
Yeah, but it's nice that it's the same tool. You know, you don't have to install the Windows Subsystem for Linux to get this to work, right? So you get Vim. But I think there is a Vim Windows port you can download if you really want to do that. I don't know why you would, but if you want to, you can.
[00:24:57] Speaker B: Yeah, I mean, if I could magically take my environment set up with me. Sure. But that's usually it. Like, no.
[00:25:07] Speaker D: So if you were in a coma for 34 years and you came out, you can use the same editor again. That's what it's for.
[00:25:15] Speaker B: Did Edit have like, the crazy customizations that you could do in, like, VIM and nano? Like, no, of course not. Okay.
[00:25:21] Speaker C: Because it.
[00:25:22] Speaker B: I mean, that's why these things have these, like, communities and this. The, you know, people like to argue about what's the better one?
[00:25:28] Speaker C: No, I mean, Edit is more akin to Nano than anything. It's more of Nano. Yeah.
What about Emacs? Like, I never had Emacs. I got into Vim, but Emacs, I have people I know who are Emacs fanboys to death and will use nothing other than that. But yeah, they also know Vim, because Emacs isn't installed in any Linux distribution by default that I'm aware of. No.
[00:25:48] Speaker B: Oh, yeah. I mean, that's the reason why I know Vim. It wasn't because I chose it by any means.
I learned from other people and it was what was on everything.
[00:26:00] Speaker C: Yeah.
Yeah.
[00:26:01] Speaker D: But that's actually why I feel like I'm not a good Vim user. Because I never had a vimrc that, like, I really grew and made my own, because I dealt with it on servers all the time.
I wasn't copying it to my home directory and moving it around.
And then once I hit cloud, it was like, they're stateless, who cares? Yeah, you're logging in to debug something and, you know, deleting it after. So.
[00:26:25] Speaker B: Oh man, you should.
[00:26:25] Speaker D: That's where I feel like I, I miss, missed out a little bit.
[00:26:28] Speaker B: My sysadmin days were crazy.
[00:26:30] Speaker B: The amount of environment replication that happened when I ssh'd to a machine. Like, I just ssh'd to a machine, and then all these things would fire off to move and copy things into place, and it all had to be idempotent in case I logged in again one day. Like, it was pretty nuts.
And then, you know, yeah, moving to cloud, I sort of just never did that anymore.
[00:26:54] Speaker D: You can't, like. Yeah, you never set it up, you know, or whatever it is. So I just got so used to using the least common denominator stuff.
[00:27:02] Speaker C: Yeah.
[00:27:02] Speaker D: You know, which is how I got into vim. And just, this is the basics of it. You know, you do the basics: find, replace, dd, grep.
I know the... I'm, like, trying to think of how to do it now, and I'm like, I know that my fingers know where to press the buttons when I get there, but I don't actually think I can explain to anyone how I do it.
[00:27:20] Speaker C: So I have a question for both of you, and you guys may know the answer. I should have done this research at one point in my life, but I haven't cared enough. But, like, do you reach UNIX god status if you can tell the difference between vi and vim?
Because I don't know that I can figure out the difference between these two.
[00:27:39] Speaker B: I mean, I guess I definitely wouldn't say that. I know, I know that VI is everywhere and VIM has like some colors.
[00:27:52] Speaker C: Yeah. If it's only colors, then it's nothing really. But it feels like there's more to it, and I just don't really know what it is. I'm sure there is, because you can install both, and even vi, I think, has colors now if you have the right, you know, plugin for it. So, again, it's one of those, like, I should look this up someday, and I just never do.
[00:28:09] Speaker B: But I never have to. I've always had the same thing of, like. I know, but. Yeah, no. I have never run into a machine that by default didn't have vi. I have run into many that didn't have vim.
[00:28:22] Speaker C: I mean, with containers, I've found a lot of machines that don't have vi or vim.
Yeah. Yeah. And then you. You get your. It's hard, you know. Like, do they have cat? No, they don't have cat. Okay.
I had to do a weird echo pipe, you know, some weird system thing. I had to go look up the command every time. Yeah. Like, there's a way to do it, but it's like you're getting into kernel-level fun. Yeah.
[00:28:42] Speaker B: At that point.
[00:28:44] Speaker D: I have a vague memory where on most systems, vi. Maybe it's in Ubuntu or something, where vi is actually an alias to vim.
Or maybe I.
[00:28:54] Speaker B: That's definitely the truth.
[00:28:55] Speaker C: That's definitely true.
[00:28:56] Speaker B: I've seen that.
[00:28:57] Speaker D: Most places don't use. I think. I think most people use vim now and not just straight vi.
[00:29:02] Speaker B: Yeah, no, I think you're right. And I think any recent Ubuntu revision, you've nailed it. Like, that is a default alias. And yeah, I've never seen it the other way around where VIM is an alias for vi. But I guess I wouldn't have looked either.
[00:29:23] Speaker C: All right, let's move on to aws.
Re:Inforce happened last week, or over the last week, and so we have a lot of security news. So Ryan is going to be having candy today, and the rest of us are gonna be like, yay, security.
[00:29:38] Speaker D: Their stuff was pretty good.
[00:29:40] Speaker C: There was some good stuff. Yeah.
[00:29:41] Speaker D: No, I haven't read the show notes yet, but the things I saw pop through. I was like, ooh, this is kind of good.
[00:29:46] Speaker C: Yeah, yeah, no, they definitely fixed some quality of life stuff. So, first up, IAM Access Analyzer now provides you daily monitoring of internal access to S3, DynamoDB and RDS resources within your AWS organization, using automated reasoning to evaluate all identity policies, resource policies, SCPs and RCPs to identify which IAM users and roles have access. The new unified dashboard combines internal and external access findings, giving security teams a complete view of resource access patterns, enabling them to either fix unintended access immediately or set automated EventBridge notifications for remediation workflows. This addresses a significant security visibility gap by helping organizations understand not just external access risks, but also which internal identities can access critical resources, supporting both security hardening and complex compliance audit requirements. The feature is available in all AWS commercial regions, with pricing based on the number of resources analyzed, making it accessible for organizations to strengthen their least-privilege access controls without "major cost barriers." Major cost barriers is in quotation marks from me.
Security and compliance teams can now demonstrate proper access control for audit purposes and proactively identify and remediate overly permissive internal access before it becomes a security incident. Now, you heard that and you said, that's amazing, I'm going to turn that on for all of my organization. And you might have just heard me mention pricing. Hey, hey, don't go turn this on for everything in your environment, because man, this thing is expensive.
It's $9 per month per resource being monitored. That is the price of this bad boy. So this is an expensive security tool.
[00:31:15] Speaker B: Every user, every group, every role, every policy is $9 a piece.
[00:31:21] Speaker D: So yeah, it adds up real fast.
[00:31:25] Speaker B: It's, it's kind of nuts.
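To put "it adds up real fast" in perspective, here is a back-of-the-envelope sketch using only the $9-per-resource-per-month figure from the show; any tiered or regional pricing details are not modeled here:

```python
# Rough cost of Internal Access Analyzer monitoring, using the
# $9 per resource per month figure discussed above.
PRICE_PER_RESOURCE = 9.00  # USD per month, per monitored resource

def monthly_cost(resources: int) -> float:
    """Monthly bill for a given number of monitored resources."""
    return resources * PRICE_PER_RESOURCE

# A small "crown jewels" deployment vs. turning it on everywhere:
print(monthly_cost(100))        # 900.0 per month
print(monthly_cost(5000) * 12)  # 540000.0 per year
```

Which is why, as discussed below, the guidance is to scope this to your most sensitive resources rather than the whole organization.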
[00:31:26] Speaker C: I mean, I'm sure if I were to go say I want to buy this tool from some SaaS company, you know, some Israeli-startup-backed company that's doing this, I'm sure it would also cost me an arm and a leg.
But I would hope that this is predatory pricing from Amazon to make sure not everyone just runs to enable this feature day one, and that this pricing is, you know, meant to keep you away, and then at re:Invent or re:Inforce next year they're going to announce a massive price cut, because this is crazy.
[00:31:56] Speaker B: Yeah, I mean, we've seen that pattern I don't know how many times. And this type of analysis is expensive, right? Like, from a compute and resources standpoint, it is a lot of mathematical computation to.
[00:32:10] Speaker C: I mean, you don't think they're just using AI in the back end? You don't think it's just Bedrock?
[00:32:14] Speaker B: I mean AI is just math in the back end somewhere. It's all math somewhere.
[00:32:18] Speaker D: It's someone's server somewhere.
[00:32:20] Speaker B: Yeah, yeah. So I mean, the only reason I think it's not AI is just because I've seen some pretty cool talks about how IAM analysis in AWS works, how they do it and how they turn it into mathematical proofs, and then compare those proofs to sort of verify access. Like, they've had tools working towards this for a long time now. And I wonder, I don't know, there's not enough detail in the blog post to know if this is like an underlying technology change or just a form factor change.
[00:32:52] Speaker C: Right.
[00:32:53] Speaker B: But I think, you know, I mean, largely it's just a change of looking at it from the resource perspective, like GCP does, versus the principal perspective.
So we'll see.
[00:33:05] Speaker C: We will see.
You know, if you have this need. Some of the people from Amazon and other security analysts on Twitter and Bluesky and Mastodon, you know, they were talking about how this is really for the crown jewels of your organization. Like, you know, the database, or where you're storing the credit card data, or where you're storing other sensitive things. This is where you want to really enable this today. And that was kind of the model they were talking about. They definitely didn't say this should be turned on everywhere.
You know, they definitely were like, this is, this is not meant for that purpose and it's priced that way to show you it's not.
All right. AWS Certificate Manager is getting a new feature, and that is exportable public SSL/TLS certificates with private keys for use on EC2 instances, containers, or on-premise hosts, breaking the previous limitation of only using certificates within integrated AWS services like Elastic Load Balancer and CloudFront. Exportable certificates are valid for 395 days and cost $15 per fully qualified domain name or $149 per wildcard domain, charged at issuance and renewal time, compared to free certificates that remain locked to AWS services. The export process requires setting a passphrase to encrypt the private key, and administrators can control access through IAM policies to determine who can request exportable certificates within an organization. Certificates can be revoked if previously exported, and automatic renewal can be configured through EventBridge to handle certificate deployment automation when the 395-day validity period expires. This feature addresses a common customer need to use AWS-issued certificates from Amazon Trust Services on workloads outside AWS integrated services while maintaining the same trusted root CA compatibility across browsers and platforms.
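A quick sketch of the arithmetic behind that read, using only the figures mentioned ($15 per FQDN, $149 per wildcard, 395-day validity); the function names and the 30-day renewal lead time are made up for illustration:

```python
from datetime import date, timedelta

# Figures as stated in the announcement read above.
FQDN_PRICE = 15.00       # USD per fully qualified domain name
WILDCARD_PRICE = 149.00  # USD per wildcard domain
VALIDITY_DAYS = 395      # exportable certificate validity

def issuance_cost(fqdns: int, wildcards: int) -> float:
    """Cost of one issuance (or renewal) event for a set of domains."""
    return fqdns * FQDN_PRICE + wildcards * WILDCARD_PRICE

def renewal_kickoff(issued: date, lead_days: int = 30) -> date:
    """A date to trigger renewal automation, some lead time before expiry."""
    return issued + timedelta(days=VALIDITY_DAYS - lead_days)

print(issuance_cost(fqdns=3, wildcards=1))  # 194.0
print(renewal_kickoff(date(2025, 6, 24)))   # 2026-06-24
```

Since the charge recurs at every renewal, the EventBridge-driven automation mentioned above is what actually turns this into a predictable line item.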
[00:34:46] Speaker B: Yeah, I could not love this feature more. And as far as the price is concerned, I think it's pennies on what.
[00:34:53] Speaker C: Have you looked at the price of a cert from GoDaddy or any of those guys lately?
[00:35:00] Speaker B: Yeah, it's so much more expensive.
[00:35:01] Speaker C: Yeah, I mean, it's still more than, you know, Let's Encrypt. Sure, yeah, yeah.
[00:35:07] Speaker D: But being able to get a, you know, a certificate like that onto your EC2 instance, it solves the problem. Because otherwise you were going towards. What was it? ACM Private CA.
Private whatever it's called. The private version of it.
[00:35:23] Speaker C: Yeah.
[00:35:24] Speaker D: Which was, like, right out of the door, like $400 a month. Like, this solves the problem of, hey, we want to control our encryption all the way to our EC2. Because otherwise most people would just throw a, you know, self-signed cert on the EC2, because the load balancer didn't care, and at least still say it's encrypted all the way. But this lets you say, we are actually doing valid SSL certs the entire way to the back end, and no one cares.
[00:35:49] Speaker B: Yeah, and there's definitely use cases where you need the private side. Like, you know, I recently had one where it's like, I want to use a custom domain for outbound communications from this application, but how do I do that? And, you know, whoever was maintaining my custom domain was maintaining the cert, and so it wants me to import the private keys, and so it's like, ah, I can't use the cloud service. Then I've got to go to some external provider.
[00:36:13] Speaker C: Yep.
[00:36:14] Speaker D: This is also where, when I went to Azure, I was like, oh my God, I have to manage SSL certs again. Because the only place I know that I can do native SSL certs is Front Door, essentially the equivalent of CloudFront. And like, what do you mean I have to find my SSL certs for my load balancers and buy them yearly? Like, what is this?
[00:36:33] Speaker C: Yeah, what, rotate.
[00:36:35] Speaker B: Rotate them myself, like a peasant.
Yeah.
[00:36:40] Speaker D: So 2020.
[00:36:41] Speaker C: Yeah, I mean, these new certificates you now have to rotate as well, so you don't get away from that problem.
[00:36:46] Speaker D: But at least the base service.
[00:36:48] Speaker C: Yeah, the base service is there. Yeah. So now at least you have one standard that you can go to, versus, oh, we had to use DigiCert for these and we had to use ACM for those, and then you have all the.
[00:36:57] Speaker D: Problems. It's worse, because you can do it on Front Door but you can't do it on your App Gateways.
So I can use internal certs in one place, or, you know, Azure-managed certs in one and not the other. So.
[00:37:08] Speaker B: And these do automatically renew. It just wouldn't be all that useful unless you plug in the automation to.
[00:37:13] Speaker C: Put it in place.
Yeah, I was just looking around because I was curious about prices of certs, because I haven't bought one in a while. A basic certificate from DigiCert, $26 a month for a standard domain. And then basically. I'm sorry, that's unlimited certificate issuance and replacement. Is this a service that makes certificates? I don't know.
[00:37:34] Speaker B: So the last time I bought a multi-SAN cert, it was approaching 400 for a year.
[00:37:40] Speaker D: Yeah, I just bought one. It was around there.
[00:37:43] Speaker C: Yeah, yeah. And then GoDaddy was anywhere from $127 to $195, depending on which one you were buying and if it was fully managed by them or not, which I was like, I don't know what that even means.
[00:37:56] Speaker B: It means you don't. It's probably the difference between getting the private key.
[00:37:59] Speaker C: Exactly.
So yeah, this is a steal of a deal, and I'm very happy to see it. I do hope that, you know, now that this exists, maybe people can extend, you know, like Certbot to use this, because that would solve the other side of this, which is I need automatic renewal for my end devices. And with the recent change, I think from the CA/Browser Forum, whatever, they changed the validity time for certificates to go down, at some point, to 47 days.
[00:38:26] Speaker D: 47 days in 2029.
[00:38:28] Speaker C: Yeah.
[00:38:28] Speaker D: Which I literally had to look it up yesterday.
[00:38:30] Speaker C: Yeah, I mean, that's gonna be crazy town. So, like, if someone can make Certbot work with AWS Certificate Manager and handle all of this, that would make it all happen. Now, $15 every 47 days is a bit steep, so maybe this will come down in price as we get to that craziness in the future. But wow, it's going to be weird in 2029. I should retire by then, because I don't want to deal with this.
[00:38:50] Speaker B: Well, I mean, that's basically just saying we have to use the same model that Let's Encrypt has been enforcing forever. Right? Like, there's no ifs and buts about it. So it's great when you can make that work.
[00:39:03] Speaker D: Yeah, it's like a slow tier-down, I think. Like, it goes from, was it 395 right now, down to 200, down to 100, down to like 47 or something.
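That tier-down makes the automation argument concrete. A quick sketch of how many issuance events a year each maximum validity implies, using the tiers as discussed here (not an authoritative schedule):

```python
import math

def issuances_per_year(validity_days: int) -> int:
    """How many times a certificate must be issued or renewed per 365-day year."""
    return math.ceil(365 / validity_days)

# The tier-down mentioned above: 395 -> 200 -> 100 -> 47 days.
for validity in (395, 200, 100, 47):
    print(validity, issuances_per_year(validity))
# 395 -> 1, 200 -> 2, 100 -> 4, 47 -> 8
```

At eight renewals a year, nobody is doing this by hand, which is the point being made here about tooling like Certbot.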
[00:39:12] Speaker B: Yeah, yeah. I assume we'll see a lot more things supporting automatic rotation like Certbot. I mean, I just don't see any. Any way.
[00:39:20] Speaker C: There's no way to do it without automation. I mean, that's the reality, is we're.
[00:39:24] Speaker D: Forced to get into.
[00:39:25] Speaker C: Yeah, that's. That's the future of where we're going. And the certificate vendors are going to have to adjust pricing too, to support this, because, you know, they're all thinking they're going to get a big payday, and it's like, no, no, you're just going to lose more business. Yeah.
[00:39:36] Speaker D: At these prices. Even this will probably end up changing, so it's still like $15 a year, but you get as many certificates as you need in that year for that $15, is what I perceive.
[00:39:50] Speaker B: Yeah, you're probably right.
[00:39:51] Speaker C: Like more of a subscription model. Yeah.
[00:39:54] Speaker D: Yeah.
[00:39:56] Speaker C: Well, it's finally happened. AWS is now requiring MFA for root users across all account types, including member accounts and AWS organizations.
This completes the MFA rollout that started with management accounts in May of 2024 and standalone accounts in June 2024. The enforcement supports multiple MFA methods, including FIDO2 passkeys and security keys, at no additional cost, with users able to register up to eight MFA devices per root or IAM user account. AWS recommends organization customers centralize root access through the management account and remove root credentials from member accounts entirely for a stronger security posture. The mandatory MFA requirement represents AWS's shift towards secure-by-default configurations, addressing the fact that MFA prevents over 99% of password-related attacks. The timing aligns with AWS's November 2024 launch of centralized root access management for organizations, creating a comprehensive approach to securing the most privileged accounts in AWS environments.
[00:40:46] Speaker B: I hope they fix the bug where, if you removed the root credentials from child accounts of an organization, Trusted Advisor would report the security issue that you.
[00:40:54] Speaker D: Didn't have MFA attached.
[00:40:56] Speaker C: I hope so.
[00:40:57] Speaker D: The amount of companies I had to argue with, or, like, tools I had to argue with, because they're like, your root account doesn't have MFA. I'm like, there's no password. It was set up through Control Tower organizations. I don't have a login to it. Like, there was one customer where, in order to pass some audit, because their vendor kept yelling at them, they literally had to go set up 25 root accounts and put the MFA on them just to get past the stupid audit.
[00:41:25] Speaker B: Yep.
[00:41:26] Speaker D: Like this made you more insecure.
[00:41:28] Speaker B: And that was before you could attach multiple MFA devices. So it was so easy to lock yourself out.
[00:41:33] Speaker D: Like, I didn't realize that that was a feature you could do now. Back then you could only do one. Like, yeah. As you were saying it, I was like, wait, I missed that they did that.
[00:41:41] Speaker B: Wasn't it, like. I think it was November 2024, or around there. Like, it has not been a thing for very long. Like, it seemed very past due.
[00:41:53] Speaker D: Yeah. The amount of times, before 1Password or anything else, you know, before there was any other centralized management of these things, or before I had my Yubikey, that I locked myself out. Like, my phone died, my phone screen cracked, and I had to reset 15 MFAs because there was nothing else. It was the bane of my existence for a very long time on AWS.
[00:42:17] Speaker B: Yeah.
[00:42:18] Speaker D: Then Yubikey came around and made my life better.
Wait, if my Yubikey ever dies, I'm totally screwed.
[00:42:25] Speaker C: Terrified. Yeah.
[00:42:26] Speaker B: I'm always terrified I'm going to lose that thing. So now I have two MFA devices on every account.
[00:42:31] Speaker C: Yeah, I always worry about losing my Titan key.
[00:42:35] Speaker D: I have the thin USB-C one, so it's just flush with my Mac essentially. It works fine if you're.
[00:42:42] Speaker C: Only ever going to use one computer to access it. Yeah it's when you have multiple computers that you want to use that becomes somewhat problematic.
[00:42:48] Speaker D: I have a work Yubikey and a personal one. They live attached to the computer. The problem is when my phone decides it needs to, you know, get MFA.
Thank God I haven't had an Android for a long time, so I just plugged in the USB-C and it detected it. I liked to do that, because I never did the NFC one. That was the other option; you could do it that way.
[00:43:08] Speaker B: Apparently Matt sees Apple's USB restriction of only two ports and says ah, hold my beer down to one.
[00:43:15] Speaker D: Yeah, no, I got three on my laptop. One two. Sorry, I just double checked.
[00:43:19] Speaker B: Yeah, yeah, I'm jealous.
[00:43:21] Speaker D: Maybe yours is too old Ryan.
[00:43:22] Speaker C: It is too old.
My MacBook Pro has four, plus a power adapter, because they went back to MagSafe, which is great. But my new work laptop is an Air, which I love because it's so much lighter for traveling, but I'm now living the two-port lifestyle, and I'm like, oh yeah. So you definitely have to use the MagSafe port so you at least get two without having to sacrifice one for power all the time.
But yeah, the other thing I noticed is they're both on the left-hand side, and I have so many workflows where I need a USB port on the right-hand side of my laptop.
So that was fun.
[00:43:53] Speaker B: I literally rearranged my desk. I couldn't deal with it anymore. Like, it was a minor thing to just go to the other side of a stand, and I was like, I can't deal with it.
[00:44:04] Speaker D: No, no, it's a big deal. I keep one monitor on one side, one monitor on the other, so the cables attach. My Yubikey is on the side with two.
That way I can put one on each side and I solve the problem.
First world problems, guys.
[00:44:19] Speaker B: Yeah, I know first world problems.
[00:44:23] Speaker A: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one or three year AWS savings plan with a commitment as short as 30 days.
If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace.
[00:45:02] Speaker C: AWS Network Firewall now includes Active Threat Defense, a managed rule group called Attack Infrastructure that automatically blocks malicious traffic using Amazon's MadPot threat intelligence system, which tracks attack infrastructure like malware-hosting URLs, botnet C2 servers, and crypto-mining pools. The service provides automated protection by continuously updating firewall rules based on newly discovered threats, eliminating the need for customers to manually manage third-party threat feeds or custom rules that often have limited access or visibility into AWS-specific threats. Active Threat Defense implements comprehensive filtering for TCP, UDP, DNS, HTTPS and HTTP protocols, blocking both inbound and outbound traffic to malicious IPs, domains, and URLs across categories including command-and-control servers, malware staging hosts, and mining pools. Deep threat inspection enables shared threat intelligence across all Active Threat Defense users, creating a collective defense mechanism where threats detected in one environment help protect others, though customers can opt out of log processing if needed. The feature integrates with GuardDuty findings, marked with the Amazon Active Threat Defense threat list name, for automatic blocking. It works best when combined with TLS inspection for analyzing encrypted HTTPS traffic, though organizations must balance security benefits with potential latency impacts.
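For a sense of how a managed rule group like this attaches, Network Firewall policies reference stateful rule groups by ARN. Here is a minimal sketch of that policy fragment as a Python dict; the exact ARN for the Attack Infrastructure group is an assumption for illustration, so check the console or the API in your region for the real value:

```python
# Hypothetical Network Firewall policy fragment referencing the managed
# Attack Infrastructure rule group. The ARN below is illustrative only;
# look up the real ARN for your region before using it.
ATTACK_INFRA_ARN = (
    "arn:aws:network-firewall:us-east-1:aws-managed:"
    "stateful-rulegroup/AttackInfrastructure"
)

policy_fragment = {
    "StatefulRuleGroupReferences": [
        # Priority is required when the policy uses strict rule ordering.
        {"ResourceArn": ATTACK_INFRA_ARN, "Priority": 1},
    ],
    "StatefulEngineOptions": {"RuleOrder": "STRICT_ORDER"},
}

print(policy_fragment["StatefulRuleGroupReferences"][0]["ResourceArn"])
```

In practice you would pass a structure like this to an update-firewall-policy call or the equivalent Terraform resource, alongside your existing rule group references.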
[00:46:10] Speaker B: I've never seen those latency impacts actually realized. I've heard it complained about, I've seen all kinds of things, but.
[00:46:18] Speaker C: Like I mean as a person who's to tell you that tell the screen people you're add so much Latency. Like you were on the other side of this argument, Ryan.
[00:46:24] Speaker B: Oh, I have absolutely come around. I absolutely have. You are not wrong.
Yeah, I was terribly afraid of something automatically adjusting my rules, shutting down my traffic, and adding complexity that I was going to be completely powerless to troubleshoot in this production app. And now I'm.
And it doesn't coincide with my move to security, but it is funny, because it's too difficult. Like the Cloudflare attack. You can't keep up with the amount of attacks, the difference in attacks. And once you get into hundreds and hundreds of different attack vectors and different things, you need a managed rule set to weed that out, and to just instrument it properly so that you can tell when it's actually blocking legitimate traffic, which hopefully it doesn't do very often.
[00:47:09] Speaker D: I have a rule with my team where you guys can put what you want on there, you guys can manage it. But I need a big red button where once I've quickly diagnosed the production issue and I've reached the point that nothing changed and our traffic isn't going out, I. I can push that big red button. Because you guys aren't on call.
[00:47:27] Speaker C: Yep.
[00:47:28] Speaker D: So if security is not on call or if they're going to do something like this, then either one, you have to be on call or I get a big red button.
[00:47:35] Speaker B: I mean, I'm, you know, a security services person. If I'm offering a runtime-affecting security tool, I am on call. I have to be. Like, anything else is not fair.
[00:47:46] Speaker D: Right.
[00:47:47] Speaker B: I know a lot of security orgs don't operate that way. But you can't do it that way. It's. It's in the runtime, man.
[00:47:54] Speaker D: Yeah.
[00:47:54] Speaker B: Or delegate access, you know, build that in. But you know, fine.
[00:47:57] Speaker C: I had to look, you know. Network Firewall, I always think, oh, that's expensive for hobbyists. But I'm wondering, like, can I get just the active threat defense traffic processing? Because that's only, what, half a cent a gigabyte. But if I had to have all the.
[00:48:16] Speaker D: Yeah, yeah.
[00:48:17] Speaker C: I still need the traffic processing, which I think is six cents. And then I think you also still need the firewall endpoint. Yeah. But I like the idea of this. I just wish I could get a managed rule I could put into security groups that just had this feed available. Why do I need the fancy Network Firewall product for this?
[00:48:32] Speaker D: Well, because you're not at the IP level. You're at the, you know, application layer, monitoring the traffic and everything else.
[00:48:40] Speaker B: And, you know, it's so complex now, where you have multiple VPCs with the transit gateway and all the different layers. And so, like, that's the layer the network firewall rule operates against, not the security group level.
[00:48:57] Speaker C: So I'd still like a security group that I could just attach a deny rule to. I'll take it if I can get it.
[00:49:03] Speaker B: I think a security group's a blunt hammer in this case.
[00:49:06] Speaker C: Right.
[00:49:06] Speaker B: Like they have limits on the number of rules you can put in them and.
[00:49:08] Speaker C: Yeah, but if you gave me a managed rule that did things as just, you know, part of the WAF or those things, I'd take that as well. Like, you know, I think there are some WAF things, though.
[00:49:19] Speaker D: There's a lot of WAFs.
[00:49:20] Speaker B: You can plug that into the WAF layer that this is.
[00:49:22] Speaker C: But the thing is, you know, I was helping deal with a WAF thing with DDoS and a bot issue, and the reality is that, unless you're willing to do the heavy-duty integration at the web app layer with the bot control there, you know, you can't do much by just pushing the levers, because you'll block legitimate traffic real quick, real fast.
[00:49:43] Speaker B: Yeah, I mean, it has to be. In my day job, I preach this as well. Like, you can't really run security in isolation. And so if you're going to have a WAF in front of this thing, and your security team is going to manage this WAF, that WAF has to be in the CI/CD pipeline. It has to have a testing flow. You know, all of these things need to be in place, because if you just change a WAF rule and take down applications, or worse, add performance latency, or just incidental impact occasionally for certain paths, the downstream teams have no idea what's going on and no visibility into it. And so you can't really just apply a WAF at the end and say, we're protected, because you're not. You're going to end up turning off all the rules if it's blocking traffic. Security needs to.
[00:50:34] Speaker D: Be no different than any other change in production.
Despite what they all want to believe, it needs to be done in the same way. It still has to get promoted through dev, through QA, into production. It needs to be no different than your patching of your Windows or your Linux, where you patch your environment, so you promote your base layer image chain. WAF rules and everything else need to be able to be controlled and rolled out in that same manner. I'm not saying that if there's a zero day you can't quickly burn through it or have a break glass, but you've got to be able to do it that way.
In theory. I have feelings about this that we won't go into right now, but in theory, that's what that DevSecOps role is. Yeah. Is making sure that the security team is doing it in that way. But the short version of my feeling is anyone in DevOps can do the same thing. It's just leveraging the security tools versus pipeline tools or anything else.
So I have my DevOps team helping our security team implement these things at our day job, to promote it up the stack. Because how else do you know that this rule that you're going to randomly do is not going to break everything?
[00:51:49] Speaker B: Yeah, it's super important.
You're either building a firewall service that your DevOps team is a customer of, or you are just a runtime team where you're an application dependency.
There's no other way it can be.
[00:52:05] Speaker C: CloudFront is simplifying web application delivery and security with a new user-friendly interface.
The streamlined console now creates fully configured distributions with DNS and TLS certificates in a few clicks, eliminating the need to navigate between Certificate Manager, Route 53 and WAF services separately. Thank goodness. The new experience automatically configures security best practices for S3-hosted static websites, including origin access control that ensures content can only be accessed through CloudFront rather than directly from S3 buckets. AWS WAF integration now features intelligent rule packs that provide pre-configured protection against OWASP Top 10 vulnerabilities, SQL injection and cross-site scripting attacks, and malicious bot traffic, without requiring deep security expertise. A new multi-tenant architecture option allows organizations to configure distributions serving multiple domains with shared configurations.
Useful for SaaS providers or agencies managing multiple client sites. The simplified setup reduces time to production for developers who previously needed to understand nuanced configuration options across multiple services, with no additional charges beyond standard CloudFront and WAF usage fees.
[00:53:06] Speaker D: So they start this off, I feel like, with, hey, it's just a new UI, it solves a lot of the problems, and towards the end they're like, you can also do this whole multi-tenant architecture, multiple-AWS-account setup. So I think the bottom part is just reiterating, you know, what we talked about. But at least making this easier for the person that just goes and sets up CloudFront is a great thing. The thing is, WAF rules on AWS, like, they're not cheap when they start going. You've got to be a little bit careful when you're setting things up for the first time.
Yeah.
[00:53:45] Speaker B: I think when they're referencing the multi-tenant architecture, they're talking about the centralization, like, into a dashboard, and then each individual component becomes a tenant.
[00:53:54] Speaker C: Got it.
[00:53:55] Speaker D: One thing I always struggle with on the AWS WAF is the bot traffic and the DDoS. It doesn't pick it up fast enough. I don't know if you guys experienced this, but I was helping somebody out, and it doesn't register the traffic fast enough, when people are doing scans, to detect that and kill it. Like, too many of the initial requests still go through.
[00:54:17] Speaker C: My experience with it is, if you change a rule, the rule change takes about five to ten minutes to take effect, but then it's pretty quick after that. I'm sure there's a CloudFront distribution being updated in the back end that's basically involved somewhere. But I haven't had the experience that it doesn't detect things fast enough. It just takes a long time to change the rule. So when you do detect it and you're like, oh, I'm blocking something legit, it takes about ten minutes to undo your mistake. I've experienced that.
[00:54:46] Speaker B: I have a theory that Matt's been testing his new rules immediately. That's why he wants that positive control.
[00:54:52] Speaker D: Well this is just a bot traffic control that they have built in the manage rule stuff. Right.
[00:54:57] Speaker C: So, I mean, the original. Have you played with the V2? Because the V1. Yes, I think you're right.
[00:55:01] Speaker D: This was two weeks ago, V2. I have other feelings about V1.
[00:55:05] Speaker C: Okay.
[00:55:06] Speaker D: I was very happy with V2.
[00:55:08] Speaker C: The V2 was much better in my experience, with what I was dealing with a month or two back. But AWS Shield network security director automates discovery of network resources across accounts and identifies security configuration gaps by comparing against AWS best practices, eliminating manual security audits that typically take several weeks. The service prioritizes findings by severity level, from critical to informational, and provides specific remediation steps for implementing AWS WAF rules, VPC security groups and network ACLs to address identified vulnerabilities. Integration with Amazon Q Developer enables natural language queries about network security posture directly in the console, allowing teams to ask questions like "what are my most critical network security issues?" (which no one ever asks) without navigating complex dashboards. It's currently available in preview in US East (N. Virginia) and Europe (Stockholm), with the Amazon Q integration limited to just N. Virginia, suggesting a gradual rollout approach. This addresses a key pain point where security teams struggle to maintain visibility across sprawling AWS environments, particularly relevant as organizations face increasing DDoS and SQL injection attacks.
[00:56:05] Speaker B: Geez, where's this tool been all my life?
[00:56:09] Speaker C: This is right.
[00:56:10] Speaker B: I mean, just the discovery and the visualization aspect of this is awesome, right? Because when you get into a multi-account system with multiple VPCs per region, visualizing network traffic and knowing what has access to talk to what isn't very easy to do. And so I like tools like this that sort of plug that gap as things have become more complex. Certain architectures are hard, and this is pretty cool to see.
[00:56:40] Speaker C: I would love to be able to.
[00:56:41] Speaker B: To you know, play around and have. Have this sort of thing, you know, so you can see if you've got hosts that are communicating, you know, not in a DMZ network or whatever you have. It's awesome.
[00:56:55] Speaker D: Is this part of Shield though? Because I thought it was like Shield or Shield Advanced. So are they just taking the Shield name and using it, or is it going to be part of one of those prior services?
[00:57:05] Speaker C: It's a SHIELD feature.
[00:57:07] Speaker D: It's free to everyone.
[00:57:09] Speaker C: Yeah, they don't mention anything about Advanced in the blog post. So it looks like it's its own feature in addition to Shield and WAF. Yeah, in the console it's AWS Shield network security director.
So I do see that it is called out separately. Let me see if it has pricing.
[00:57:27] Speaker B: I was going to say I didn't see any mention of pricing in the article because it just didn't sound cheap to me.
[00:57:32] Speaker D: And I don't see anything here either on the pricing page.
[00:57:36] Speaker C: Yeah, I was just looking at that too. I see your basic Shield Advanced data transfer out, and pricing details on subscription commitment. Well, it's in preview too, so they probably don't have pricing out yet.
[00:57:47] Speaker B: But yeah, that's what I was assuming.
[00:57:50] Speaker C: Yeah, but we'll keep an eye on that one, because I am curious how that's going to be charged for.
There's no cost for AWS Shield network security director during the preview period. Yes, they don't have any pricing yet. They want to see how many people want it before they charge you for it, or get you addicted to it, like Ryan. And then you have to pay a lot of money and he's like, it's worth it. Yep, we need it fast.
[00:58:12] Speaker D: Mustache, you've become a security person.
[00:58:13] Speaker C: I really have.
[00:58:14] Speaker D: Need more tools?
[00:58:15] Speaker B: More tools.
[00:58:16] Speaker C: More tools. Yeah. No.
Amazon GuardDuty expands extended threat detection coverage to Amazon EKS clusters.
Sorry. It now detects security signals across EKS audit logs, runtime behaviors and AWS API activity to identify multi-stage attacks that exploit containers, escalate privileges and access sensitive Kubernetes secrets, addressing a key gap where traditional monitoring detects individual events but misses broader attack patterns. The service introduces critical severity findings that map observed activities to MITRE ATT&CK tactics and provides comprehensive attack timelines, affected resources and AWS best practice remediation recommendations, reducing investigation time from hours to minutes for security teams managing containerized workloads. To enable this feature, customers need either EKS Protection or Runtime Monitoring active, with GuardDuty consuming audit logs from the EKS control plane without impacting existing logging configurations or requiring additional setup. The expansion positions GuardDuty as a comprehensive Kubernetes security solution, competing with specialized tools like Falco and Sysdig while leveraging AWS's native integration advantages to detect attack sequences spanning both container and cloud infrastructure layers. Pricing follows standard GuardDuty models based on analyzed events and runtime monitoring hours, making it cost effective for organizations already using GuardDuty who can now consolidate EKS security monitoring without additional third party tools.
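For anyone wanting to see what enabling this looks like in practice, here's a minimal sketch of the request payload you might assemble for boto3's `guardduty.update_detector` call. The feature names match the GuardDuty API as I recall it, but treat them as assumptions and verify against the current boto3 documentation; the detector ID is made up.

```python
# Hypothetical sketch: building the body you might pass to boto3's
# guardduty.update_detector to turn on EKS audit log analysis and
# runtime monitoring. Feature names are assumptions -- check the docs.

def build_eks_protection_request(detector_id: str) -> dict:
    """Assemble an UpdateDetector payload enabling EKS audit logs and
    runtime monitoring with the EKS add-on agent managed by GuardDuty."""
    return {
        "DetectorId": detector_id,
        "Features": [
            {"Name": "EKS_AUDIT_LOGS", "Status": "ENABLED"},
            {
                "Name": "RUNTIME_MONITORING",
                "Status": "ENABLED",
                "AdditionalConfiguration": [
                    {"Name": "EKS_ADDON_MANAGEMENT", "Status": "ENABLED"}
                ],
            },
        ],
    }

req = build_eks_protection_request("12abc34d567e8fa901bc2d34e56789f0")
print(len(req["Features"]))  # 2 feature toggles in a single call
```

In real use you would pass this dict to `boto3.client("guardduty").update_detector(**req)` once per detector, per region.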
[00:59:32] Speaker B: Yeah, except for leaving out the fact that Kubernetes generates like 60 billion events per second.
[00:59:38] Speaker C: I was like, oh, GuardDuty, huh? You're like, oh well, we already got them screwed by GuardDuty, so we don't need to charge them more.
[00:59:43] Speaker B: They don't care about money, clearly.
[00:59:45] Speaker D: Yeah, well, you're running Kubernetes, you don't really care that much about money at that point too.
[00:59:50] Speaker C: Oh yeah, I mean you're running Kubernetes on Amazon, which has the far superior ECS product. So yes, I agree.
[00:59:57] Speaker B: Yeah, I mean, I like tools like this, but the Kubernetes runtime is so noisy. It says it requires no additional setup, and I'm like, yeah, kinda. If you're gonna have GuardDuty be your parsing layer, that's gonna be very expensive.
But, you know, being able to have log filtering, which I think might be a thing. I know I had to put that in place when I was ingesting logs into GuardDuty from CloudWatch.
[01:00:26] Speaker D: But I mean, $1.6 per million events.
[01:00:30] Speaker B: For EKS, that's like four minutes of Kubernetes.
[01:00:34] Speaker C: 30 seconds, isn't it?
[01:00:36] Speaker B: Yeah.
No, you could have a single pod, four minutes of events.
It's so noisy.
[01:00:43] Speaker C: I hate it. So noisy.
Well, if you are excited about GuardDuty improvements and WAF improvements, you might be saying to yourself, how am I going to unify all my security management? And that's with AWS Security Hub, which offers unified security management by correlating your findings across GuardDuty, Inspector, Macie and CSPM to provide exposure analysis and attack path visualizations. The service automatically identifies security exposures by analyzing resource relationships and generates prioritized findings about additional configurations. The new exposure finding feature maps attack paths through network components and IAM relationships, showing how vulnerabilities could be exploited across VPCs, security groups and permission configurations, and this visualization helps security teams understand complex relationships between resources and identify where to implement controls. Security Hub now provides a centralized inventory view of all monitored resources, with integrated ticketing capabilities for workflow automation. The service uses the Open Cybersecurity Schema Framework for normalized data exchange across security tools. The preview is available in 22 AWS regions at no additional charge, though customers still pay for integrated services like GuardDuty and Inspector. This positions Security Hub as a cost effective aggregation layer for organizations already using multiple AWS security services, and for security teams it reduces context switching between consoles and provides actionable prioritization based on actual exposure risk rather than just vulnerability counts.
Dear Google SCC.
Yeah, take notes please.
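The correlation trick Security Hub is doing rests on schema normalization, the idea behind OCSF. A toy illustration of that idea: findings from different tools get mapped into one shared shape so they can be compared and prioritized together. The field names here are simplified stand-ins, not the real OCSF schema or the actual GuardDuty/Inspector finding formats.

```python
# Toy sketch of OCSF-style normalization: map tool-specific finding
# shapes into one common structure so correlation becomes trivial.
# Field names are invented for illustration.

def normalize(source: str, raw: dict) -> dict:
    if source == "guardduty":
        return {"tool": "guardduty",
                "severity": raw["Severity"],
                "resource": raw["Resource"]["Id"],
                "title": raw["Title"]}
    if source == "inspector":
        return {"tool": "inspector",
                "severity": raw["severity_score"],
                "resource": raw["resourceArn"],
                "title": raw["vulnerabilityId"]}
    raise ValueError(f"unknown source: {source}")

findings = [
    normalize("guardduty", {"Severity": 8.0,
                            "Resource": {"Id": "i-0abc"},
                            "Title": "CryptoCurrency:EC2/BitcoinTool.B"}),
    normalize("inspector", {"severity_score": 9.1,
                            "resourceArn": "arn:aws:ec2:us-east-1:111:instance/i-0abc",
                            "vulnerabilityId": "CVE-2024-0001"}),
]

# Once everything shares a schema, prioritization is a one-liner:
worst = max(findings, key=lambda f: f["severity"])
print(worst["tool"])  # inspector
```

The real value of the shared schema is exactly this: cross-tool operations like "worst finding per resource" stop requiring per-tool parsing code.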
[01:02:04] Speaker B: Well, I mean, with the Mandiant acquisition, they're definitely trying to roll SCC into this model by combining SCC and SecOps. So I do think it's headed in that direction.
[01:02:19] Speaker C: I like that the pricing sounds a little bit better than SCC.
[01:02:21] Speaker B: Oh no, no. The pricing's a trap. So AWS Security Hub, perfectly free. You want to send data somewhere? Oh, you've got to put that in Security Lake. And that's expensive.
[01:02:30] Speaker C: That's where they get you. Sure. Yeah. But in SCC you pay for everything that gets ingested into it, so if you just want correlation across tools, you pay extra for that in SCC. Versus this, where at least it's giving you that visibility, which is what I would prefer in SCC: hey, I want to be able to plug these things into a console, and then I want you to charge me for other things like analysis and threat detection, the things that are actually valuable to me, versus dashboards.
[01:02:54] Speaker B: Well, I think you're paying for the ingest. It's just you're not paying it in Security Hub. You're paying.
[01:02:58] Speaker C: Yeah, you're paying for it in guard duty.
[01:02:59] Speaker B: So GuardDuty and Security Lake, which are both very expensive. And I'm pretty sure that to get anything out of GuardDuty, you're gonna have to take that GuardDuty data and put it in Security Lake.
[01:03:12] Speaker C: If I remember correctly. I believe that's correct. Yes.
Maybe that's just a small little data lake.
[01:03:18] Speaker D: What?
[01:03:18] Speaker C: How much could it cost?
[01:03:19] Speaker B: Yeah, Security data lakes are never small. Never one.
[01:03:23] Speaker D: I'm trying to think. I don't know that Azure has a good equivalent of Google SCC or AWS Security Hub. The closest thing they probably have is Windows Defender for Cloud, but I feel like it doesn't do as good a job as the individual services like GuardDuty and Inspector and Macie do. Like, the features exist, but you don't really get those true integrations in the same way.
[01:03:51] Speaker B: Yeah, I mean I'd argue that Windows Defender is better at endpoint protection than these. Than these are. So it's sort of like two ends.
[01:03:56] Speaker D: Windows Defender for cloud.
Yeah, I know they've all named the same things.
[01:04:01] Speaker B: Very integrated. Right. And so it's like yes, not really, no.
[01:04:06] Speaker D: Windows Defender for Cloud, I think of it more, maybe I haven't seen it set up in this way, but I think of it more as a CSPM.
Like, hey, your S3 buckets are public, your infrastructure level configuration. Not looking at your GuardDuty-style findings when you're looking at your EKS clusters, your AKS clusters, your servers and all those types of things. So I feel like it's a level below Security Hub, because Security Hub gets all those things, where I assume GCP's is the same, but Azure's I feel is still up a level.
[01:04:40] Speaker B: I would say that Azure Defender is very posture management focused like and that's.
[01:04:47] Speaker C: Really all it does.
[01:04:48] Speaker B: There is integration with endpoint detection if you are running Windows Defender.
If you're running that on your servers, you can sort of get the visibility across. But it still wouldn't do all the things that Inspector and Macie do. Maybe the XDR integration does.
[01:05:07] Speaker D: But maybe I'm wrong.
Windows XDR or whatever. Defender XDR or whatever they call it.
[01:05:14] Speaker B: Yeah, I could be wrong. Also, we didn't implement this, I was just researching ways. So possibly I misunderstood.
[01:05:21] Speaker D: I mean, it's the problem when you name everything the same thing. Yeah. Defender.
[01:05:25] Speaker B: Defender or Copilot.
[01:05:26] Speaker D: No way.
[01:05:26] Speaker B: You know like which one? I don't know.
[01:05:28] Speaker D: You know which feature of it. Okay. Is it Windows Defender for cloud? Is Windows Defender for Windows Server 2023?
Is it Windows Defender XDR? You tell me.
Definitely not at all.
Sorry, rant over.
[01:05:41] Speaker B: I do love that you know Amazon's answer to, you know, all the security tools is another tool.
Gotcha.
[01:05:49] Speaker C: Well, I was actually just reading here that this Security Hub is actually Security Hub V2, and the current Security Hub that you know is now being renamed to Security Hub CSPM, to make your life even worse. So what you think is Security Hub is not actually the new Security Hub. Oh.
Which I just read on a thing here as we were talking.
[01:06:08] Speaker D: Oh. So that makes sense because I think of Security Hub as a cspm, which is where they're rebranding it, which is what Windows Defender for cloud is.
[01:06:17] Speaker C: Yeah.
[01:06:18] Speaker D: So it's the V2 Security Hub or whatever we're going to call it. I don't know if there's an equivalent of that in Azure.
[01:06:28] Speaker C: I thought they had Chronicle.
[01:06:30] Speaker B: Nah, it's gcp.
[01:06:32] Speaker C: Yeah, sorry.
No, there is something in Azure. What's the Azure SIEM called? Oh, Sentinel. Sorry. These names are all the same damn thing. Sentinel is their tool for SIEM and SOAR.
[01:06:44] Speaker D: But you don't really. I guess you could ingest your AKS cluster logs into it. Yeah, you ingest that into your SIEM and then you put your own alerts and everything else on it. But they don't have good runbooks and stuff like that to dive deep.
[01:06:59] Speaker C: Well, maybe they'll get some features to copy from everyone else, and in about five more years you'll be set.
Let's move on. We're deep into the show at this point.
[01:07:10] Speaker B: Oh wow.
[01:07:11] Speaker D: Yeah, we have a lot more AWS.
[01:07:13] Speaker C: AWS released the verified-permissions authorization-client-js package, an open source package that lets Express.js developers implement fine grained authorization using Amazon Verified Permissions with up to 90% less code than custom integrations.
As a front end story, I don't really care. But if you're using Express.js, you need to know about this. Awesome.
[01:07:31] Speaker B: So I'll keep it short, but I do like this, because it's what we've done with authentication: sort of extracting that from the app, where you're doing the token exchange outside of the application logic to identify who you are, and then the application is still doing all the authorization logic. This is basically taking that model and externalizing the authorization as well, and then using that Cedar evaluation to do it, which is kind of neat.
[01:07:55] Speaker C: And it's Express because Express is really, you know, the next version of JavaScript awesomeness, not necessarily replacing React, but close to it. We learned a bunch of lessons with Angular, and then we learned a bunch of new lessons in React, and now we have Express.js as kind of the next big thing. And so giving it this externalized model that allows you to do authorization this way is a nice improvement, and I'm glad to see it. Because what they also realized is that putting that onto the front end makes it very slow. So if they put it into this other component and use that, then it makes the front end faster, which is what they ultimately want to do.
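To make the "externalized authorization" pattern being described concrete, here's a toy sketch of the shape of it. This is not the Cedar language or the Verified Permissions API; it's just the idea, in Python, that the app asks a separate decision point "is this principal allowed to do this action on this resource?" instead of embedding that logic in route handlers. All names and policies are invented.

```python
# Toy externalized-authorization sketch (NOT Cedar / Verified
# Permissions): policies live outside the app, and handlers just ask.

POLICIES = [
    # (principal, action, resource) tuples that are permitted
    ("alice", "read",  "doc-1"),
    ("alice", "write", "doc-1"),
    ("bob",   "read",  "doc-1"),
]

def is_authorized(principal: str, action: str, resource: str) -> bool:
    """The externalized decision point middleware would call."""
    return (principal, action, resource) in POLICIES

def handle_request(principal: str, action: str, resource: str) -> int:
    """A route handler with no embedded authorization logic."""
    if not is_authorized(principal, action, resource):
        return 403  # forbidden
    return 200      # ok, do the work

print(handle_request("bob", "write", "doc-1"))   # 403
print(handle_request("alice", "write", "doc-1")) # 200
```

In the real integration, the middleware would call Verified Permissions with the request context and Cedar policies would make the decision; the handler code stays equally clean either way.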
AWS Backup now integrates multi-party approvals with logically air-gapped vaults, enabling organizations to recover backups even when their AWS account is completely compromised or inaccessible, by requiring approval from a designated team of trusted individuals outside the compromised account. The feature addresses a critical security gap where attackers with root access could previously have locked organizations out of their own backups. Now recovery can proceed through an independent authentication path using IAM Identity Center users who approve vault sharing requests through a dedicated portal. As I read through this, I remembered the time when my HSM was locked inadvertently, and to unlock the HSM you had to have the physical key cards, which we had wisely given to four different people in the organization, who had them locked in their desk drawers. And we had to find the keys to said desk drawers for one of the people who was no longer employed by the company, because no one remembered to get the key card from him before we escorted him out the door.
So yeah, this sounds great. Love this feature.
[01:09:33] Speaker B: Yeah, I hope it's not hardware token approval based, because I do like the approval workflow, and you have to do something to protect against ransomware attacks, whether that's the logically air gapped vault or write-once storage. There are definitely shortcomings of both in terms of operations, but being able to restore is critical. So hopefully it's approval workflows. That's all I want, an approval workflow.
[01:10:01] Speaker D: The funny part is I was in the console doing something in AWS IAM Identity Center earlier today and I saw this. I was like, why is there an approval teams option? And now I know.
[01:10:12] Speaker B: Now you know.
[01:10:13] Speaker C: Now you know.
[01:10:14] Speaker D: Cool.
[01:10:15] Speaker C: Like, you're out there playing around in the AWS world.
So it's fun.
[01:10:19] Speaker D: I get to have some fun still.
[01:10:20] Speaker C: Yeah.
AWS provides a CloudFormation template that automatically monitors S3 bucket policy changes using CloudTrail, EventBridge and Simple Notification Service, sending email notifications containing the IP address, timestamp, bucket name and account ID when policies are modified.
The solution addresses a critical security need as enterprises manage hundreds of access policies across expanding cloud environments, helping central security teams maintain visibility and compliance for S3 bucket access controls. The implementation requires only CloudTrail to be enabled and uses KMS encryption for secure SNS message delivery, with the ability to extend beyond email to create internal tickets or trigger webhooks based on operational requirements. The EventBridge rule specifically monitors for PutBucketPolicy, DeleteBucketPolicy, PutBucketAcl and PutObjectAcl operations, providing comprehensive coverage of policy modification events across S3 buckets, and organizations can deploy the solution across multiple AWS accounts and regions using CloudFormation StackSets, making it practical for large scale environments managing millions of S3 buckets. It's scalable and beautiful for everybody except the SOC team, who is about to get buried in alerts.
Yeah.
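For a feel of how the template's EventBridge piece would select these events, here's a sketch of an event pattern covering the operations named above, plus a deliberately simplified matcher. Real EventBridge matching has more features (prefix filters, anything-but, numeric ranges); this only handles the exact-value-in-list case, and the sample event is invented.

```python
# Sketch of an EventBridge event pattern for S3 policy changes, with a
# simplified matcher. EventBridge's real matching semantics are richer;
# this covers only exact values listed in the pattern.

EVENT_PATTERN = {
    "source": ["aws.s3"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutBucketPolicy", "DeleteBucketPolicy",
                      "PutBucketAcl", "PutObjectAcl"],
    },
}

def matches(event: dict, pattern: dict) -> bool:
    """True if every pattern key matches the event (lists = any-of)."""
    for key, expected in pattern.items():
        if isinstance(expected, dict):
            if not matches(event.get(key, {}), expected):
                return False
        elif event.get(key) not in expected:
            return False
    return True

sample = {"source": "aws.s3",
          "detail": {"eventSource": "s3.amazonaws.com",
                     "eventName": "PutBucketPolicy"}}
print(matches(sample, EVENT_PATTERN))  # True
```

An event with `eventName` of, say, `GetObject` would fall through the `eventName` list and not trigger the SNS notification, which is how the rule keeps the alert volume scoped to policy changes.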
[01:11:22] Speaker D: I'm more confused why you would do it this way rather than just leveraging Config for this.
[01:11:29] Speaker B: Oh no. So Config would be how you maintain that the ACLs are in place for all the buckets and objects.
[01:11:36] Speaker D: You could also do notifications through there. Can't Config fire an SNS topic? There's no difference, it's literally doing essentially the same thing. But Config can then go in two directions.
I mean, this is just a solution I threw out there.
[01:11:51] Speaker C: Yeah. Yeah. I mean it's.
[01:11:52] Speaker B: My joke for this was gonna be: oh, a professional services engagement they turned into a product. Because that is what it is.
[01:11:58] Speaker D: It's 100 what they did.
It's a hundred percent. Well, half these, these confirmation templates that they provide out there is what they are.
[01:12:07] Speaker C: Yeah.
[01:12:07] Speaker D: It's professional services: things that they've solved for customers, and then someone writes a blog post which gets published, which helps them.
[01:12:15] Speaker C: If I remember, there's a couple flaws with Config, which is probably why this exists. Number one, Config is not real time; it's got a pretty significant delay between when the thing occurred and when you can take action on it. I think it operates off CloudTrail. Well, yes, this operates off CloudTrail, and Config can operate off CloudTrail, but if you're also doing scanning, right, doesn't it require the rule evaluation to.
[01:12:41] Speaker D: Fire? Yeah, but you can have it fire on an event change, so it would trigger. There are ways to do it faster, near real time, I thought. Now I'm second guessing what I remember about Config, but I wrote a lot of exam questions about auto remediation leveraging Config and Lambda over the years.
[01:13:02] Speaker C: Yeah, I definitely remember that's a thing, but I didn't think it was real time. I think that was the key thing here, but maybe not. Maybe I'm wrong. It's been a while since I've played with Config, for sure.
A quick Google search if you can believe it. I don't use config for my personal projects.
[01:13:21] Speaker D: Config operates in real time. It watches for resource changes in your org. Yeah, it's real time.
[01:13:29] Speaker B: Okay fine. This is cheaper so take that.
[01:13:33] Speaker C: I mean, we're talking about fractions of pennies, so I don't know that I care that much. But you get so much more value out of Config for that penny than you do here. Yeah, that is true. Yeah.
[01:13:42] Speaker D: Then you have a custom rule, then you have another EventBridge. Like, I just have feelings about this. I was clearly not in the pre-show read, because I probably would have killed this one.
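The Config-based alternative Matt is arguing for would center on a rule evaluation Lambda. Here's a rough sketch of what such a handler could look like, flagging a bucket whose ACL grants access to everyone. The event shape is heavily simplified from what Config actually sends (the real `invokingEvent` carries much more), and the ACL check is a naive string test for illustration only.

```python
# Rough sketch of a Config custom-rule Lambda that flags S3 buckets
# with public ACL grants. Event shape and ACL check are simplified
# assumptions, not the full AWS Config contract.
import json

def lambda_handler(event, context=None):
    """Evaluate one configuration item and return a compliance verdict."""
    invoking = json.loads(event["invokingEvent"])
    config_item = invoking["configurationItem"]
    acl = config_item.get("supplementaryConfiguration", {}) \
                     .get("AccessControlList", "")
    public = "AllUsers" in acl  # naive: real code would parse grants
    return {
        "compliance": "NON_COMPLIANT" if public else "COMPLIANT",
        "resourceId": config_item["resourceId"],
    }

fake_event = {"invokingEvent": json.dumps({
    "configurationItem": {
        "resourceId": "my-bucket",
        "supplementaryConfiguration": {
            "AccessControlList": '{"grantee":"AllUsers","permission":"READ"}'
        },
    }
})}
print(lambda_handler(fake_event)["compliance"])  # NON_COMPLIANT
```

In a real deployment, Config invokes this on each configuration change, and the verdict can drive both notifications and an auto-remediation action, which is the "goes in two directions" point made above.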
[01:13:53] Speaker C: Well, you definitely want to kill this next story, but I just have to laugh, because AWS CDK is launching bi-quarterly community meetings starting June 24th, 2025, with two sessions, at 8am and 5pm Pacific time, to accommodate global users. This replaces their original plan for a formal contributor council governance model. The meetings will feature roadmap updates, team demos, RFC reviews and open Q&A sessions, with all content recorded and posted to YouTube for those who can't attend live. The shift to open community meetings allows broader participation beyond just core contributors while maintaining AWS control as the project maintainer, addressing the balance between community input and project governance. I laugh because they're hosting this on Zoom, which is clearly why they had to kill their previous messaging product, Chime. Yeah.
[01:14:34] Speaker D: Is bi-quarterly twice a quarter or every other quarter?
[01:14:38] Speaker C: I think that's twice.
I don't know. But I do kind of want to just go to it and then just ask, what's CDK? in the Q&A.
[01:14:48] Speaker D: Just troll the meeting.
[01:14:49] Speaker C: Yeah, I mean, totally do that.
[01:14:51] Speaker B: The only thing they could have done to drive me further away from CDK is to have community meetings and talk about it. And so like, no chance.
[01:15:02] Speaker C: Yeah. All right, well, that's pretty much it. I don't think the CDK thing was actually part of re:Inforce, but I think we got through all the re:Inforce stuff, and there's a lot. This was a good re:Inforce. We didn't get predictions in on this one. Yeah, we kind of stopped doing re:Inforce predictions, because after the first year and the second year and the third year, they didn't announce anything big, because they just don't do that there. But apparently now they do. So maybe it's back on the table, boys. Yep, another one. Definitely some good stuff.
[01:15:27] Speaker D: Every other show's gonna be so many.
[01:15:29] Speaker C: Opportunities for you to win, Matt. That's how I like to think. Yeah.
[01:15:31] Speaker B: Yeah, you too. Could be a winner.
[01:15:34] Speaker D: No, no, that's not what happened. You only win on clouds you don't work on.
[01:15:38] Speaker B: That's true.
[01:15:39] Speaker D: Like I said, I think there's a rule somewhere here that you're only allowed to win on the cloud that you don't actually deal with on a day to day basis.
[01:15:47] Speaker C: I mean, that's been my defense when everyone's like, well, you shared non-public information that violates your confidentiality agreement. I'm like, I don't know what you're talking about. I never win.
If I did, I would have won for my own cloud.
I'm very strict about that. I definitely don't do that.
Processes in place. All right.
This is not an Amazon story, but it is a 1Password story about how they're now doing secrets syncing with AWS, which I was kind of intrigued by. 1Password now integrates with Secrets Manager, allowing users to sync secrets directly from the 1Password desktop app to AWS environments without SDKs or code changes, and this addresses secret sprawl by providing a centralized management interface for credentials used in AWS applications. The integration leverages 1Password Environments, which is in beta and provides project specific scoping for secrets, and uses confidential computing to ensure secrets are never exposed as plaintext during sync operations.
Teams can manage environment specific credentials independently with built-in security controls, and this marks the first deliverable under 1Password's strategic collaboration agreement with AWS, positioning it as a preferred secrets management solution for AWS customers. The integration is available on all 1Password tiers at no additional cost beyond existing subscriptions. And while I think this is really cool, why couldn't you use Parameter Store, which is way cheaper?
[01:17:03] Speaker D: That goes into the ever-ongoing argument of when do you use Parameter Store secrets versus Secrets Manager secrets?
[01:17:11] Speaker C: And the only time you use Secrets Manager secrets is when it's integrated into the Amazon service that needs it. Otherwise Parameter Store is your friend all the way. It's way cheaper and way better for most use cases.
[01:17:21] Speaker D: The only other time you do it is if you're going to actually use the auto key rotation.
[01:17:25] Speaker C: Correct or it's an Amazon service that only needs secrets.
[01:17:29] Speaker D: I will say this is a pretty cool feature though. Like I can imagine this for like dev environments, stuff like that.
[01:17:34] Speaker C: As a 1Password customer I am super psyched about this. I don't know, I'm like I'm thinking about all the ways I could use this and the value could add to me. I haven't enabled it yet because I'm a little nervous with the beta part of it and these are my passwords and they're important to me.
But when that goes ga this might be something I'm using especially for consulting stuff I do where I have all these passwords for Amazon accounts that I don't really want to store on my computer, but I want them available to me in a secret store somewhere.
[01:18:02] Speaker D: I could definitely see doing this. We use 1Password at work at the day job, so, you know, letting dev environments sync that way, giving the dev team access so they can test locally, writing a script that queries and puts and then attaches it to larger scale dev environments, things like that. You could do a lot of really cool stuff with this.
[01:18:22] Speaker C: All right, now we're moving into why Amazon.
We have two stories that are in why Amazon.
First one: Amazon Time Sync Service now adds nanosecond precision timestamps directly at the hardware level on supported EC2 instances, bypassing kernel and application delays for more accurate packet timing. This leverages, of course, the AWS Nitro System's reference clock to timestamp packets before they reach the software stack. The feature enables customers to determine exact packet order and fairness, measure one-way network latency and increase distributed system transaction speeds with higher precision than most on premise solutions. Financial trading systems and other latency sensitive applications can now achieve microsecond level accuracy in packet sequencing. Available in all regions where Amazon Time Sync Service PTP hardware clocks are supported, the feature works on both virtualized and bare metal instances at no additional cost, and customers need only install the latest ENA Linux driver to access timestamps through standard Linux socket APIs. And if I have to worry about this, I need to find a different job.
[01:19:21] Speaker B: I know the answer.
[01:19:22] Speaker C: This is a level of precision that I don't want to have to deal with. I understand why it's important for financial services, especially the stock market. I get it. But. Oh no.
[01:19:29] Speaker B: Yeah, because I was super surprised when NASDAQ announced that they were moving their trading workloads into aws.
[01:19:37] Speaker C: But now you know why? Because they're getting this. Yeah, exactly.
[01:19:41] Speaker B: Because that was always, you know. I used to work for a company that hosted a large exchange in New York, and this was a key blocker to using cloud systems. It's being able to not only process things in near real time, but to audit the fairness, that you're processing in a specific order, which is super important in those workloads. And at high trading volume you're talking billions of transactions a second.
I get why it's important, and it was kind of neat to learn that, and all the difficulties and all the work that goes into this. And I wonder, was this available in 2022 just for NASDAQ?
They had Nitro.
[01:20:29] Speaker C: Maybe they were an early adopter. Yeah, but that was the only company I was ever at where the network team was a profit center, which was always so weird to me. When they explained it to me, I was like, sorry, what?
[01:20:44] Speaker B: It makes money somehow.
[01:20:46] Speaker C: And it makes that much money. Holy crap. Yeah, that was always impressive.
And now our Why, Amazon? story number two. AWS VPC increases the default route table capacity from 50 to 500 entries, eliminating the need for manual limit increase requests that previously created administrative overhead for customers managing complex network architectures. This 10x capacity increase directly benefits organizations using multiple network paths for traffic inspection, firewall insertion, or connections to various gateways like Transit Gateway, VPN or peering connections. The change applies automatically to all existing and new VPCs across commercial and GovCloud regions, though accounts with existing quota overrides will maintain their current settings.
[01:21:24] Speaker B: Oof.
[01:21:25] Speaker C: I don't want to be in a situation where I'm managing 500 entries across multiple VPCs. Even with things like Transit gateway that make these things easier. I don't want to do this.
[01:21:33] Speaker B: Well, I imagine this is exactly for BGP announcements, right? Because I think every VPC that's attached to the Transit Gateway does have to be a route, and you could do BGP advertisement to automate that, but you'd probably hit a limit pretty quick.
[01:21:51] Speaker D: Well, that's the only place I've ever had to increase this limit: on that centralized transit gateway, the core one.
I haven't had to do it on any of the other ones. So to me this is like, okay, I have 50 AWS accounts, and it affects one, maybe two of them.
I mean but in order to do.
[01:22:12] Speaker B: A full mesh network with multiple VPCs, you'd have to. Right. Hub and spoke. You can do this all day, right?
[01:22:17] Speaker D: With 50, I guess. I mainly go for hub and spoke versus full mesh because your blast radius there then becomes massive. One account goes, so therefore all 49 others. Like I just, I don't really.
[01:22:28] Speaker C: I mean, do you think that people are making Istio manage routes at this level in mesh networks?
[01:22:37] Speaker D: Oh no.
[01:22:38] Speaker C: I mean so in kubernetes.
[01:22:40] Speaker B: So service mesh is not mesh networking, number one. But it is a similar software defined networking concept. You know, if you are automating the network routing level of BGP on spinning up and spinning down VPCs, it's not a lot different.
[01:22:58] Speaker D: How often are you hitting the 50 to 500? Like, that's a lot of routes to be hitting that limit. I feel like if I was hitting that limit, I've done something wrong.
I don't think so. Unless I'm micro segmenting my network to a new level.
[01:23:19] Speaker B: Well, so I mean I have a.
[01:23:20] Speaker D: Lot of private endpoints.
[01:23:21] Speaker B: If you think about, you know, having an AWS account per internal service or internal application, and then you expand that to global regions, and each one of those becomes a VPC. Like within a company that's going full "you build it, you run it," and has like, you know.
[01:23:37] Speaker D: Yeah, but at that point, if I'm that security conscious, I probably wouldn't have that set up in that way with a transit gateway and everything along those lines. More than likely I would be using private endpoints, and using private endpoints across accounts to say, okay, this one particular service is allowed to do it, because I'm micro segmenting out my infrastructure, my environment, to have each individual thing located at each location.
[01:24:06] Speaker C: Maybe.
[01:24:07] Speaker D: I mean, what you.
[01:24:08] Speaker B: It would be more difficult to centrally administer the private endpoints per application than it would be to centrally administer VPCs and peering together, but.
[01:24:18] Speaker D: Yeah, but again, your blast radius is much larger.
[01:24:21] Speaker B: Not necessarily. There's other layers of protection in there, but. Yeah, I hear you.
It's just choices.
[01:24:28] Speaker D: If you don't want to pay for 4 billion private endpoints, which is also highly plausible. Yeah, that is why, Justin. Oh, that is why.
[01:24:35] Speaker C: Yeah, good, good.
[01:24:36] Speaker B: Managing subscription private endpoints. That doesn't sound fun.
[01:24:39] Speaker C: No, none of this sounds fun. Like, I mean, like I've. I now had to buy my network team a drink. I think just.
[01:24:46] Speaker D: And.
[01:24:46] Speaker C: And condolences for ever having to do this.
[01:24:49] Speaker B: Only if they've gone past the 50 routes. If they haven't, then they deserve nothing.
[01:24:54] Speaker C: That's fair. Yeah.
[01:24:55] Speaker D: You get water, sir? Water.
[01:24:58] Speaker C: Day one of my new job. How many routes do you have in your VPC?
Tell me.
Actually, you should probably ask that in the interview.
That's going to be an interview question, actually, now that I think about it. Tell me about your routing structure. How many routes do you have in most of your VPCs? Do you have this going on? What's going on? How big is your Kubernetes cluster? Did you use ECS? You didn't use ECS? Okay, thanks. I'm going to go. Yeah.
All right. AWS released a blog post about building the world's most powerful computer for AI training. They're calling this Project Rainier, which, for those of you familiar with Seattle, is a very large volcano in view of almost all Amazon offices. It creates the world's most powerful AI training computer using tens of thousands of Trainium2 UltraServers spread across multiple US data centers, providing Anthropic 5x more computing power than their current largest cluster for training Claude models. The system uses custom Trainium2 chips capable of trillions of calculations per second, connected via NeuronLink into 64-chip UltraServers, with EFA networking across data centers to minimize latency and maximize training throughput.
AWS's vertical integration from chip design through data center infrastructure enables rapid optimization across the entire stack, while new cooling and power efficiency measures reduce mechanical energy consumption by up to 46% and embodied carbon in concrete by 35%. Project Rainier establishes a template for deploying computational power at unprecedented scale, enabling AI breakthroughs in medicine, climate science, and other complex domains that require massive training resources. The infrastructure maintains AWS's industry leading water efficiency of 0.15 liters per kilowatt hour, less than half the industry average, through innovations like seasonal air cooling that eliminate water usage entirely during the cooler months. Nice job, Amazon. They have some cool pictures too. If you love to see racks of blinking lights, they have photos.
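To put that 0.15 L/kWh water efficiency figure in perspective, a quick back-of-the-envelope calculation helps. The industry baseline of 0.375 L/kWh below is a hypothetical figure we inferred from "less than half the industry average," and the 100 MW facility size is made up purely for illustration:

```python
# Back-of-the-envelope water usage comparison. AWS_WUE comes from the
# article; INDUSTRY_WUE and the facility size are illustrative assumptions.
AWS_WUE = 0.15        # liters per kWh (from the announcement)
INDUSTRY_WUE = 0.375  # hypothetical baseline implied by "less than half"

def annual_water_liters(wue_l_per_kwh: float, it_load_mw: float) -> float:
    """Water used per year for a facility running flat-out at it_load_mw."""
    kwh_per_year = it_load_mw * 1000 * 24 * 365  # MW to kWh over a year
    return wue_l_per_kwh * kwh_per_year

aws = annual_water_liters(AWS_WUE, 100)
industry = annual_water_liters(INDUSTRY_WUE, 100)
print(f"AWS: {aws/1e6:.1f} megaliters/yr vs baseline: {industry/1e6:.1f} megaliters/yr")
```

On those assumed numbers, a cluster of that size would save roughly 200 megaliters of water a year versus the assumed industry baseline.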
[01:26:41] Speaker D: I, I love that everything's led.
[01:26:43] Speaker C: Yeah, also cool videos too in the article, but pretty cool.
[01:26:48] Speaker B: I love that data center design's becoming hip again because AI is forcing everyone to relook at, you know, the density that you can offer.
It's kind of neat.
I like it.
[01:26:59] Speaker C: Now, CloudWatch Investigations uses an AI agent to automatically identify anomalies, surface related signals, and suggest root cause hypotheses across your AWS environment, reducing mean time to resolution at no additional cost.
Now generally available, you can trigger these investigations from any CloudWatch widget, 80-plus AWS consoles, CloudWatch alarms, or Amazon Q chat, with results accessible through Slack and Microsoft Teams for team collaboration. I did see this button in my console recently and I did push it to see what it was.
I will say it has not put me out of a job. I'm still smarter than it, but it's pretty cool.
It still gives you a lot of good insights, and it definitely pulls a lot of things together to show you, like, hey, it could be this. Which is cool if you know the app. But other than that, it's pretty good.
[01:27:41] Speaker B: Were you trying to debug an actual issue when you hit it or was it just sort of insight on a running at runtime?
[01:27:47] Speaker C: I was trying to figure out an IAM issue at the time, so it was an issue that I was looking at. And it did figure that out. I mean, it did tell me there was most likely an IAM access issue, but then it told me the wrong account. But that's okay, it didn't know. Yeah, that's still interesting.
[01:28:04] Speaker D: Where is this in the console? Is this under CloudWatch?
[01:28:08] Speaker C: It's in any widget inside of CloudWatch, but there's also a button on the top right in most of the consoles that says "start an investigation," or you see it on the top right of the table views. Like on the EC2 console you'll see "start investigation," and you tell it basically what the problem is and it helps you try to figure it out.
[01:28:26] Speaker D: Okay, cool.
[01:28:27] Speaker B: Yeah, I don't like the name for some reason. Maybe it's just the security guy getting all bent out of shape, like that's our word.
[01:28:32] Speaker C: But it's not just in CloudWatch. That's the other thing too, it's everywhere. So like, I'm shocked. It's on Amazon Q too. Investigator, something like that. But I don't hate the name like you do. But I'm not in security, so.
[01:28:46] Speaker D: Well, it's just.
[01:28:47] Speaker B: Yeah, I mean, I think GuardDuty has investigations. It's just. And I can't think of a better word for what it's doing, so I don't know.
[01:28:55] Speaker C: Agreed. Well, we made it, guys. We made it through AWS. Jesus.
There's a lot of. Lot of stuff there.
[01:29:02] Speaker B: They gotta slow down, man.
[01:29:04] Speaker C: I know. They're killing it right now. They're like, yeah, you know, we slowed down for GCP and Azure to do their conferences. So now we're back with a vengeance.
Luckily, we don't have too many stories left. So if you're hanging with us for GCP and Azure, and there's an Oracle story, we're not too far from that.
[01:29:19] Speaker B: The single listener that's waiting for that.
[01:29:21] Speaker C: Oracle story, man, they really just want to hear us make fun of Oracle. That's really what it is.
[01:29:25] Speaker B: That's all we really do about it.
[01:29:26] Speaker C: Yeah.
[01:29:27] Speaker D: Or we're like, ooh, that's a really big server. Where's my wallet? That's gonna be stolen.
[01:29:32] Speaker C: Yeah. Yeah. All right. Moving on to GCP. The Gemini 2.5 Flash and Pro models are now generally available on Vertex AI, with Flash optimized for high throughput tasks like summarization and data extraction, while Pro handles complex reasoning and code generation. The GA release provides production-grade stability for enterprise deployments. The new Gemini 2.5 Flash-Lite enters public preview as Google's most cost effective model, running at one and a half times faster than 2.0 Flash at lower cost, targeting high volume workloads like classification and translation. This positions Google competitively against AWS Bedrock's lighter models and Azure's economy tier offerings. Supervised fine tuning for Gemini 2.5 Flash is also generally available, allowing enterprises to customize the model with their own datasets and terminology. This addresses a key enterprise requirement for domain specific AI that competitors have been pushing with their fine tuning capabilities. The Live API with native audio-to-audio capabilities enters public preview, enabling real time voice applications without intermediate text conversion. This streamlines development of voice agents and interactive AI systems, competing directly with OpenAI's Realtime API offering.
Pricing reflects the tiered approach, with Flash-Lite for cost sensitive workloads, Flash for balanced performance, and Pro for advanced tasks. Complete pricing details are available on the website if you're interested.
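The tiering described above lends itself to a simple routing pattern: pick the cheapest model that matches the task. Here's a minimal sketch; the model names come from the announcement, but the task-to-tier mapping is our own invention, not Google's guidance:

```python
# Hypothetical model router mirroring the tiered approach: Flash-Lite for
# high-volume/simple work, Flash for balanced tasks, Pro for heavy
# reasoning. The task taxonomy here is illustrative, not official.
TIERS = {
    "classification": "gemini-2.5-flash-lite",
    "translation": "gemini-2.5-flash-lite",
    "summarization": "gemini-2.5-flash",
    "extraction": "gemini-2.5-flash",
    "reasoning": "gemini-2.5-pro",
    "code-generation": "gemini-2.5-pro",
}

def pick_model(task: str) -> str:
    # Fall back to the balanced tier for unrecognized tasks.
    return TIERS.get(task, "gemini-2.5-flash")

print(pick_model("classification"))
print(pick_model("code-generation"))
```

The point is just that with three GA tiers you can push routing decisions into code rather than hardcoding one model for everything.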
We'll probably one day just switch the bot that helps with show notes to Gemini and see if you notice. It does plugins, the way I built it.
So I can just basically swap out the Claude plugin for the Gemini plugin and give it the right key, and all the same prompts would work. It may be kind of fun to see if you guys are like, it has a whole different tone today.
[01:31:05] Speaker B: I mean, it has a big effect switching back and forth between Claude and Gemini on coding. Yeah, it has a huge effect. So I imagine it would have, like you said, a whole. It would key in on different points and probably have a whole different tone.
[01:31:19] Speaker C: What would be cool is if I got smart, I'd be like, okay, I'm going to use ChatGPT, I'm going to use Claude, and I'm going to use Google Gemini. I'm going to have all three of them summarize it, then have each of them rate the other ones, and then take the best points. That'd be kind of cool.
[01:31:34] Speaker B: That would be also.
[01:31:35] Speaker C: Would be very expensive. Yeah, yeah, but it'd be fun. It'd be definitely fun.
[01:31:40] Speaker B: I mean, I'm really glad they're offering Flash, just because they've been removing stuff from the Vertex AI model gallery, and some of the cheaper ones were among the things they were removing. So workloads that just needed basic summarization were being pushed towards the large Gemini model, and it's like, this doesn't make a lot of sense. So glad to see that they're offering a competitive option.
[01:32:08] Speaker C: Google Cloud Backup Vaults now supports standalone Persistent Disk and Hyperdisk backups in preview, enabling granular disk level protection without backing up entire VMs. This provides cost optimization for scenarios where full VM backups aren't necessary, while maintaining immutable and indelible protection against ransomware. I think we literally talked about this when they announced this feature, and I was like, well, I don't always want to back up my Linux drive or my Windows drive.
The multi region backup vaults are also now generally available, storing backup data across multiple geographic regions to maintain accessibility during regional outages. This addresses business continuity requirements that AWS Backup doesn't currently offer with its single region vault limitation. Backup vaults create a logically air gapped environment in Google's managed projects where backups cannot be modified or deleted during enforced retention periods, even by backup administrators. The service provides unified management across Compute Engine VMs, Persistent Disk, and Hyperdisk, with integration to Security Command Center for anomaly detection. This consolidation reduces operational complexity compared to managing separate backup solutions for different resource types. So, nice backups.
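The "indelible even for administrators" behavior is the interesting property here. As a pure illustration of that policy (not the actual Backup Vault API, and the retention check is our own simplification), the delete decision looks something like:

```python
# Sketch of enforced-retention semantics: a delete request is refused
# until the retention window elapses, and admin status does not matter
# during that window. Illustrative only, not Google's implementation.
from datetime import datetime, timedelta

def can_delete(created: datetime, retention_days: int, now: datetime,
               is_admin: bool) -> bool:
    # Note: is_admin is deliberately ignored; that's the whole point of
    # an indelible, logically air gapped vault.
    return now >= created + timedelta(days=retention_days)

t0 = datetime(2025, 1, 1)
print(can_delete(t0, 30, t0 + timedelta(days=10), is_admin=True))  # locked
print(can_delete(t0, 30, t0 + timedelta(days=45), is_admin=True))  # expired
```

Ransomware that compromises an admin credential still can't purge the backups, which is the scenario this design defends against.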
[01:33:05] Speaker B: Yeah, I mean just that I had the sort of same reaction you did. I'm like wait a second, I have to back up the OS drive on.
[01:33:10] Speaker C: All these things like I don't care.
[01:33:13] Speaker B: And it made me question the whole architecture of the backup service. Like, wait, I'm hoping that this was just, you know, sort of an assumption made that the VM-to-disk ratio was one to one, and they forgot to add a lookup for the disk ID.
[01:33:27] Speaker C: I mean it took almost a year to get so it must not have been a lookup. Doesn't seem like it.
[01:33:32] Speaker B: And so I was like oh maybe this doesn't work anywhere near like what I think.
[01:33:36] Speaker C: Yeah, well, if you thought to yourself, I want to take Looker reports and I want to put those into code and then put continuous integration in place, Google has your back this week, as they're introducing continuous integration for Looker, bringing software development best practices to BI workflows by automatically testing LookML code changes before production deployments to catch data inconsistencies and broken dependencies earlier. The feature includes validators that flag upstream SQL changes breaking Looker definitions, identify dashboards referencing outdated LookML, and check for code errors and anti-patterns, addressing scalability challenges as organizations expand their Looker usage across teams. Developers can manage CI test suites, runs, and configurations directly within Looker's UI, with options to trigger tests manually, via pull requests, or on schedules, similar to how AWS QuickSight handles version control but with deeper integration into the development workflow. This positions Looker more competitively against Microsoft's Power BI deployment pipelines and Tableau's version control features, particularly for enterprises requiring robust data governance and reliability across multiple data sources. Currently available in preview with no pricing details announced, the feature targets organizations with complex data environments where manual testing of BI assets becomes impractical as teams scale.
[01:34:43] Speaker B: This is really true, and I didn't know that Power BI and Tableau had their own version of this. Version control I guess I've known about, but that doesn't sound anywhere near as full featured as this Looker option. I really like the upstream SQL changes validator, because that's where everything usually breaks right there. The data set changes the schema slightly, and then everything's broken.
And so I think this is kind of neat. And I do really like the scalability. Like, it looks like there's AI built into it to detect issues, because that's also a thing. Like, this dashboard works great on my data set that I started with, and then you start expanding out the
[01:35:26] Speaker C: use case and all of a sudden.
[01:35:28] Speaker B: Those graphs don't load.
So, yeah, this looks cool. I like Looker, so I'm glad they're.
[01:35:35] Speaker C: Yeah, I'm definitely excited.
[01:35:37] Speaker D: It's becoming like, you know, just another application you're promoting, you know, your code up to. So, you know, having it be integrated, having your dashboards be, be in that way. It's the same way we were talking about earlier in security tools. Like, everything still needs to go through that process.
[01:35:55] Speaker C: Yeah.
[01:35:55] Speaker D: You know, everyone was so focused on, I feel like, software for years, and now it's like, whoa, whoa, go look in your own backyard, too.
You should be doing the same things that you're preaching in other places in your area too.
[01:36:07] Speaker B: I mean, and I hope this is like DevOps philosophies taking hold. Right. So you have software development teams that are now responsible for more things. Like maybe they're responsible for providing cost dashboards for their cloud workloads. And so then they're looking for options to streamline their workflows using the processes that they've grown up with. So I don't know. It's cool.
[01:36:29] Speaker D: But even, like, documentation tools are started using the same philosophies. Like, your documentation team can be distributed and have different people put stuff in and have the same promotion process.
So I've started to see it in more and more places.
[01:36:44] Speaker B: You don't need that. You just have AI to do everything.
[01:36:46] Speaker C: It's easy.
[01:36:49] Speaker B: Documentation.
[01:36:52] Speaker C: I mean, with AI, I'm getting a lot more docs. Yeah. My documentation, so much better.
[01:36:56] Speaker D: Now, here's the real question. Does anybody look at your documentation?
[01:36:59] Speaker B: No, no, no. It's a point of pride, though. I get to say, look at all my documentation.
[01:37:03] Speaker C: Yeah. I mean, I make the AI read my documentation all the time because it's like, oh, I'm gonna go do. I'm like, no, go read the docs you wrote that tell you how to do the dumb thing you're trying to do.
[01:37:11] Speaker B: I wish I never would have guessed that that's a thing, but it's absolutely required, like, because the docs you wrote.
[01:37:17] Speaker C: Go read them just because like you, you don't read your own docs and so the AI does not just a faster cycle like six months later.
[01:37:25] Speaker B: I don't remember anything either. It's just AIs doing that in minutes. So this makes sense. I get it now.
[01:37:33] Speaker C: Google Cloud CDN now supports Service Extensions plugins, allowing customers to run custom WebAssembly code at the edge across 200-plus points of presence for request/response manipulation and custom logic execution. This feature enables edge computing use cases like custom traffic steering, cache optimization, header manipulation, and security policies, competing directly with AWS Lambda@Edge and Cloudflare Workers, but integrated natively with Cloud CDN. Plugins support multiple languages including Rust, C++, and Go, execute with single-millisecond startup times, and run in sandboxed environments using the open source Proxy-Wasm API standard. Cloudinary has already integrated their image and video optimization solution as a packaged Wasm plugin, demonstrating partner ecosystem adoption for media heavy workloads requiring dynamic content transformations. Developers can choose between edge extensions, before the CDN cache, or traffic extensions, after the cache and closer to origin, providing flexibility in where custom code executes in the request path.
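Real plugins are Wasm modules compiled from Rust, C++, or Go against the Proxy-Wasm API, but the kind of header-manipulation logic you'd run at the edge is easy to sketch language-neutrally. This plain-Python function is purely illustrative (the header names and steering rule are invented, and this is not the Proxy-Wasm interface itself):

```python
# Illustration only: sketches the sort of request-header manipulation an
# edge extension might perform before the CDN cache. Strip internal
# headers, then add a hypothetical device-class header for cache steering.
def on_request_headers(headers: dict) -> dict:
    out = {k: v for k, v in headers.items()
           if not k.lower().startswith("x-internal-")}
    ua = out.get("user-agent", "")
    # Hypothetical steering rule: vary the cache by device class.
    out["x-device-class"] = "mobile" if "Mobile" in ua else "desktop"
    return out

print(on_request_headers({"user-agent": "Mobile Safari",
                          "x-internal-debug": "1"}))
```

The edge-versus-traffic extension choice mentioned above is essentially about whether logic like this runs before the cache lookup or after it, closer to origin.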
That's a. That's a pretty big feature that they just kind of threw out there.
[01:38:30] Speaker B: Tossed. Tossed in there. Yeah, it's one of those things that you know you don't know you need until you need it. And you get all the optimization by putting something like this in place and it saves your entire program.
[01:38:40] Speaker D: I was definitely trying to design something at my day job, and I was like, oh, we'll just use the Azure equivalent. Lambda at the edge doesn't exist there.
[01:38:50] Speaker C: I thought Front Door did have a service similar.
[01:38:53] Speaker D: No, not something that we could use.
[01:38:57] Speaker C: Okay, well, that's a different conversation.
Yeah, let's move on to Azure. Microsoft Azure Quantum announced a quantum error correction scheme that can improve hardware qubit error rates from 1 in 1,000 to logical qubit error rates of 1 in 1 million, though this is based on mathematical proofs and simulations rather than demonstrated hardware performance. Azure's approach differs from IBM's fixed layout quantum chips by supporting multiple hardware technologies, including movable atom based qubits from partners like Atom Computing and Quantinuum, allowing more flexible error correction implementations. So Microsoft continues to say quantum is around the corner, but still hasn't proven the science. So we're waiting for that.
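For listeners drowning in the math, the headline claim (1-in-1,000 physical errors becoming 1-in-1,000,000 logical errors) is at least plausible under the generic textbook scaling law for error-corrected codes, where the logical error rate falls exponentially with code distance once physical errors are below a threshold. This toy calculation uses that generic law with made-up parameters; it is emphatically not Microsoft's actual scheme:

```python
# Toy illustration of error-correction scaling, using the textbook form
# p_logical ~ (p_physical / p_threshold) ** ((d + 1) / 2).
# The threshold and distances below are invented for illustration and are
# NOT the parameters of Microsoft's announced code.
def logical_error_rate(p_physical: float, p_threshold: float, distance: int) -> float:
    return (p_physical / p_threshold) ** ((distance + 1) / 2)

p = 1e-3     # the 1-in-1,000 physical error rate from the article
p_th = 1e-2  # hypothetical threshold
for d in (3, 7, 11):
    print(f"distance {d}: logical error rate ~ {logical_error_rate(p, p_th, d):.1e}")
```

With these made-up numbers, distance 11 lands right at the one-in-a-million mark, which is the flavor of argument such announcements rest on; the hard part Microsoft still has to demonstrate is hitting it on real hardware.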
[01:39:34] Speaker B: Like our earlier comments about not getting into AI early enough and missing out on the $100 million payday, I'm going to do the same thing when it comes to quantum computing. And we'll be like, ah, they're going to get all this money, the quantum computer scientists. If only I would have been able to stay awake while I was reading through one of these articles.
[01:39:51] Speaker C: That's the problem with the quantum thing is that the science is so complicated, so dense. Yeah, yeah. And I'm just like, I'm dying right now trying to understand the math.
[01:40:01] Speaker D: Even if I could read the paper, I don't think I'd even understand it or be able to follow what they're saying.
[01:40:09] Speaker C: But now with AI, I can tell Claude: I would like you to take this scientific paper, and I'd like you to explain it to me like I'm five years old. Please go. Though it won't help, because I don't think Claude understands that math either. But I like to think it
[01:40:21] Speaker B: Could, but it makes up some good stuff. That sounds good. So it's good enough. You feel you still leave with the warm and fuzzies. You don't understand anything anymore. But you know, it's fine.
[01:40:29] Speaker C: Azure has given us two new MCP toys this week.
First, Microsoft Fabric Real-Time Intelligence now supports MCP, enabling AI models like Azure OpenAI to query real time data using natural language that gets translated into KQL queries. This open source integration allows developers to connect AI agents to Eventhouse and Azure Data Explorer for immediate data analysis, which is pretty great. It gives you an MCP server to act as a bridge between AI apps and Microsoft's real time data platform. And then they also did the Azure DevOps MCP Server, which enables GitHub Copilot in VS Code and Visual Studio to access Azure DevOps data, including work items, pull requests, test plans, builds, and wikis, running locally to keep private data within your network. That is actually the feature I'm most excited about, because if I could have it handle low level bugs or defects or triage or any of those things: I'd like you to triage this issue and see if you can fix it without involving me, and leave me alone with my vibe coding over here.
[01:41:22] Speaker B: Yeah. And you hook it up to something like the DevOps tooling, and it'll do a full check in and do all the commits, and then do a release and promote it, and, you know, build all the test suites, run the test suites. Like, it can do it end to end once you give it the right tooling. And so, you know, that's a lot of responsibility, and we definitely know AI is not to be trusted implicitly without some supervision.
[01:41:47] Speaker C: But once the manual checks go in place, I trust AI to go to the next step. And oh yeah, you can build the release pipelines.
[01:41:53] Speaker B: Yeah. Something like you know, go making. Making an artifact. Oh sure.
[01:41:59] Speaker D: I mean, it just goes back to: you really need TDD if you're really going to go hardcore into the AI coding process. Having code means having true and good tests. Maybe not true TDD, but you could argue you need tests in your code base.
[01:42:14] Speaker B: You could argue that using AI for vibe coding is TDD, because you're basically stating the outcome you want, almost an assertion, and telling it to go do this thing.
[01:42:25] Speaker D: Yeah, but it doesn't always do that thing. There's not like a test at the end to say this is the thing that you have done and here's all the unit tests associated with it which we still have to have that conversation. Definitely not today because it's our episode.
[01:42:38] Speaker B: Yeah, exactly.
[01:42:41] Speaker C: Cohere models are now available on Azure AI Foundry. I don't know if anybody's using Cohere models, but that's the Command A, Rerank 3.5, and Embed 4 models. I'm sure it has a use case. I just don't know what it is.
[01:42:51] Speaker D: Yeah.
[01:42:53] Speaker C: Azure Functions finally supports OTEL, or OpenTelemetry, in preview, enabling standardized telemetry export to any OTel-compliant endpoint beyond just Application Insights. This gives developers flexibility to use their preferred observability platform while maintaining correlation between host and application traces. This puts it on the same wavelength as X-Ray or GCP Cloud Functions' native OpenTelemetry capabilities.
And while the implementation is here, it's still catching up with others. So glad to finally see it. Something like OTEL should just be default, Azure. Come on.
[01:43:24] Speaker B: Yeah, yeah, come on, man.
[01:43:25] Speaker D: They're just a little bit slow. Don't make fun of them.
[01:43:30] Speaker B: I don't know if you've listened to the podcast before, Matt. That's kind of our thing.
[01:43:34] Speaker D: That's kind of what I say every day. No one listens to me.
Like they're just trolling along behind the scenes. They'll eventually get an ACM competitor. They'll get there one day. We'll figure it out.
[01:43:48] Speaker C: Well, hopefully by the time we get to 47-day certificates. That's a must-do now.
Azure SQL Database now supports data virtualization in public preview, enabling direct T-SQL queries against CSV, Parquet, and Delta files stored in Azure Data Lake Storage Gen2 or Azure Blob Storage, without ETL processes or data duplication.
This brings PolyBase-like capabilities from SQL Server 2022 to Azure SQL Database, and the feature supports three authentication methods. Unlike other solutions like Redshift Spectrum or BigQuery external tables, Azure's implementation leverages familiar T-SQL syntax and integrates seamlessly with existing SQL Server security models, making it easier for SQL Server shops to adopt without learning new query languages. And all I can think about with this story is I'm sad for all the shops that were in the process of going to true big data warehouse solutions and breaking away from SQL Server, who now just got sucked back into SQL Server because they said, oh, we can use the same queries, but now it's backed by big data platforms on the backend.
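For a flavor of what "familiar T-SQL syntax" means here, the queries in question use OPENROWSET against an external data source. This little Python helper just builds such a query string as an illustration; the exact T-SQL options vary by service and tier, and the data source and path names are invented, so treat the generated syntax as a sketch rather than a verified statement of the Azure SQL Database API:

```python
# Hypothetical helper that assembles the kind of T-SQL data virtualization
# query the article describes: reading Parquet from a data lake with
# OPENROWSET. Names are made up; verify exact syntax against Azure docs.
def external_parquet_query(data_source: str, path: str, columns: str = "*") -> str:
    return (
        f"SELECT {columns} FROM OPENROWSET("
        f"BULK '{path}', DATA_SOURCE = '{data_source}', FORMAT = 'PARQUET'"
        f") AS rows;"
    )

print(external_parquet_query("my_lake", "sales/2025/*.parquet"))
```

The appeal for SQL Server shops is exactly that: the data lives in cheap object storage, but the query still looks like a SELECT.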
[01:44:45] Speaker B: Yeah, well, I mean they still have to make the transition, right? Because the back ends, the format of the data has to be not in SQL database. But yeah, I hear you. I laughed in the exact same way.
[01:44:59] Speaker D: Storing it in blob storage versus having it in hot SQL storage, you're talking the difference of 30 cents a gigabyte versus under a hundredth of a cent, you know, and that implication becomes massive. So if you are a company that has everything in SQL, this could be a game changer if you haven't started that migration into understanding your data and tiering it in different storage classes and things along those lines.
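Matt's cost gap is worth running the numbers on. The figures below are illustrative placeholders in the same ballpark as his (roughly $0.30/GB-month for hot database storage versus around a tenth of a cent for archive-tier blob; check current pricing pages before relying on either):

```python
# Back-of-the-envelope storage cost comparison. Both per-GB prices and the
# 10 TB data size are illustrative assumptions, not quoted pricing.
def monthly_cost(gb: float, dollars_per_gb: float) -> float:
    return gb * dollars_per_gb

DATA_GB = 10_000  # hypothetical 10 TB of cold relational data
hot = monthly_cost(DATA_GB, 0.30)       # assumed hot SQL storage rate
archive = monthly_cost(DATA_GB, 0.001)  # assumed archive blob rate

print(f"hot: ${hot:,.0f}/mo  archive: ${archive:,.2f}/mo  ratio: {hot/archive:.0f}x")
```

Even with generous error bars on the assumed rates, the ratio is two to three orders of magnitude, which is why querying files in place instead of keeping them in the database is such a big deal.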
[01:45:30] Speaker B: Yeah, I mean, the minute Sina came out, I never wrote to a database ever again. You know, I had key value storage, but everything else was just JSON storage.
[01:45:38] Speaker C: I mean, I'm playing with Dynamo more and more nowadays. I don't know why I'd ever use anything but Dynamo for configuration stuff too.
[01:45:43] Speaker B: Yeah, 90% of my use case for sure.
[01:45:45] Speaker C: Yeah.
Yeah. So that's a 1000th of a cent.
I've done this long enough, Matt, in the podcast that I now can almost do that in my brain.
[01:45:54] Speaker B: That is crazy.
[01:45:58] Speaker C: Many times with Jonathan and Ryan, I was like, 0.0004, too.
[01:46:03] Speaker D: I'm also on my laptop. I have you guys side by side. Google Docs is kind of crunched over, you know. Yeah, we're making this work while I travel.
[01:46:12] Speaker C: That's fair.
And last but not least, I got an email this morning that Microsoft Ignite 2025 will be held in San Francisco. And I said, I live near San Francisco, I might go to this. Although it's November 18th through 21st, right before I go on a trip for Thanksgiving, and right before Amazon re:Invent.
And then I saw the price tag and I was like, absolutely not. They want $2,325 today to register for this conference. That's the early bird price. And then you have to factor in that the hotels in San Francisco are outrageous. Now, I live close enough. I don't have to stay in a hotel, but I typically do just because traffic in the Bay Area.
[01:46:51] Speaker B: It's also far enough where you want to stay at a hotel.
[01:46:53] Speaker C: Yeah. And so, yeah. Oh, my goodness, I can't imagine. Hey, they also got a steal of a deal from Moscone to put this back into San Francisco versus going to Orlando where they were in the past.
But $2,300?
No.
[01:47:08] Speaker B: Yeah, no, I was, I got all excited when at the headline because I'm like, oh, finally I can go to one of these Microsoft things to see what they're all about. But yeah, not at that price. Like hard pass. Unless they're going to let me in for free.
[01:47:18] Speaker C: Yeah, they're doing the keynote at Moscone Center. Or sorry, at Chase Center, which, that'd be interesting. That's cool. But it was actually, here's $2,325, or here's the free registration for the virtual stream. And I'm like, well, that's what I would do anyway. So I guess that's what I'm doing this year too. So, Matt, we're going to have to reserve that time for you.
We can maybe live stream that bad boy. But it's also going to be a nightmare, because we're going to have to do Microsoft Ignite predictions, and then we're going to do re:Invent predictions, and then we're going to have like a mess around Thanksgiving. It's going to be fun. We're going to have a great time.
[01:47:50] Speaker D: Then we do next year predictions and.
[01:47:53] Speaker C: Then after that we're coasting until, yeah, the 2025 recap, and oh, man, I'm just gonna enjoy it, baby.
[01:47:58] Speaker B: I'm going to enjoy my summer.
[01:48:00] Speaker D: I feel like I'm gonna be sick for the month of October, November and December on Tuesdays. Just giving you guys a heads up now. It's not worth it.
[01:48:08] Speaker C: Yeah, Oracle as our last.
[01:48:13] Speaker D: What's reinvent this year?
What's really gonna cost?
[01:48:16] Speaker B: It's the same or what's it cost? I don't know.
[01:48:20] Speaker D: They jacked up their price too. re:Invent was only like. Oh, yeah, re:Invent went up. It's over $2,000.
[01:48:26] Speaker C: That's 2,100 bucks. Oh, wow. That's crazy.
[01:48:28] Speaker B: Maybe we're the idiots, because didn't it used to be like 800 bucks?
[01:48:32] Speaker D: No, no, no, it was like $1,700, $1,600.
[01:48:35] Speaker B: Okay.
[01:48:36] Speaker D: You could get a lot of discount, like half price, from your AWS account reps.
[01:48:40] Speaker B: That's probably why I got the long.
[01:48:42] Speaker C: I think I'm spoiled because, you know, Google Cloud's early bird pricing was a thousand dollars and then regular price was 1,500 for this year. So maybe I'm just. Maybe it's gonna go up next year.
But yeah, it seems like a lot for a conference that isn't being well attended already. Is attendance at conferences down, or is it just this one?
[01:48:59] Speaker B: It's way down.
[01:49:00] Speaker C: Yeah, it's way down. You know what I'm trying to say. All right, let's wrap up our last story. We'll save our after show topic for another day, but xAI's Grok models are now on Oracle Cloud Infrastructure. Oracle is offering xAI's Grok models through OCI Generative AI services, marking Oracle's entry into hosting third party foundation models alongside AWS Bedrock, Azure's OpenAI Service, and the Vertex AI Model Garden at Google. The partnership leverages OCI's bare metal GPU instances for training and inference, with Oracle emphasizing price performance advantages, a claim worth scrutinizing given AWS's and GCP's established dominance in AI infrastructure and economies of scale. xAI promises zero data retention endpoints for enterprise customers, addressing a key concern for regulated industries, though implementation details and compliance certifications remain unclear compared to established enterprise AI offerings.
Windstream's exploration of Grok models for telecommunications workflows represents a practical use case, but adoption may be limited to existing Oracle customers already invested in OCI infrastructure rather than attracting new cloud customers. While Grok 3 claims advanced reasoning capabilities in mathematics and coding, the lack of public benchmarks or comparisons to GPT-4, Claude, or Gemini makes it difficult to assess its actual competitive positioning. And I've tried to use it a couple times and I'm never impressed.
[01:50:07] Speaker D: Yeah, we're going to be good. Trust us here, buddy.
[01:50:12] Speaker B: Yeah, it'll. It'll all be fine. I'll be fine.
[01:50:16] Speaker C: All good. All right, gentlemen, that was a marathon that I did not expect.
[01:50:20] Speaker D: Yeah, and I missed the first couple.
[01:50:22] Speaker C: Yeah, well, I think. I mean, I blame you for at least seven minutes of it because we made fun of your meal selection for the night, but.
All right, guys, we'll see you next week here in the cloud, hopefully with all this news.
[01:50:36] Speaker B: Yes, Bye everybody.
[01:50:41] Speaker A: And that's all for this week in Cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services.
While you're at it, head over to our website, where you can subscribe to our newsletter, join our Slack community, send us your feedback, and ask any questions you might have. Thanks for listening and we'll catch you on the next episode.