293: Terraform Apply - Output Pizza

Episode 293 | February 26, 2025 | 01:09:53

Show Notes

Welcome to episode 293 of The Cloud Pod – where the forecast is always cloudy! This week we’ve got a lot of news and, surprise, a new installment of Cloud Journey AND an aftershow – so make sure to stay tuned for that! We’ve got undersea cables, Go 1.24, Wasm, Anthropic and more.

Titles we almost went with this week:

A big thanks to this week’s sponsor:

We’re sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You’ve come to the right place! Send us an email or hit us up on our slack channel for more info. 

General News

01:30 Go 1.24 is released! 

04:46 Unlocking global AI potential with next-generation subsea infrastructure

06:25 Ryan – “I was sort of surprised that this is where Meta is investing. I don’t think of them in that space, like I do internet providers and cloud hyperscalers.”

AI Is Going Great – Or How ML Makes All Its Money  

07:50 Sam Altman lays out roadmap for OpenAI’s long-awaited GPT-5 model 

08:54 Justin – “I’m definitely very interested in, you know, where does AGI come into their roadmap? I know they keep talking about it soon. Is that this year’s problem? Is that a next year problem? Is that a next decade problem? I don’t really know when AGI is going to be real or what their timeline looks like.”

09:31 Anthropic Strikes Back

10:31 Anthropic Projects Soaring Growth to $34.5 Billion in 2027 Revenue  

11:08 Ryan – “I don’t recommend anyone take investment advice from The Cloud Pod…”

Cloud Tools

11:37 The Terraform plugin for the Dominos Pizza provider 

12:55 Matthew – “There is a feature for HashiCorp Vault support for credit card data. And you know, another one which blocks the addition of pineapple as a topping.”

*Listener note: If anyone tries this, let us know how it goes! 

AWS 

14:30 AWS CloudTrail network activity events for VPC endpoints now generally available

15:21 Ryan – “Yeah, this is a neat feature. As someone who remembers, or dreads, I’m not sure what’s the right word, trying to troubleshoot connectivity to a private endpoint from a data center, there really was just no visibility until this feature was announced. So this is, I think, a fantastic addition – being able to log that information and act on that information for security purposes.”

20:03 Introducing the AWS Trust Center  

20:45 Ryan – “I know that Artifact was seemingly very hard for non-technical auditors to navigate. And I’ve had to spend a lot of time walking people through that. So anything that makes this easier. I haven’t looked at this landing page, but I’m hoping that it’s sort of geared towards that audience of compliance people who are building reports for very specific frameworks, and it sort of lays it all out in an easy-to-find manner.”

22:57 Amazon Inspector enhances the security engine for container image scanning

25:12 AWS Secrets and Configuration Provider now integrates with Pod Identity for Amazon EKS  

25:29 Ryan – “This has been a, like a clear area where EKS was not the same offering as in Google, or, you know, being able to sort of leverage these identities directly from your pod configuration and your namespace configuration and be able to tie that to sort of a distributed role identity. So this is something that’s pretty great in terms of being able to provide that. It’s at least one step closer to full workload identity.”

26:21 AWS re:Inforce dates announced

28:30 Exploring new subnet management capabilities of Network Load Balancer

GCP

31:17 Deep dive into AI with Google Cloud’s global generative AI roadshow

36:31 With MultiKueue, grab GPUs for your GKE cluster, wherever they may be   

25:29 Matthew – “What I found interesting about this is that this is something that Amazon and Microsoft really can’t do, because of the way Google is built at a global VNet or VPC level, where each of the other ones have isolated regions. So this is something that, because of the way Google is constructed with that global VPC, you have the ability to more easily burst into other regions, versus on AWS or Microsoft, where you have to build a VPC or VNet, then launch your workloads in there and then connect it all back. So it’s actually an interesting win, you know, win or loss, depending on how you want to view it, that Google has, and that they are able to say, just go use the excess capacity here. Don’t really worry about data, you know, laws or anything else that you might have to worry about. But, you know, you have this ability to go grab these things in these other places that could be cheaper or more expensive depending on where your origin of everything is.”

41:27 Announcing Wasm support in Go 1.24

42:01 Justin – “…if you can just natively go into WebAssembly from Go, I think that’s a nice feature. Yeah, one more reason why I should learn more Go. Yeah, I keep working on Python, but I could also learn Go. Maybe I could get some more utility out of Go, I think.”
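The new direction in Go 1.24 is the `//go:wasmexport` compiler directive, which lets the WebAssembly host call into Go (previously Go on Wasm could only call out via `go:wasmimport`). A minimal sketch, assuming a WASI target; the function name `add` and file layout are our own illustration:

```go
//go:build wasip1

package main

// The go:wasmexport directive (new in Go 1.24) marks add as an
// exported function of the compiled module, so a Wasm runtime such
// as wasmtime or wazero can invoke it directly. Exported signatures
// are restricted to a small set of types (e.g. int32, int64, float64).
//
//go:wasmexport add
func add(a, b int32) int32 {
	return a + b
}

// A main function is still required for an ordinary wasip1 command;
// Go 1.24 can also build a library-style WASI "reactor" module
// via -buildmode=c-shared, with no main entry point.
func main() {}
```

Built with something like `GOOS=wasip1 GOARCH=wasm go build -o demo.wasm .`; the host then instantiates `demo.wasm` and calls `add`.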

Azure

43:02 Securing DeepSeek and other AI systems with Microsoft Security 

44:03 Ryan – “…the reaction to DeepSeek I find hilarious more than the tool itself, you know, because it is just sort of like, wait, China, no, we have to secure this stuff. And, you know, everyone knew about the security concerns of sending data to AI and sort of went, yeah, no, this is a thing to be aware of, then immediately forgot it. But the minute it was being sent to a Chinese company, there was a different reaction in the industry. And so I definitely think that Azure is capitalizing on this for sure.”

46:39 Microsoft Cost Management updates—February 2025 

47:27 Matthew – “The nudges are kind of useful and they’ve been adding copilot into the console. And then I have fun with it when it’s like, you know, internal server errors, why my instance didn’t scale up properly. And then I just say, copilot, tell me what’s wrong. And it goes, yo, open a support ticket or like try turning it back on and off again.”

49:45 Generally Available: Scheduled Load Tests in Azure Load Testing

51:27 GA: 6th Generation Intel-Based VMs – Dv6/Ev6

Cloud Journey Series

Yes – It’s back! 

53:10 Should all developers learn Infrastructure as Code? 

Aftershow

Yes, This is back too! 

1:03:02 Man offers to buy city dump in last-ditch effort to recover $800M in bitcoins

Closing

And that is the week in the cloud! Visit our website, the home of the Cloud Pod, where you can join our newsletter or Slack team, send feedback, or ask questions at theCloudPod.net, or tweet at us with the hashtag #theCloudPod.


Episode Transcript

[00:00:07] Speaker A: Welcome to the Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP and Azure. [00:00:14] Speaker B: We are your hosts, Justin, Jonathan, Ryan and Matthew. [00:00:18] Speaker C: Episode 293, recorded for the week of February 17, 2025: Terraform Apply Output Pizza. Good evening Matt and Ryan. How you doing? [00:00:27] Speaker D: Good, how are you? [00:00:29] Speaker C: You know, it's Monday. We don't normally record on Monday, so it's a little weird to me. But you know, it's the way the schedule worked out this week with travel and all the fun of India and all the other things people are doing. So yeah, it's good. [00:00:43] Speaker D: We're. [00:00:43] Speaker C: Maybe we'll be more refreshed today, cause we won't be burnt out by Tuesday like we normally are. Or Friday, when we didn't work Friday. That's right, we didn't have to work today too. That helped as well. We still recorded at the normal time, which is late for Matt, so. So sorry about that. I did try to get it early. [00:00:57] Speaker D: Honestly, it works better for me. [00:00:59] Speaker C: Yeah, probably so. All right, well, we have a bunch of news to get through today. And first up, let's talk about general news. Go 1.24 is now generally available, with a bunch of new things, if you're interested. It fully supports generic type aliases, and there are several performance improvements: the runtime has reduced CPU overhead by 2 to 3% on average across a suite of representative benchmarks. There are tool improvements around tool dependencies for modules, and the standard library now includes a new mechanism to facilitate FIPS 140-3 compliance, plus improved WebAssembly support, which we'll talk about a little bit later today. [00:01:33] Speaker B: I was laughing in the before, in the pre-read that we're doing, just because FIPS 140 came up in my day job recently. So it's pretty funny that that's apparently a big enough deal.
And I was sort of surprised to find that, you know, it was the underlying Go library that wasn't supporting the modern encryption algorithms. At least they fixed it. [00:01:57] Speaker D: I assume it was 140-2, which I don't feel like is that out of date, given that a lot of STIGs still kind of reference it. [00:02:05] Speaker B: I think most of them still reference that, but I don't think that they actually had the. I don't think they were FIPS 140-2 compliant either. [00:02:14] Speaker C: So, just quick Internet research: FIPS 140-2 came out in May 2001. FIPS 140-3 came out in March 2019. So it's only 18 years newer, but. [00:02:29] Speaker B: It's only been recently sort of canonized, sure, in most of the control language. [00:02:37] Speaker D: I've now gone down a hole of what is the difference between the Dash 2 and the Dash 3, and it has to do with the added trusted path, and using a trusted path versus a trusted channel, and now there's a hole that I'm not going to go down while we all talk. [00:02:57] Speaker B: No, I researched this and none of it stuck in my brain, so I just assumed that none of it was relevant. [00:03:04] Speaker C: So FIPS 140-2 assumes that all modules are hardware modules, where FIPS 140-3 covers hardware, firmware, software and hybrid modules. So you got that? I know there's a whole thing I just learned about UNIX standards. Like, Apple's technically a Unix operating system on macOS, but they're only compliant with UNIX 03, and there's apparently eight of them now. So like, yeah, these things, you know, they happen, but apparently, apparently your Mac out of the box is not UNIX compliant. You have to set some feature flags to actually make it UNIX 03 compatible, but that way they can still get that certification. They have all the feature flags still built into the operating system, which I think is just kind of crazy. A little rabbit hole I was down the other day.
[00:03:49] Speaker D: All I'm thinking is, like, Gentoo, and the feature and C flags you add and remove, and then you really hate your life when you have to recompile your whole operating system because you've decided you want to add a new flag. It's always fun. [00:04:02] Speaker B: Nice. [00:04:04] Speaker C: Well, that's a whole level of certification that we don't want to get into, because it's a lot of RFCs I don't have to read, and IEEE documentation. All right. Meta is announcing their most ambitious subsea cable endeavor, Project Waterworth. Once the cable is completed, the project will reach five major continents and span over 50,000 kilometers, which, for those of you keeping track at home, is longer than the earth's circumference, making it the world's longest subsea cable project using the highest capacity technology available. It'll bring connectivity to the US, India, Brazil, South Africa and other key regions. Waterworth will be a multi-billion dollar, multi-year investment to strengthen the scale and reliability of the world's digital highways by opening three new oceanic corridors with the abundant high speed connectivity needed to drive AI innovation around the world. Google has apparently developed 20 subsea cable. Or sorry, Meta has developed 20 subsea cables over the last decade, including multiple deployments of industry-leading subsea cables of 24 fiber pairs, compared to the typical 8 to 16 pairs of newer systems. They are also deploying a first-of-its-kind routing system, maximizing the cable in deep waters at depths of 7,000 meters, and using enhanced burial techniques in high-risk fault areas, such as shallow waters near the coast of Russia, to avoid damage from ship anchors and other hazards. I added the Russia part to the article. They didn't say that, but that's basically what it is. And yes, this is all for AI, guys. I mean, which is the silliest reason.
[00:05:25] Speaker D: I just like the segue back. They're like, okay, how do we make this actually get caught by the news and any automated system? We add AI to an underwater cable. Makes perfect sense. [00:05:36] Speaker C: Yeah, I mean, is it going to move AI data faster? Meta stuff is all open source. It makes sense in the metaverse sense of we need high connectivity to make the metaverse more reliable. But investors run from the metaverse now, so they have to use AI. I guess that's what happened. [00:05:53] Speaker B: Yeah. I was sort of surprised this is where Meta is investing. I don't think of them in that space. Like I do like Internet providers and in that hyperscale, when we talk about. [00:06:05] Speaker C: Google doing it, we're like, yeah, this makes perfect sense for them, because the world travels to their system. But then you think about Facebook, and you're like, well, most of their servers are still in the US and they use massive caching, and so they don't really need this. But I can see some of their initiatives potentially needing high bandwidth, low latency connectivity. [00:06:21] Speaker B: I mean, it's a huge investment. So I'm sure. Yeah, yeah. [00:06:27] Speaker D: Today, when is this actually going to. [00:06:30] Speaker C: Be live? Well, remember, multiple years; they just announced they're building it. It'll take three to four years to probably build the whole thing. It's 50,000 kilometers of cable you have to manufacture. [00:06:41] Speaker B: Yeah, yeah. [00:06:42] Speaker D: I assume they'll do it in chunks too. Like they'll do the North America to South America, South America to Africa. Yeah, they can do it in chunks if they want. [00:06:51] Speaker C: Yeah, that's my assumption too, is that they'll, they'll have probably different crews doing different parts of it, and maybe they'll do some of it simultaneously.
But again, you got, I mean, 50,000 kilometers of fiber optic cable is a lot of cable to manufacture, especially 24 pairs of it. I'd like to see how they do that at a factory. A lot of splicing, I'm sure, is involved. All right, well, Sam Altman has announced the roadmap for how OpenAI plans to release GPT-5, the long-awaited follow-up to GPT-4. Altman said it would be coming in months, suggesting a release later this year. He further explained on X that they plan to ship GPT-4.5, previously known as Orion, in weeks as their last non-simulated-reasoning model. Simulated reasoning, like the o3 model, uses a special technique to iteratively process problems posed by users more deeply, but such models are slower than conventional LLMs like GPT-4o and not ideal for all tasks. After 4.5, GPT-5 will be a system that brings together features from across the current AI model lineup, including conventional AI models, reasoning models and specialized models that do tasks like web search and research. [00:07:56] Speaker B: So further diluting their $200 Pro subscription, right? I mean, I guess it'll be part of the new model. [00:08:05] Speaker C: Yeah, they keep upping limits and lowering limits of the free tier so you get forced to buy the $200 model. That's how they squeeze that stuff in. I'm definitely very interested in how, where does AGI come into their roadmap? I know they keep talking about it soon. Is that this year's problem? Is that a next year problem? Is that a next decade problem? I don't really know when AGI is going to be real or what their timeline looks like. [00:08:33] Speaker B: That's the question for a lot of these. Yeah, for sure. We don't have our expert and I'm just going to sound dumb. We need the British voice explanations, for sure. [00:08:47] Speaker C: Well, everyone has been apparently waiting for Anthropic to produce a reasoning model. Which I wasn't, but sure, okay.
Everyone else is, per reporting by The Information. They say Anthropic is taking a different approach to reasoning and developed a hybrid AI model that includes reasoning capabilities, which basically means the model uses more computational resources to calculate answers to hard questions. But the model can also handle simpler tasks quickly, without the extra work, by acting like a normal LLM. So they're basically doing GPT-5 before OpenAI, which is cool. This is apparently going to be released in the next few weeks as well, according to The Information. So you have OpenAI, who now sounds woefully behind; you have Anthropic saying we're coming out with something new that's more expensive and cheaper at the same time; and who knows what we actually get. [00:09:30] Speaker B: Yeah. [00:09:33] Speaker D: Before the end of Q3 or before the end of Q1, we'll see where all these people are. [00:09:39] Speaker C: I feel like, yeah, it'll definitely be somewhere. I just don't know if it's where they want to be. That's the problem. And then we had another article from The Information, with the misleading headline of the week: Anthropic projects soaring growth to 34.5 billion in 2027 revenue. Which, if you quickly read it, you might have thought was right now, but it's not. They only make 3.7 billion in revenue, per alleged internal sources, with a projection that revenue could be 34 and a half billion by 2027. So that's good. I feel like I should buy stock. [00:10:08] Speaker B: Yeah, I mean, that's a big increase in two years, right? [00:10:12] Speaker C: So yeah, that's basically a 10x increase. [00:10:15] Speaker B: I don't recommend anyone take, you know, investment advice from the Cloud Pod. But yeah, I agree, we're not, we're. [00:10:21] Speaker C: Not accredited investors, we do not give investing advice. But if I could put money into Anthropic, I would do so.
Yeah, personally, as my own personal opinion, I think that would be a good investment, but not what you should do, or any of the, any of these guys. Maybe it'd be a good investment at this point. Well, I don't know, you guys. Have you been writing some Terraform code lately? Any new infrastructure projects that required you to do a lot of coding, modules perhaps? [00:10:45] Speaker B: Actually, I have, for the first time in a long time. [00:10:48] Speaker C: Good, good. Has that made you hungry by chance, Ryan? [00:10:51] Speaker B: Well, anything makes me hungry. [00:10:54] Speaker C: Good. Well, on this week's cloud tools we found a Terraform plugin that'll let you order Domino's pizza, and it comes as a provider. The Dominos Terraform provider exists to ensure that while you're waiting for your cloud to spin up, which some things can take a while, so this makes sense, you can get a hot pizza delivered. This is powered by the expansion of the Terraform resource model into the physical world, inspired by Google's REST API for physical interconnects. The provider configuration from the documentation is pretty straightforward, with you basically putting your credit card data in plain text, because apparently they didn't use sensitive variables for that, plus your name, email address, the address you'd like it delivered to, and the store data that you need. Basically, applying this, you'll get a 12-inch hand-tossed Philly cheesesteak pizza if you follow the example exactly as provided, and you can update that to be whatever pizza you prefer. And this is pretty cool. I didn't have a chance to test it out, mostly because I didn't want pizza, especially not from Domino's. But this would be a fun one to also forget in your code and commit it to work. And then basically, every time one of your people at work Terraform applies, you get a random pizza at your door. It'd be kind of fun. [00:11:57] Speaker B: That's a great idea. I hadn't thought of that. [00:12:01] Speaker C: Don't use your credit card.
That's the key part of it. [00:12:03] Speaker B: No, no, no. Yeah. [00:12:04] Speaker D: There is a feature for HashiCorp Vault support for credit card data, and another one which blocks the addition of pineapple as a topping. [00:12:14] Speaker C: Boo. I like pineapple on pizza too. [00:12:17] Speaker B: Me too. [00:12:18] Speaker C: Yeah. I'm not a purist. That's how us West Coasters are. Unlike Matt, who comes from the promised land of good pizza. [00:12:24] Speaker B: Yeah, that's true. [00:12:25] Speaker C: Yeah. [00:12:25] Speaker B: I do like that Domino's has provided this sort of API access and programmatic access for a while. I feel like they were the first one with a CarPlay app where you could order pizza directly and watch it. You know, there, there's a whole bunch of stuff, and I think it's. It's cool and good press, you know, and so kind of fun. [00:12:43] Speaker C: I mean, you can just text Domino's basically and tell them you want a pizza and they'll send you a pizza. Like, I mean, like they, they definitely are trying out a lot of ways to make it super easy to get a pizza any way you want to. So yeah. If you guys try this out, let me know how it goes. Next time I want pizza I will probably give it a shot. But not in a corporate repo that I need to push to GitHub. [00:13:05] Speaker D: Just your credit card. It'll be fine. Just use your corporate credit card. [00:13:08] Speaker C: Yeah. [00:13:09] Speaker D: Security won't notice. [00:13:11] Speaker C: Yep. He's not going to notice because he's carrying the pizza. He'll probably be the culprit who leaves it in the repo. That's how that'll work out. [00:13:21] Speaker B: Yeah. [00:13:24] Speaker C: All right, let's move on to AWS news for the week. AWS CloudTrail network activity events for VPC endpoints are now generally available. This allows you to send network activity events for VPC endpoints into CloudTrail.
This feature helps you record and monitor AWS API activities traversing your VPC endpoints, helping you strengthen your data perimeter and implement better detective controls. Previously, it had been hard to detect potential data exfiltration attempts and unauthorized access to the resources within your network through VPC endpoints. While VPC endpoint policies could be configured to prevent access from external accounts, there was no built-in mechanism to log a denied action or detect when external credentials were used via a VPC endpoint. Now you can opt in to log all AWS API activity passing through your VPC endpoint, and CloudTrail records these events as a new event type called network activity events, which captures both control plane and data plane actions passed through the VPC endpoint. This gives you several benefits, including comprehensive visibility, external credential detection, data exfiltration prevention, enhanced security monitoring, and visibility for regulatory compliance. [00:14:22] Speaker B: Yeah, this is a neat feature. As someone who remembers, I guess remembers or dreads, I can't remember, I'm not sure what's the right word, trying to troubleshoot connectivity to a private endpoint from a data center. There really is just no visibility, or was, until this, until this feature was announced. So this is, you know, I think a fantastic addition, and, you know, being able to log that information and act on that information for, you know, security purposes is a, is a great thing, and being able to sort of link that together in any kind of forensic investigation, it's going to be very powerful in identifying what was, if anything, you know, breached. So that's great. [00:15:03] Speaker C: Yeah. They mentioned in the article that, you know, before this, it required you to build custom solutions to inspect and analyze TLS traffic, which could be operationally costly and negate the benefits of encrypted communication.
So I assume people were building a reverse proxy to the VPC endpoint so they could decrypt the TLS traffic to it. Is that how you would have approached implementing that? [00:15:24] Speaker B: Yeah, I think you'd have to, like. It's just a, it's a routing mess. [00:15:28] Speaker C: Oh my goodness. [00:15:29] Speaker B: A routing mess, you know, through. Yeah, because you, if you, I don't think you can do it through, like, bumping the wire and doing mirroring if you're having to decrypt the traffic in flight. So it's a. Yeah, no. So no one did it, I think, is really what basically happened. [00:15:44] Speaker C: I was just thinking, like, because you wouldn't figure this out until you probably already had your VPC set up. So then the problem is, now I need to basically modify my VPC to have this ability to do a separate routing action. Yeah, like this would never happen. Yeah, well, yeah, that's good. I mean, this has been an issue for a while: there's a lot of things that hit APIs in your account, also from the Amazon side, where they're doing things for you, on your behalf, and they've been adding more and more of those to CloudTrail logs, as customers are demanding more and more full audit accountability in their VPC and in their account. So, you know, this just makes sense based on the last year or so, where we've seen them continuing to get more and more particular about auditing. [00:16:24] Speaker B: And security tools are getting better too. Right. So they can do more with the data that you send them, and so, you know, I think that that's driving some of these changes as well. Right. It used to just be, like, very, you know, simplistic logs in comparison, and doing correlation and timing and that kind of thing. But now, you know, there's much more deduction going on into mapping out attack vectors, and, you know, now AI is becoming a thing for enhancing queries and collating data.
So it's becoming pretty powerful. [00:16:57] Speaker D: Does this fix, is this also. You know, I've seen on the other side where a lot of tools don't really like it, you know, when you man-in-the-middle with your security tools, because they're mTLS. So is this just their solution to that as well? [00:17:12] Speaker B: Well, this. So this would, you know, the only way to get the inspection before was to have that sort of decryption of the SSL traffic. [00:17:19] Speaker C: Right. [00:17:19] Speaker B: And so that you can inspect it. And so this basically negates it. You don't need to do those decryptions, because you're logging an event of the data versus trying to derive that information from. From actually inspecting the, the traffic itself. [00:17:34] Speaker D: So, to jump ahead to re:Inforce announcements, can it be that they're running more stuff over mTLS? [00:17:42] Speaker B: Maybe. I mean, it'd be nice to. There's a lot of, you know, service mesh sort of mutual TLS stuff that they have now. [00:17:50] Speaker D: Yeah. [00:17:50] Speaker B: It'd be kind of neat to see if they have, like, a transparent solution to that inspection, because it is sort of a pain point. It's a lot of managing certificates, and it's becoming more and more of a standard, which I've always been sort of opposed to. But, you know, how do you grant access to traffic and feel like you know what's happening on your network if it's all just encrypted traffic? So it's. I can kind of see both sides of it. [00:18:17] Speaker D: I mean, good news: people are encrypting stuff. Bad news: security people can't do their jobs. [00:18:22] Speaker B: Yeah, well, so they're just decrypting everything. [00:18:25] Speaker C: Right. [00:18:25] Speaker B: You're basically opting into your own man-in-the-middle attack. [00:18:28] Speaker C: Yeah, yeah, exactly. The worst security hole I've ever had in a network was the Qualys scanner, in a prior life.
A thing that has access to everything on every port. Yeah, that's not a problem. [00:18:39] Speaker B: Not at all. [00:18:40] Speaker D: Yeah, definitely. I've never seen that before. Yeah, with many different tools. [00:18:45] Speaker C: I'm glad to see it. That's just a nice improvement in general. Well, AWS is working to earn your trust every day, as per one of their core leadership principles. And so they have the launch of the new AWS Trust Center to continue to build that trust with you. This is a new online resource that shares how AWS approaches securing your assets in the cloud. The AWS Trust Center is a window into their security practices, compliance programs and data protection controls that demonstrate how they work to earn your trust every day. And my first reaction to this was: didn't they already have this with Artifact? And there used to be, like, a subpage where you could get, like, a data center overview, and they show you some photos of the data centers and they talk about all the loc, all the things they do in the data center for security. It looks mostly like a new landing page from my perspective, you know, to help you find stuff quicker, faster, easier on this trust page. I don't know. What do you guys think? [00:19:39] Speaker B: I know that Artifact was seemingly very hard for non-technical auditors to navigate, and I've had to spend a lot of time walking people through that. So anything that makes this easier. I haven't looked at this landing page, but I'm hoping that it's sort of geared towards that audience of compliance people who are building reports for very specific frameworks, and that it sort of lays it all out in an easy-to-find manner. That would be nice.
Or just links to you know like the data center page about capacity planning and BCP and you know, fun filled GDPR if you really want to, you know, have fun or you know, report of vulnerability and shared security model. Like they're just links to other places from everything I can see. I haven't dug in deep on it yet though. [00:20:40] Speaker C: Yeah, I mean they've done quite a few things trying to make it easier for auditors and other security GRC people who are not necessarily technical or who don't want to go through the certification process to find this data quicker. So like I said, I think it's mostly landing pages have already existed but it's nice that it's in one single place I can send people to. And then yeah, Artifact has always been kind of a pain because the, the biggest issue is if you're comp. You know, you're dealing with a customer who wants to see your Sock2 report and they want to see your data center Sock2, which happens to be Amazon, you can't just give them the sock 2 because of the way the contracts are written. So you have to give them access to AWS account to do that. And if they don't have a relationship with Amazon, that could be a big issue for some enterprises. So like they don't solve that problem, which is still an issue. But you know, it, it definitely they're, they're trying to make it better, but there's still some, some sharp edges if you're in the compliance world and not technical. Amazon Inspector has updated its engine powering container image scanning for ecr. This upgrade will give you a more comprehensive view of the vulnerabilities and third party dependencies used in container images. Apparently they are promising this will not disrupt any of your existing workflows or just make your scans report more vulnerabilities. So I appreciate that. [00:21:54] Speaker B: Yeah, I wish this, this article was a little light on technical details. I'd love to see like what, what enhancement it is. 
Like, is it just visualization enhancements, or are they actually inspecting these at a different layer than before? [00:22:08] Speaker D: I read it as they're doing the scan differently and they're finding more stuff now. And what I foresee is, if you were leveraging this tool, all of a sudden you have more vulnerabilities, which is going to make your life fun as a DevOps engineer or developer. So I kind of hope it's not that one, but I feel like that's the way I read it. So you come. [00:22:28] Speaker B: In on Monday... I'd rather them detect it, you know. But it makes sense. [00:22:34] Speaker C: The new generator they added is an SBOM generator. So it basically looks at the components of your container. If you're doing Go or Java or PHP, it's looking at the basic things that have an understanding of dependencies. So: here are your dependencies, so I can get a better inventory of, hey, you have this module, and that module has a vulnerability listed against it in the database. It looks at your package.json files, it looks at all that kind of stuff, and it supports, it looks like, Go, Java, JavaScript, .NET, PHP, Python, Ruby and Rust. So far it does not support... well, it says it doesn't support Java, but it also lists Java up there. I think the only way to export Java is if you have Maven, so that's a key thing. But yeah, so there you go. Oh, it doesn't support binaries. That's what it is. Okay. [00:23:20] Speaker D: So yeah, there's a really good SBOM tool that I'll have to find that I've used in the past. It runs in a Docker container, so you can just spin it up locally and generate your own SBOM. I'll look it up for next week if I can't find it as we talk. It does this less in your pipeline, but I'm sure you can integrate it in. [00:23:41] Speaker B: Thanks. Cool. [00:23:43] Speaker C: Definitely check that out in a future episode.
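Conceptually, what an SBOM generator like the one Inspector added is doing is walking a project's dependency manifests and emitting a component inventory that can then be matched against a vulnerability database. As a rough, purely illustrative sketch (this is not Amazon's implementation; the function name and example manifest are made up for demonstration), here is what building a minimal CycloneDX-style component list from a package.json might look like:

```python
import json

def components_from_package_json(manifest_text):
    """Build a minimal CycloneDX-style component list from a
    package.json manifest. Purely illustrative: real SBOM tools
    (including Inspector's generator) cover many ecosystems and
    resolve the full transitive closure from lockfiles."""
    manifest = json.loads(manifest_text)
    deps = {}
    # Direct runtime and dev dependencies only; a real scanner
    # would also walk package-lock.json for transitive packages.
    deps.update(manifest.get("dependencies", {}))
    deps.update(manifest.get("devDependencies", {}))
    components = []
    for name, version in sorted(deps.items()):
        clean = version.lstrip("^~")  # drop semver range markers
        components.append({
            "type": "library",
            "name": name,
            "version": clean,
            # Package URL: the identifier scanners match against
            # advisory databases.
            "purl": "pkg:npm/%s@%s" % (name, clean),
        })
    return components

# A made-up manifest for demonstration.
example_manifest = json.dumps({
    "name": "demo-app",
    "dependencies": {"express": "^4.18.2"},
    "devDependencies": {"jest": "~29.7.0"},
})
```

A scanner would then look up each `purl` in an advisory feed, which is why a better SBOM generator mechanically means more reported findings.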
The AWS Secrets Manager Secrets and Configuration Provider now integrates with EKS Pod Identities. This integration simplifies IAM authentication for Amazon EKS when retrieving secrets from AWS Secrets Manager or parameters from AWS Systems Manager Parameter Store. Still hate the name. With the new feature, you can manage IAM permissions for Kubernetes apps more efficiently and securely, enabling granular access control through role session tags on your secrets. [00:24:11] Speaker B: Yay. This has been a clear area where EKS was not the same offering as Google's, in terms of being able to leverage these identities directly from your pod configuration and your namespace configuration, and tie that to a distributed role identity. So this is something that's pretty great; it's at least one step closer to full workload identity. [00:24:46] Speaker C: Every step closer is better. Well, the Amazon re:Inforce dates have dropped, and it will be happening in Philadelphia, if it still stands after the Super Bowl parade, from June 16th through the 18th. Registration opens in March. Chris Betz, the CISO of AWS, will be keynoting as per usual. [00:25:08] Speaker B: Not sure that I'm ready to attend another re:Inforce, but we'll see. [00:25:11] Speaker D: Yeah, maybe. It's been a few years. [00:25:13] Speaker C: Since the first one I went to. I went to the very first one ever, the one in Boston. [00:25:20] Speaker D: Yeah. [00:25:21] Speaker B: I mean, it's no longer in Houston. I wasn't going to go when it was. [00:25:23] Speaker C: Yeah, that's for sure. When it was in Houston in July. [00:25:26] Speaker D: You mean when you could have hurricanes and 5,000 degrees? That sounds like a good mix. [00:25:33] Speaker C: Yeah. I'm more intrigued to go to it in Philadelphia. I like the idea of it. I just don't know if I'm ready to go back yet either.
But I'm curious how this one turns out this year and what people fundamentally come back with. It's interesting. I don't actually know how the org charts changed, because Chris Betz is the CISO of... it must be AWS. And then Stephen Schmidt is still there, as we noted before. [00:26:02] Speaker B: So I'm glad you looked it up. I was thinking the same thing, going, what happened? [00:26:08] Speaker C: Yeah. So Chris is the CISO of AWS. Steve Schmidt is the CISO of Amazon. So I think he used to have a dual role where he was in control of both of them, and I imagine AWS has gotten too big to have one executive over both. That's my guess. [00:26:23] Speaker B: Makes sense. [00:26:24] Speaker C: I was like, wait, I don't remember Steve Schmidt leaving. That would have been something we covered here on the show, and I don't remember it. [00:26:29] Speaker B: Yeah. [00:26:31] Speaker D: Does one CISO report to the other, or does the AWS one report internally to AWS? [00:26:39] Speaker C: It probably reports into the AWS CEO. Then there are also probably dotted lines, I'm sure, at that level between Amazon, AWS and the other business units. I'm sure there's lots of matrix. [00:26:50] Speaker B: So. [00:26:50] Speaker D: Yeah. [00:26:52] Speaker C: And one of those features that you only learn the hard way: Amazon is giving you the ability to remove subnets from network load balancers without destroying the entire network load balancer and recreating it, matching the capabilities that already exist in ALBs. And again, I didn't know this was a limitation, because I don't use a lot of network load balancers, but it's definitely something I would have learned the hard way. [00:27:12] Speaker D: Yeah, this is one of those sharp edges you run into in a really bad way. Like on a, hey, we were going to make this simple change, and all of a sudden now you hate everyone. [00:27:23] Speaker B: Yeah.
I was trying to think: if you couldn't destroy the load balancer and you just left the subnet there, what would be the operational downside, other than the complexity and stupidity of it all? It's difficult to even get my head around. So yeah, I'm really glad that they've provided this option. [00:27:43] Speaker D: Well, all I think of is that a long time ago you couldn't do cross-zone traffic for network load balancers. So I wonder if this is some legacy artifact of that, where they didn't have the ability to do it, and therefore you just couldn't remove a subnet. [00:28:02] Speaker B: Well, I think network load balancing itself was a bit of Amazon black magic at the network layer. [00:28:08] Speaker C: Right. [00:28:08] Speaker B: They've added more features over the years where it's probably more virtualized, but at first it seemed very much like a kind of direct connection, subverting a lot of traditional networking. [00:28:26] Speaker D: Well, I feel like, yes, the consumers wanted them, but they really were created for cross-account private endpoints, and private endpoints in general. That, to me, is the real functionality of it. [00:28:43] Speaker B: Yeah. [00:28:44] Speaker D: For anything else... like, yeah, I've used them for OpenVPN, because then I didn't get yelled at by a security team that the VPN server was publicly available. I've done weird stuff like that to solve problems, but it wasn't really a good solution in life. [00:29:01] Speaker B: Yeah. I've definitely load balanced non-HTTPS traffic with network load balancers by utilizing that, trying to keep the abstraction from the EC2 hosts as much as possible.
So you have the flexibility. [00:29:16] Speaker D: But yeah, I mean, I've also definitely run SSH and RDP over them for fun. [00:29:20] Speaker B: Oh yeah. I had a whole emergency bastion service built off of these. [00:29:26] Speaker D: Yeah, that was always fun. How to not get in trouble with security departments: we don't have it open, I promise. [00:29:34] Speaker C: All right, moving on to GCP. If you keep hearing us talk about this generative AI stuff and you're like, I don't really understand, or you've lived under a rock and haven't checked out AI and you're curious about getting into it, Google is now on the road with their generative AI roadshow. This event provides practical, code-level engagement with Google's most advanced AI technologies, namely Gemini. These events will show you how to leverage everything from Google Cloud infrastructure to the latest models available to you. They just started this up in India, then they move on to Europe and APAC in March, and then hit the Bay Area, Seattle and Austin at the end of March 2025. So check it out if you're interested in getting some hands-on experience in AI, directly from the Google experts. [00:30:18] Speaker B: These types of moves make me think, what, is it not selling well now? It's kind of an interesting thing to have to push. I mean, it's great information for those who need it, so I like that, but it did sort of strike me as odd. I mean, Google, like most companies, is US-centric, so anything expanding outside of that is great. [00:30:42] Speaker C: Yeah. I mean, it definitely seems kind of light for the US part of the roadshow. I was kind of surprised how little they're doing in the US versus Europe and APAC. They're hitting Jakarta, Zurich, Bangkok, Warsaw, Toronto, Paris, Istanbul, and then, yeah, Bay Area, Seattle, Austin.
[00:31:02] Speaker B: That said, I've already gone to, like, two practical applications-of-AI things, just free events they've had at the Google campus. So I think there's a lot more opportunity to hit those outside of the roadshow program. [00:31:17] Speaker C: So is that a meta commentary on your ability to learn AI technologies? [00:31:24] Speaker B: Well, my ability to learn in general, I think, maybe. [00:31:29] Speaker C: You said you've been to two, and you still don't know anything about AI? I don't know what you're talking about. [00:31:34] Speaker B: I keep trying, but it's just not sinking in. [00:31:39] Speaker C: I'm starting to get the lingo down, but yeah, if I had to build a model, I couldn't help you. [00:31:45] Speaker D: There's a couple of concepts that I understand, but if you wanted me to implement anything, like building my own model or things along those lines, I just feel like I would need to sit down for, like, a day in utter silence and just go at it until I could figure it out. [00:32:03] Speaker B: Yeah, I'm definitely a consumer of AI at this point. Like, I know how to implement it in the things that I build, but does that mean that I know anything about training a model or adding efficiency to the bot? No, I really don't. [00:32:17] Speaker D: I have opinions about it. Does that count? [00:32:20] Speaker B: I'm trying. [00:32:21] Speaker C: I mean, that's half our show, opinions. So, you've been to these. What did you think of them, other than us teasing you? [00:32:30] Speaker B: The ones I've gone to are really trying to get you to understand that agentic model, how to use AI, how to build it into your applications from a consumer perspective. [00:32:44] Speaker C: Right.
[00:32:45] Speaker B: They really are still trying to drive AI use cases, to make them ubiquitous. Which surprises me, because it seems like that's something that's driving itself. But a lot of it was really a showcase of making it really easy to implement against AI without actually having to understand anything in depth about AI. And I think that may be what they're trying to combat: to a lot of people on the outside, it looks like you have to be really deep into these models and understand them, and it's really not true. You can call an API, you can get a summary based off of a whole bunch of data with just a simple payload. You can point it at large data sources and just ask questions. You don't have to know AI in depth to use it. And I think maybe that's what they're having to push on there. And that was really transformative for me, understanding, like, oh, I don't have to know what all these tokens are and how to use them, or how many parameters there are; I can just give Google money and, sweet, it'll do that. So it's that sort of teaching that I've picked up, anyway. [00:33:58] Speaker C: Yeah, that's good. So, worth the time is what I heard. Or at least you learned not to build models. So it's good. [00:34:07] Speaker B: Well, I learned that it doesn't have to be "I need to learn all that information." It's interesting, and figuring out how to secure models and apply them appropriately is always good. But to just get started utilizing AI in your existing toolsets, there are services that abstract that away. [00:34:29] Speaker D: But the engineer in you wants to learn all the details. [00:34:31] Speaker B: Of course. I want to know how all the sausage is made.
[00:34:34] Speaker C: Yeah, well, with all these people getting enabled at these workshops, the need for GPUs continues to increase constantly. Of course, there's explosive growth powering applications from machine translation to artistic creation, and these technologies rely on intensive computations that require specialized hardware resources, like a GPU. To address the scarcity of GPUs, because everyone wants them and no one has them, Google introduced Dynamic Workload Scheduler, which transformed how you access and use GPU resources, particularly within a GKE cluster. In addition, DWS offers an easy and straightforward integration between GKE and Kueue, a cloud-native job scheduler, which made it easier than ever to access GPUs quickly in a given region for a given GKE cluster. But you don't always have capacity in that region, while there might be capacity in other regions. And if you don't have data residency concerns for the data you need to train on, you might be able to use other regions' GPUs. That's where this announcement comes in, with MultiKueue: a queueing feature that brings GKE and the DWS capability together to let you accelerate your workloads across multiple regions. DWS automatically provisions resources in the best GKE clusters as soon as they're available, and by submitting workloads to the global queue, MultiKueue runs them in the region with available GPU resources, helping to optimize your global resource usage. So take advantage of all of those GPUs you found in Indonesia that I was using for my own projects, because you've now found them too and can easily schedule your workloads there. [00:36:08] Speaker B: Yeah, I mean, GPU availability for training models is clearly such a problem that there's a slew of these sorts of tools and announcements going this way: trying to make it more uniform, trying to make it so that you can consume smaller units of GPUs.
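The MultiKueue idea described above is easy to picture: jobs go into one global queue, and the scheduler admits each job wherever GPU capacity exists first. Here's a toy sketch of that admission logic. It is purely illustrative; the region names and the simple first-fit policy are invented for the example, and the real system is Kueue/MultiKueue admitting Kubernetes workloads across GKE clusters, not a Python class:

```python
from dataclasses import dataclass, field

@dataclass
class RegionCapacity:
    name: str
    free_gpus: int

@dataclass
class GlobalGpuQueue:
    """Toy model of MultiKueue-style admission: jobs submitted to
    one global queue run in the first region with enough free GPUs."""
    regions: list
    placements: dict = field(default_factory=dict)

    def submit(self, job, gpus_needed):
        # Admit into the first region that can satisfy the request.
        # A real scheduler would also weigh cost, locality, quotas.
        for region in self.regions:
            if region.free_gpus >= gpus_needed:
                region.free_gpus -= gpus_needed
                self.placements[job] = region.name
                return region.name
        return None  # stays queued until capacity frees up

queue = GlobalGpuQueue(regions=[
    RegionCapacity("us-central1", free_gpus=0),      # sold out
    RegionCapacity("asia-southeast2", free_gpus=8),  # capacity here
])
```

The point of the sketch: a job that would sit waiting behind a sold-out region in a single-region queue instead lands immediately wherever capacity is free, which is exactly the data-residency trade-off the hosts flag.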
Again, like I said, I'm not training models, I'm not feeding a whole bunch of data sets into a model; we're trying to generate RAG and get that going. So I don't see it a lot, but I can see that this is a good tool. It also seems terrifying to me, though, to globally distribute your data around in order to run these sorts of task and batch processes against GPUs. All right, sure, there's a good reason. [00:36:55] Speaker D: What I find interesting in all this is that this is something that Amazon and, sorry, Microsoft really can't do, because Google is built at a global VPC level, where each of the other ones has isolated regions. So because of the way Google is constructed with that global VPC, you have the ability to more easily burst into other regions, versus on AWS or Microsoft, where you have to build a VPC or VNet, launch your workloads in there, and then connect it all back. So it's actually an interesting win, or loss, depending on how you want to view it, that Google has: they're able to say, just go use the excess capacity over here, and not really worry about data laws or anything else that you might otherwise have to worry about. You have the ability to go grab capacity in these other places, which could be cheaper or more expensive depending on where the origin of everything is. [00:37:59] Speaker B: That's true. I hadn't thought about the network architecture itself lending itself to features like this, and you're absolutely right: the isolation of AWS makes something like this completely impossible. The devil's in the details with that global network in GCP, though; it's still a lot of work to get those regional things all tied together.
You can do it, versus on Amazon, where you're doing some terrible things to make that work in the same way. [00:38:26] Speaker D: Just a transit gateway, and then having to launch your EKS cluster nodes, and at that point you're already managed elsewhere. Yeah, you have a full platform team helping you out at that point. [00:38:41] Speaker B: Yeah. [00:38:42] Speaker D: And probably several million dollars in Amazon spend with your EDP, or whatever they renamed that thing recently, too. [00:38:50] Speaker B: Although data transfer costs are still a thing in GCP. So while they make it easier to move this stuff around, you've got to be careful with that. [00:38:58] Speaker D: They are everywhere. Microsoft had a long article that I have earmarked for later, which is like a simple networking walkthrough; I think I was at step 5 or something of ways that things are going to cost you money. People don't realize how much things cost. Network is expensive. [00:39:27] Speaker A: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one- or three-year AWS savings plan with a commitment as short as 30 days. If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace. [00:40:06] Speaker C: Well, earlier we talked about Google releasing Go 1.24 and its fantastic FIPS 140-3 support, and learned all about that.
But there is something else we didn't cover, which is the significant expansion of its capabilities for WebAssembly, or Wasm, a binary instruction format that provides for the execution of high-performance, low-level code at speeds approaching native performance. With the new go:wasmexport compiler directive and the ability to build a reactor for the WebAssembly System Interface, or WASI, developers can now export functions from their Go code to Wasm, including long-running applications, and get more and faster performance out of the box. [00:40:46] Speaker B: This is kind of nuts. Yeah, I didn't know this was a thing, Go having deep integration with WebAssembly. I mean, it makes sense to me. [00:40:57] Speaker C: But the fact that you can just natively go into WebAssembly from Go, I think that's a nice feature. [00:41:01] Speaker B: Yeah. [00:41:02] Speaker C: Yeah. One more reason why I should learn more Go. I keep working on Python, but I could also learn Go; maybe I could get some more utility out of Go. [00:41:14] Speaker D: I never knew that WebAssembly was abbreviated to Wasm until this article. [00:41:18] Speaker B: Yeah, that's also news to me. [00:41:20] Speaker D: Gonna leave that one alone. [00:41:21] Speaker C: Yeah, I didn't know that one either. I also didn't know that the WebAssembly System Interface is WASI. So there you go. [00:41:28] Speaker D: I feel like we should have had a good show title with one of those. I don't know what it is yet, but I'll come back to you. There should have been a show title somewhere in there. [00:41:37] Speaker C: The WASI wisdom? I don't know. [00:41:40] Speaker B: Yeah. [00:41:41] Speaker C: All right, let's move on to Azure, which I'm sure will be ripe for other fantastic show title options.
So with the recent security concerns around DeepSeek, Microsoft is of course capitalizing with this helpful article on how to secure DeepSeek, or really any AI capability, with Microsoft Security, and they wanted to highlight several things for security around your AI estate. First, Azure AI Foundry's Azure AI Content Safety built-in content filtering, available by default to help detect and block malicious, harmful or ungrounded content, with opt-outs for optional flexibility. Then the new posture capabilities with Microsoft Defender for Cloud's AI security posture management, so you can now get told that you did it wrong, which I always appreciate from security posture management tools. And then you can see all the data via threat protection with Microsoft Defender for Cloud, allowing your SOC to review logs and telemetry to block real-time attacks against the AI, with XDR capabilities to further analyze threats, and integrate all this with Purview DLP and Purview Data Security Posture Management to also learn how you've misconfigured it. [00:42:40] Speaker B: Yeah, it is kind of funny. The reaction to DeepSeek I find more hilarious than the tool itself, because it is just sort of, wait, China? Oh no, we have to secure this stuff. Everyone knew about the security concerns of sending data to AI, and it was, yeah, this is a thing to be aware of, and then they immediately forgot it. But the minute it was being sent to a Chinese company, there was a different reaction in the industry. So I definitely think Azure is capitalizing on this for sure. But also, with Defender being used in a lot of places for default endpoint protection, I see this as more and more necessary. Pretty funny. [00:43:27] Speaker D: Well, this is more Defender for Cloud, which is their, I don't remember the acronym.
CSPM, essentially the tool that runs over your entire infrastructure and tells you what's configured right or wrong and what you should be concerned about. There are a lot of integrations in there, which, even in the screenshots, they still call out as being in preview, which I thought was funny. And we definitely leverage this as a good check and balance at my day job, because that's how I view these tools: hey, we have these things set up in this way. Cool. We know they're maybe not set up the most optimally, but if they were, they would cost us 16x the cost, so we have 16 other checks and balances in there to make sure that we are safe, too. [00:44:16] Speaker B: Yeah, I mean, just like anything, you need defense in depth, you need mitigation in several areas. And if you turn everything up to 11, you're not necessarily making your environment more secure; you're making it more unmanageable and more expensive. So, like everything, it requires a strategy. [00:44:33] Speaker D: Yeah. [00:44:34] Speaker B: And more and more tools have these things. It's great. Native solutions sometimes have advantages over non-native solutions, like an external cloud security posture management tool, but the tendency is to turn everything on and then have way too much overlap, which makes it incredibly inefficient. [00:44:54] Speaker D: You just buy 12 tools that do the same thing and have to do the same thing 12 times. The most effective way to do it. [00:44:59] Speaker B: Yes. I mean, that's bad enough in tech generally, and then you add security tooling, and yeah, that's exactly what happens. [00:45:08] Speaker C: All right, this week we have the February drop of the Microsoft Cost Management Updates newsletter, and of course they're always rolling out cool things for FinOps. We always appreciate that.
And for those of you with an EA agreement, you can now use the cost allocation field, so you can support cost allocations based on your hierarchy of departments and accounts. So, fixed reporting. Always good. Copilot has been a good way to get your cost queries answered using natural language, and with the view-in-cost-analysis functionality, you can also navigate directly from your prompts to custom views in cost analysis. And now, to add to that powerful capability, they're adding sample prompts, or nudges, to the overview page to encourage and guide users to interact with Copilot more effectively. As well, Azure has now built out some FOCUS introduction lessons to help you apply FinOps FOCUS best practices directly to your environment. I like the idea that sample prompts are called nudges. I'm down with that. [00:46:00] Speaker B: Getting awfully close to Clippy territory. [00:46:04] Speaker C: It's always Microsoft. They're always, like, three steps from Clippy. [00:46:08] Speaker D: No, the nudges are actually kind of useful, and they've been adding Copilot into the console. And then I have fun with it, where it's like, internal server errors, or why my instance didn't scale up properly, and I just say, Copilot, tell me what's wrong. And it goes, open a support ticket, or, try turning it off and on again. I'm like, this is useful. Thank you. But I'm going to play with the one inside of cost management. I saw it pop up there the other day and I tried a few of the prompts, but I was too focused on digging into specific data to have a good time playing with them. [00:46:44] Speaker B: Yeah, I'm still looking for the AI to go just that one step further. Like, it's good enough to tell me that the overall cost increased in this subscription because compute usage went up, but I really want it to go that one step further.
And tell me whether it was because more machines were launched, or because machines were replaced with larger capacities, that kind of stuff. It's really not quite where I want it yet, but I think it'll get there. [00:47:17] Speaker D: Remember, they still have to make money on you, so they can't give you too much information. [00:47:21] Speaker B: Yes, that's true. It's okay. It's not like developer teams ever turn anything off anyway. [00:47:29] Speaker D: Yeah, we have their anomaly detection on, and I get like six emails a day that are like, now it's higher, now it's lower. I finally looked at it, and it's like different ranges. For our development environment it's actually not bad. Funny enough, it's our production ones where it's like, hey, we did a deployment and we did blue-green, so there's an obvious cost to that. Or network traffic went up because people are using our product more, and went down because someone left. It doesn't quite gain that level of insight, so it's just a lot of noise at this point. [00:48:06] Speaker B: Yeah. [00:48:07] Speaker C: If you like to do load tests and you like to schedule them, Azure Load Testing is now giving you the ability to schedule your load tests at a regular cadence. Azure Load Testing supports adding one schedule to a test, and you can add a schedule to a test after you've created it. So if you'd like to schedule taking down your environment with a load test, you can now do that. I always appreciate those features. [00:48:29] Speaker B: Hey, chaos engineering, I'm for it. Who's going to know? [00:48:33] Speaker C: Yeah, I mean, there's only one way to find out. [00:48:36] Speaker B: I do like the idea of a scheduled, completely unmanned load test, which is crazy.
But I mean, I guess maybe you want to schedule your load test at 3:00am when your traffic's at its lowest point. Okay. [00:48:50] Speaker C: Yeah. Or maybe you need to do a weekly load test for your environment on Saturdays, and you don't want to have somebody wake up and do it. It just happens automatically, and you read the reports on Monday. [00:49:01] Speaker D: Yeah, that's the way I read it. Like, go run it in your dev or QA or whatever environment, run the load tests, see if it works, without affecting your engineers doing their jobs. Or you could YOLO it in production. Whichever one you want. [00:49:17] Speaker C: Do it in prod. What could go wrong? [00:49:18] Speaker B: You can do both. [00:49:19] Speaker C: Yeah, you can. I mean, you can create multiple tests, and you can do one schedule per test. So think of all the savings. [00:49:26] Speaker B: You don't need a whole load test environment if you run in production. [00:49:29] Speaker C: Exactly. [00:49:31] Speaker D: Why do people trust us with their environments? [00:49:33] Speaker B: I don't know. I think they know we have no idea what we're talking about. That's my hope. [00:49:42] Speaker C: Well, if you're looking for more CPU power from Intel: the new 5th-generation Intel Xeon Platinum 8537C Emerald Rapids processor, which is part of the Dv6 and Ev6 generation of Intel-based VMs on Azure, is now generally available. You get up to 27% higher vCPU performance and a 3x larger L3 cache than the previous-generation Intel Dv5 and Ev5 VMs, and up to 192 vCPUs with greater than 18 gigabytes of memory per vCPU. Azure Boost enables up to 400,000 IOPS and 12 gigabytes per second of remote storage throughput, plus 200 gigabits per second of VM network bandwidth.
You also get 46% larger local SSD capacity and 3x read IOPS on the SSD itself, and you have NVMe interfaces for local and remote disks, plus enhanced security through Total Memory Encryption technology. I still hate their naming convention, Dv6 and Ev6. If there's a secret code, I need to know. I mean, we talked about this before the show: Amazon's I know because I've done it for so long, so I know what the D and N and I and G on the back of the name stand for. And in Google's world it's not too complicated. But I've never figured out the Azure naming convention in any possible way. [00:51:01] Speaker D: I'm going to figure it out. I will get back to you. I feel like it's Microsoft, so somewhere in there there are engineers, so there has to be a standard formula to it. So I'm going to figure that out and I'll report back. How's that sound? [00:51:16] Speaker C: Sounds good. Yeah, let us know your cheat sheet. [00:51:20] Speaker D: It probably might involve a bottle of whiskey, but don't worry about it. [00:51:26] Speaker B: All right. [00:51:26] Speaker C: And then I have a Cloud Journey for you guys. I found this blog post from Brian Grant, published on ITNEXT. It's a five-minute read, which I know you guys did, I'm sure. Basically, he asks the question: should all developers learn infrastructure as code? And my default answer to that is yes, and I think you both share that feeling with me. But he seems to go with the other side of the argument. He talks about how Stack Overflow's 2024 survey shows that Terraform was only used by 11.9% of professional developers, which was a slight decrease from the 2023 number, possibly due to OpenTofu, but he's not sure.
He said that's about one in eight, which is roughly consistent with anecdotes he's seen, where there's typically one DevOps engineer for five or six software developers, or one platform engineer per approximately 10 software developers. Basically he's saying, you know, the things that you need to know to use IaC are very complicated. The DevOps team has to be involved at a lot of companies, but the developers don't like doing it because they don't know enough to use IaC well. They mostly don't really understand it, they feel overwhelmed learning so many tools, they hate it, or they don't want to manage infrastructure, or they don't want to do ops. He also highlights many articles on Reddit where people in the DevOps community seem to typically be infrastructure or sysadmin types, or people with an ops background. And then basically he goes on to explain his argument for why he feels this is the case. So I throw it over to you guys. Do you think developers should learn infrastructure as code? Do you think it's something they should do? [00:52:55] Speaker D: So I actually disagree with your statement. I don't think they should learn it from scratch, but I think they should modify it to make it work. So I think your DevOps team should be there to help assist and to build out, you know, the vast majority. But I also then think the developer team should be able to tweak it and make it work for them. Hey, we're adding this new environment variable to our Lambda or function app. Let's just go drop in that value, you know, and then coordinate releases, depending on how your company does releases, things along those lines. I think they should be able to tweak and to learn like a junior DevOps engineer would: hey, let's call the function with different variables, let's add simple things. Writing from scratch, I don't know that that's something I would expect or want developers to know. 
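*Listener note: Matthew's "just drop in that value" workflow — a developer adding one environment variable without authoring IaC from scratch — could look something like this sketch, which edits a `*.tfvars.json` file the DevOps team already maintains. The `function_environment` variable name and file layout are made up for illustration; a real module's inputs will differ:

```python
import json
import tempfile
from pathlib import Path

def add_function_env_var(tfvars_path: str, key: str, value: str) -> dict:
    """Add or overwrite one entry in a hypothetical `function_environment`
    map variable in a *.tfvars.json file, leaving other variables untouched."""
    path = Path(tfvars_path)
    data = json.loads(path.read_text())
    data.setdefault("function_environment", {})[key] = value
    path.write_text(json.dumps(data, indent=2) + "\n")
    return data

# Demo against a throwaway file:
with tempfile.TemporaryDirectory() as tmp:
    tfvars = Path(tmp) / "dev.tfvars.json"
    tfvars.write_text(json.dumps({"function_environment": {"LOG_LEVEL": "info"}}))
    updated = add_function_env_var(str(tfvars), "FEATURE_FLAG_URL", "https://example.com")
    print(updated["function_environment"])
    # {'LOG_LEVEL': 'info', 'FEATURE_FLAG_URL': 'https://example.com'}
```

The point is the shape of the change: the developer touches one value in a file that goes through the normal pull-request review, not the module internals.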
As you know, we talked about today, there's a lot of sharp edges in the cloud and, you know, we don't want to walk into them that easily. And while pull requests do provide some check and balance, it's always hard to see what's not there versus what is there, sometimes even with a pull request. [00:54:05] Speaker B: I don't know, I feel like that's still sort of what happened to DevOps and not what DevOps is supposed to be. [00:54:13] Speaker C: Right? [00:54:13] Speaker B: DevOps was really supposed to be about not having specific people charged with owning and maintaining the infrastructure that the application was hosted on. You know, the idea behind being able to provide infrastructure via API really took the question of where the separation is, of what is the application, and really moved it over. And so I don't see it as any different. Dev teams already have to know multiple languages, multiple frameworks and implementations. I don't think that, you know, having them learn HCL is really that big of a task. And so I really bristle when you're like, oh no, it's just one other tool. I will say that, because of the way things are business-wise and the amount of access that everyone has to these things, a developer does have to know a lot more holistically. But I also feel like everything else is punting areas of concern into areas where they don't need to be. And so if a developer doesn't need to know infrastructure as code, then are you just recreating the ops-versus-dev roles all over again? Or, you know, hopefully you're adding developer productivity by abstracting these things into platforms and having sort of managed services that are on top of even the cloud APIs, something that's more aware of your specific business logic. 
But yeah, no, I think in general, unless that, you know, abstraction is fully mature and fully abstracted away, where you don't really have to know anything because it's completely managed by some other aspect of the business, yeah, you should as a developer learn infrastructure as code and understand the complexities of where your application's running. [00:56:11] Speaker D: I think, though, that I don't expect a developer to understand the VPC, the route table, all the way down to that level, but I expect them to understand up higher, like, hey, here's your Redis cache or your, you know, SQS queue. I expect them to understand some of that and be able to tweak those things. I don't expect them to go all the way down. [00:56:36] Speaker B: I've seen the argument be everywhere, like from developers not wanting to understand Helm deployments on Kubernetes, to which I think. [00:56:45] Speaker C: I think Helm is just another form of IaC. So I think they need Helm, but I think there's giving some paved roads. I think giving them a network, giving them certain components, I think that's something you can ask of a centralized team, because it's a bigger deal, because you have to integrate with, you know, if you have Transit Gateway, if you have multiple accounts, if you have a corporate environment, you have, you know, data centers that you're managing. So there's a lot of things that you do need to take into consideration when you think about networking. So I do think, potentially in a mature enterprise, you do have some resources that still stay in a centralized team. And then I think as you get up into, you know, the app stack, that's where I think you really want them to own things or have the ability to do what they want with Helm, et cetera. But in a startup, where, you know, you're not going to hire ops people as your first 25 people, you're going to do all the network stuff too. So it can go both ways depending on the maturity of the company. 
But I think in a mature medium-to-large enterprise company, I think having developers be familiar with their app stack is important. I think it's super important to being able to do innovation faster, to be able to be nimble, to be agile. But there's definitely trade-offs. We are asking developers to know a lot. So you need to be able to guide them through the process and be able to have expertise they can tap into when they need it. Or, if you can give them a paved road or a module they can use that does the complicated part for them, that's even better. And then I think this is why you also see the rise of things like CDK, because developers don't want to learn more tools, but they want the advantage. And so if you can give them tools they're familiar with that can convert to IaC, I think that's an okay compromise as well. [00:58:29] Speaker B: Yeah, no, I do think it's important not to draw the boundaries on, you know, infrastructure. Like, if you think about, you know, a team that's leveraging a centralized API gateway inside their business, right, everything has a service contract. You have to understand where those service contracts lie and what is actually in the contract. And the more we call it infrastructure as code and we blur those lines, I think it's important just to understand where you're getting efficiencies from and how you're coordinating as a larger team of teams. But yeah, absolutely agree with everything. You know, like, I think it's important to know where your contracts intersect and how you make those conversations happen. [00:59:19] Speaker C: Agreed. Yeah, I think he has some good points. I don't disagree with some of his opinions in his article. But I definitely am opposed to them not doing it. I think they have to do some of it. I think it's a question of how much and what portions they do. [00:59:38] Speaker B: Yeah, no, I agree. 
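*Listener note: the "paved road" idea Justin describes — a central team bakes the complicated, opinionated defaults into a module, and developers supply only the handful of values they care about — can be sketched like this. All names and defaults here are illustrative, not any real platform's interface:

```python
from dataclasses import dataclass, field

@dataclass
class PavedRoadQueue:
    """Hypothetical paved-road wrapper for a message queue: developers give a
    name and team; the platform team owns the opinionated defaults below."""
    name: str
    team: str
    # Defaults the central team curates (encryption, timeouts, redrive policy):
    encrypted: bool = True
    visibility_timeout_s: int = 30
    dead_letter_after: int = 5
    tags: dict = field(default_factory=dict)

    def to_config(self) -> dict:
        """Render the full config a developer would otherwise hand-write."""
        return {
            "queue_name": f"{self.team}-{self.name}",
            "kms_encryption": self.encrypted,
            "visibility_timeout_seconds": self.visibility_timeout_s,
            "redrive_max_receive_count": self.dead_letter_after,
            "tags": {"owner": self.team, **self.tags},
        }

# A developer only says what they need; everything else is the paved road.
cfg = PavedRoadQueue(name="orders", team="payments").to_config()
print(cfg["queue_name"])  # payments-orders
```

This is also roughly the appeal of CDK-style tools mentioned above: the interface developers touch is a language they already know, and the complicated part is rendered for them underneath.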
And, you know, like, I do think that in order to add efficiencies, you're going to have to add those abstractions, you're going to have to add those platform services where you're building those areas of efficiency into the business. [00:59:51] Speaker C: Right. [00:59:51] Speaker B: So, yeah, networking makes a lot of sense, but also, you know, if it's your containerized application and it's a shared Kubernetes sort of ecosystem, that needs to be offered as a service that's easy for developers to use. [01:00:05] Speaker C: All right, gentlemen, glad we were able to settle that debate. [01:00:09] Speaker B: We've settled nothing. Yeah, I know. [01:00:12] Speaker C: I can't wait for the feedback from listeners to us. [01:00:15] Speaker D: I mean, we all agree. [01:00:19] Speaker C: But yeah, if you disagree, send us an email. Tell us why you think we're wrong. I'd love to hear your opinions and we'll share them here on the show. Maybe, if you're open to that. All right, gentlemen, it's been another fantastic week here in the Cloud, and we'll see you next week. [01:00:34] Speaker B: Bye, everybody. [01:00:35] Speaker D: See ya. [01:00:38] Speaker A: And that's all for this week in Cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services. While you're at it, head over to our website, where you can subscribe to our newsletter, join our Slack community, send us your feedback, and ask any questions you might have. Thanks for listening and we'll catch you on the next episode. [01:01:10] Speaker C: Hi, gentlemen. I have an after show. [01:01:12] Speaker D: Oh, no. [01:01:13] Speaker C: I know. [01:01:14] Speaker D: We should probably do our homework for the show. [01:01:16] Speaker C: I know. 
So, mostly just because I think Jonathan has maybe mentioned it offhand, and I think maybe you guys are aware of it: a guy in the UK, many, many moons ago, when Bitcoin was worth nothing, bought 8,000 bitcoins, put his private key onto a USB stick or hard drive of some kind, and then accidentally threw it away. Have you heard this story before? [01:01:40] Speaker B: Oh, yeah. [01:01:41] Speaker D: He was like, scouring the dump, then, like, hiring people to help. [01:01:46] Speaker C: Yeah. So the latest turn in that saga has arrived and I just thought it was sort of funny. So basically, James Howells is the person here. He is the IT pro who lost 8,000 bitcoins in a landfill more than a decade ago. I'm impressed that he thinks his hard drive has not rusted away to nothingness in the UK. He thinks he has one last chance to dig up his buried treasure before it's lost forever: by buying the landfill itself. Now, that's impressive, that he has enough money to even buy the landfill. Does he actually need these 8,000 bitcoin? I mean, I realize it's $800 million potentially in value, but to buy a landfill seems a bit extreme. This has been an ongoing legal battle for quite a while, over 10 years. The latest curve being that the Newport City Council, where he lives in Wales, has decided to close the landfill. James has offered to buy it and, if approved, he would remove every piece of trash, he says, clearing out thousands of tons and potentially sparing the city the cost of cleaning up the site. He says he would use a scanner with AI-trained detection technology and a magnetic conveyor belt system. The long-lost hard drive contains the only copy of his 51-character private key. But the Newport council appears unlikely to accept Howells' offer. The city has already secured permission to develop a solar farm on a portion of the landfill property, while Howells would rather clean it up and turn it into a park. 
But the council believes the solar project is a better use and there's no other suitable place to put that solar farm, apparently. They rightly ignored his advances, including his offer to share the bitcoin money with the city council if he finds it. And as many people describe it, this is like trying to find a needle in a haystack. But Howells says the needle is very, very, very valuable: $800 million valuable, which means "I'm willing to search every piece of hay in order to find that needle." So, I mean, man, 10 years. I mean, $100 million. Yeah, I might go to some extremes to try to get my bitcoins back, but going through the trash for 10 years seems maybe a road too far for me personally. [01:03:41] Speaker B: Yeah, I mean, the cost-benefit analysis has got to be really interesting. You know, I don't know, there's buying the landfill, there's the construction of the magnetic conveyor belt thing, and then also the development of the AI solution to process it. [01:03:58] Speaker D: Well, where are you putting the garbage? [01:04:00] Speaker C: Yeah, well, he's gonna have to truck it off to another landfill somewhere else. [01:04:04] Speaker D: So he's gonna pay somebody on the other side to have the landfill. And then I'm also like, ooh, magnetics and hard drives. I understand those don't really have an effect, but could you imagine finding the flash drive and being like, this is it, and then you've accidentally wiped it? Right? [01:04:20] Speaker B: That would be. [01:04:21] Speaker C: Yeah, I assume. I mean, yeah, I don't know how that would work. It seems like a flawed strategy. Which is probably part of the reason why the Newport Council is kind of like, this guy is a blowhard. You know, because, like, my thought, too, was like, okay, so you buy the landfill. I don't know what you're paying for it, but you buy it, and then you promise to clean it up. 
And then as soon as you find your hard drive, you run for the Canary Islands, never to be seen again with your $800 million. Like, I definitely don't know that I believe that he would actually do it. [01:04:55] Speaker D: Yeah. [01:04:55] Speaker C: You know, like, if he's willing to put a bunch of money into a trust and all these things, then maybe I'd be more confident in his ability to do this. But, I mean, I assume to build a solar panel farm, you also have to do some cleanup of the landfill. So if he's willing to do that for you and you could lock him in somehow, I don't know that I'm completely opposed to the idea, but I also find it just hilarious. [01:05:13] Speaker D: I think you build on top. Like, when I was growing up, we played soccer on a landfill. An old landfill. [01:05:20] Speaker B: I mean, most of San Francisco is built off of landfill. Yeah. I wonder, like, if he buys the landfill and then gets partway through cleaning it up and bails, they still have to do the same cleanup. So now they're just up the price of whatever he paid for the thing. So I don't know. [01:05:37] Speaker D: Well, that's why you make him say, okay, we know this is going to cost us $10 million, so put 10 million into the trust, and you can buy it at that point. So that way, if you run away, we have some collateral. [01:05:51] Speaker B: But, I mean, even if they don't, right? Like, let's say he skips town and they're. [01:05:55] Speaker D: You're not cleaning it up to that level. [01:05:57] Speaker B: They've already got the purchase price, and then they still have the cleanup tasks that they already had. [01:06:01] Speaker D: I don't think you clean it up as much as you think. [01:06:05] Speaker C: Like, roll dirt over the top of it, plant some grass, and put some solar panels on top of it. That's what they do in America. I don't know what they do in Britain. 
[01:06:13] Speaker D: That's true. We need the British guy here to tell us. But, like, in America, you do that, maybe a few vents to make sure that, you know, methane gas doesn't explode, you know, if you don't want to kill kids, you know, and you call it a day. So, you know, that's where I'm like, if they're actually removing it, you're gonna end up with massive holes in the ground. So, like, that's also the opposite problem. [01:06:35] Speaker B: Yeah. [01:06:35] Speaker D: I feel like we should save this for when Jonathan's back and ask him what you do with landfills. [01:06:41] Speaker C: I'd love to keep an eye on this, but I do sort of find it funny, because this has been a decade. So I'm just looking at the Wikipedia article, and they have all the sources. And so in 2013, when this started, it was $4 million worth of Bitcoin that he had lost. And then later that year, it was 7.5 million worth. And then it was 75 million in 2017. The value of his bitcoin has just continued to go up and up and up. It's the modern-day treasure hunt. [01:07:10] Speaker D: I mean, it's almost better that he's lost it for this long. Yeah. [01:07:14] Speaker C: I mean, he would have sold it by now and wouldn't have got the money. [01:07:17] Speaker D: Right. [01:07:17] Speaker C: Or we wouldn't have got as much money. He would have got something. But, yeah, absolutely crazy. But he's got to be leveraging most of that money to buy this landfill and do this cleanup work with his fancy AI and magnetic conveyor belt. So I don't know. Like, there is a diminishing return on how much money this can be worth at some point. And then what happens if you're in the middle of it and the cost of bitcoin crashes? I mean, it could happen. It could happen. [01:07:42] Speaker B: Absolutely. It'd be kind of interesting. I wonder, like, if it was on a laptop or something he was trying to toss. 
And so I wonder, you know, depending on how the waste was disposed of, like, it may not even be in the dump. It could have been recycled off for the gold. [01:07:59] Speaker C: That's the thing. Like, how long did it take for him to realize he had thrown away the hard drive with the key? Was it months, or was it just a few days? Like, I feel like you have a better chance if it's days, but if it's been months before you figured it out... I mean, there's a documentary. [01:08:13] Speaker B: I think it was years, right? [01:08:14] Speaker C: Yeah, I think it was a while before he figured it out. [01:08:16] Speaker B: He said he mined it in 2009, and then since 2013, he's been hoping to recover it. So I think it was like four years. [01:08:22] Speaker C: Yeah, it's in a landfill. It got hosed. I mean, even whether that hard drive is still intact is highly questionable. So, I mean, you might have to do forensic data recovery on it at this point, which, who knows? [01:08:32] Speaker B: I'm sure you would have to. And hopefully it's solid state and not spinning drives. [01:08:36] Speaker C: I mean, for 2009? [01:08:38] Speaker B: Yeah. [01:08:39] Speaker C: Are you throwing away a solid state drive in 2009? They were very expensive back then. [01:08:45] Speaker B: Yeah, no, that's true. It's true. [01:08:48] Speaker C: You know, it's one of those things where, I swear to God, I threw it away. Then, like, I looked through all my trash cans and, like, can't find it. Like, what the hell? Then, like, go back to my office and it's sitting on my desk. Yep. That's the kind of thing. Like, it's gonna be in the last place I looked. At my house, that's where I left it. So. All right, well, that's a fun story. I thought I'd share the latest on this because I was chuckling about it, but there's a documentary, if you want to know more about this. 
Richard Hammond put together a YouTube documentary about this guy and his search for the drive. It's a few years old, but it gives you a pretty good picture of the story of this person trying to get his Bitcoin back. Good luck to him. [01:09:22] Speaker B: I just hope that one day it's found. Like, how awesome would it be if it's found? [01:09:27] Speaker C: I mean, maybe the solar company or. [01:09:30] Speaker B: Even if the device is found and they don't recover the key or something along those lines. I just want the story to end. Beautiful. [01:09:36] Speaker C: I want the solar panel company to find it and then not give him the hard drive. I think that'd be better. Finders keepers, losers weepers. [01:09:46] Speaker D: We're a little evil. [01:09:47] Speaker C: Yeah. Anyways. All right, gentlemen, see you later.
