[00:00:06] Speaker B: Welcome to the Cloud Pod where the forecast is always cloudy. We talk weekly about all things AWS, GCP and Azure.
[00:00:14] Speaker C: We are your hosts, Justin, Jonathan, Ryan and Matthew.
[00:00:18] Speaker A: Episode 297 recorded for the week of March 18, 2025. Save the date so you can get some skills in AI. Good evening Ryan and Matt, how you doing?
[00:00:28] Speaker D: Yay, AI skills.
[00:00:29] Speaker A: I think Ryan froze.
[00:00:32] Speaker D: Great start to the podcast guys.
[00:00:34] Speaker A: Let's get started.
Ryan will join us here in a minute, I'm sure.
But first up I have some follow up. If you remember, we talked about Microsoft's quantum computing with the topological qubits a few weeks ago, because it was a world changing event, apparently. But even during that show we talked about there being some skepticism out there about how real this thing was.
And so there's been some posts. Nature had an article again from a recent physicist summit, I guess, where these people go, or quantum theorists. There was supposed to be a talk by Chetan Nayak at the American Physical Society, or APS, where he was to present what was done, and apparently it was a beautiful talk per Daniel Loss, with no details, and people have gone overboard and the community is not happy that Microsoft overdid it. Which is interesting because I also saw an article a couple weeks ago that I was going to talk about, but I forgot, that the head of quantum at Amazon was summoned to Andy Jassy's office to explain how Microsoft beat them to the punch, and he also claimed high skepticism about whether this is real. So no one's really buying that Microsoft actually created a new topological qubit. There's some doubt, apparently. I can't get into all of the details of this article, but basically, what they showed is a microscopic H-shaped aluminum wire on top of indium arsenide, a semiconductor, which becomes superconducting at ultra cold temperatures. And the devices are designed to harness Majoranas, previously undiscovered quasiparticles that are essential for topological qubits to work. And the goal is for the Majoranas to appear at the four tips of the H-shaped wire, emerging from the collective behavior of electrons, and these Majoranas in theory could be used to perform quantum computations that are resistant to information loss. But no proof, no evidence, and they think Microsoft's full of it. So that's the consensus, and I've seen it pretty much from anybody who's in, you know, quantum theory or physics: they are highly doubtful. So, you know, maybe they got a little bit ahead of their skis on how exciting their announcement was and whether their quantum is going to be as awesome. Hey, welcome back, Ryan.
[00:02:43] Speaker C: Hey.
[00:02:44] Speaker D: Feel like they really have to, you know, at this point, either come out with what they have and, like, show the research, let somebody else verify it, or go hide in the corner, which I feel like is what they're doing, you know, so they really need to figure out if they have what they said they have or why they came out with it so early if they weren't there. Like, something feels wrong at this point when everybody else in the industry is like, yeah, no. And you know, they're still like, no, no, we're good. And they're just sitting there quietly taking it from everyone.
[00:03:15] Speaker C: Yeah. Specifically because of, like, how big of a claim they made. Right. We've done it, we've achieved it. You know, they were.
[00:03:22] Speaker A: They claim they created new matter from nothing.
[00:03:25] Speaker C: So and so, like, it does feel very suspect.
[00:03:29] Speaker A: Yeah. Well, we'll continue to keep an eye on that just because I'm kind of morbidly curious about it because it's a cool idea. And, you know, I do fear what happens when you add quantum to AI. We're all screwed at that point, but, you know, we'll get there eventually, I suppose.
Well, I had written up a whole set of show notes yesterday about this rumor that Google was back in the market to buy Wiz, if you remember that from last year. And then Google graced us with a blog post this morning announcing that they have entered into a definitive agreement to acquire Wiz to better provide businesses and governments with more choice in how they protect themselves. Google says, why buy Wiz now? Well, they see that their Mandiant consultants witness the accelerating number and severity of breaches. Most organizations are going digital, but most deployments are either multi cloud or hybrid, introducing complex management challenges, while software and AI platforms are becoming deeply embedded across products and operations, and traditional approaches to cybersecurity struggle to keep up with this evolving landscape. And Google points out that they already have threat intelligence, security operations and consulting, but Wiz provides them with a seamless cloud security platform that connects all major clouds and code environments to help prevent incidents from happening in the first place. Which is great. It scans your environment, constructs a comprehensive graph of code, cloud resources, services and apps along with the connections between them, identifies potential attack paths, prioritizing the most critical risks based on their impact, and empowers enterprise developers to secure applications before deployment.
And of course, because Google didn't say how much they paid for this bad boy I had to pick another article here where they're basically buying it for $32 billion.
Last July when we talked about this, they were saying $23 billion. So that's a $9 billion increase. This is an all cash acquisition.
It'll be Google's biggest acquisition in its 26 year history, and it is the biggest M&A deal of 2025 so far. There was a quote in this article too from Wiz CEO Assaf Rappaport: Wiz and Google Cloud are both fueled by the belief that cloud security needs to be easier, more accessible, more intelligent and democratized, so more organizations can adopt and use cloud and AI securely.
So yeah, there you go. I guess all that smoke that Wiz put out there, that they were going to IPO and didn't need Google, was just posturing and negotiation tactics, as they've now agreed to be acquired, assuming it goes through antitrust and all the other things, which Google's already kind of dealing with some problems there. So I'm surprised that's a thing. But apparently there's a $3 billion breakup fee in this deal. So if the government says, sorry, you can't buy it, Google also has to pay $3 billion.
[00:05:52] Speaker C: Wow. I'm very surprised by this announcement just because they've been really touting Mandiant and Chronicle, you know, being integrated into the existing Security Command Center tools, and then a lot of these reasons why they're saying Wiz is better is specifically stuff that's been added, like Security Command Center Enterprise. You can at least look at the cloud posture of both AWS and Azure.
So it's kind of crazy. I wonder what they have that they had to have from Wiz. It's kind of interesting with all the security tools that are out there, you buy the market leader for as much money as that.
[00:06:33] Speaker A: Well, I mean, technically, are they a market leader? I mean, Palo Alto has a competitive solution to this. I could argue they're the biggest one as well. Yeah, there's also a bunch of open source projects for CNAPP. So Google could have taken the Amazon playbook and just taken one of those projects and made a service around it, but they didn't go that way either. So it is sort of weird. I mean, I like Wiz. I think they're a very smart company. I've seen a couple demos of their product. I've watched YouTube videos. $32 billion. We're in the wrong business, guys. We're starting a company.
[00:07:04] Speaker C: I've looked at it and I didn't want to pay for it. Wiz was very expensive.
[00:07:08] Speaker D: Yeah, yeah. It was not cheap in any way, shape or form.
[00:07:12] Speaker A: No, it is not cheap. In fact, when they were looking to buy it last July, they said it was 500 million in ACV or ARR.
[00:07:22] Speaker D: I don't remember which one it was ARR, I think.
[00:07:24] Speaker A: Yeah, ARR. And they said now, you know, six months later, it's 700 million in ARR. So that's, you know, a 200 million increase. That's a fast growing engine.
Probably because, you know, it costs a lot of money.
[00:07:36] Speaker D: I read somewhere that they were expecting it to be close to a billion this year. Which.
[00:07:40] Speaker A: That's our target.
[00:07:41] Speaker D: Yeah. So then if you figure, okay, 500, let's just, you know, say 750. So you're probably looking at 7x, which is normally, you know, what services or what product companies sell for. Like, not a terrible price point. If they are actually going to hit that 1 billion this year, I think it's pretty low.
[00:08:00] Speaker A: Yeah, I mean I think it'd be interesting if they hit it because the economy is not in great shape.
[00:08:05] Speaker C: I think the economy was a large reason why they changed tack from going IPO as well.
[00:08:11] Speaker A: Going to IPO? Yeah, because all of a sudden it was like, well, an exit may not be as pretty as we thought it was going to be.
Yeah, it's definitely, I mean I'm happy for the Wiz guys. Like someone's buying a yacht or two and a helicopter to get to it.
You know, an all cash deal.
[00:08:25] Speaker D: Maybe a submarine.
[00:08:26] Speaker A: Yeah, but yeah, definitely interesting to see. And, you know, you start asking the question: if Google has this now, the only thing they're really missing in their security portfolio is really endpoint. So then who's really available in endpoint? Like, SentinelOne kind of comes to mind. CrowdStrike. Well, CrowdStrike's probably too expensive for them because that would be a really big acquisition.
[00:08:49] Speaker C: After 32 billion, I don't know if anything's too expensive anymore.
[00:08:52] Speaker A: Well, I mean, their market cap is like 120 billion. So, you know, a sale price of that would be monstrously large.
[00:09:00] Speaker C: Yeah.
[00:09:01] Speaker A: But you know, something smaller like Sentinel One might make sense for them.
[00:09:04] Speaker C: Absolutely would.
[00:09:05] Speaker D: But yeah, that tool's not too bad, from playing with it at one point. It may not be a bad solution for them to integrate in at that point.
[00:09:13] Speaker A: Yeah, well, especially after CrowdStrike took the world down, a lot of people are more interested in alternatives. And if you had a product that could do what CrowdStrike does without having to take the risk, then maybe this is a great one-stop shop for your security needs. Between the consulting, the threat intelligence, CNAPP now, and then potentially adding an endpoint, I wouldn't be shocked. So I'm just saying, don't be surprised if someday they buy an endpoint product, because it just makes sense to me.
[00:09:44] Speaker C: Yeah, same.
[00:09:45] Speaker A: All right, let's move on to AI. OpenAI is sharing their recommendations with the White House Office of Science and Technology Policy, or OSTP, for the upcoming US AI Action Plan. As Sam Altman, CEO, has written, they're on the cusp of what he considers the next leap in prosperity, the intelligence age. To do that, they must ensure that people have freedom of intelligence, by which they mean the freedom to express, access and benefit from AI as it advances, protected from both autocratic powers that would take people's freedoms away and layers of laws and bureaucracy that would prevent the realization of them. So OpenAI is proposing five things in their proposal to the government: a regulatory strategy that ensures the freedom to innovate, an export control strategy that exports democratic AI, a copyright strategy that promotes the freedom to learn, a strategy to seize the infrastructure opportunity to drive growth, and an ambitious government adoption strategy.
I love when the company who's going to benefit from it the most makes all the laws to, you know, regulate it. It's always a plus.
Yeah.
[00:10:43] Speaker D: Especially when one of them is the government needs to adopt AI. Yeah, exactly like, hey guys, we're going to make a rule. You have to adopt AI. Oh, by the way, we're the AI company you're going to adopt.
[00:10:54] Speaker A: Yeah.
[00:10:54] Speaker D: What could possibly go wrong?
[00:10:56] Speaker A: Yeah, in the article it says advancing democratic AI around the world starts with ensuring that the US Government itself sets an example of governments using AI to keep their people safe and prosperous. And I'm not sure this is the right administration for this, but it's aspirational.
[00:11:10] Speaker C: I mean, it's all about the optics. Right? Like if it can be seen as this giant positive move and adopting technology, I can see it going through because I don't think any of the policies. It's going to be expensive no matter what. But DOGE is going to kill everything, I would assume.
[00:11:29] Speaker A: I mean, they're trying.
Apparently they're even taking over nonprofits that aren't even government owned. That's what I saw.
[00:11:36] Speaker C: Oh, geez.
[00:11:39] Speaker A: All right. AWS this week. Pi Day, 3/14, just passed us guys, four days ago. And of course they wrote a blog post. This is the first year the blog post has not been written by our friend Jeff Barr, who stepped away from the blog at the end of 2024. And so, I'll tell you, the article was not as fun as prior years.
This year's Pi Day focused on accelerating analytics and AI innovation with a unified data foundation on AWS. Several announcements I will cover here in a little bit were also included in this announcement about Pi Day. But, you know, I only care about Pi Day because they wow us with some crazy metrics, and it took almost, you know, the entire article to get to them. So they did finally drop that S3 currently holds 400 trillion objects, which is exabytes of data, and processes a mind blowing 150 million requests per second. A decade ago, they didn't even have 100 customers storing more than a petabyte of data on S3, and now they have thousands of customers who have surpassed the 1 petabyte milestone.
[00:12:34] Speaker D: How many of those objects are VPC flow logs?
[00:12:38] Speaker C: A lot.
[00:12:39] Speaker A: Quite a few.
How many of them are container logs with containers that are failing to start up?
[00:12:47] Speaker C: Yeah.
[00:12:48] Speaker D: And ALB logs, like. Yeah, there's a lot of logs in there that nobody ever looks at.
[00:12:54] Speaker A: Yep, lots and lots of logs, I'm sure in that group.
So, yeah, you know, I'm happy to see Pi Day still lives on, even without Jeff Barr. Hopefully they don't get distracted by this AI thing, because that's not what Pi Day is about. It's all about data metrics.
[00:13:09] Speaker C: Yeah, I mean, I've always just been staggered by the numbers. Right. Like, it's just. It's so cool.
[00:13:15] Speaker A: Yeah.
[00:13:16] Speaker D: 150 million requests per second.
That's crazy.
[00:13:20] Speaker A: That's a lot of nodes handling that.
[00:13:23] Speaker D: Just restart one at a time. Don't add an extra zero.
[00:13:26] Speaker A: Yep.
S3 is reducing the pricing for S3 object tagging by 35% in all AWS regions, to $0.0065 per 10,000 tags per month. Object tags are key-value pairs applied to S3 objects that can be created, updated or deleted at any time during the lifetime of the object. S3 object tags are used for a lot of use cases, including providing fine grained IAM access, object lifecycle rules and replication parameters between regions. I was thinking to myself, why would they need this? Most people don't tag their stuff in S3. But then they released a feature not too long ago called S3 Metadata, which allows you to easily capture and query custom metadata from your data and then store that in the object tag. I'm going to guess a lot of customers were very surprised about how much their tags were costing them, and so Amazon agreed and gave you a discount. So you're welcome.
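For a concrete picture of what's actually being billed here, a minimal boto3 sketch of putting tags on an object and reading them back; the bucket, key, and tag names are made up for illustration.

```python
# Minimal sketch: tag an existing S3 object and read the tags back with boto3.
# The bucket, key, and tag values below are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_object_tagging(
    Bucket="my-data-bucket",
    Key="reports/2025/march.csv",
    Tagging={
        "TagSet": [
            {"Key": "classification", "Value": "internal"},
            {"Key": "cost-center", "Value": "analytics"},
        ]
    },
)

# Tags are what S3 bills per 10,000 per month, the price that just dropped 35%.
resp = s3.get_object_tagging(Bucket="my-data-bucket", Key="reports/2025/march.csv")
print(resp["TagSet"])
```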
[00:14:14] Speaker C: Now we had a chuckle about that in the pre read. You know it's. But you know, that's. It's also like kudos to Amazon for reacting in a way like, you know, it's reducing the price and making it fair. That's cool.
[00:14:26] Speaker A: I mean I'm sure a tag can't cost that much to store.
[00:14:29] Speaker C: Like, I mean, I'm sure it's the API cost.
[00:14:33] Speaker D: Yeah, sure, you have 400 trillion objects with, you know, name and purpose and cost center or whatever tags you put by default onto everything, and it adds up quickly.
[00:14:46] Speaker A: I never really thought about using tags for object access with IAM. I hadn't really thought about that for S3 objects, but that's kind of brilliant. Like, that's a good use case.
[00:14:55] Speaker C: No, I mean, there's some really cool sophisticated patterns, like, you know, tagging data by data classification so that you can create IAM policies based off of that. Like, if you label everything top secret and you have a group or an IAM policy that has that conditional, you can keep it in the same bucket but restrict access.
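Roughly what that conditional looks like, sketched as an IAM policy statement built in Python; the s3:ExistingObjectTag condition key is the relevant piece, while the bucket ARN and tag value are invented for illustration.

```python
# Sketch of an identity policy statement that only allows GetObject on objects
# tagged classification=top-secret. The bucket ARN and tag value are placeholders.
import json

statement = {
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-data-bucket/*",
    "Condition": {
        "StringEquals": {"s3:ExistingObjectTag/classification": "top-secret"}
    },
}

print(json.dumps({"Version": "2012-10-17", "Statement": [statement]}, indent=2))
```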
[00:15:15] Speaker D: That's how GovCloud works. Thank you Ryan for sharing all the secrets. All IAM policies.
[00:15:22] Speaker A: It's pretty cool.
[00:15:23] Speaker C: I like it. A lot of flexibility.
[00:15:28] Speaker A: Amazon S3 Tables is adding create and query table support in the S3 console. S3 Tables are of course now generally available and support create and query table operations directly from the S3 console, using an interface to Amazon Athena without you having to go to the Athena console. With this new feature, you can now create a table, populate it with data and query it with just a few steps in the S3 console, and anything that means maybe not going to Athena is a win.
[00:15:50] Speaker C: Well, the joke's on them. It's always the other way around for me, in that I'm trying to build the schema from the table and I can't figure that out in Athena. This isn't going to help.
[00:16:01] Speaker A: Apparently you don't have to do that. You just tell it, this is the table, this is the data file, and then it figures it out for you. Which is what I would want to have happen. Have you ever tried that?
[00:16:09] Speaker C: It doesn't work. Yeah, no, it doesn't work.
[00:16:11] Speaker D: They gave you Q to do it for you.
[00:16:13] Speaker A: It doesn't even work on things like ALB access logs or flow logs, the things where, like, Amazon should know the schema because they created it, and they still can't do it.
[00:16:20] Speaker C: Still can't do it. Yeah, no, it's crazy. And then, yeah, Q isn't going to help anything. It's just going to.
[00:16:28] Speaker A: I would definitely try Q in Athena if they had a Q service in Athena to help you write queries, like, with plain English, because it's SQL-ish, but not SQL enough. Like, I was dealing with something the other day, I was trying to find something in the URI path, and I was like, I know it's a contains, but how do they want their contains structured? And then I was like, is there a Q around here so I can ask? And then, nope. I just went to Claude and said, hey, I'm running an Athena query and I need a contains. What do I do? And it gave me the syntax I actually needed. I was like, oh, thanks, Claude.
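For anyone hunting the same syntax, a minimal sketch of that kind of query run through boto3; the database, table, and column names (alb_logs, request_url) and the results bucket are assumptions, and Athena's Trino/Presto SQL spells a string "contains" as LIKE, strpos(), or regexp_like().

```python
# Minimal sketch of a "contains on the URI path" Athena query via boto3.
# Table, column, and bucket names are placeholders for illustration.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

query = """
SELECT request_url, elb_status_code
FROM alb_logs
WHERE regexp_like(request_url, '/api/v1/')   -- or: strpos(request_url, '/api/v1/') > 0
LIMIT 100
"""

resp = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
print(resp["QueryExecutionId"])
```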
[00:16:59] Speaker D: I was writing a custom IAM policy the other day and I was like, oh God, it's been too long since I've had to deal with S3 IAM policies. Well, how do I do this? An asterisk?
[00:17:10] Speaker C: I mean, it's such a savior. Like I, I'm using BigQuery more than I use Athena these days, but it's the same thing.
[00:17:16] Speaker A: It's.
[00:17:16] Speaker C: It's SQL ish syntax and it's just like, why would I bother when I could just ask in plain language?
[00:17:23] Speaker A: I was, you know, I have this infrastructure that I maintain, and I was updating the user data file to add some new things we were doing to the website. And I was thinking to myself, there's gotta be. I bet I could, like, rewrite this, because, like, my shell scripting is definitely getting dated. So I put the whole user data into Claude and had it rewrite it, and it taught me things. I was like, I did not know I could do that. That's amazing.
[00:17:46] Speaker C: Yeah.
[00:17:48] Speaker A: So, yeah, Claude's amazing.
[00:17:50] Speaker C: Yeah, Bash is one of those languages where it is dying out.
[00:17:53] Speaker A: Right.
[00:17:53] Speaker C: So it's way more powerful than people know. But it's just what we remember from.
[00:17:58] Speaker A: Like, I don't write this every day anymore. And so, yeah, I'm writing a shell script. And it was like, oh, well, if you did this with variables instead. I was like, I hadn't even considered using variables there. That's brilliant. Yeah, because I had, like, this whole list of files that I downloaded from S3 for the config. And it's like, just make those variables, then you just put in a list of the files you want downloaded. I'm like, oh, it's so much better.
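The same "put the file list in a variable" idea, sketched in Python rather than shell so it matches the other examples here; the bucket name, keys, and local paths are placeholders.

```python
# Minimal sketch of driving config downloads from a single list variable,
# instead of repeating a download command per file. Names are placeholders.
import boto3

CONFIG_BUCKET = "my-config-bucket"   # assumed bucket name
CONFIG_FILES = [                     # the one list you edit going forward
    "nginx.conf",
    "app.env",
    "certs/server.pem",
]

s3 = boto3.client("s3")
for key in CONFIG_FILES:
    local_path = f"/etc/myapp/{key.split('/')[-1]}"
    s3.download_file(CONFIG_BUCKET, key, local_path)
    print(f"downloaded s3://{CONFIG_BUCKET}/{key} -> {local_path}")
```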
[00:18:18] Speaker C: It's pretty cool.
[00:18:19] Speaker A: Yeah. And I feel like an idiot. I'm like, you know, I've learned and forgotten more than I'll ever probably know. That's how I now see it.
[00:18:28] Speaker C: Alternatively, going and learning Bash is probably a poor life choice, right? To.
[00:18:32] Speaker A: I have a whole book. I can share the book with you, I'm sure.
[00:18:34] Speaker C: I mean, so do I. I. I have. But it's like going through and getting that proficient again. I don't know.
[00:18:39] Speaker A: Yeah, No, I don't know if I need to get that. No, like, yeah, there's probably like 30 bash commands that I ever really used a lot of or shell commands that I use often. And I. Even some of those. I look at the syntax still, I'm like, oh, yeah, that was weird.
[00:18:55] Speaker D: Flags at fast M.
Do you remember the XKCD I'm referencing?
[00:19:03] Speaker C: I do remember the XKCD command.
[00:19:05] Speaker A: Yeah.
[00:19:06] Speaker D: Without looking it up, unzip a file.
[00:19:09] Speaker A: Yeah. I just know there's a bunch of letters I throw at it and if I'm right, it'll do it correctly. If not, it won't. And then I just. Then I just use commands. Yeah. Or delete it and try again.
[00:19:18] Speaker D: You used to stream tar files to make things more efficient. You'd stream over SSH to move stuff, all that good stuff that I will never remember again.
[00:19:28] Speaker A: Oh, yeah, like remote shelling, you know, proxy shell. Like, yeah, those are things that I have done, but I had to look it up every time I do them because it's been so long since I used one of those. You set it up one time, you're like, all right, shell's set up. And, like, now I'm done.
[00:19:40] Speaker D: And you don't touch it again until you move it to a new server. You're like, crap. How did I do that?
[00:19:45] Speaker A: Yeah. And then I switched to a new company and I need to do it again. I'm like, shit, that command's on my old laptop and I didn't put it in my GitHub before I left. Darn it.
Anyways. All right. Amazon is announcing the general availability of Amazon SageMaker Unified Studio, a single data and AI development environment where you can find and access all of the data in your organization and act on it using the best tool for the job, across virtually any use case. This was announced at re:Invent last year. The Studio is a single data and AI development environment bringing together a wide range of tools and standalone apps, including Amazon Athena, EMR, Glue, Redshift, Managed Workflows for Apache Airflow and the existing SageMaker Studio. Because yes, we made a second studio to make it more confusing for you. So there's Unified Studio and their studio. In addition, they have announced several enhancements, including new capabilities for Amazon Bedrock in the SageMaker Unified Studio, including integration of foundation models like Claude 3.7 Sonnet and DeepSeek R1, enabling data sourcing from S3 within projects for knowledge base creation, extending guardrail functionality to flows, and providing a streamlined user management interface for domain admins to manage model governance across accounts. And Amazon Q Developer is now generally available in the SageMaker Unified Studio, the most capable generative AI assistant for software development, streamlining development in SageMaker Unified Studio by providing natural language conversational interfaces that simplify tasks like writing SQL queries, building ETL jobs, troubleshooting and generating real time code suggestions. So in theory, I should be able to go use this new studio and use my common language in Q Developer to get the Athena code I need. In theory. But then I'd have to actually, like, figure out where Amazon SageMaker Unified Studio even exists in the console, which I'm not entirely sure of.
[00:21:19] Speaker C: Yeah. And, and then once, once you're there like you have every tool and every option within that tool. Like this.
[00:21:26] Speaker A: Yeah. It's like this is a Swiss army knife and I don't know which tool I want at any given time. Yeah. So I'm happy this exists. I.
Am I going to use it anytime soon? I don't know.
[00:21:37] Speaker C: Like, I'm sure there's, you know, data teams that love this. Right. Like, I think this is a tool that is built for them. It's built for data spelunking and reporting on those jobs, you know, across large data sets as well. So I'm sure it makes a lot of sense if you're in that world every day, but when I'm just trying to do whatever my ad hoc use case is, like, I just want to graph out how many people log in or use this feature, do this thing. It gets a little complex.
Yep.
[00:22:09] Speaker A: Even like I'm just, I'm just logged into my Amazon account because I was curious now and where's the Unified Studio at exactly. Oh, create a Unified Studio. Oh, there's setup required. Oh, great. Let me search it.
[00:22:19] Speaker C: Of course there is.
[00:22:20] Speaker A: I'm going to come back to that later.
[00:22:23] Speaker C: Scale down to zero. Probably not.
[00:22:24] Speaker A: No, I'm not.
Well, if you were excited about those S3 tables earlier that you could query with Athena, but then you said to yourself, look, the S3 table is good, but I really need it in a lakehouse.
I have news for you. Amazon S3 Tables integration with Amazon SageMaker Lakehouse is now generally available, providing you unified S3 Tables data access across various analytical engines and tools. You can now access the SageMaker Lakehouse from the SageMaker Unified Studio, giving you even more data to screw with in your studio. So there you go.
[00:22:57] Speaker C: And you'll need the studio, right? Because you'll need all those services. So you can do nine different ways of doing ETL and try and run a report across all of it. Makes perfect sense.
[00:23:05] Speaker A: Yeah. I mean, it says it integrates with Athena, EMR, Redshift and Iceberg compatible engines like Spark and PyIceberg. I mean, there is so much fun for a FinOps person to try to figure out what the hell you did later. Oh yeah. Poor FinOps men and women, I am sorry for you all that you're going to have to figure this out. Because the people who use it won't know what they used, because they just used SageMaker Unified Studio.
[00:23:26] Speaker C: But, well, and how these things are priced. It's always on the. Like the computational power. So you write a report, you don't really know what data you're running across and how intensive that query is.
It's very.
It varies quite a bit.
[00:23:39] Speaker D: Most of the time people don't care. They just. They get the data. That's their job. They don't care about the cost. That's the CFO's problem.
[00:23:47] Speaker A: Or the cloud ops guy who's in charge of the budget.
[00:23:49] Speaker C: His problem now.
[00:23:50] Speaker D: Yeah, yeah, don't worry about him.
[00:23:54] Speaker C: No one else does.
[00:23:56] Speaker D: Yeah.
[00:23:57] Speaker A: AWS is announcing support for AWS Glue Data Catalog views with AWS Glue 5.0 for Apache Spark jobs. AWS Glue Data Catalog views allow customers to create custom views from Glue 5.0 Spark jobs that can be queried from multiple engines without requiring access to the referenced tables.
This just seems sticky, like a. What happened to Glue three and four? Where did they go? Because I remember Glue two.
[00:24:20] Speaker C: That's insane.
[00:24:22] Speaker D: We also don't like glue.
[00:24:23] Speaker A: But when did we take Glue and make ETLs into Spark jobs? Like, when did that happen? Like, who thought that was a good idea? I'm terrified.
[00:24:32] Speaker D: Not the FinOps guy.
[00:24:34] Speaker A: Yeah, definitely not the FinOps guy.
[00:24:36] Speaker C: I'm still like, I kind of get why Glue exists, but it feels like it doesn't solve the problem that it's intended to solve. And I wonder if this is, you know, again, because I'm not deep in data problems every day and I don't understand it. But every time I've tried to use Glue to sort of stitch together various data sets and be able to run reporting across all of them, it just doesn't work the way I want it to.
I'm either rewriting the entire data set using Glue or I'm, you know, trying to apply that. You know, like, why would you need a catalog to do that? Why is it so complex? Like, just let me define a pipeline.
[00:25:17] Speaker D: I don't know.
[00:25:19] Speaker C: I'm angry today. I'm an old man, shakes fist at cloud. Like, I can tell. Yeah.
[00:25:25] Speaker A: Well then you're going to love this next story.
Amazon Route 53 traffic flow now offers you an enhanced user interface for improved DNS traffic policy editing. Route 53 traffic flow is a network traffic management feature which simplifies the process of creating and maintaining DNS records in large and complex configurations by providing users with an interactive DNS policy management flowchart in their web browser. With this release, you can more easily understand and change the way traffic is routed between users and endpoints using the new features of the visual editor.
[00:25:53] Speaker C: Okay, thanks.
[00:25:54] Speaker D: So, about 10 years ago when they updated the Route 53 console, I didn't like it then. Every time I go into it today, I get mad at it because I can't figure out how to put a DNS entry in, because you have to, like, select the type and do that. I'm, like, so used to doing it with Terraform, and this just makes me mad thinking about how bad it's going to be. All I want to do is just put an A record in somewhere.
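For the "just put an A record in somewhere" case, a minimal boto3 sketch of doing it from code instead of the console (Terraform works just as well); the hosted zone ID, record name, and IP address are placeholders.

```python
# Minimal sketch: upsert a single A record in Route 53 with boto3.
# Hosted zone ID, record name, and IP are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # assumed hosted zone ID
    ChangeBatch={
        "Comment": "add a simple A record",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            }
        ],
    },
)
```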
[00:26:18] Speaker C: I'm really afraid of what this is going to look like. I get it. Health checks and global load balancing based off of latency. It can get complex. Totally understand, but you really need a full picture. Look at the visual editor.
[00:26:36] Speaker D: It looks like the CloudFormation GUI that they built, but for DNS entries.
Just thinking about it.
[00:26:44] Speaker C: Yeah.
[00:26:47] Speaker D: And then there's some geo map in the blog post. I'm really lost by it. I might be mad at it.
[00:26:55] Speaker C: Well, geoproximity routing is like the. The best thing in the world, honestly.
[00:26:58] Speaker D: Yeah, but look at the diagram, look at the picture. It's just confusing.
[00:27:01] Speaker C: Yeah, I know.
[00:27:03] Speaker A: So again, it's like one of those opportunities for a PSA. Friends, if you're listening to this and you're saying to yourself, this is an amazing feature that I am super excited about, I'd like you to spend some time on Indeed.com rethinking your life choices, because clearly your job has taken a turn for the worse if you're trying to make a super complicated DNS routing policy that is going to be the driver of all of your availability needs. And I strongly, strongly encourage you to find another job before this kills you in your sleep.
[00:27:30] Speaker C: It's just crazy to me. Like, it's a low code solution for DNS entries and it's just like it feels forced. I don't. I. I mean, maybe I'm not the use case or the, you know, the target audience for this, but.
[00:27:46] Speaker A: Jeebus.
[00:27:48] Speaker D: Yeah, I mean, I guess I could see this for, like, a gaming company or something that needs, like, end user compute with low latency. So everywhere you're leveraging local zones all over the place, and then you can kind of make sure you route to the right local zone and things like that. That I could kind of see.
But that still feels like at that point you probably have a much more sophisticated setup and you're probably not leveraging a GUI for DNS entries.
[00:28:21] Speaker A: I mean, I'm in the GUI editor.
[00:28:23] Speaker C: Right now, and I'm just looking over the blog post and the screenshots.
[00:28:27] Speaker D: It's just, you know, as I scroll down the blog post, I get more and more upset.
[00:28:31] Speaker A: I mean, I'm just, I'm just in the console trying to do one of these on right now and I'm like, this is a terrible way to do it. Don't do this.
[00:28:39] Speaker C: I do like the visual representation, like over the map of which region you're serving from. I will say that that is kind of neat. But the drag and drop, you know, WYSIWYG editor like to create a name record, like, no, please, no.
[00:28:57] Speaker A: Uh. Oh, look at that. Ryan frozen.
[00:28:59] Speaker D: Oh, again. At least this time we got a.
[00:29:02] Speaker A: Good picture of him frozen right there, like in the moment.
[00:29:04] Speaker C: Yeah.
[00:29:05] Speaker A: All right, let's keep on moving.
[00:29:07] Speaker D: All right, let's go.
[00:29:09] Speaker A: He'll come back in totally ready to rant about Route 53.
[00:29:14] Speaker D: Don't worry, we'll talk about AWS backups.
[00:29:16] Speaker A: Yeah. Amazon is announcing the availability of AWS Backup logically air-gapped vault support for Amazon FSx for Lustre, FSx for Windows File Server, and Amazon FSx for OpenZFS. A logically air-gapped vault is a type of AWS Backup vault that allows secure sharing of backups across accounts and organizations, supporting direct restore to reduce recovery time from a data loss event. A logically air-gapped vault stores immutable backup copies that are locked by default and isolated with encryption using AWS owned keys.
[00:29:43] Speaker D: I didn't even know that FSX for OpenZFS was JID.
[00:29:47] Speaker A: I think you missed that episode.
[00:29:49] Speaker D: I must have. I mean, I know I used the other ones in the past, but I've never used the one for OpenZFS.
[00:29:57] Speaker A: I like ZFS a lot. I worked at a startup where we had a bunch of ZFS and it was a lot of fun, but it's also kind of getting a little long in the tooth, and it's owned by Oracle, which is never a great sign.
[00:30:09] Speaker C: Oh, I didn't know that. Yeah.
[00:30:12] Speaker A: Hey, your two tin cans and a string came back. Ryan, welcome back.
[00:30:18] Speaker D: Yeah, you're welcome. Comcast.
[00:30:22] Speaker A: Comcast, the best Internet service provider you can have.
[00:30:25] Speaker C: Yeah, I don't think it's my Internet provider. I think it might be this ancient relic of a machine I'm recording on.
[00:30:33] Speaker A: There's new MacBook Airs. You can buy one.
[00:30:35] Speaker C: I know.
[00:30:39] Speaker B: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one or three year AWS savings plan with a commitment as short as 30 days. If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask, will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS marketplace.
[00:31:19] Speaker A: All right, let's move on to GCP again. We are just a few short weeks away from the spectacle that will be Google Next.
And Thomas Kurian, aka TK, will be on stage announcing something, probably with AI. I'm going to guess lots of AI. So prediction shows?
Yeah, just a hunch, but yeah, prediction shows will be coming up here soon, so get your predictions in order. But if you're going to be saying.
[00:31:44] Speaker D: If we're going to do the Google prediction, I might make you guys do the Microsoft prediction.
[00:31:49] Speaker A: I mean, if you can figure out which of the many Microsoft conferences actually announces anything for Azure. I will gladly do predictions for Azure.
[00:31:56] Speaker D: Damn it, I walked right into that one.
[00:31:58] Speaker A: Yeah, sure did. You sure did.
Well, if you are there at Google Next and you are looking to meet me or Ryan, probably, because I think I'll drag him along with me, you can find me presenting as a guest at two breakout sessions: BRK2024, labeled Workload Optimized Data Protection for Mission Critical Enterprise Apps, and BRK1028, Unlock Value for Your Workloads: Microsoft, Oracle, OpenShift, and More. I will be at both those talks, on stage at one point in them. And look, trust me, you hear my voice every week on this podcast; you only have to listen to me for five to eight minutes in each of those talks, because I'm just the customer reference for both of them. And so I'll be there, I'll be talking very briefly, and Google will have a bunch of product managers there talking for the rest of it who are going to announce cool stuff that you'll want to check out. So they're worth coming to, not for me, but for that. But if you do come and you want a sticker, I will have stickers. Ryan will be there. I don't know about Jonathan, but the two of us for sure will be there. Because Matt doesn't do Google, so he won't be there. But at least Ryan and I will be there, and you can get stickers at those sessions. And those are two places I can guarantee I will actually be, because everything else on my schedule is, like, quadruple booked, so I can't tell you anywhere else I'll be. But I know I'll be at these two sessions. So there you go. That's my PSA.
[00:33:16] Speaker C: And I will be taking wagers on our Slack channel for how many times Justin says AI on stage.
[00:33:24] Speaker A: I know the number. Okay.
[00:33:28] Speaker D: Ryan, that will be the tiebreaker for us: how many times Justin says AI.
[00:33:34] Speaker A: I don't know if these will be recorded for us to like play back the tape though.
[00:33:38] Speaker C: He came back awfully quick.
[00:33:41] Speaker A: I mean, I know, I know, I know I mention it at least once, but that's all I can confirm or deny.
All right, well, last week we talked about AI Protection from Google, and this week we're talking about Network Security Integration from Google. So, I mean, we got protection, we got integration. Like, they're working on a trifecta of things here, and they'll be integrating potentially Wiz very soon too. So, you know, that's definitely a security integration that's going to have to be done. Many Google Cloud customers have deep investments in third party security tools, from appliances to SaaS applications, and they enforce consistent policies across multiple clouds. The challenge with those solutions is that each cloud application and environment comes with its unique paradigms and challenges, and this may lead to network rearchitecture, high cost operations or difficulty meeting compliance requirements. To help address this, Google is announcing Network Security Integration. This will allow you to integrate third party network appliances or service deployments with your Google Cloud workloads while maintaining a consistent policy across hybrid and multi cloud environments, without changing your routing policies or network architecture. To do this, it leverages generic network virtualization encapsulation, aka Geneve, tunneling to securely deliver traffic to a third party inspection destination without modifying the original packet. In addition, the integration helps accelerate application deployments and compliance with a producer-consumer model, and this allows infrastructure operations teams to provide collector infrastructure as a service to application development teams, enabling dynamic consumption of infrastructure as a service, and support for hierarchical firewall policy management further enforces compliance without delays. There are two primary modes for this. The first is out-of-band integration, which mirrors desired traffic to a separate destination for offline analysis. This is very similar to bump-in-the-wire or VPC traffic mirroring over at Amazon, and it basically supports implementing advanced network security using offline analysis to detect known attacks based on predetermined signature patterns, improving application availability and performance by analyzing what's going on over the wire instead of relying on application logs, and supporting regulatory and compliance requirements. That one is generally available for you today. In-band integration, which is in preview, directs specific traffic to a third party security stack for inline inspection. This integrates natively with Cloud Next Generation Firewall and third party firewalls, as well as inserting your preferred network security stack into brownfield application environments. Several partners have comments in this article, including Palo Alto Networks, Fortinet, Check Point, Trellix, Corelight, cPacket Networks, NETSCOUT, and ExtraHop. If any of those are your vendors, they may have an integration in either the out-of-band or the in-band mode, so check that out with your vendor.
I've stunned both of you.
[00:36:11] Speaker C: Well, I'm trying to figure out whether I think this is amazing or it's a way to burn money.
[00:36:17] Speaker D: I'm kind of in the same boat, which is the sad part.
Like, the out of band makes sense. The in band, so it's inline, if I'm understanding that correctly. Like, didn't they already have that, where you can route all your traffic through, like, a firewall on the way out? You just do it with your route tables.
[00:36:34] Speaker C: Yeah, but you had to do it with a VM appliance.
[00:36:37] Speaker D: Okay. So this is more of a native service that handles it, where you just plug in your, for lack of a better term, key, and it opens up Palo Alto, you know, or whatever the other vendors were, Fortinet, Check Point, into it.
[00:36:50] Speaker A: Yeah. So the in band integration does have an intercept capability, where you have to intercept the traffic to then route it through a firewall or a cloud load balancer, et cetera, from your third party vendor, versus the bump in the wire, which is just sending the traffic on over to the thing. So yeah, you can do it either way. It just depends on what you have a need for.
[00:37:11] Speaker C: I mean, it is kind of neat, because it is always one of those cloud versus data center contentions that, you know, you have a lot more control over the traffic in the data center. You can do things like route everything through a network appliance and get all that, versus, you know, the.
[00:37:26] Speaker A: Huge security hole in your network infrastructure to decrypt all the traffic, man. The middle attack. Yeah, you can do all kinds of fun, terrible things you shouldn't do.
[00:37:34] Speaker C: But yeah, and in the cloud you had to do this with, you know, terrible VM appliances that had a whole lot of capacity issues and or you.
[00:37:42] Speaker D: Just backhauled it over your direct connect back to your data center. There are a few, a couple companies that did that.
[00:37:47] Speaker C: Yeah, I wouldn't want to do that for inline inspection. That sounds bad.
[00:37:51] Speaker A: Yeah, that sounds like a latency killer.
[00:37:53] Speaker D: No, there's a couple large companies where that was their setup.
They would backhaul all their Internet traffic from their VPCs across 200 gig or 10 gig direct connects and then inspect on the way out. That was their mode of transport.
[00:38:12] Speaker C: I've definitely seen it.
[00:38:16] Speaker D: And I was always like, wow, I'm impressed. You don't have latency issues and you don't have other problems. But they were able to do it.
[00:38:22] Speaker C: I mean the one time I had to work in that environment had a huge latency issue.
Giant problem.
[00:38:29] Speaker D: Yeah.
[00:38:30] Speaker A: Not everyone can have 100 gigabit paths, you know, to their data center to do the massive amounts of inspection and traffic routing. So yeah, I guess they had a problem. I mean, this is definitely a me-too feature of Amazon's, because they've had bump in the wire for a while. I don't know about. Does Azure have a similar bump in the wire capability? I assume they do.
[00:38:49] Speaker D: Yeah, but I don't know if it's like a managed service. I know you can definitely route through their network firewalls on the way out or shove an alternative in there, but I don't know if it's a true bump in the wire, as in, like, it's just a plug-in offering. I'll find out.
[00:39:06] Speaker A: All right, let us know.
All right, Google is introducing the latest version of Gemma. Gemma 3 is a collection of lightweight, state of the art open models built from the same research and technology that powers the Gemini 2.0 models. These are their most advanced, portable and responsibly developed open models yet. They're designed to run fast, directly on devices from phones and laptops to workstations, helping developers create AI applications where people need them the most. Gemma 3 comes in a range of sizes, 1B, 4B, 12B and 27B parameters, allowing you to choose the best model for the specific hardware and performance that you have. New capabilities built into Gemma 3 include: billed as the world's best single accelerator model, Gemma 3 delivers state of the art performance for its size, outperforming Llama 3 405B, DeepSeek V3 and o3-mini in preliminary preference evaluations on LMArena's leaderboard. I love that the gamification now is called a leaderboard.
You can go global with 140 languages, with out of the box support for 35 languages and pretrained support for over 140 languages. I don't really understand what pretrained language support is versus 35 native language support, but I didn't have time to do the research. Create AI with advanced text and visual reasoning capabilities to analyze images, text and short videos, opening up new possibilities for interactive, intelligent applications. Handle complex tasks with an expanded context window, with Gemma 3 offering a 128K, or 128,000, token context window to let your applications process and understand vast amounts of information. Create AI driven workflows using function calling, which lets you automate tasks and build agentic experiences. And high performance delivered faster with quantized models, reducing the model size and computation requirements while maintaining high accuracy. Alongside Gemma 3, they're also launching ShieldGemma 2, a powerful 4B image safety checker built on the Gemma 3 foundation. ShieldGemma 2 provides a ready made solution for image safety, outputting safety labels across three safety categories: dangerous content, sexually explicit, and violence.
[00:40:55] Speaker D: Did Google just release the like any sort of safety features or did they have it before in Gemma?
[00:41:00] Speaker C: Well, the safety features are built into like model armor and into the security tooling.
[00:41:05] Speaker D: Yeah, but they just said ShieldGemma 2. So I was trying to figure out if that's like an additional thing, because I know like they.
[00:41:11] Speaker A: It's an additional, it's an additional image safety checker, so. Oh, okay.
[00:41:16] Speaker D: Yeah, extra, I guess it's specific around images, which makes sense.
[00:41:20] Speaker A: So yeah, so if you're not using your AI model for image stuff, you don't need it. So that's why it's optional.
[00:41:26] Speaker D: Yeah, I've never really. I played with it with AI for images, but never used it in like production for your machine or. Yeah, where I cared about that type of stuff as much.
I also just saw the 128K token context window. I was like, let's see how fast we can burn money.
[00:41:44] Speaker A: No, you're not burning money because it's on your phone, it's on your pixel phone. That's the beauty.
[00:41:49] Speaker C: I mean these smaller models are getting me into AI because my initial forays with the larger models, I'm like, this is not going to work. I don't really want to run huge hardware, but I want to have the ability to have a model locally in my own environment.
These are great, they're quick and you can run them on just normal PCs.
[00:42:12] Speaker A: That work better if you do have.
[00:42:13] Speaker C: GPUs, but they still work, even on CPU.
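A minimal sketch of what running one of the small Gemma 3 variants locally might look like with Hugging Face Transformers; the model ID (google/gemma-3-1b-it), the required Transformers version, and accepting the Gemma license on Hugging Face are assumptions here, not verified steps.

```python
# Minimal local-inference sketch for a small Gemma 3 variant via Transformers.
# Assumes the model ID "google/gemma-3-1b-it", a recent transformers release,
# and that you've accepted the Gemma license and logged in to Hugging Face.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",
    device_map="auto",  # uses a GPU if present, falls back to CPU otherwise
)

messages = [
    {"role": "user", "content": "In two sentences, what does a 128K context window buy me?"}
]

out = generator(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])  # last message is the model reply
```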
[00:42:17] Speaker A: Because two blog posts are better than one. Gemma 3 is also available in the Vertex AI Model Garden, giving you immediate access for fine tuning and deployment. You can quickly adapt Gemma 3 to your use case using Vertex AI's pre-built containers and deployment tools.
And if that wasn't enough AI for you.
[00:42:36] Speaker C: Never enough.
[00:42:37] Speaker A: Never, never enough. Google is introducing Gemini Robotics, their Gemini 2.0 based model for robotics. At Google DeepMind, they have been making progress in how their Gemini models solve complex problems through multimodal reasoning across text, images, audio and video. Gemini Robotics is an advanced vision-language-action model that was built on Gemini 2.0 with the addition of physical actions as a new output modality for the purpose of directing and controlling robots. The second model is Gemini Robotics-ER, a Gemini model with advanced spatial understanding that enables roboticists to run their own programs using Gemini's embodied reasoning capabilities, or ER. I thought that was emergency room and I was very concerned there for a minute. No, embodied reasoning is the ER. Both these models enable a variety of robots to perform a wider range of real world tasks than ever before. As part of their efforts, they are partnering with Apptronik to build the next generation of humanoid robots with Gemini 2.0.
Apptronik. Skynet. There's more letters, but it's the same thing.
[00:43:33] Speaker C: And I'm not like a nice person. So one of my favorite things to do is yell at technology the minute it has any kind of reasoning. This isn't going to go well for me. I'm going to be first on the wall.
[00:43:47] Speaker D: I already yell at technology. I used to say my job is to yell at inanimate objects all day and make them do what I want. The problem is now the inanimate objects fight back.
[00:43:56] Speaker C: Yeah.
[00:43:57] Speaker A: Have you seen this Apptronik robot? Like, can you yell at that? Like, it's so cute looking.
[00:44:03] Speaker C: Challenge accepted.
[00:44:04] Speaker A: I posted the link in our, in our chat room.
[00:44:06] Speaker D: Yeah, I'm looking at the picture.
[00:44:08] Speaker A: He's 5 foot 8 in height. So he's taller than you, Ryan. And oh my God, he weighs 160 pounds. So you know, you only have to outrun him for four hours and then you're good to go.
[00:44:19] Speaker D: Ryan's true.
[00:44:20] Speaker C: I know that.
[00:44:21] Speaker D: Yeah.
[00:44:21] Speaker C: I don't think I'm out running anything.
[00:44:25] Speaker A: He's kind of cute. Like he's look friendly.
[00:44:27] Speaker C: It's 160 pounds and 5 foot 8 and it's cute.
[00:44:30] Speaker A: I mean it has a cute face.
[00:44:32] Speaker C: Like.
[00:44:34] Speaker A: Yeah, when the eyes go red, that's when you know you're in trouble.
[00:44:37] Speaker C: Yeah, exactly.
[00:44:38] Speaker D: He looks like that character from WALL-E. That's where my brain went.
[00:44:43] Speaker A: Yeah, I can see the Wall E reference. Makes sense.
[00:44:45] Speaker C: Yeah.
[00:44:47] Speaker A: We did mention in the show notes that we talked about this two or three weeks ago, and I went and looked it up, and no, that was Azure and OpenAI who were doing robot stuff too. So Gemini is just catching up to OpenAI.
[00:44:59] Speaker D: We were terrified then too.
[00:45:01] Speaker A: Yeah. I mean, we were definitely more concerned when Microsoft was involved with it. Maybe that do no evil thing will get into the robot commands.
[00:45:10] Speaker C: Maybe. I don't know.
[00:45:11] Speaker D: What's the Will Smith movie from back in the day? I, Robot? I, Robot. Yeah.
[00:45:16] Speaker C: Because that was the first thing I thought of when I saw the Apptronik robot. I'm like, that looks a lot like the one that killed everyone in that one.
[00:45:22] Speaker D: Yeah.
[00:45:24] Speaker A: When the eyes go red, that's when you know to be worried.
Well, last week Ryan and I had a lovely conversation about how Gemini seems to be crap in the market. And so Google apparently listened to that episode before we even published it and is bringing new and upgraded features to Gemini users, including Deep Research, 2.0 Flash Thinking, Gems, apps and personalization. The new upgraded version of 2.0 Flash Thinking gets the ability to upload files, as well as longer context windows, up to a 1 million token context window. 2.0 Flash Thinking is a reasoning capability. In December, they pioneered a new Gemini product with Deep Research. The goal was to save you hours of time with your personal AI research assistant, searching and synthesizing information from across the web in just minutes, and helping you discover sources from across the web you may not have otherwise found. Now they're upgrading Deep Research with Gemini 2.0 Flash Thinking Experimental. This enhances Gemini's capabilities across all research stages, from planning and searching to reasoning, analyzing and reporting, creating higher quality, multi-page reports that are more detailed and insightful. Gemini now shows its thoughts while it browses the web, giving you a real time look into how it's going to solve your research task.
The Gemini personalization feature in the model dropdown basically looks at things like food related questions, and it'll look at your recent food related searches from Google, or provide travel advice based on destinations you've previously Googled, because that's not creepy.
And Gemini is now starting to be able to access calendars, notes, tasks and photos within the new Flash Thinking experimental model. This allows Gemini to better tackle complex requests, like prompts that involve multiple applications, because the new model can better reason over the overall request, breaking it down into distinct steps and assessing its own progress as it goes. So say in a single prompt you ask Gemini to look up an easy cooking recipe on YouTube, add the ingredients to my shopping list, and find me a grocery store that is open nearby. It'll do that for you. Soon, in Google Photos, it'll be able to look at your photos and create an itinerary based on where you took photos, or tell you when your driver's license expires, assuming you've taken photos of it before, which I'm guilty of.
Gems are now available to everyone, letting you create your own personal AI expert on any topic. They're starting to roll out for everyone; get started with their pre-made Gems, or quickly create your own custom Gems, like a translator, meal planner or math coach. Just go to the Gems manager on your desktop, write instructions, give it a name, and then chat with it whenever you want. Now, Deep Research I played with, because I wanted to play with it on OpenAI but I didn't want to pay outrageous amounts of money per month, which I think was $200.
And so I went into this and I said, I have a 12x12 space, I have this type of barbecue for my smoker, I have this type of Blackstone, and I'd like to buy a new propane grill. Could you please give me layout recommendations and make some suggestions for the best grill that I could potentially buy for this? And it went out and searched the web, went to a bunch of websites I'd never heard of before, and came back with a full five page report with three grill recommendations, actually, several layouts that it linked to to show me, you know, different ways I can lay it out in my space, as well as a bunch of safety things I should consider, like which ones I should put next to each other and how they need to be at least so many feet apart to not have cross contamination issues, and lots of other great research data. That was very interesting. So I was very impressed with the Deep Research.
[00:48:34] Speaker C: That's very cool. And looking at some of these other features, like the gems and stuff, it's exactly what we were complaining about in the last episode, right? Like it's a little harder to string everything together and make it useful in Gemini. And it looks like they heard us.
[00:48:50] Speaker A: So I mean, that makes me wonder if they're checking in on our Google Cloud Doc for our topics. Like how do we know? Because the episode has not technically dropped. It drops tomorrow. It's true.
[00:49:00] Speaker C: Our show notes are in Google Docs.
[00:49:02] Speaker A: Yes, they are in Google Docs. Interesting.
[00:49:04] Speaker D: Yeah, but all the commentary is not. So the question is, where does Riverside operate?
[00:49:10] Speaker A: True, true. Hey, is that an Apptronik robot I just saw walk up?
[00:49:13] Speaker C: Yeah.
Yeah.
Why are they all around my house?
[00:49:19] Speaker A: So weird. And then our final Google announcement for this week. Since they fixed Gemini, they finally released the general availability of their third attempt with Cloud Composer. Cloud Composer 3 is the third version of their fully managed Apache Airflow service. The first two were an abomination, and so they have now tried to fix it for the third time. This release represents a significant advancement in data pipeline orchestration, enabling data teams to streamline workflows, reduce operational overhead and accelerate time to value. Cloud Composer 3 has a host of new features, including simplified networking, so you can easily configure network settings with streamlined options, reducing complexity and management overhead; evergreen versioning, so it stays up to date with the latest Cloud Composer releases and you don't have to do it yourself, which was an annoyance of the old ones; hidden infrastructure, so you only have to focus on your data pipelines, not infrastructure, since Cloud Composer 3 handles the underlying infra and lets you concentrate on building and running DAGs; enhanced performance and reliability with per-task CPU and memory control; and a strengthened security posture.
[00:50:15] Speaker C: Finally. Like, when I first looked at Composer 2, trying to, you know, answer a research question for work, it was nothing more than a glorified deployment template. You still had to deploy all the Kubernetes and all the Apache Airflow servers. All the infrastructure had to live within your project, deployed on your network.
If you needed to talk to another network, you had to plumb all the private service connects yourself and do all the things. So I'm really glad that GCP has finally figured out how to create a managed service. So good for them. Took long enough, but can we, can.
[00:50:51] Speaker A: We use this now? Or is this.
[00:50:52] Speaker C: We can use this one. We can use this one.
[00:50:54] Speaker A: Okay, good. Make someone happy somewhere.
[00:50:57] Speaker C: Yeah, I mean it's exactly what you want, right? You want to take the application function of Airflow and expose that directly to the user. You don't want to manage the underlying software, you don't want to manage the underlying infrastructure, you don't want to have to configure custom networking and routing within your network to make it work. That's just not a very good user experience, and I'm definitely not going to pay a premium on top of running these open source components myself if I still have to run those open source components myself. That was my biggest complaint with Composer. But now they've addressed all of those issues; with Private Service Connect they've made the networking a lot easier, and all of the compute and resources and everything runs on the back end in GCP's managed project. You don't have to deal with it, and they just expose the right level of things directly in your project, like the UI, a storage bucket and, I believe, a metadata database, if I remember correctly. So finally, finally.
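For anyone who hasn't touched Airflow, the artifact you hand Composer is just a Python DAG file dropped into that storage bucket, and that part doesn't change in Composer 3. A minimal TaskFlow-style sketch, with placeholder task bodies:

```python
# Minimal Airflow DAG sketch of the kind you'd drop into a Composer 3
# environment's bucket. The task bodies are placeholders; the Airflow
# TaskFlow API calls themselves are standard Airflow 2.x.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 3, 1), catchup=False)
def orders_pipeline():
    @task
    def extract() -> list[dict]:
        # Placeholder: pull rows from a source system.
        return [{"order_id": 1, "total": 42.00}]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder: write rows to your warehouse.
        print(f"loaded {len(rows)} rows")

    load(extract())


orders_pipeline()
```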
[00:52:07] Speaker A: Well, it's good. I'm glad they finally got it right because it only took them three attempts.
[00:52:12] Speaker D: So I think Glue's on version five. So, you know.
[00:52:15] Speaker A: Yeah, I mean, Glue's had five attempts and still hasn't got it right. So, yeah, that's true. Although I still don't know what happened to 3 and 4.
[00:52:22] Speaker D: Don't worry about those. They were that bad.
[00:52:24] Speaker A: They were that bad. Yeah, they were failed experiments.
[00:52:26] Speaker C: Understood.
[00:52:28] Speaker A: All right, let's move to Azure. This month's Cost Management update blog post, which we don't always talk about, has some interesting things that you should be aware of, so we'll cover it today. First up, optimizing AKS with new cost analysis capabilities lets you get granular cost information on your AKS cluster. The views give you visibility into the cost of namespaces and aggregate costs across all of your resources. You just need to install the cost analysis add-on to your cluster to enable this, which, for those of you who are familiar with Kubernetes, adding an add-on is a pain, so good luck to you on that. And then the most important announcement from this one: they're deprecating the AWS connector on March 31, 2025. You'll lose access to the connector and the AWS cost data stored in the Cost Management service, including all of the historical data, although they did say they will not delete the CUR files from the Amazon S3 bucket they picked them up from. So thanks for that. They recommend moving to another reporting tool or, if you want the roll-up in Azure, standardizing on the FOCUS format and an analytics solution in Microsoft Fabric to analyze and report on data from various sources. So basically they're saying: the thing we built to make you see how much cheaper Azure could be by ingesting your CUR file into our tool, no one wanted to actually use, so now it's better if you just move to FOCUS and then use Fabric, but you have to do all the heavy lifting, all that pipelining, because we're not going to do it for you.
[00:53:42] Speaker C: Well even Amazon doesn't want to process their own cost and usage report, so I totally understand that they barely can.
[00:53:46] Speaker A: Produce the cost and usage report, so yeah, they don't want to handle it.
[00:53:49] Speaker D: Yeah, you don't want to see the Microsoft CUR report either. It's not much better.
[00:53:53] Speaker A: Yeah, yeah, you don't want to see Google's either.
[00:53:55] Speaker C: Yeah, it's all just too complex. There are too many SKUs, all priced on different units, and there's no common metric. And there's a reason why FOCUS exists. It's also complex, but at least it's unified across the thing.
[00:54:08] Speaker A: Yeah, I mean, it definitely makes sense to me that they would standardize on the FOCUS format for this data. That makes perfect sense, because then they can say, look, we'll take GCP and AWS, just import the FOCUS data, and we'll give you a standardized Fabric integration to work with it. But they could have at least done some of the lifting to pull the file over. That would have at least made it a little less painful.
[00:54:26] Speaker C: Can you export your cost and usage in Amazon as FOCUS? I don't know.
[00:54:30] Speaker A: I haven't tried it on Amazon. I know you can on Azure and on GCP.
[00:54:34] Speaker C: Yeah, those are the two I know of too.
[00:54:35] Speaker D: Yeah. I think Fabric does natively connect over to AWS, so in theory it wouldn't be that much heavy lifting.
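And even if you do end up doing the pipelining yourself, the lifting is mostly column mapping. A rough pandas sketch: the input column names assume an Athena-flattened CUR export, and the output names are a handful pulled from the FOCUS spec (BilledCost, ServiceName, ChargePeriodStart, ProviderName), so treat both sides as illustrative rather than exact:

```python
# Sketch of normalizing an exported AWS cost file into a few FOCUS-style
# columns before loading it into Fabric or anywhere else. Input column
# names (line_item_*) assume an Athena-flattened CUR export and may differ
# in your setup; output column names follow the FOCUS convention.
import pandas as pd

raw = pd.read_csv("aws_cost_export.csv")  # assumption: CSV export already pulled from S3

focus = pd.DataFrame(
    {
        "BilledCost": raw["line_item_unblended_cost"],
        "ServiceName": raw["line_item_product_code"],
        "ChargePeriodStart": pd.to_datetime(raw["line_item_usage_start_date"]),
        "ProviderName": "AWS",
    }
)

# Quick roll-up by service, highest spend first.
print(focus.groupby("ServiceName")["BilledCost"].sum().sort_values(ascending=False))
```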
[00:54:43] Speaker A: Okay. Yeah, my experience with Fabric is none.
[00:54:47] Speaker D: So I have very little. I want to learn more, because it's, you know, what Azure is pushing a lot of things towards. I just need to.
[00:54:58] Speaker A: Yeah, they want to get rid of Snowflake, so they want you to go to Fabric.
[00:55:02] Speaker D: Yeah, Snowflake, Databricks. But you know, they're so tightly partnered with Databricks too.
[00:55:07] Speaker A: I mean, I don't think they see Databricks as an analytics platform as much as they see it as an AI platform. And I think they see Snowflake as a competitor they want to kill, and that's why they're creating Fabric. But that's just my impression of it, based on no insider information. One last FinOps thing: you can now exchange Azure OpenAI Service provisioned reservations between projects, and you can also still request refunds. So that's nice. And if you have opinions about the future of cost reporting, and I'm sure most of you do, you can take the cost optimization survey that's linked in the blog post, so you can give your own feedback to Azure instead of just complaining silently that they're killing the AWS connector.
[00:55:50] Speaker D: I don't think anyone's complaining much about it.
[00:55:52] Speaker A: I don't remember when they announced it, but I thought it was a shrewd move that no one used, because no one wanted to give their Amazon cost data to Microsoft for competitive reasons.
Announcing the Microsoft AI Skills Festival. This is a global event this April and May designed to bring learners across the globe together to build their AI skills, from beginners and explorers to the technically gifted. Registration opens on March 24th, with the kickoff on April 8th, smack dab in the middle of Google Next. Thanks for that. For tech professionals, you'll learn how to quickly build AI-powered solutions using Microsoft AI apps and services, and gain skills and experience working with agents, AI security, Azure AI Foundry, GitHub Copilot, Microsoft Fabric and more. Kickoff is in Australia at 9am on April 8th and runs as a full 24-hour, globe-spanning event. They're even trying to break a Guinness World Record for the most users to take an online multi-level artificial intelligence lesson in 24 hours, which I'm pretty sure is a record they made up.
[00:56:45] Speaker C: But they're gonna try to win.
[00:56:48] Speaker D: That's way too specific.
[00:56:50] Speaker A: Yeah, right. It's one of those things where Google's just petty enough that they'll then try to break it, because that's something Google would do.
[00:56:57] Speaker C: I hope so. That would, that would make me happy because I'm here for petty behavior.
[00:57:01] Speaker D: Challenge accepted.
[00:57:03] Speaker A: So April 8th, if you're not at Google Next, you can join this fun, festive event and learn some AI skills.
[00:57:11] Speaker C: I do recommend things like that, because I know a lot of it's rudimentary, and how many times do you have to see interactions with an agent? But I find that this type of event and this type of training is great for understanding how you can use AI more and getting ideas. At least it's really helped me, because, you know, I know the very obvious use cases, but there are people coming up with some pretty clever things, stitching agents together to do these very complex tasks, and I don't know if I'm just old and stodgy, but I'm like, I didn't know that was possible.
[00:57:43] Speaker A: Yeah, I mean, I think occasionally I like to go to a beginner class just because you do learn something you didn't know, or the way they're teaching is different, so you can pick things up. So, you know, these can be fun events. Like, I haven't played with Azure AI Foundry, so if I wasn't in the middle of Google Next I would probably check it out, and I still might, since it runs all the way through and you don't have to start on the first day. I might go check out the Azure AI Foundry stuff because I am curious, or, like Matt said earlier, Microsoft Fabric is something he wants to learn more about, and they're going to be covering that, at least in the AI context. So definitely worth checking out. And these are fun events a lot of times too, where they have some unique things about them that are only available for limited times.
All right guys, our final story.
Azure is announcing that you can now invoke an Azure Function based on changes to Azure Database for MySQL tables. This new capability is made possible through the Azure Database for MySQL trigger for Azure Functions, now available in public preview. And again, PSA: if you're using triggers in databases to do anything, you should really rethink your architecture.
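If you ignore our PSA and wire one up anyway, the Python side would presumably look something like the sketch below. This is a guess built on the Functions v2 generic_trigger decorator: the binding type string and its properties here are assumptions modeled on the existing Azure SQL trigger, not confirmed against the MySQL preview docs, so check those before relying on any of it.

```python
# Sketch of handling the new MySQL trigger in an Azure Functions Python app.
# The binding type string ("mysqlTrigger") and its properties are assumptions
# modeled on the existing Azure SQL trigger; verify against the preview docs.
import json

import azure.functions as func

app = func.FunctionApp()


@app.generic_trigger(
    arg_name="changes",
    type="mysqlTrigger",                       # assumption: analogous to "sqlTrigger"
    tableName="orders",                        # assumption: table to watch
    connectionStringSetting="MySqlConnection"  # assumption: app setting with the connection string
)
def on_orders_change(changes: str) -> None:
    # Assumption: the trigger hands the function a JSON batch of change records.
    for change in json.loads(changes):
        print(f"change detected: {change}")
```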
[00:58:47] Speaker C: Yeah, no, this one. I mean, the reality is, if you're using MySQL. I guess I was thinking Microsoft SQL Server; this is MySQL. But yeah, you know, this type of logic is very common, right, where you're triggering workflows or business flows. And so I'm just glad it's exposed outside the database.
[00:59:09] Speaker D: I mean, AWS did this years ago, and I remember we were like, oh, we could do this to make this application hot-hot, you know, in multiple regions, for user management or something. And we were sitting there thinking about it, and I was like, this is a really bad life choice. We should not be trying to manipulate the application via Lambdas, on AWS at the time, cross-region, cross-everything. I was like, there's no way this is a good life choice. Again, back to the PSA.
[00:59:41] Speaker C: So there is one valid use case that I like which is populating a data lake.
That's the one thing where, if you basically write on a transaction commit, you can then use that metadata to populate the lake in the format that you want, with all the enrichment data at the time, so you...
[01:00:03] Speaker D: Have the data is the problem. So your commit is going to trigger a function app to then require your SQL database which you have to hit the main one. You can't hit the secondary node or the read only node because at that point you know, you could have a latency issue. So you have to the primary notes, you're putting more load on the primary note to then copy it to your data lake. At that point I get where you're coming from, but I feel like you're.
[01:00:29] Speaker C: Right, that's not going to work the way I want it to work.
[01:00:32] Speaker D: Like, I like the premise. I mean, at that point, maybe you queue it and delay it so it reads from your read-only replica. So maybe you do something like that. Like, I still feel like it's a bad life choice, and you probably want to go re-architect everything that you've...
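To make that queue-it-and-read-from-the-replica idea concrete, here's a rough sketch of the split: the triggered function only enqueues the changed key, and a separate consumer enriches the row from a read replica before landing it in the data lake. The queue URL, host names, table and libraries (azure-storage-queue, PyMySQL) are all illustrative choices, not anything from the announcement.

```python
# Sketch of Matt's suggestion: enqueue only the changed key from the trigger,
# then enrich from a read replica on the consumer side so the primary isn't
# taking the extra load. All names here are hypothetical.
import json

import pymysql
from azure.storage.queue import QueueClient

QUEUE_URL = "https://example.queue.core.windows.net/order-changes"  # hypothetical SAS URL


def enqueue_change(order_id: int) -> None:
    # Called from the triggered function: push only the key, not the whole row.
    queue = QueueClient.from_queue_url(QUEUE_URL)
    queue.send_message(json.dumps({"order_id": order_id}))


def enrich_from_replica(order_id: int) -> dict:
    # Consumer side: query the read replica, not the primary.
    conn = pymysql.connect(
        host="replica.example.com", user="reader", password="***", database="shop"
    )
    with conn.cursor(pymysql.cursors.DictCursor) as cur:
        cur.execute("SELECT * FROM orders WHERE order_id = %s", (order_id,))
        return cur.fetchone() or {}
```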
[01:00:49] Speaker A: Thought about in life. So I was thinking the use case where maybe you have a lot of business logic and store procedures is this, would this be something I could potentially.
[01:00:57] Speaker D: Still bad.
[01:00:58] Speaker A: I know it's still bad, but is it worse to keep it as stored procedures, or, as I'm refactoring it into a function, to basically call that function from, you know, inside the database, the function where I've now rewritten the stored procedure? That seems like an okay stepping stone. Or am I completely missing it? You think that's bad?
[01:01:17] Speaker D: All I can think of. It's inside the database.
[01:01:19] Speaker A: It's inside.
[01:01:20] Speaker C: Well, but the compute in the functions is externalized from the database. So at least you've introduced a scaling layer, because that's...
[01:01:26] Speaker A: That's where.
[01:01:26] Speaker C: That's where stored procedures burn you every time.
[01:01:29] Speaker D: Or a DDoSing layer, because if it... if it all... I mean, maybe my assumption's wrong here.
[01:01:35] Speaker C: My code will never have problems. Shut up, Matt.
[01:01:41] Speaker A: I have AI. My code is beautiful.
[01:01:45] Speaker D: Did you read. There was something we were talking about at my day job about. It's called like Vibe engineering, where it's just people that.
[01:01:55] Speaker A: And like I.
[01:01:57] Speaker D: And we were talking about it the other day. I was like, this is the problem with where. Where some of this is going.
[01:02:02] Speaker A: Yeah, yeah, no, that's. That's the new thing you do with the cursor. AI. You just do a Vibe program a little bit.
[01:02:08] Speaker D: Yeah, Yeah.
[01:02:10] Speaker C: I was really. I wasn't sure how much of that was in jest, and I was too afraid to go look, because if it was serious, and I have suspicions that it might be...
[01:02:19] Speaker A: Oh no, it's serious.
[01:02:21] Speaker C: Oh, no.
[01:02:24] Speaker A: Yeah, there's been a lot of blog posts, like, I'm vibe coding on the weekend, just, you know, throwing some stuff out there, seeing how it goes. Yeah, that's a very common thing right now. But it's cool. If you use Cursor, which I have, you can definitely do some of it. I can see why they call it vibe coding, because it's kind of cool. You're just talking to it and you're like, I want this thing and I want this other thing, and it just does it. And it's cool, but terrifying to see.
[01:02:47] Speaker C: Well, it was some of the practices that bothered me. It was more of like, you know, never refactor code, just start over again. You know, kind of thing. That was the part where I was.
[01:02:54] Speaker A: Like, ah. Well, I mean, again, it shouldn't be. Like, your vibe coding stuff should be, I'm creating a simple MVP. It shouldn't be, I'm writing a production app that I care about, that keeps my family from starving to death.
[01:03:07] Speaker C: Like, I think I just didn't think they were that specific.
[01:03:11] Speaker D: You're telling me a POC has never got all the way into production. Come on.
[01:03:14] Speaker A: I'm not saying that.
I would never say such a thing. That'd be blasphemous because it happens all the time.
But yeah, I don't know, like there's options.
[01:03:26] Speaker C: Yeah, I mean, I think that, you know, the reality is, is going to be somewhere in the middle.
[01:03:31] Speaker A: Right?
[01:03:31] Speaker C: Like, this is an extreme example, but this is how AI is going to change software development. You know, it won't be exactly this, but it'll move that way. It already is. Like we were talking about at the beginning of the show, my ability to remember syntax is diminishing because I don't need to know it; I can just ask, and it'll just do it half the time.
[01:03:54] Speaker D: It's tab complete, tab completed. Yeah.
I was writing Terraform the other day and I was like, tab, tab, tab, cool, done. Yeah. Or even like...
[01:04:04] Speaker C: So it's like this is basically how.
[01:04:06] Speaker D: It works, you know.
[01:04:07] Speaker C: Now it's crazy.
[01:04:09] Speaker D: I feel like I've reached a new level of laziness where I'm just like, right click, add more debugging lines, and it just populates the debugging lines for me. Add logging here, add comments, and I'm like, done. I've officially become very lazy.
[01:04:25] Speaker A: I mean, I think that... yeah. One of my friends, or not a friend, but a former acquaintance, he and I were catching up the other day, and he was talking about his son, who's turning 18, and was like, yeah, he was going to be a software engineer, then he decided to become an electrician instead, because programming is dead in 10 years. And I was like, do I agree that it's dead in 10 years? I don't know that I'm quite that pessimistic. I think it's going to be different. I definitely think it's going to change. I mean, I think that's what we've talked about in the past here on the show: people are going to have to learn how to think about systems, more than just the simple parts. It's like, okay, now I need to design a system that's going to have all these pieces that interconnect and do things. And that's where you see AI failing today, on really complicated, multi-layered microservices; it doesn't have enough space in its memory to have context about everything. Now, that'll change over time, it'll get more sophisticated. But I think your ability to be a really good prompt engineer is going to be what keeps you in business, at least for us, maybe not for our kids. So I think that's the thing. I'm hoping he's wrong. I'm just hoping he is.
[01:05:33] Speaker C: I think it's going to be like what automation brought to the car industry, right? There are still people that need to work in the car industry. There are people making the robots, there are still people doing design work, there are still a lot of people involved in making cars, but there are nowhere near as many as there once were, because you don't have to stamp all the quarter panels out of sheet metal and do all the welding. So I think it'll be a little bit of that flux as well, where there will still be programming, it'll just look different, but there may not be as many programmers. That might be true.
[01:06:04] Speaker A: Does the world need more programmers anyway?
[01:06:06] Speaker C: I mean, that's a question if we're all doing vibe projects. Yeah, it does.
[01:06:11] Speaker A: Someone's got to debug all that code.
I think you've got to be... I think to be a future programmer, you'd better be really good at debugging, because I think that's what you'll be doing a lot of: debugging outputs from AIs that aren't working the way you think they should.
[01:06:26] Speaker D: To be fair, I think today, in order to become a good programmer, you have to be good at debugging. Yeah, like, that's the hardest skill to teach and the hardest skill to learn. You know, it's almost something you have to have ingrained. And I feel like it's like debugging and troubleshooting as an old-school sysadmin: if you're not good at it, you'll still go places, but you'll go much further if you have those skills.
[01:06:52] Speaker A: All right, gentlemen, well, that's another fantastic week here in the Cloud. We've covered everything. We will see you next week.
[01:06:59] Speaker C: All right, bye, everybody.
[01:07:01] Speaker D: Bye, everyone.
[01:07:05] Speaker B: And that's all for this week in Cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services.
While you're at it, head over to our website, where you can subscribe to our newsletter, join our Slack community, send us your feedback and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.