[00:00:08] Speaker B: Where the forecast is always cloudy. We talk weekly about all things AWS, GCP and Azure.
[00:00:14] Speaker C: We are your hosts, Justin, Jonathan, Ryan and Matthew.
[00:00:18] Speaker A: Episode 318 recorded for August 19, 2025.
One extension to rule them all, and in the VS Code bind them.
Good evening, Ryan. How are you doing?
[00:00:28] Speaker C: I'm doing well. Doing well.
Just the two of us today, huh?
[00:00:33] Speaker A: Yeah, just the two of us. Matt is on vacation, which he, well, deserves because, you know, he has a new kid and this is his opportunity to try to catch up on sleep, which won't happen, but you know, he's dreaming of it, so that's important.
Well, we have another busy week in the cloud here.
Getting right into it. There are some fun general news stories, so always going to hit those up first.
AOL, and I think we talked about this once prior when they announced it was going to happen, has now announced the date: it's discontinuing its dial-up Internet service on September 30, 2025, marking the end of the technology that introduced millions to the Internet in the 90s and early 2000s. I was a customer. Well, my parent. My parents were. I was not.
Census data shows 163,000 U.S. households still used dial-up in 2023, which is apparently only 0.13% of homes with Internet subscriptions, highlighting the persistence of legacy Internet access in underserved areas. I mean, I hope at this point they're going to, you know, Elon Musk's satellite service if they can't get anything but dial-up at their house.
The shutdown reflects broader technology lifecycle patterns, as companies retire legacy services like Skype, Internet Explorer and AOL Instant Messenger to focus resources on modern platforms. So yeah, that's the end of an era, if you will.
[00:01:49] Speaker C: I mean, it's got its own sound bite still, one everyone knows and can hear in their head. And it's kind of nuts, but I mean, it does seem like it's time, right?
But I don't know, I'm spoiled, you know, living in the Bay Area with all kinds of infrastructure.
I'm sure there's places in the middle of nowhere where I go to avoid the Internet where people would much rather have the Internet.
[00:02:17] Speaker A: I mean, I am all about not having Internet when I go on vacation. So yeah, I have completely different needs and wants at this point.
In a weird article, the UK government wants you to conserve water.
And to conserve water, they'd like you to reduce your data center's water usage, which. Okay.
And to do that, they're advising you, as a UK citizen, to delete your old emails and photos to reduce said water consumption by data centers, as they're facing a potential water shortage by 2050.
Apparently data centers require significant water for cooling systems, with some facilities using millions of gallons daily to maintain optimal operating temperatures for servers. This highlights the often overlooked environmental impact of cloud storage, where seemingly harmless archived data contributes to ongoing resource consumption even when unused.
I will tell you that you can go delete all of your emails. This won't help you.
[00:03:07] Speaker C: In fact it'll make it worse because data at rest doesn't use a whole lot of resources, right?
[00:03:15] Speaker A: Unless.
[00:03:18] Speaker C: Deleting anything from a file system is expensive from a CPU perspective and so it's going to cause the temperature to go up, therefore more cooling.
This is a whole bunch of. I mean, I tried to get the statistics from the article, but it does seem like a whole bunch of well-meaning people that aren't necessarily technical kind of drawing a conclusion that's not quite there.
[00:03:41] Speaker A: Yeah, I mean, I've never heard of this news source, TheJournal.ie, apparently an independently owned and managed media organization. "We began small in 2010 with The Journal and a sports site that is now The 42, and have since established ourselves as a market leader in online news in Ireland."
They employ dozens of journalists across their publications. I'm going to use that "journalists" in quotation marks. But The Journal, The 42, The Journal Investigates and The Journal Fact Check unit are now among the most iconic of Ireland's digital brands.
Okay, so anyways, this is just a fun story to make fun of, because deleting your old emails, while maybe good for your data privacy, is not going to help the water usage in your data center.
[00:04:25] Speaker C: So in the article they were quoting the National Drought Group, and I was trying to figure out if they were technical or formally constructed, and it seems like they're not either. It's just a. Yeah, no.
[00:04:35] Speaker A: One of the things I think is sort of, you know, relevant is that the UK has a very moderate climate, until global warming messes that up for them. And I've actually toured several data centers in the UK that have natural air cooling, that don't require water to cool them. So I would say that, yes, data center providers in the UK in particular should be able to take advantage of more free-air cooling. Intel, back in the early 2000s, ran a data center in a circus tent in the middle of the Arizona desert, where they basically proved that cold air is not really the requirement so much as moving the air away from the heat source as quickly as possible. So again, you know, we could do a better job as data center people to not require cold air; cool air that moves would be more than sufficient. Although, I don't know, with GPUs maybe this is more important than it was back in the early 2010s. Maybe this is now a thing. But that's where water-cooled closed loop systems would maybe make some sense, and I've seen a lot of data centers building those out these days as well.
[00:05:40] Speaker C: Yeah, I'm sort of surprised by some of the water usage statistics, just because, you know, in at least about half of the U.S., somewhere you can do evaporative cooling, it's great, but in the other half, for the majority of the year, it's not more efficient than a closed system. So it's kind of interesting that way. But I don't know.
[00:06:03] Speaker A: Well, there is actually a data center problem in the UK that we should talk about. The UK is currently planning nearly a hundred new data centers by the year 2030, representing a 20% increase from the current 477 data centers, with major investments from Microsoft, Google and Blackstone Group totaling billions of pounds. The expansion is driven by AI workload demands and positions the UK as a critical hub for cloud infrastructure. Energy consumption concerns are mounting, as these facilities could add 71 terawatt hours of electricity demand over 25 years, with evidence from Ohio showing residential energy bills increased by $20 monthly due to data center operations. The UK government has established an AI Energy Council to address supply and demand challenges.
Water usage for cooling systems is creating infrastructure strain, particularly in areas served by Thames Water and Anglian Water, with water being redirected to one proposed site. New facilities are exploring air cooling and closed loop systems to reduce the water impact we just talked about. And planning approval timelines of five to seven years are pushing some operators to consider building in other countries, potentially threatening the UK's position as a major data center hub. Yeah, power and cooling are definitely a problem, and this article is probably the source of the journal's article earlier: there is pressure on using water in data centers to cool them. I think that is a valid concern, especially with 100 new data centers potentially coming online, as well as power. How do you power all those hungry, hungry GPUs?
Nuclear power is probably your only solution at this moment in time to really support that. And so that could be a concern for people in the UK definitely.
[00:07:29] Speaker C: Or I mean, everywhere. Like there's just more, you know, some, some communities are more friendly to nuclear than others. But yeah, until cold fusion's a thing, we don't have any other options.
[00:07:42] Speaker A: I mean, a $20-a-month increase due to data center operations doesn't seem that bad to me for some reason, because I imagine California's cost of data centers is much higher than that. I mean, ours is the expensive power grid.
[00:07:56] Speaker C: Yeah, it's, it's kind of crazy to me. I mean, because, you know, utility prices, at least in the Bay Area, have doubled effectively. And so I don't know how much of that is due to data centers and not. But yeah, $20 didn't seem all that bad.
[00:08:09] Speaker A: Nothing to do with data centers. Everything to do with litigation from massive wildfires.
[00:08:13] Speaker C: Yeah, well, that's true.
[00:08:18] Speaker A: All right, well, let's move on to cloud tools.
GitHub. Sorry, GitHub Actions policy now supports blocking and SHA pinning actions. This lets administrators explicitly block malicious or compromised actions by adding a pound prefix to entries in the allowed actions. Sorry, that's a bang prefix. Bang. Yeah, bang. Sorry, it's been a while since I've talked in leet speak. A bang prefix to entries in the allowed actions policy, providing a critical defense mechanism when third-party workflows are identified as security threats. The new SHA pinning enforcement feature requires workflows to reference actions using full commit SHAs instead of tags or branches, preventing automatic execution of malicious code that could be injected into compromised dependencies. This addresses a major supply chain security gap where compromised actions could exfiltrate secrets or modify code across all dependent workflows, giving organizations rapid response capabilities to limit exposure. GitHub is also introducing immutable releases that prevent changes to existing tags and assets, enabling developers to pin tags with confidence and use Dependabot for safe updates without risk of malicious modifications. These features are particularly valuable for enterprises managing large GitHub Actions ecosystems, as they can now enforce security policies at the organization or repository level while maintaining the flexibility of the open source Actions Marketplace.
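For a concrete picture of what pinning actually changes, here's a rough sketch in Go (illustrative only, not GitHub's actual enforcement code) of the check: a pinned reference uses a full 40-character commit SHA after the @ instead of a mutable tag or branch. The SHA below is made up for the example.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// fullSHA matches a complete 40-hex-character commit SHA.
var fullSHA = regexp.MustCompile(`^[0-9a-f]{40}$`)

// isPinned reports whether a workflow "uses:" reference is pinned to a
// full commit SHA rather than a mutable tag or branch.
func isPinned(uses string) bool {
	parts := strings.SplitN(uses, "@", 2)
	if len(parts) != 2 {
		return false // no ref at all, e.g. a local action path
	}
	return fullSHA.MatchString(parts[1])
}

func main() {
	for _, ref := range []string{
		"actions/checkout@v4",   // mutable tag: can be repointed by an attacker
		"actions/checkout@main", // mutable branch: same problem
		"actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683", // pinned (made-up SHA)
	} {
		fmt.Printf("%-60s pinned=%v\n", ref, isPinned(ref))
	}
}
```

The immutable releases feature closes the remaining gap: once a tag can't be repointed, a tag reference behaves almost like a SHA, which is why the two announcements pair well together.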
[00:09:28] Speaker C: Definitely. I mean, this is something that's been very relevant to my day job, which is, you know, I've been arguing for months now to not expand permissions to cloud and other integrations for GitHub Actions, just because I'm not a fan of the security model. It's pretty easy to reference an action, and then, you know, someone can create their own branch and change the code in it, or someone can easily update a tag you're referencing, and then they can execute whatever they want. It's just not a great model. And so I'm really happy to see these, specifically the immutable tagging, because I think putting that control in place as just a knob where you can execute policy is going to be really handy and crucial to running this securely.
[00:10:19] Speaker A: Yeah, I mean, the idea of going to SHAs instead of tags or branches kind of makes me think of GUIDs now in my GitHub code, and that concerns me just a little bit. But yeah, I think we've seen enough violations of supply chain that I think this is super valuable. And so, as much as I'm going to hate it and I'm going to complain mercilessly about SHAs in my pinning, I'm going to appreciate it one day when security is a problem.
[00:10:46] Speaker C: I mean, it is a GUID, there's no other way around it. Right? It's just using the content itself to generate the GUID for uniqueness.
[00:10:54] Speaker A: You know.
[00:10:58] Speaker C: I don't know, you gotta, you gotta be unique, you gotta reference it somehow.
[00:11:04] Speaker A: Kiro pricing plans have gone live for AWS. When they first announced Kiro, they gave us pricing data and it looked pretty good. I mean, I think it was much less than what they've now officially proposed. I think that's maybe because of the massive amount of people who were very interested in buying the solution. So the new tier pricing is: Free, which gets you basically nothing; Pro for $29 a month; Pro Plus for $99 a month; and Power for $299 a month.
This transitions all those users from the preview waitlist model to allow broader access to their cloud development tool. The pricing structure is based on Vibe and Spec requests, with the free tier offering 50 Vibe requests monthly and paid tiers providing varying amounts of both request types, plus optional overage charges for flexibility.
New users receive a 14-day welcome bonus of 100 Spec and 100 Vibe requests to evaluate the tool's capabilities before committing to a paid plan, with immediate plan activation and modifications available. The tool integrates with Google, GitHub and AWS Builder ID authentication, suggesting it's positioned as a cloud development assistant or automation tool that works across your major platforms. Kiro appears to solve the problem of cloud development workflow optimization by providing request-based interactions, though the exact nature of what Vibe and Spec requests accomplish isn't detailed in the pricing announcement, even though they have very clear output differences when you actually use Kiro. I don't really get the pricing differences either.
[00:12:22] Speaker C: Yeah, and so I read through too, trying to figure out what the difference is between a Vibe and a Spec. And it really does seem like the difference of an interactive chat session where you're just asking questions versus, you know, where I think the real value is for Kiro, which is in that Spec, as they call it: it's literally creating sub-agents to accomplish a larger business goal. So not only is it doing code, it's managing things like ticket execution and tracking that, and sort of handling those larger projects. And so I think it's great. But yeah, I'm kind of put off by the free plan not including anything, and then the 14-day limit for new users. I just feel like that's too constricting, and it'll keep me from trying it, since I was late to adopt it and didn't get in while it was in preview.
[00:13:12] Speaker A: Amazon Athena now supports CREATE TABLE AS SELECT with Amazon S3 Tables, enabling users to query existing datasets and create new S3 tables with the results in a single SQL statement. This simplifies data transformation workflows by eliminating the need for separate ETL processes, and S3 Tables provide the first cloud object store with built-in Apache Iceberg support. This integration allows conversion of existing Parquet, CSV, JSON, Hudi and Delta Lake formats into fully managed tables, and users can leverage Athena's familiar SQL interface to modernize their data lake architecture. I mean, I like this just because, oh no, I finally went to the trouble of building an Athena query, and now you can create a table from it so I don't have to do that again. Thank you very much.
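To make the one-statement workflow concrete, here's a hedged sketch using the AWS SDK for Go v2. The catalog, namespace, table and database names are invented for the example; the exact S3 Tables catalog naming comes from your own setup, so check the Athena docs before copying this.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/athena"
	"github.com/aws/aws-sdk-go-v2/service/athena/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := athena.NewFromConfig(cfg)

	// One statement: query the existing raw data and land the result in a
	// fully managed Iceberg-backed S3 table, no separate ETL job required.
	// Catalog/namespace/table names here are illustrative.
	query := `
		CREATE TABLE "s3tablescatalog/analytics"."reporting"."daily_orders"
		AS SELECT order_date, region, sum(total) AS revenue
		FROM raw_orders
		GROUP BY order_date, region`

	out, err := client.StartQueryExecution(context.TODO(), &athena.StartQueryExecutionInput{
		QueryString: aws.String(query),
		WorkGroup:   aws.String("primary"),
		QueryExecutionContext: &types.QueryExecutionContext{
			Database: aws.String("legacy_datalake"), // where raw_orders lives
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("query execution id:", aws.ToString(out.QueryExecutionId))
}
```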
[00:13:51] Speaker C: Yeah, no, I mean, it's the provisioning, or partitioning, of data in your table on the fly, right? That's the part where this is super valuable, because you could write a query and then have it basically put that in time-based buckets, or however you want to partition it. And that was always the nightmare that we run into following the documentation in AWS trying to get Athena to work; it's usually around the partitioning where it breaks down, right? Because the format of the structure is slightly different, and then therefore the partitioning isn't available to work across the data set.
So I'm really happy to see this. I hope it works as advertised.
I kind of feel like this is them punting on, you know, table creation for as bad as their documentation has been for this. So hopefully this works.
[00:14:40] Speaker A: Yeah, I mean, the one Q feature I would probably be very excited about is if they created, you know, a whole way to modify and optimize all of their, you know, complicated query language for building the tables in the backend. Like, just: here's my data, you queue it up, and then you just produce something that works.
That's all I want. If you could make that Q feature, I think I'd be pretty happy. I think you can get pretty close now, actually. But an automated button I just push: here's my data, magic. That's what I want.
[00:15:10] Speaker C: And I'm sort of surprised that they're not announcing that like point it at S3 and then run a query, you know, like.
So like I feel like I. There's something I don't understand as far as the, the announcement or the, the technology on the back end, but this is pretty close.
Select star from S3, then create table. Right. Like that's how that works.
[00:15:33] Speaker A: Yeah, yeah. And then it says, that's a really bad, not optimal thing to do; we could take this data and we can make it better. Like, I want Clippy. Clippy for Athena tables. That's what I need.
[00:15:42] Speaker C: Oh God, that sounds like.
[00:15:45] Speaker A: So you're trying to use a new data source that happens to be based on Apache access logs? Yes, that's exactly what I want. Please just use whatever version of the 12 types of documentation you have for this to just make it happen.
[00:15:57] Speaker C: Please turn this into an Iceberg Parquet table of some sort so that it doesn't suck.
Thank you.
[00:16:04] Speaker A: Exactly.
Well, Aurora has apparently turned 10, which blows my mind. I think this is a very lovely rebranding of RDS to be 10 years old; I don't think Aurora's been around that long, but they're saying it's been 10 years since GA, with a livestream event on August 21, 2025 featuring technical leaders discussing the architectural decisions to decouple storage from compute that enable commercial database performance at 1/10th the cost. Key milestone announcements included Aurora DSQL, which just went generally available in May, a serverless distributed SQL database offering 99.99% single-region and 99.999% multi-region availability with strong consistency across all endpoints. Storage capacity doubled from 128 to 256 terabytes, and Aurora now integrates seamlessly with AI services through pgvector for similarity search, zero-ETL to Amazon Redshift and SageMaker for near-real-time analytics, and the Model Context Protocol server for AI agent integration with the data source.
All available to you now. So happy birthday. Apparently it is 10, if I were to go Google it.
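For a sense of what the pgvector integration looks like in practice, here's a minimal sketch using the pgx driver against an Aurora PostgreSQL endpoint. The documents table and its embedding column are made up for the example, and real embeddings have hundreds of dimensions rather than three.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/jackc/pgx/v5"
)

func main() {
	conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(context.Background())

	// "<->" is pgvector's distance operator: return the five rows whose
	// stored embedding is closest to the query embedding.
	rows, err := conn.Query(context.Background(),
		`SELECT id, title
		   FROM documents
		  ORDER BY embedding <-> $1::vector
		  LIMIT 5`,
		"[0.1, 0.2, 0.3]") // toy 3-dimensional query embedding
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var id int
		var title string
		if err := rows.Scan(&id, &title); err != nil {
			log.Fatal(err)
		}
		fmt.Println(id, title)
	}
}
```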
[00:17:03] Speaker C: Yeah, I was trying to remember, because I remember right when I started using it. Yeah, about 10 years old makes sense, give or take, from my fuzzy memory. And so that's kind of neat.
[00:17:16] Speaker A: re:Invent 2014.
Pretty sure I was there.
But at what company? Where was I working then?
[00:17:26] Speaker C: Trying to remember. Was 2016 my first year or 2015? Yeah, one of those two. Probably 2016.
[00:17:32] Speaker A: Maybe the reason why I kind of blacked this out is because the early days of Aurora were sort of a lot of promise and a lot of rough edges.
[00:17:43] Speaker C: Well, and there wasn't a lot of differentiators either. Right. Like, you could run Postgres on RDS or you could run Aurora Postgres in RDS, and I don't think there was very much difference unless you were at the bleeding edge of performance, and all you were really getting was a more complicated pricing pattern. So yeah, I remember not choosing Aurora in a lot of cases because, you know, I was like, this is more complicated and I don't get the benefit.
[00:18:14] Speaker A: Well, and. And the reality was that most people weren't dealing with the scale problems that Aurora really solved for you at that point. Um, so, yeah, I mean, I guess 10 years is fair. So. All right, well, happy birthday.
I guess that means you're a fifth grader now. 10. Yeah, I think fifth grade. So almost to middle school then. Who knows what happens?
Well, shockingly enough, Gartner has released the Magic Quadrant for Strategic Cloud Platform Services, and AWS has been named a leader for the 15th year in a row.
[00:18:45] Speaker C: No way.
[00:18:45] Speaker A: Because they invented the space.
[00:18:47] Speaker C: Because Gartner has crafted this program after AWS.
[00:18:53] Speaker A: Gartner pointed out the custom silicon portfolio of Graviton, Inferentia and Trainium as a key differentiator, enabling better hardware-software integration and improved power efficiency for customer workloads, and the report emphasizes AWS's extensive global community as a competitive advantage, with millions of active customers and tens of thousands of partners. AWS Transform emerges as the first agentic AI service specifically designed to accelerate enterprise modernization of legacy workloads, including .NET, mainframe and VMware migrations, and the recognition underscores AWS's operational scale advantage, with its market share enabling a more robust partner ecosystem that helps organizations successfully adopt those pesky cloud services.
Right below Amazon Web Services, though, is Google. So Google was actually above Microsoft on ability to execute and completeness of vision, and then Microsoft was right behind that, and then Oracle was trailing as a lovely fourth, all in the leaders quadrant. For those of you working in China, Alibaba Cloud was a challenger, the only challenger. And then IBM somehow still made it onto this quadrant. I don't understand how; I don't know what they're selling in cloud. And then there's also Huawei Cloud and Tencent.
[00:19:58] Speaker C: And how is, like, DigitalOcean not there? There's a couple others that I'm confused are missing. Right.
[00:20:04] Speaker A: Like well this you have to remember, you know there's going to always be a inclusion criteria which I could go get because I am a gardener subscriber. I can tell you what it was but it's mostly digital Ocean but doesn't have the breadth or some feature that they require to be okay considered in the thing.
[00:20:22] Speaker C: I'm just confused that IBM would, you know. Because, like, Oracle, I feel, barely does well.
[00:20:28] Speaker A: And this is also the platform services. Strategic cloud platform services is slightly different.
[00:20:34] Speaker C: Yeah, that's. Yeah, there's always those caveats in Gartner so they can sell you more reports.
[00:20:40] Speaker A: Yeah, exactly. Well, it got too big so we decided that we needed to break it up into multiple quadrants. Let's see here.
I have the actual document. I will try not to read it, so I don't anger the Gartner gods.
Okay.
To qualify for inclusion in the Strategic Cloud Platform Services quadrant, you must provide public cloud infrastructure-as-a-service and platform-as-a-service offerings that are suitable for supporting mission-critical, large-scale production workloads. You must sell public cloud as standalone services.
You must host it in infrastructure you own or lease.
You must have market traction and momentum, which means that you either have to have at least one public cloud infrastructure or platform-as-a-service offering that's been generally available for more than three years, and this offering must have generated a minimum of $1 billion in revenue. There it is. That'll do it.
Or have at least one public cloud service that has been generally available for less than three years, but this offering must have generated a minimum of $500 million in revenue for calendar year 2024. Again, there it is.
[00:21:38] Speaker C: Yeah.
[00:21:40] Speaker A: You must be able to invoice, offer consolidated billing and negotiate custom contracts. You must maintain sales and support offices on at least three continents, have 24/7 customer support, offer language localization with a minimum of two language options, and offer both free and fee-based cloud adoption assistance. And then, as part of your technical capabilities, they require you to have software-defined compute, storage and networking as part of your PaaS service. You have to have a managed database platform and support elastic, real-time provisioning and scaling of both infrastructure and platform.
How does Oracle live in that?
[00:22:14] Speaker C: Right.
[00:22:14] Speaker A: Offer cloud services facilitating automated infrastructure management, including at minimum monitoring, auto scaling and managed data backup. Offer AI/ML capabilities.
Offer a published SLA for 75% or more of your services, and offer an architecture for service resilience that enables customers to replicate resource configurations between provider zones and regions. Yeah, so this actually now explains some of the things that DigitalOcean's been announcing.
[00:22:36] Speaker C: Yeah, it does, yeah. That's, that's all in that direction. Right. They've been announcing. It does seem like Gartner's defining.
[00:22:42] Speaker A: They announced an AI platform, they announced a managed database platform. You know, I wouldn't be shocked to see they have support offices opening up globally. Yeah, exactly. To support that.
So that makes sense. Okay.
[00:22:52] Speaker C: It does seem like the rules are tailored to just that, you know, crowd that's specifically defined.
[00:22:57] Speaker A: Yeah, I mean, a billion dollars in revenue, or 500 million in revenue if it's less than three years. I mean, that's a pretty high bar. But again, it's a big market, and the market caps for those companies are massive. So it keeps the riff raff out, for sure.
But congratulations. I was definitely surprised about the Google part though. I think that's an AI reflection.
I mean, and I assume they're dinging Microsoft because, again, they don't own OpenAI and they don't have their own solution, but they are still also dominating that space too. So it's silly.
[00:23:24] Speaker C: That's a good point. I hadn't thought about the OpenAI relationship.
Yeah, I bet you're right.
[00:23:29] Speaker A: I mean, I don't. I mean, yes, I guess Nova exists for AWS, but who's using Nova?
Even Kiro doesn't use Nova. It uses Claude.
[00:23:38] Speaker C: Well, right, but I think the. I mean, do you have to have your own model? Do you have to have your own platform? Because it's like, I would just use Bedrock.
[00:23:46] Speaker A: It doesn't. I mean, Matt's not here, but I'm pretty sure he's told me before, Azure has AI Studio-type capabilities where they can do some similar Vertex and SageMaker things, and they can do tuning and they can do training and all those things, just like everybody else. So I don't know. Again, I feel like it's a penalty they're putting on top of Microsoft for no good reason that I can justify in my brain.
[00:24:06] Speaker C: Yeah. Security, availability, ease of use.
[00:24:09] Speaker A: No, I mean, those are fair, but again, I'm not going to read the whole thing, but I'm sure there's more to it. Let's see. What did they ding poor Azure for? Where is it?
Alibaba. Amazon: large public cloud community, cloud-inspired silicon. They gave them a lot of credit for that. This cloud-inspired silicon, you should probably note that one, because that will probably end up being a prediction thing you need for Google and Microsoft. Cautions: lagging in SaaS and nascent low-code capabilities. Okay, they tried low code. It was bad. Minimal multicloud support. Okay, they're still getting dinged for that. It's been years. A marketplace approach for their own AI models. They're getting dinged for the marketplace approach. Weird. Google.
Google: a Google-wide AI strategy, leading AI solution stack, and partner sovereign cloud offerings. And cautions: support and account management consistency and quality. Amen. Legacy application sensitivity, and loose integration between GCP and Google Workspace. Really? They're getting dinged for that stuff. They should.
[00:25:09] Speaker C: No, it's still terrible. I hate it.
[00:25:13] Speaker A: I mean I wish they just. I wish they just divorced them completely to be honest. But.
[00:25:17] Speaker C: And they've. They've made it a lot easier recently. Very recently. I'm still sort of feeling my way through the troubled waters on that.
[00:25:13] Speaker A: So then on Azure, the strengths they gave: AI integration.
[00:25:29] Speaker C: What?
[00:25:30] Speaker A: Yeah, a cross-product strategy with 365, Dynamics, Fabric, Entra, Defender, Copilot, Teams and other offerings. Multicloud support.
High marks on that. Then the cautions were capacity, purchasing and account management, and emergent in-house AI capability. So they are getting dinged because they don't have their own internal. You're right.
[00:25:50] Speaker C: Yeah.
[00:25:50] Speaker A: Stuff there.
[00:25:51] Speaker C: Interesting.
[00:25:52] Speaker A: So there you go. Anyways, Gartner. Yeah, it's an interesting world.
[00:25:57] Speaker C: It's a good barometer. Still don't trust it.
[00:25:59] Speaker A: Yep. But you know, I like the Forrester Wave as well, but the Gartner stuff is interesting. You know, having been more embedded into that ecosystem a little bit, it's amazing all the data they have, and then how much of it I think is wrong. Yeah.
So it's also like, this is really fascinating, you have all this. I also don't agree with any of it, and I've talked to some of the analysts, and I'm like, I see why I don't agree with you, because your opinion of this is completely flawed from my opinion. But you know, I see where you're coming from. I just don't agree.
[00:26:26] Speaker C: Yeah, it just goes back to the how to lie with numbers, right? There's a subjective measure no matter how hard you think the data is.
[00:26:32] Speaker A: Exactly. Well, AWS is releasing an open source Go driver that wraps the pgx PostgreSQL and native MySQL drivers to reduce database failover times from minutes to single-digit seconds for RDS and Aurora clusters. The driver monitors cluster topology and status to identify new writers quickly during failovers, while adding support for federated authentication, AWS Secrets Manager, and IAM authentication. This addresses a common pain point where standard database drivers take 30 to 60 seconds to detect a failover, causing application timeouts and errors during Aurora's automated failover events. Available under the Apache 2 license on GitHub, the driver requires no code changes beyond swapping import statements, making it a drop-in replacement for existing Go applications using PostgreSQL or MySQL. I mean, I like that this is in the driver, to an extent, and then I hate it at the same time, because having it in the driver now means it's a distributed problem: if it's not working, it could not be working in lots of places, especially for things like figuring out topology, and did it detect the topology correctly, and did it identify the writer fast enough, et cetera. I get why it has to be in the driver, but it also feels like I'd rather just have a proxy layer in between.
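Here's a sketch of what the "swap the import statements" claim looks like in practice. The wrapper's module path and registered driver name below are placeholders, so check the project's GitHub README for the exact ones before copying this; the DSN is invented for the example.

```go
package main

import (
	"database/sql"
	"log"

	// Before: _ "github.com/jackc/pgx/v5/stdlib" with sql.Open("pgx", dsn).
	// After: the AWS wrapper registers a failover-aware driver instead.
	_ "github.com/aws/aws-advanced-go-wrapper/pgx" // placeholder import path
)

func main() {
	// The wrapper monitors Aurora cluster topology behind this handle, so a
	// writer failover surfaces as a few seconds of retried queries instead
	// of 30-60 seconds of connection errors while DNS catches up.
	db, err := sql.Open("aws-pgx", // placeholder driver name
		"postgres://user:pass@my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com:5432/app")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var now string
	if err := db.QueryRow("SELECT now()::text").Scan(&now); err != nil {
		log.Fatal(err)
	}
	log.Println("server time:", now)
}
```

The rest of the application code stays on database/sql, which is the whole pitch: the failover awareness, Secrets Manager lookups, and IAM auth all live behind the standard interface.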
[00:27:38] Speaker C: Yeah, I mean, I went on a roller coaster journey with this article, because if I read just the first bullet, I'm like, we've got to cut this story. No one cares. Like, this is. Oh my God.
[00:27:49] Speaker A: Yeah.
[00:27:50] Speaker C: But then it's like, okay, understanding more of the pain points with the failover times and that sort of thing. You have to be at the connection layer. It could be a proxy, but proxies and atomic writes have always been sort of problematic. Yeah. But the federated authentication and Secrets Manager integration, for me, is what kind of seals the deal on this, which is: if you put Secrets Manager at the driver layer, you can do dynamic rotation of secrets, which is awesome.
[00:28:23] Speaker A: And then Amazon is giving us one more instance type this week, with the R8i and the R8i Flex instances with custom Intel Xeon 6 processors delivering 20% better performance and 2.5x memory bandwidth compared to the R7i instances, targeting memory-intensive workloads like SAP HANA, Redis and real-time analytics.
This will get you up to a 96xlarge with 384 vCPUs and 3 TB of memory, double what the previous generation did, achieving a 142,000 SAPS certification, which is the SAP thing I don't understand, which is the highest of comparable cloud and on-premise systems. I think Google just announced a few weeks ago that they had the highest SAPS, so now we're in an arms race over SAPS, which is funny. And the R8i Flex offers you 5% better price performance at 5% lower cost for workloads that don't need sustained CPU usage, reaching full performance 95% of the time while maintaining the same memory bandwidth improvements. Both instance types feature sixth-generation AWS Nitro cards with 2x network and EBS bandwidth, plus configurable bandwidth allocations for optimizing your database performance. Available to you in US East (Virginia and Ohio), US West (Oregon) and Europe (Spain), with specific performance gains being 30% faster for Postgres, 60% faster for Nginx and 40% faster for AI recommendation models.
[00:29:39] Speaker C: I feel like AWS is just trolling us with instance announcements now, because I feel like there's a new one and I don't know the difference anymore. Like, they're just different words. But CPUs and memory, if you're really.
[00:29:52] Speaker A: Tracking the Intel CPU, you know Xeon processor pipelines and you really cared about that level of performance.
It's really cool. If you don't care about that and you just need more access to GPUs for your AI workload, you probably don't care about this.
[00:30:06] Speaker C: I feel like it's an SAP-specific thing.
[00:30:08] Speaker A: Right.
[00:30:08] Speaker C: Like, so they can be certified on SAP and get that, like I.
[00:30:12] Speaker A: You know, I think it's more of.
[00:30:13] Speaker C: An angle and they know that customers, SAP customers have all the money and no other options.
[00:30:19] Speaker A: So there you go. But to take us back to Aurora being 10 years old: back in those days, they used to announce these things at re:Invent.
So it makes me a little nostalgic. Like, I remember when they used to announce new instances at re:Invent.
You know, that's where the C4 and the C5 came from, and, you know, mainstays in my early cloud days. Yeah.
[00:30:37] Speaker C: And then it got pushed to sort of like the Mondays, right? And then now it's.
[00:30:41] Speaker A: And then after that it wasn't even worthy of Monday. Now they just dump it in a press release on the AWS news blog and hope for the best.
This one did get a full blog treatment, so I guess that's nice, because they really wanted that SAPS credit, right?
So they needed a full blog post.
Moving to GCP, they are bringing to GKE clusters increased scalability via multi-subnet support. Your GKE cluster can now use multiple subnets instead of being limited to a single subnet's primary IP range, allowing clusters to scale beyond previous node limits when IP addresses are exhausted. This addresses a common scaling bottleneck where clusters couldn't add new nodes once subnet IPs were depleted. The feature enables on-demand subnet additions to existing clusters without recreation, with GKE automatically selecting subnets for new node pools based on IP availability, which is also going to be fun to troubleshoot.
Available in preview for GKE version 1.33.3-gke.1211000 or greater, with CLI and API support currently available, while Terraform and UI support are coming soon. This puts GKE on par with EKS, which has supported multiple subnets since launch. The key benefit is for enterprises running large-scale workloads that need to grow beyond initial capacity planning, particularly useful for autoscaling scenarios where node counts can vary significantly, and the feature works with existing multi-pod CIDR capabilities for comprehensive IP management. No additional costs are mentioned for the multi-subnet capability itself, though standard networking charges apply for additional subnets created in the VPC.
[00:32:07] Speaker C: This is one of those things where the differences between Amazon and GCP networks come into play, right? Because when you're operating globally and not in these little isolation bubbles, you have to be a lot more structured with your IP layout and your subnet design and all these things, and if you get that wrong, you're just screwed, you know? Or at least you were, up until this. You could either launch a whole new Kubernetes cluster and move your workload over to that, or nothing, really, because I don't think you could even change subnets for a Kubernetes cluster either. So there's very little recourse. So I'm glad to see this. This will be very handy in the day job, and I was literally solving this problem like a week or two ago.
Cool.
[00:33:02] Speaker A: I was like when the feature comes out right when I need it. Yeah, especially right after.
[00:33:06] Speaker C: Like that's my favorite. You know. Like this is not quite a Sherlock.
[00:33:10] Speaker A: Well, my least favorite is when you just spent a year fixing the thing you needed in code and then they release it. That one makes me, I mean, happy, but it's also annoying. So.
[00:33:23] Speaker B: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one or three year AWS savings plan with a commitment as short as 30 days.
If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace.
[00:34:01] Speaker A: Well, Database Center is expanding coverage to now monitor self-managed MySQL, Postgres and SQL Server databases on Compute Engine VMs, extending beyond just managed Google Cloud databases to provide unified fleet management across your entire database estate. The service automatically detects common security vulnerabilities in self-managed databases, including outdated versions, missing audit logs, overly permissive IP ranges, missing root passwords, and unencrypted connections, addressing a significant gap for customers running databases on VMs. New alerting capabilities notify teams when new databases are provisioned or when Database Center detects new issues, while Gemini-powered natural language queries now work at the folder level for better organization-wide database management. Historical comparison features have expanded from seven days to 30 days, enabling better capacity planning and trend analysis across your database fleet, with Database Center remaining free for Google Cloud customers.
Well, I'm glad to have this.
I also like that it can notify me that someone created a new SQL cluster versus me being surprised by the bill.
So that I do appreciate quite a bit.
[00:35:02] Speaker C: Yeah, no, this is a great thing to have as a service. I want it to do a bunch more than what it's doing, and so I'm hoping that they're going to build off of this to be, you know, proper fleet management of databases, in terms of capacity and performance as well as security, and sort of just overall visibility of existence. I think it's great, I really want to see it expand, but, you know, I don't think it's quite a fleet manager for databases yet.
So I would still choose, you know, a managed database service, a Cloud SQL or RDS, before I would do this, but.
[00:35:43] Speaker A: Well, I definitely know there are a lot of things coming in the pipeline for this one. This is an area where they're definitely talking to many customers, so I'm looking forward to seeing what they actually build, but I know it's something where they're very interested in becoming competitive with the equivalent Azure product. Oh cool. Yep.
Google Cloud HSM now integrates with Workspace client-side encryption, or CSE, to provide FIPS 140-2 Level 3 compliant hardware security modules for organizations in highly regulated sectors like government, defense and healthcare that need to maintain complete control over their encryption keys. The service addresses compliance requirements for ITAR, EAR, FedRAMP High and DISA IL5 certifications, while ensuring customer-managed encryption keys never leave the HSM boundary.
This provides a 99.95% uptime SLA and can be deployed in minutes with a flat pricing model, currently available in the US, because that's what all those things I mentioned earlier are for. So this is great. I appreciate this.
[00:36:38] Speaker C: So the fact that it's in Google Workspaces and not in Google Cloud is very confusing to me.
[00:36:47] Speaker A: I get it. But wasn't it already in Google Cloud? Like, I thought it was already there.
[00:36:52] Speaker C: What's the, you know, what's the. So there's definitely been a Cloud HSM. This is specific to hardware, I think.
I just don't quite understand the Workspace addition of this, and so, like.
[00:37:06] Speaker A: Well, it's really giving you that CSE side. So it's actually encrypting on my client for Google, you know. So my Gmail client actually will have a key that is being accessed from this HSM to encrypt the mail at my browser, I assume in JavaScript, before it gets sent. That's what I think.
[00:37:22] Speaker C: Yeah, I think you're right. That makes a lot of sense. That's why it's there.
[00:37:26] Speaker A: Yeah. Okay. So I mean, again, I think in Google Cloud they already had the hardware level, they already support all these things. But if you're a government agency wanting to use Google Workspace and you wanted to support those things, you need to have client-side encryption, and the only way to do that before was to basically use Outlook or other third-party clients that would support CSE at some level, and then how do you manage the key distribution for all of that, et cetera. And this is what this solves.
[00:37:51] Speaker C: And if you're using the G suite, you know, Google Docs stuff. None of that's going to be available either, so.
[00:37:57] Speaker A: Correct.
[00:37:57] Speaker C: So this makes, this makes a lot more sense now. Yes. So okay, this really is just workspace services and that kind of thing. So.
[00:38:05] Speaker A: Cool.
[00:38:06] Speaker C: All right, fine.
[00:38:09] Speaker A: Google Cloud has announced comprehensive AI security capabilities at their Security Summit today, introducing agent-specific protections for Agentspace and Agent Builder, including automated discovery, real-time threat detection and Model Armor integration to prevent prompt injection and data leakage. The new Alert Investigation Agent in Google Security Operations autonomously enriches security events and builds process trees based on Mandiant analyst practices, reducing manual effort in SOC operations while providing verdict recommendations for human intervention. As long as you agree with Mandiant's analyst practices, you're good here. If you don't agree with their practices, this is not so great for you.
Security Command Center gains three preview features, including Compliance Manager for unified policy enforcement, Data Security Posture Management with native BigQuery integration, and risk reports powered by virtual red team technology to identify cloud defense gaps. Agentic IAM is coming later this year, auto-provisioning agent identities across cloud environments with support for multiple credential types and authorization policies, addressing the growing need for AI-specific identity management as organizations deploy more autonomous agents. And Mandiant Consulting is expanding their services to include AI governance frameworks, pre-deployment hardening guidance and AI threat modeling, recognizing that organizations need specialized expertise to secure their generative and agentic AI deployments.
[00:39:18] Speaker C: Yeah, I mean a lot of good features in this.
I've been waiting for these announcements, you know, talking to account teams and product teams within Google for a while. So I'm really happy to see these, and there's a whole bunch I didn't know about that they announced today, after attending the summit and reading the press releases, that I'm super excited about. You know, there's a lot of automated discovery of AI workloads inside your cloud, so you don't have to necessarily know and be aware of all the things that are going on inside of a cloud platform, which, if you're working in security, you almost always find out the wrong way around.
And so this is great.
There's a lot of curated detectors so that you can, you know, analyze and run playbooks within SecOps based off of security events that you're seeing.
And then reading through the agentic IAM, I was trying to understand what it meant, because I didn't really gather it from the press release. Like I said, a whole bunch of words I understood, but I didn't really get it. But it really goes to the fact that, because you're using an AI agent, something that can change what it's doing dynamically based off of what you're requesting of it, how do you develop a static, you know, permission policy for something like that?
[00:40:37] Speaker A: Right.
[00:40:37] Speaker C: Without just constantly running into permissions errors.
And so this is sort of changing that method to how I feel all user authentication should go, including for human users, which is not defining a specific list of permissions, but defining a list of boundaries, sort of, and then having a much tighter authentication workflow with, you know, verifiable identity through certificates or cryptographic means.
So I'm pretty stoked about these. Um, and you know, I was playing around today immediately because I was excited and that was procrastinating work I was supposed to be doing.
Don't tell anyone. No one at work listens, right?
[00:41:18] Speaker A: No, no, never. I don't know what I'm talking about.
Well, you can now right-size LLM serving on vLLM for GPUs and TPUs. I don't know what any of that just meant, but hopefully the article helps us. Google publishes a comprehensive guide for optimizing LLM serving on vLLM across GPUs and TPUs, providing a systematic approach to selecting the right accelerator based on workload requirements like model size, request rate and latency constraints. The guide demonstrates that a TPU v6e (Trillium) achieves 35% higher throughput compared to the H100 GPU when serving Gemma 3 27B, resulting in 20% lower costs to handle 100 requests per second. Key technical considerations include calculating minimum VRAM requirements, determining tensor parallelism needs, and using the auto-tune shell script to find the optimal GPU memory utilization and batch configuration.
And the approach addresses a critical gap in LLM deployment, where teams often over-provision expensive hardware without systematic benchmarking, potentially saving significant costs for production workloads. Google's support for both GPU and TPU options in vLLM provides flexibility for different use cases, with TPUs showing particular strength for models requiring tensor parallelism due to memory constraints.
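As a back-of-envelope illustration of the guide's "minimum VRAM" step (the numbers and the 30% headroom factor here are assumptions for the example, not figures from the guide):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	const (
		params        = 27e9 // e.g. a 27B-parameter model like Gemma 3 27B
		bytesPerParam = 2.0  // bf16/fp16 weights; ~1.0 for int8, ~0.5 for int4
		overhead      = 1.3  // assumed ~30% headroom for KV cache and activations
		gib           = 1 << 30
	)

	// Weights alone: parameters times bytes per parameter.
	weightsGiB := params * bytesPerParam / gib
	totalGiB := weightsGiB * overhead
	fmt.Printf("weights: %.0f GiB, with headroom: %.0f GiB\n", weightsGiB, totalGiB)

	// Tensor parallelism: how many 80 GiB accelerators it takes just to
	// hold the model, before any throughput tuning.
	const perDeviceGiB = 80.0
	devices := int(math.Ceil(totalGiB / perDeviceGiB))
	fmt.Printf("needs ~%d x 80 GiB device(s) with tensor parallelism\n", devices)
}
```

That rough sizing is the starting point; the guide's benchmarking and auto-tuning then pick batch sizes and memory utilization, which is the part teams skip when they over-provision.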
[00:42:27] Speaker C: I read this article, and it said a whole lot of stuff that I thought I understood, and then the more I read, I realized I didn't understand it. But it did really firm up the difference between using, you know, an AI managed service versus running my models on hardware that I'm provisioning in the cloud. I'm like, oh, I'm just going to pay the premium on the service, thank you. And then I don't have to worry.
[00:42:50] Speaker A: About any of these things.
I mean, vLLM is really for running the open source models, right? That's at least my understanding of it. And what's actually interesting, one of the things I was reading about the other day, is that open source models are actually less efficient because they don't have the heavy caching benefits of the managed services that you get. So it's kind of crazy. I did ask Claude, to make sure that I was right on this, to explain to me like I'm five what vLLM is.
And it says: vLLM is like a super smart helper that makes AI language models, like me, Claude, work faster when lots of people want to talk to them at the same time. Imagine you have a really smart robot friend who can answer questions. If a hundred kids all want to ask the robot questions at once, normally the robot would have to answer one kid and then the next kid and then the next kid, which would take forever.
vLLM is like giving the robot a special power. Now it can listen to many kids' questions at the same time and answer them all together in groups. It's like when your teacher reads a story to the whole class rather than reading it to each kid one by one. The special trick vLLM uses is called PagedAttention. Think of it like organizing all the questions on different pages of a notebook very neatly, so the robot can flip through them super quickly and not waste any time looking for things. So then combine that, I guess, with the TPU, and figure out which of these is most cost-efficient for the use case that you're trying to do. And that's where it's kind of like picking auto scaling or load balancing to match the appropriate model for your need.
[00:44:05] Speaker C: Is it auto scaling or is it.
[00:44:07] Speaker A: More of like not auto scaling? Sorry, it's really more like load balancing. Load balancing based on load and optimization.
[00:44:11] Speaker C: Yeah, yeah.
[00:44:12] Speaker A: And optimizing for that.
[00:44:14] Speaker C: Cool.
[00:44:16] Speaker A: And now we enter Azure territory, without Matt. Matt gave us some of these articles, and this first one is definitely a Matt story that only Matt.
[00:44:26] Speaker C: Oh no, I know this one.
[00:44:28] Speaker A: Oh, you're excited about this one too. Okay.
Microsoft is launching the Terraform MSGraph provider in public preview, enabling day-zero support for all Microsoft Graph APIs, including Entra ID and Microsoft 365 services like SharePoint, through standard HCL syntax. You lost me at Entra. And then you doubled down with SharePoint. This positions MSGraph as the Entra ID equivalent of what AzAPI is to AzureRM, providing immediate access to new features without waiting for provider updates. Okay, I get that. The new Microsoft Terraform VS Code extension consolidates AzureRM, AzAPI and MSGraph support into a single tool, replacing the separate Azure Terraform and AzAPI extensions. Key features include exporting existing Azure resources as Terraform code, intelligent code completion, and automatic conversion of ARM templates to AzAPI format. This release targets organizations managing Microsoft 365 and Entra ID resources alongside traditional Azure infrastructure, addressing a gap: where AWS has separate providers for different services, Microsoft now offers unified tooling, and the MSGraph provider extends beyond the limited AzureAD provider to support all beta and v1 Graph endpoints.
[00:45:31] Speaker C: So I understand why you hate this, because you hate all the services that are behind the Graph API. But there's a single API endpoint: if you want to do anything in Teams, it's the same API endpoint; if you want to query Entra ID for membership or a list of groups, it's a single Graph API endpoint for anything in the docs or the mail space and a bunch of other things. It's all just the same API, and because it's a single API, the structure can get real weird real fast, and I think that's really prevented something like a Terraform provider; for this to exist would be just unbearable to support.
So this is kind of neat, and I'm hoping it makes things easier. I would love to do things like auto-provision Azure elements in my cloud based off of the list of members in Entra, or be able to provision teams or bots or things in Teams rooms based off of lookups and that. So this is pretty cool. There's a lot of things this will unlock, but yeah, I don't like the Graph API at all. It's awful to use.
[00:46:45] Speaker A: Well good, I'm glad to hear it.
[00:46:47] Speaker C: Do not like. All right.
[00:46:49] Speaker A: Agent Factory is being introduced by Microsoft as a six-part blog series showcasing five core patterns for building agentic AI that moves beyond simple Q&A to executing complex enterprise workflows through tool use, reflection, planning, multi-agent collaboration and real-time reasoning.
Azure AI Foundry serves as a unified platform for agentic AI development, offering local-to-cloud deployment, 1,400-plus enterprise connectors, support for Azure OpenAI and 10,000-plus open source models, and built-in security with managed Entra agent IDs and RBAC controls.
So this is basically providing you a bunch of documentation of preferred, not legacy but opinionated, architectures for how to use Agent Factory to solve all of your AI needs, which I appreciate. I mean, I'm not going to use Agent Factory to do it, but I do, you know, like that these are available to us as six blogs that kind of give you different models and ways to think about these particular problems, which I like.
[00:47:48] Speaker C: Yeah, no, I mean I, it's funny because I, I feel like I'm always sort of playing catch up with AI and so it's, I think that's kind of unique from at least my career where I was sort of coming at it from in front. So you know, like common use cases and design patterns for deploying, you know, compute at scale. I was like ho hum, blah blah, blah. But now with these sort of like how do you, how do you configure multiple agent workflows to do business level processes? I'm like yes, please give me all the data. I have no idea how I'm going to do that.
[00:48:18] Speaker A: So this is. I love these things.
Microsoft is unifying OneLake's capacity pricing by reducing proxy transaction rates to match redirect rates, eliminating cost differences based on access method and simplifying capacity planning for Fabric customers. OneLake serves as the centralized data storage foundation for all Microsoft Fabric workloads, including lakehouses and warehouses, with storage billed pay-as-you-go per gigabyte, similar to Azure Data Lake Storage and Amazon S3. The pricing alignment removes architectural complexity for organizations using OneLake with third-party tools like Azure Databricks or Snowflake, as all access paths now consume Fabric capacity units at the same low rate.
Low is subjective.
The change reflects Microsoft's strategy to make OneLake an open, vendor-neutral data platform that can serve as a single source of truth regardless of which analytics tools organizations choose to use later.
[00:49:03] Speaker C: I mean, I guess it's better because it's simpler, but I mean it's cheaper.
[00:49:07] Speaker A: Yeah, I like cheap.
[00:49:08] Speaker C: I don't know if it's actually. Is it cheaper? Because they're matching the price of these two transaction rates, but I don't know. I mean.
[00:49:16] Speaker A: They say they're increasing to the higher.
[00:49:18] Speaker C: Amount. So yeah, I mean, it seems unlikely. I do, you know, imagine that sorting out these types of transactions is messy, when you're talking about redirecting to different capacity, presumably, or a different part of the OneLake. But yeah, what a mess.
[00:49:38] Speaker A: Azure Linux with OS Guard is Microsoft's new hardened container host OS that enforces immutability, code integrity and mandatory access control: essentially a locked-down version of Azure Linux designed specifically for high-security container workloads on AKS. The system uses Integrity Policy Enforcement (IPE), recently upstreamed into the Linux kernel in 6.12, to ensure only trusted binaries from dm-verity protected volumes can execute, including container layers. This prevents rootkits, container escapes and unauthorized code execution. Built on FedRAMP-certified Azure Linux 3.0, it inherits FIPS 140-3 cryptographic modules and will gain post-quantum cryptography support as NIST algorithms become available, positioning it for regulated workloads and future security requirements. Available soon as an AKS OS SKU in preview via CLI with a feature flag, and customers can test the community edition now on Azure VMs, targeting enterprises needing stronger container security without sacrificing the operational benefits of managed Kubernetes.
[00:50:34] Speaker C: Yeah, this is interesting, because, you know, this, according to the blog post, takes a sort of different approach than what we've seen in the past with, like, CoreOS and Bottlerocket and stuff, where they're sort of trying to reduce what's in that image so much that you can't have anything vulnerable that can be exploited in it. And this uses a lot more of, like, almost the protected-VM approach, where it uses the sort of encrypted memory objects. And so this is sort of a new take on securing container workloads at the compute level.
I have a lot more questions, you know, that I want to read up about, but I thought it was pretty cool that they didn't just take the old tried and true approach of making as small an OS image as possible and then just, you know, going with it from there. They're actually looking at it holistically and taking a new approach.
It'd be kind of neat. Maybe that way, whenever I'm debugging a containerized workload, I don't have to do these weird things where I'm downloading all these network tools into a sidecar container and then mounting all the system resources to do inspection on it. Maybe, maybe I can actually have.
[00:51:50] Speaker A: Maybe, maybe it could be a real.
[00:51:52] Speaker C: OS that I can use and still be safe.
[00:51:57] Speaker A: Well, that's possible.
I don't know.
[00:52:01] Speaker C: Yeah, we'll see.
[00:52:03] Speaker A: Well, I feel bad for Microsoft, because I talked about them falling down on the cloud positioning to Google, and then I saw they have a new Magic Quadrant for container management, and this one had them above Amazon. But Google still beat them. Okay, so it was Google, then Microsoft, then Amazon, then Red Hat, and then a whole bunch of other people on this particular quadrant. But yeah, being named a leader isn't anything to be sad about. This is their third consecutive year in the leader quadrant, highlighting their comprehensive container portfolio. We just talked about their amazing new container security. AKS Automatic, in preview, aims to simplify Kubernetes adoption by providing production ready clusters with automated node provisioning, scaling and CI/CD integration. And the platform is integrating AI workload support through GPU optimized containers in AKS and serverless GPUs in Container Apps, while Azure Kubernetes Fleet Manager addresses enterprise scale challenges by enabling policy driven governance. So congratulations, Azure, for being named number two in the Gartner quadrant, behind Google.
[00:53:03] Speaker C: And if only the Gartner analysts understood what ECS actually was and how much better it is than Kubernetes.
[00:53:09] Speaker A: All of my stuff still runs on ECS.
Well, Oracle woke up and realized that they, you know, needed more AI models, and so they've announced a partnership with Google Cloud to bring the Gemini 1.5 Pro and Gemini 1.5 Flash models to OCI, marking Oracle's first major third party LLM partnership behind Cohere. Now, of course, they didn't give them any of the current Gemini models, so, you know, take that as you will; I thought it was sort of ironic. This positions Oracle as a multi-model cloud provider similar to AWS Bedrock and Azure OpenAI Service. Does it, though? It's arriving later to market with a more limited selection compared to competitors' broader model portfolios. The integration targets Oracle's existing enterprise customers who want to use Google's models (who wants to do that?) while keeping data within OCI security boundaries, particularly appealing to regulated industries already invested in Oracle's ecosystem. Models will be available through OCI standard APIs with Oracle's built-in security features, and the real test will be whether Oracle can attract new AI workloads or simply provide convenience for existing Oracle shops that would have used Google directly anyway.
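If Gemini does surface through OCI's standard Generative AI inference API, a call might look roughly like the sketch below. Treat everything here as an assumption: the oci Python SDK's GenerativeAiInferenceClient and generic chat request shapes are as we understand them, but the Gemini model identifier, the compartment OCID, and whether this partnership uses this exact API are placeholders and guesses on our part:

```python
# Sketch: chatting with a hosted model through OCI's Generative AI inference
# API via the oci Python SDK. The model ID and compartment OCID are
# PLACEHOLDERS; whether Gemini is exposed through this exact service is an
# assumption, so verify against Oracle's docs.
import oci
from oci.generative_ai_inference import GenerativeAiInferenceClient
from oci.generative_ai_inference.models import (
    ChatDetails, GenericChatRequest, OnDemandServingMode, TextContent, UserMessage,
)

config = oci.config.from_file()  # reads credentials from ~/.oci/config
client = GenerativeAiInferenceClient(config)

response = client.chat(
    ChatDetails(
        compartment_id="ocid1.compartment.oc1..example",            # placeholder
        serving_mode=OnDemandServingMode(model_id="gemini-1.5-flash"),  # placeholder ID
        chat_request=GenericChatRequest(
            messages=[UserMessage(
                content=[TextContent(text="Summarize this quarter's sales notes.")]
            )],
            max_tokens=256,
        ),
    )
)
print(response.data.chat_response)
```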
[00:54:16] Speaker C: What a weird thing. Like, I mean, I don't know what the behind-the-scenes licensing of these, you know, sort of closed source models is, or how difficult that is. But to offer the previous generation ones when 2.5 has been out for a while now.
It's a little strange.
Maybe Google wanted too much money.
[00:54:38] Speaker A: I don't know, I don't know.
Maybe it's also one of those things where it started months ago and Oracle just finally shipped it, and they're like, no, we've got 1.5; now we need to get the new ones?
[00:54:49] Speaker C: Yeah, maybe.
[00:54:49] Speaker A: I don't know. That'd be interesting.
Very interesting. Moving on to other clouds, getting into the AI business is probably a good thing for DigitalOcean, who I thought we'd take a little quick side trip on. They announced their second quarter earnings of 219 million in revenue, 14% year over year growth, beating analyst expectations and driving a 29% stock surge. The company's focus on higher-spending customers, those spending roughly $500 plus monthly, showed 35% year over year growth, and that cohort now represents nearly 25% of their total revenue. And this is before they launched Gradient AI just a few weeks ago, so I expect to see this number continue to climb. And maybe they'll be able to make it to the Gartner Magic Quadrant, because their full year guidance is 888 million to 892 million in revenue for the year, which is almost a billion dollars. So maybe they'll get there with that AI piece.
[00:55:39] Speaker C: Yeah, that's. I mean, that's great.
[00:55:43] Speaker A: I. I'm actually, you know, I. Earlier if you'd asked me, how much money do they make? I said they're probably like 400 million to 500 million.
[00:55:49] Speaker C: So I would have guessed 500. Yeah, for sure.
[00:55:51] Speaker A: Yeah. So they're making double. Almost double what I thought.
[00:55:54] Speaker C: Yeah. So that's. That's awesome. Good for them. Like a luxury firm.
[00:55:58] Speaker A: Yep, training for the underdog as always.
[00:56:00] Speaker C: And now I know much more about the Gartner, you know, quadrant selection stuff. It makes a lot of sense. Like you said, these latest announcements by DigitalOcean are very much in the direction of that strategic platform quadrant. So, cool.
[00:56:15] Speaker A: Databricks has entered the Jurassic Park territory. Or the "you didn't stop to think if you should, but you did" territory.
Anyways, they're announcing SQL stored procedure support following the ANSI SQL/PSM standard, enabling users to encapsulate repetitive SQL logic for data cleaning, ETL workflows and business rules while maintaining Unity Catalog governance. This addresses a key gap for enterprises migrating traditional data warehouses who rely heavily on stored procedures, which makes me question whether you should be using a data warehouse.
The feature supports parameter modes IN, OUT and INOUT, plus nested and recursive calls, and integrates with SQL scripting capabilities including control flow, variables and dynamic SQL execution. Unlike functions that return values, procedures execute sequences of statements, making them ideal for complex workflows. Early adopters like Qlik Technologies report improved performance, scalability and reduced deployment time for critical workloads like customer segmentation, and the ability to migrate existing procedures without rewriting code significantly simplifies transitions from legacy systems. Now, I picked this one to make fun of it, because I skipped. Well, I think we covered it briefly when Snowflake announced basically the same thing. But Snowflake also got stored procedures this year. So now they both have it. Yep.
And so now your NoSQL data warehouse becomes SQL with stored procedures. So you're welcome.
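For a flavor of what this looks like end to end, here's a sketch using the databricks-sql-connector from Python. The hostname, HTTP path, token, and the catalog/schema/table names are placeholders, and the procedure body follows the general ANSI SQL/PSM style the announcement describes rather than verified Databricks grammar, so check their docs for the exact syntax:

```python
# Sketch: creating and calling a SQL stored procedure on Databricks using the
# databricks-sql-connector. Connection details and object names are placeholders.
from databricks import sql

CREATE_PROC = """
CREATE OR REPLACE PROCEDURE main.etl.clean_orders(IN run_date DATE)
LANGUAGE SQL
AS BEGIN
  -- One repetitive cleanup rule the announcement suggests encapsulating:
  -- null out obviously bad totals for the given day.
  UPDATE main.etl.orders
  SET total = NULL
  WHERE order_date = run_date AND total < 0;
END
"""

with sql.connect(
    server_hostname="dbc-example.cloud.databricks.com",  # placeholder
    http_path="/sql/1.0/warehouses/abc123",              # placeholder
    access_token="dapi-REDACTED",                        # placeholder
) as connection:
    with connection.cursor() as cursor:
        cursor.execute(CREATE_PROC)                                     # define once
        cursor.execute("CALL main.etl.clean_orders(DATE'2025-08-19')")  # reuse anywhere
```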
[00:57:26] Speaker C: I get it, you know, database people are going to do data things. But yeah, it's so funny, because I'm sort of religiously against stored procedures, and the fact that you have to execute changes and ETL things in a specific sequence, and it really relies on the data being in a specific format and nothing, nothing can be different about it. And so in, like, a queue based system, where you're going to read something and do some sort of transformation, with stored procedures you're just going to end up with a poison pill or, like, an infinite scaling problem. Like, why would you do it this way?
I don't know, maybe that's, you know, just where I came into the industry, and, you know, it's functions for everything.
Yeah.
[00:58:15] Speaker A: Who doesn't love a good function? Or a good stored procedure?
[00:58:19] Speaker C: To write your unit test, you just got to strictly define the inputs to your stored procedure, exactly what it's going to do and what the outputs are. And so it's just double the amount of work, if not more to test your stored procedures. Cool. Awesome.
[00:58:35] Speaker A: Well, Google had a great cloud journey for us this week.
How Google does it. Your Guide to Platform Engineering by Leah Rivers and James Brabink. Basically, it's a walkthrough of how to think about platform engineering in your organization and you can use some of their tips and tricks. And so I thought we'd talk about it today.
Basically, they're introducing their shift down strategy for platform engineering, moving responsibilities from developers into the underlying platform infrastructure, rather than the traditional DevOps shift left approach that pushes work earlier in the development cycle. The approach categorizes ecosystems into types 0 through 4 based on how much control and quality assurance the platform provides, from flexible YOLO environments to highly controlled, assured systems where the platform handles security and reliability. I have to check. Was YOLO actually in this article, or is that a Claude-ism?
[00:59:19] Speaker C: Please, please be a YOLO environment because I am totally stealing that.
[00:59:25] Speaker A: It is, it is. YOLO is mentioned here.
"Conversely, in less uniform YOLO, ad hoc, or guided (type 0 to 2) ecosystems, developers have more responsibility." That's hilarious. That "you only live once" is one of the, one of the options in.
[00:59:36] Speaker C: The Google thing. That is fantastic.
[00:59:38] Speaker A: The key technical implementation relies on proper abstractions and coupling design to embed quality attributes like security and performance directly into the platform, reducing the operational burden on individual developers. Organizations should work backwards from their business model to determine the right platform type, balancing developer flexibility against risk tolerance and quality requirements for different applications. This represents a shift in thinking about platform engineering. Instead of a one size fits all approach, Google advocates for intentionally choosing different platform types based on specific business needs and accepted risk levels. So yeah, in general, I'm a big platform fan. You know, I'm a big DevOps fan as well, for shift left. But then, you know, you get to a point where shift left doesn't always work, and you lose some of the benefits of a platform. And so that's where the platform idea and shifting down really start to play quite nicely.
And so yeah, this is a pretty good initial overview of how to get into it. There's a lot more nuance and detail than this article shares with us.
And they do say that you can watch for a full discussion.
You know, there's a 56 minute video on YouTube where they talk more about this, but it's a good starting place. What do you think, Ryan?
[01:00:38] Speaker C: I. I hate it. I mean, other than, you know, the YOLO environment, which I now love and will adopt. I might get a tattoo. I really like that. I was trying to figure out, reading through this blog, whether or not they're sort of advocating for a lot of the same things that I advocate, just using different words.
But it's really hard to tell, because they gloss over so much of the detail in here. It's too high level for me to really see if there's any meat to this. They point out the difficulties with DevOps. They point out things that are facts that we can all agree on.
But how to work backward from the business model, that's a hard question for people to define.
[01:01:21] Speaker A: How do you, I mean my thought when I was reading that was like what does that even mean exactly?
[01:01:26] Speaker C: Like, right?
[01:01:27] Speaker A: Like, my business sells widgets to people. How do I back that into the process?
[01:01:32] Speaker C: And I was trying to think, from a developer platform point of view, are you talking about, you know, like a containerized application? Are you talking about having a service catalog sort of thing? Like, what does that even look like? And so that's, that's sort of my problem with this blog post. It has a lot of that sort of high level.
[01:01:51] Speaker A: I mean, they're really pitching you to go watch the YouTube video from PlatformCon. I mean, that's really the point of this article, to get you to watch the video.
[01:01:58] Speaker C: But I have no faith that that's going to be defined in that video either, because, yeah, the problem with these things is that you can't really talk about them generally in a meaningful way without context. You know, they use GKE a couple times as an example of that platform of shifting down onto the platform, and I kind of get it, but, you know, it takes a developer platform to use the other developer platform then, because it's not real easy to do a full app development life cycle across Kubernetes, right? And it doesn't cover any of the SDLC of, like, the code aspects of these things, just the running of it. So it's sort of like, I think they're onto something that's kind of good, and something I do agree with, in the sense that building developer platforms that offer guardrails and easy optimizations and sort of global paved roads is a great thing that we need to continue to do. And pushing down is the only way we're going to achieve sort of DevOps mindsets and SRE mindsets and, you know, ownership of these services, and also be able to scale the way we need without just forcing developers to learn 17, you know, programming languages and infrastructure HCLs and different technologies. But also, sort of, I think they need to have more meat in this blog post, because I don't like.
[01:03:31] Speaker A: Well, I did admit that I was supposed to watch this video before we talked about this today. So I'm gonna watch the video and I'll come back. Maybe, maybe we'll rehash this again next week with Matt here as well. Yeah, we'll take homework to watch it.
[01:03:43] Speaker C: I guess I'll come back. If you're gonna take homework, then I'll watch it as well. Because I really do want to now. And especially now that, you know.
[01:03:50] Speaker A: No, no, but YOLO. I want to know more.
[01:03:52] Speaker C: Yeah, because that was really the one where I was like, this seems interesting to me. You know, sort of these boundaries of the ecosystem, as they call it, and, you know, I was like, I don't know what those are, but it sounds great, kind of things. And so maybe it's what I've been talking about, you know, in terms of having automated rollouts and lots of testing in the pipelines, and things that I think allow DevOps practices to scale. So maybe it's the same, but I don't know. Let's see.
[01:04:19] Speaker A: All right, Ryan. Well, I think we got it done. I think we covered everything for this week.
It was a long. It wasn't too long, actually.
Well, it's short.
[01:04:29] Speaker C: We will talk to you soon, but that's not good.
[01:04:31] Speaker A: Yeah, well, we will talk to you next week here in the Cloud. Matt will be back from his very well earned vacation and we will see what the cloud providers have for us next week. Thanks, all.
[01:04:44] Speaker C: All right, bye, everybody.
[01:04:49] Speaker B: And that's all for this week in Cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services.
While you're at it, head over to our website at thecloudpod.net, where you can subscribe to our newsletter, join our Slack community, send us your feedback, and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.