323: Databricks One: Because Seven Eight Nine

Episode 323 October 09, 2025 01:22:16

Hosted By

Jonathan Baker Justin Brodley Matthew Kohn Ryan Lucas

Show Notes

Welcome to episode 323 of The Cloud Pod, where the forecast is always cloudy! Justin, Matt and Ryan are in the studio tonight to bring you all the latest in cloud and AI news! This week we have a close call from Entra, some DeepSeek news, Firestore, and even an acquisition! Make sure to stay tuned for the aftershow – and Matt obviously falling asleep on the job. Let’s get started! 

Titles we almost went with this week

AI Is Going Great – Or How ML Makes Money 

00:58 Google and Kaggle launch AI Agents Intensive course

Cloud Tools 

03:21 Atlassian acquires DX, a developer productivity platform, for $1B

04:30 Justin – “I use DX, I actually really like DX, so I’m hoping Atlassian doesn’t F it up.”

AWS

06:51 Qwen models are now available in Amazon Bedrock | AWS News Blog

07:22 DeepSeek-V3.1 model now available in Amazon Bedrock | AWS News Blog

08:00 Justin – “I’m still skeptical about DeepSeek; because it sounded like it was derivative of ChatGPT, so I don’t really know what you’re getting out of it, other than it’s something cheaper.”  

08:34 Amazon RDS for MySQL announces Innovation Release 9.4 in Amazon RDS Database Preview Environment

09:45 Ryan – “My experience with database upgrades is the opposite. No matter how much preview is offered in time and enticement, you’ll still have to kick everyone off the older version kicking and screaming.”

11:50 AWS Organizations supports full IAM policy language for service control policies (SCPs)

12:43 Ryan – “They actually had the stones to say zero friction and SCP in the same article, huh?” 
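As a rough illustration of what the expanded SCP language allows – per-resource ARNs, `Condition` blocks, and wildcards in the middle of action strings – here is a sketch of such a policy built in Python. The bucket name, Sid, and IP range are all hypothetical, not from the episode or AWS docs:

```python
import json

# Illustrative SCP (hypothetical names): deny destructive S3 actions on one
# specific bucket unless the request comes from an allow-listed IP range.
# Per-resource ARNs, Condition blocks, and mid-string action wildcards are
# the capabilities the full IAM policy language now brings to SCPs.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectAuditBucket",
            "Effect": "Deny",
            "Action": ["s3:Delete*", "s3:Put*Policy"],        # wildcard mid-action
            "Resource": "arn:aws:s3:::example-audit-logs/*",  # hypothetical bucket
            "Condition": {
                "NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}
            },
        }
    ],
}

# Serialize the document as you would before attaching it with
# `aws organizations create-policy --type SERVICE_CONTROL_POLICY ...`
print(json.dumps(scp, indent=2))
```

Since SCPs now share the IAM policy grammar, a document like this can also be checked with IAM Access Analyzer policy validation before it is attached.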

14:11 Amazon Q Developer CLI announces support for remote MCP servers

15:18 Justin – “I think having it centralized is ideal, especially from a security and access control perspective. It’s a bit of a problem when these MCPs are running on everyone’s laptops – because that means they may not be consistent, they may not all follow all the same permissions models you need them to, or different access rights…so there’s lots of reasons why you’d like to have a remote MCP.”
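The announcement describes configuring remote servers via an HTTP transport type, an authentication URL, and optional headers in an MCP JSON file. A minimal sketch of what such an entry might look like follows – the server name, endpoint URL, and exact key names are assumptions modeled on common MCP configs, so check the Q Developer CLI docs for the authoritative schema:

```python
import json
from pathlib import Path

# Hypothetical remote MCP server entry (illustrative schema, not the
# official one): an HTTP transport pointing at a hosted MCP endpoint,
# with an auth header supplied via an environment-style placeholder.
config = {
    "mcpServers": {
        "github-remote": {                      # hypothetical server name
            "type": "http",                     # remote transport, not a local process
            "url": "https://example.com/mcp",   # hypothetical endpoint
            "headers": {"Authorization": "Bearer ${MCP_TOKEN}"},
        }
    }
}

# Write it where an MCP JSON config would live (path is illustrative).
path = Path("mcp.json")
path.write_text(json.dumps(config, indent=2))
print(path.read_text())
```

The appeal of this shape is that the entry can be checked into a shared repo, so every developer's CLI talks to the same centrally managed server instead of each laptop running its own copy.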

15:54 Accelerate AI agent development with the Nova Act IDE extension

17:39 Ryan – “I get why this is more than just a model, right? This is a specific workflow for development, and there’s clearly extensions and features in here that are above and beyond what’s in Kiro and Q, presumably, but they’d have to be really good.”

GCP

18:07 New GCE and GKE dashboards strengthen security posture

18:58 Ryan – “I got to play around with this and it’s really cool. I love getting that security information front and center for developers and the people actually using the platform. You know, as a security professional, we have all this information that’s devoid of context, and, if you’re lucky, you know enough to build a detection and be able to query a workflow. It’s going to just fire off a ticket that no one’s going to look at. And so this is, I think, putting it right in the console. I think that some people – not everyone – will take the initiative and be like, this is very red. I should make it not so red.”

20:53 Firestore support and custom tools in MCP Toolbox

21:46 Ryan – “As someone who never wants to write SQL queries ever again, I love these types of things. This is exactly how I want to interact with a database.” 

23:17 How are developers using AI? Inside Google’s 2025 DORA report

Azure

31:25 Microsoft’s Entra ID vulnerabilities could have been catastrophic

32:52 Matt – “We had a problem. We fixed the problem. Buy more stuff from us so you don’t have any problems in the future.”

36:56 Inside the world’s most powerful AI datacenter – The Official Microsoft Blog

40:17 Introducing new update policy for Azure SQL Managed Instance | Microsoft Community Hub

41:54 Matt – “This is different, because Azure is complicated – because Azure. You have Azure SQL, which is RDS, it’s fully managed. You have Azure Managed Instances, or Azure SQL managed instances, which is SQL on a server. You have access to the server, but they give you extra visibility and everything else into the SQL on that box, and can do the upgrades and stuff.”

43:19 Fast, Secure Kubernetes with AKS Automatic | Microsoft Azure Blog

44:38 Ryan – “Yeah, in my day job I’m doing a whole bunch of vulnerability reporting on the container structure. I’m like, half of these containers are just the Kubernetes infrastructure! It’s crazy.”

45:19 Generally Available: AKS Automatic 

PLUS

AKS Automatic with Azure Linux | Microsoft Community Hub

45:45  Public Preview: Databricks One in Azure Databricks 

46:42 Justin – “So if you didn’t want this, you are going to get it forced on you at some point.”

47:15 Public Preview: Azure HBv5-series VMs

49:56 Public Preview: Azure Functions .NET 10 support 

51:00 Ryan – “I’m just happy to see .NET running in serverless workloads.” 

Show note editor Heather adds “This is a NO time of the day research thing.” 

53:30 Generally Available: High Scale mode for Azure Monitor – Container Insights 

54:17 Matt – “The same thing as CloudWatch, it’s so expensive to take logs into any of these platforms, but you gotta get them somewhere. So you kind of just are stuck paying for it.”

54:49 Generally Available: Confidential computing for Azure Database for PostgreSQL flexible server 

57:11 Announcing the Azure Database Migration Service Hub Experience | Microsoft Community Hub

57:57 Ryan – “It’s a great play by Azure. They have a huge advantage in this space and I think there is a desire by a lot of companies to get out of legacy deployments, so it’s smart. Hurry up with the features.”

58:19 Public Preview: Azure Managed Service for Prometheus now includes native Grafana dashboards within the Azure portal

58:54 Justin – “I look forward to the arguments between ‘well the Azure monitoring says this, but the Grafana monitoring says this’ and it’s in the same dashboard.” 

1:00:01 Generally Available: At-cost data transfer between Azure and an external endpoint

1:01:11 Generally Available: Introducing the new Network Security Hub experience

1:01:51 Matt – “From my preliminary research, it’s just a nice GUI update that they’ve done to kind of make it be a little bit cleaner. It looks like it’s easier to manage some of these things just with Terraform across the way, but, you know, they’re trying to make this be better for companies at a larger scale.”

1:02:32 Fabric September 2025 Feature Summary | Microsoft Fabric Blog | Microsoft Fabric

1:03:30 Justin – “I appreciate all this Fabric stuff; Fabric is Azure’s Q.” 

1:04:09 Microsoft tames intense chip heat with liquid cooling veins, designed by AI and inspired by biology – GeekWire

1:05:13 Ryan – “Necessity is the mother of all innovation, right? And so this is not only about trying to offset carbon credits, but it’s also all the demand for AI and more compute – and less space and less power and water. So I think it’s neat to see innovations come out of that, and the way they make it sound just makes it seem like sci-fi, which is cool.”

1:06:18 Generally Available: Application Gateway upgrades with no performance impact

1:07:10 Matt – “About two years ago they added the feature called max surge, which is when you have a scale set, you add a node and then you delete it. So here, they are adding their app gateways; so essentially if you have 10, you would go to 11 and then you would remove one of the original ones. And they essentially are just leveraging that as part of the app gateways… But if you’re also auto scaling, which if you have the app that can handle that, you don’t control your nodes. So you would just lose capacity at one point. So it’s one of those quality of life improvements.”

Oracle

1:08:27 Oracle Sets The Standard In Enterprise AI

1:09:42 Justin – “The best thing about this article is they basically imply that they invented AI.”

After Show

1:21:40 Prompt Engineering Is Requirements Engineering – O’Reilly

Closing

And that is the week in the cloud! Visit our website, the home of the Cloud Pod, where you can join our newsletter, Slack team, send feedback, or ask questions at theCloudPod.net or tweet at us with the hashtag #theCloudPod


Episode Transcript

[00:00:06] Speaker B: Welcome to the Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP and Azure. [00:00:14] Speaker A: We are your hosts, Justin, Jonathan, Ryan and Matthew. [00:00:18] Speaker C: Episode 323, recorded for September 23, 2025: Databricks One, because seven, eight, nine. Good evening Matt and Ryan. I mean, that's a joke that Matt's kids would probably really appreciate, and both Ryan and I's kids would roll their eyes at us so hard. [00:00:36] Speaker D: Dad jokes, baby. [00:00:37] Speaker A: The preferred reaction these days for me. [00:00:39] Speaker C: Yeah, yeah, yeah, the, the dad jokes are super fun because the kids just hate them so much. We actually just went to a musical in San Jose recently. It's called Shucked and it's, it's a small town, you know, story, whatever, but it's just full of corn puns, the whole thing. And I was dying. It's hilarious. I was like, this is, this is just fun. And my kids didn't want to go. And I was like, you guys missed out because it was dad jokes everywhere. Oh well. All right, well let's move on to AI Is How ML Makes Money for this week, and this one is a training course for you guys. Google and Kaggle are launching a five day intensive course on AI agents from November 10th through the 14th, following their gen AI course that attracted 280,000 learners, with curriculum covering agent architectures, tools, memory systems and production deployments. The course focuses on building autonomous AI agents and multi agent systems, which represents a shift from traditional single model AI to systems that can independently perform tasks, make decisions and interact with tools and APIs. This development signals growing enterprise interest in AI agents for cloud environments, where autonomous systems can manage infrastructure, optimize resources and handle complex workflows without constant human intervention. Much to the chagrin of security people.
The hands on approach includes code labs and a capstone project, indicating Google's push to democratize agent development skills as businesses increasingly need engineers who can build production ready autonomous systems. I actually might want to go to this one. This one's intriguing to me. [00:02:09] Speaker A: Yeah, no, I'm. [00:02:11] Speaker D: If it wasn't five days, I'm definitely. [00:02:13] Speaker A: Going to look into it. [00:02:14] Speaker C: That's a lot. [00:02:14] Speaker A: It's a big time commitment, but like, there's so much I still have to learn. [00:02:20] Speaker C: Right on. [00:02:20] Speaker A: You know, like a lot of these things, like, you know, I know the basics of a lot of, a lot of bits of AI, and it's, it's very difficult to get deep into it, and then trying to keep up with, you know, the rest of the business in my day job is very challenging. So, you know, things like this are a great thing to have. I'm glad that Google and Kaggle are putting this out there. [00:02:43] Speaker C: Yep. [00:02:43] Speaker A: And I like how sort of big it is. Right. Because I, I think if it was like a seminar, or if it was even, you know, if it was smaller, I don't think it would be enough. [00:02:53] Speaker C: I mean, there's just so much to cover. But yeah, the nice thing is, is the, the. What day is it again? The 14th through the. [00:03:00] Speaker D: It's the 10th. [00:03:01] Speaker A: 10th through the 14th. [00:03:01] Speaker C: 10th to the 14th. November. Yep. So that's. Yeah, it's perfect. I just get back and then I can do that, and yeah, it's a good, good way to come back from vacation, to a deep intensive generative AI times talk. [00:03:12] Speaker A: I mean, it's a way to ease back in. [00:03:14] Speaker C: Yeah. You know, why, why, why come in and answer emails when you can go to a four day training class? [00:03:19] Speaker A: Exactly. And then you could build, build the thing to. [00:03:22] Speaker C: Yeah.
[00:03:23] Speaker D: To write the emails back to tell everyone to go jump off a bridge. Yeah. [00:03:27] Speaker C: I mean, my, my solution is just control-A, delete, and then just declare bankruptcy. That's the best way to go. Yeah. Well, big news: Atlassian is acquiring DX, which is a developer productivity analytics platform, for a billion dollars, after failing to build their own solution internally. Well, that's just not trying very hard. DX analyzes engineering team productivity and identifies bottlenecks without making developers feel surveilled. DX provides both qualitative and quantitative insights into developer productivity, helping enterprises understand what's slowing down their engineering teams. The platform serves over 350 enterprise customers, including ADP, Adyen and GitHub. The acquisition is particularly timely as companies struggle to measure ROI on AI tool investments and to understand if their growing AI budgets are being spent effectively; DX can help track how those tools impact developer activity directly. 90% of DX customers apparently already use Atlassian tools, making it a natural integration that creates an end to end workflow. Teams can identify bottlenecks with DX analytics, then use Atlassian's project management tools to hopefully address them. I mean, calling Atlassian's tools project management is a bit of a stretch. It's a great acquisition for the founders of DX, who only raised $5 million in venture funding. The bootstrapped approach has aligned nicely with Atlassian's own growth strategy, making this hopefully a good acquisition for them. I've used DX. I actually really like DX, so I'm hoping Atlassian doesn't fuck it up. [00:04:49] Speaker A: Yeah, I was thinking the same thing. Just because you and I have both used it and it's, it is really great.
And I like the, the way that they sort of involve developers instead of sort of just producing metrics about them, so that it, you know, so it doesn't feel like surveillance, because I think that's super important. Yeah, we'll see what happens with, with Atlassian. They'll probably, you know, put it behind this terrible paywall and then, oh yeah. [00:05:15] Speaker C: Make it into some new JIRA analytics tool that costs 12,000 times the cost. And we'll have to go find another tool. [00:05:22] Speaker A: Hopefully it'll be more than, like, an Atlassian plugin into JIRA where it creates this terrible, like, eazyBI database. [00:05:30] Speaker D: They'll link it to BitBucket and then force you to use BitBucket. [00:05:33] Speaker C: No good. Who doesn't love that model? Yeah, I mean, Atlassian is also making friends this week too, or last couple weeks, because they also said they're, they're deprecating Data Center Edition, which was their solution for people who didn't want to go to SaaS. And so now they're basically like, yeah, we're not, we're not going to do that anymore. So, yeah, Atlassian is making friends all over the place right now. [00:05:51] Speaker D: Yeah, but they weren't really, from what I understand, making massive improvements. Like, they were doing bug fixes on the on-prem version; because I have a few friends that, like, work at large companies that run it, and they were like, we're not getting anything real besides bug fixes at this point. So they kind of EOLed it, I feel like, a while ago, but just said, here, we'll keep you guys happy. [00:06:11] Speaker C: It was just a way for them to make more money for a few years until they could convince customers who were like, we hate SaaS, to now adopt SaaS. Yep. [00:06:19] Speaker A: But, but there's certain environments where you can't adopt it. You know, you have to be in control of the data and, and it's, you know, I don't know.
[00:06:28] Speaker C: I mean, I, I, I don't think Atlassian's gotten FedRAMP certified still. [00:06:32] Speaker A: They have not. So I mean, they are not. [00:06:35] Speaker C: That's a, that's a bit of an issue right there. You know, if you're in a federal environment, you know, I know Atlassian has been working on it. I just don't think it's come out yet. Yeah. So now. [00:06:44] Speaker A: So yeah, there's things, you know, and there's other compliance frameworks that are the same, right? It'll be an interesting play for these, you know, where people were running these on-prem, or just even people who wanted to keep the data in house. We'll have to find another solution, which. [00:06:59] Speaker C: Is a bummer indeed. Moving to AWS, they've released two new models into Amazon Bedrock. The first one is Qwen3 from Alibaba, which includes mixture of experts (MoE) and dense architectures, with the largest, Qwen3 Coder 480B, having 480 billion total parameters but only activating 35 billion per request for efficient inference. The model introduces hybrid thinking modes that allow developers to choose between step by step reasoning for complex problems or fast responses for simpler tasks, helping balance performance and cost trade offs. DeepSeek is the other one. DeepSeek-V3.1 is now available in Bedrock as a fully managed foundational model that switches between thinking mode for step by step reasoning and non thinking mode for faster direct answers. I just heard that somewhere. The model delivers improved performance in code generation, debugging and software engineering workflows while supporting over 100 languages with near native proficiency, making it suitable for global enterprise applications and multilingual customer service implementations. So if you need either one of those two models, they are now available to you in most regions that are available with Bedrock.
[00:08:04] Speaker A: We don't have Jonathan here to explain why this model is better than any of the others. [00:08:09] Speaker C: Well, I mean, I'm still skeptical on DeepSeek, because it sounded like it was derivative of ChatGPT, so I don't really know what you're getting other than something cheaper. But Qwen, I've heard good things about. [00:08:20] Speaker A: No, I. I've heard good things. I just don't know specific details about why it might be better than another. [00:08:27] Speaker C: Yeah, well, I mean, Jonathan will be back someday. Matt doesn't know when, but you and I saw him for a brief moment. [00:08:34] Speaker A: Yeah, I heard he exists. [00:08:37] Speaker C: He does exist. [00:08:37] Speaker D: From people that listen to the podcast. It was on my list to listen to that one. [00:08:42] Speaker C: Amazon RDS is now offering MySQL Innovation Release 9.4 in database preview environments, giving customers early access to the latest MySQL features, including bug fixes, security patches and new capabilities, before they hit general availability or the long term release. The preview environment provides a fully managed database experience for testing MySQL 9.4, with both single AZ and multi AZ deployments on the latest generation instances; the databases are automatically deleted after 60 days, so don't use them for anything real. MySQL Innovation releases follow a different support model than LTS versions, with Innovation releases only supported until the next minor release, while LTS versions like MySQL 8.0 and 8.4 receive up to eight years of community support. The preview environments are priced identically to production RDS instances in US East (Ohio), being cost neutral for organizations to test new MySQL versions before committing to production. The preview capabilities allow database teams to validate application compatibility and performance with MySQL 9.4 features in a production like environment without risking your main workload.
So do not use this for production. It will be deleted after 60 days. Yeah. [00:09:47] Speaker A: It's the only way to prevent that. Right? Like, because you know, someone will do it anyway. Which is funny, because my experience with, you know, database upgrades is the opposite. Like, no matter how much, you know, preview is offered in time and enticement, you'll still have to kick everyone off the older version. Kicking and screaming. [00:10:06] Speaker D: Well, people just never want to upgrade. [00:10:08] Speaker A: It's hard. [00:10:08] Speaker C: Bother. Yeah, upgrade is hard. I mean, I just did it because Google started charging me more money for it. Of course that would never have happened. [00:10:17] Speaker A: Yeah, that's the only way we're going. [00:10:19] Speaker C: To charge you extra money. And we're like, yeah, yeah, we don't want to pay you extra money for that, so we'll go do it then. You know, the reality was it was an API call and then run this vacuum process and you're done. And it was no big deal. But you know, it's getting people to do it. [00:10:31] Speaker A: But when it is a big deal, it's early. [00:10:33] Speaker C: Oh no, when it's a big deal. [00:10:34] Speaker D: It's a huge deal. You gotta, like, refactor large sections of your application, and then you just give up. [00:10:39] Speaker A: And a lot of times you don't know until. [00:10:41] Speaker C: Until load hits. Yeah, it worked fine in dev, not so much in prod. Yeah, luckily that was not the case for us. It was relatively trivial. But, you know, again, it's now we have to make sure we have a policy and a process, because, you know, the next version of Postgres will come out, and then our version, you know, will eventually get to a point where it's on extended support, costing money too. So it's just a matter of time; everything rots. [00:11:05] Speaker D: Yep. Okay.
Years ago I had to do like a Postgres 9.0 or 9-point-something up to like 14, but in order to do it, that number of steps, you had to, like, do a bunch of in-between ones, and scripting it for like 16 servers that were each standalone and upgrading. I was like, this is fun. [00:11:25] Speaker C: They do that in Oracle too. Maybe not now, but back in the day, like going from Oracle 8 to 9i, you had to go to Oracle 8i first, which is sort of weird. Like, okay, so yeah, you do like multiple upgrades, and it takes like several days to upgrade from one version to another. It was, it was lots of fun with lots of CDs, because even though it was called Oracle 8i, it was not Internet ready yet. [00:11:49] Speaker D: So like ESXi versus ESX? Yeah, no upgrade path. [00:11:54] Speaker C: Yeah, not really. AWS Organizations now supports the full IAM policy language for service control policies, enabling conditions, individual resource ARNs and NotAction elements with Allow statements, bringing SCPs to feature parity with IAM managed policies. This enhancement allows organizations to create more precise permission guardrails, such as restricting access to specific S3 buckets or EC2 instances across all accounts using condition statements rather than blanket service level restrictions. The addition of wildcards at the beginning or middle of action strings and the NotResource element enables more flexible policy patterns, reducing the need for multiple SCPs to achieve complex permission boundaries. Existing SCPs remain fully compatible with no migration required, making this a zero friction upgrade that immediately benefits organizations using AWS Organizations for multi account governance. The feature is available in all commercial and GovCloud regions at no additional cost, strengthening AWS Organizations' position as the primary tool for your enterprise wide security governance.
[00:12:49] Speaker A: They actually had the stones to say zero friction and SCP in the same article, huh? [00:12:57] Speaker C: They did, yeah. [00:12:57] Speaker D: Yeah. [00:12:58] Speaker C: Weird, right? [00:13:00] Speaker A: I mean, I still hate SCPs. This makes them usable, which is more of a challenge now, because before at least it wouldn't work. But yeah, they're just so difficult to troubleshoot and manage at runtime. Everything becomes a 403, and if you've delegated access to your accounts, there's no ability for them from within the account to troubleshoot. Unless they fixed that in the intervening years, where they actually, you know, have added a logging statement or something. But back when I tried these, it was just permission denied, and you could not fix it. You did not understand why. [00:13:36] Speaker C: Well, I mean, now, because it's IAM compatible, you can run these through the IAM Access Analyzer. So that's kind of the benefit you get. [00:13:46] Speaker A: If it knows. Yeah, if. Because that was the biggest problem before, was that it didn't have insight into the organization level, just the account level. Yeah, yeah, that's fair. But you know, it's also been a while since I've tried it. [00:14:01] Speaker C: I linked us to the short blog post, but the long blog post has a lot more details about how you can do policy validation now, and several of the new features around NotAction, etc., and example policies to show you how you can mess up your world. So, back to AWS. Amazon Q Developer CLI is announcing support for remote MCPs, enabling centralized tool management with HTTP transport and OAuth authentication for services like Atlassian and GitHub. This shifts compute resources from local machines to centralized servers, reducing individual developer workload while providing better access control and security management for the development team itself.
Remote MCP servers allow Q Developer CLI to query available tools from external services after authentication, making third party integrations more scalable across development organizations. Configuration requires specifying the HTTP transport type, authentication URL and optional headers in either custom agent configurations or MCP JSON files. The feature is available in both Q Developer CLI and the IDE plugin. [00:15:02] Speaker A: Yeah, I'm glad to see that move this off the local resource, because, you know, no matter how big of a Mac you buy, you still have to use Chrome, and there's just not enough memory for that. And I do like that you can sort of centralize these things versus having, you know, everyone sort of maintain their own, which is an enablement. [00:15:22] Speaker C: Nice. Yeah, I mean, I think, I think having them centralized is ideal, especially from a security and access control perspective. So yeah, it's a bit of a problem when all these MCPs are on everyone's laptops, because that means they may not be consistent, means they may not all follow the same permissions models you need them to, or different access rights. Or, you know, if they're making an MCP public off their laptop, now they're hosting something off their laptop using their credentials, which is other security problems. So there's lots of reasons why you'd like to have remote MCP, but then. [00:15:50] Speaker A: It's another platform service you got to maintain, like. Yep. [00:15:56] Speaker C: And because Amazon can't get their branding right, AWS is launching the Nova Act Extension, a free IDE plugin for VS Code, Cursor and Kiro that enables developers to build browser automation agents using natural language prompts and the Nova Act model without switching between coding and testing environments.
The extension features a notebook style builder mode that breaks automation scripts into modular cells for individual testing, plus integrated debugging with live browser preview and execution logs for complex multi step workflows. Developers can generate automation scripts through natural language chat, or use predefined templates for common tasks like shopping automation, data extraction, QA testing and form filling, then customize with APIs and authentication. So on top of the open source Nova Act SDK, the extension provides the complete agent development lifecycle within the IDE, from prototyping with natural language to production grade script validation. I mean, I'm just. When do you use Q versus when do you use Nova Act? Like, Amazon, the branding is killing you. [00:16:51] Speaker A: Yeah, like, I don't know what kind of feature Nova would have to have in order to make me try it at this point. [00:16:57] Speaker D: Right. [00:16:58] Speaker A: And it could, because I have no idea how good or bad it is, but it's just trying to keep it straight. I've already tried Q, you know. Unless it's my job to go through all of these things, like. [00:17:08] Speaker D: And it also feels like, you know, they're taking off of Kiro a little bit, because, you know, there's that whole thing, and it feels like they're just kind of building. You have a bunch of, you know, two-pizza teams building something, and no one's talking to each other, being like, hey, maybe we don't need to do the same project that we just did over here, and maybe we should kind of follow the same branding. [00:17:29] Speaker C: Someone has an OP1 project though, and they're getting a promotion if they get this out the door. So. [00:17:34] Speaker D: Yep. [00:17:35] Speaker A: Can you use multiple models in Q? Like, isn't it? [00:17:38] Speaker C: Yeah, I think so. [00:17:39] Speaker A: So, like, I don't. But anyway, I mean, I get why this is more than just a model.
[00:17:45] Speaker C: Right? [00:17:45] Speaker A: Like, this is a specific workflow for development. Right. And so like there's clearly extensions and, and features in here that are above and beyond what's in Kiro and Q. Presumably so, but they'd have to be really good. [00:18:06] Speaker C: Yep. All right. GCP has a bunch of things for us this week as well. First up, Google's embedding Security Command Center insights directly into GCE and GKE consoles, providing security dashboards that surface misconfigurations, vulnerabilities, and active threats without requiring separate security tools or interfaces. The GCE dashboard displays top security findings, vulnerability trends over time and CVE prioritization, powered by Google Threat Intelligence and Mandiant analysis, helping teams identify which VMs to patch first based on exploitability and impact. GKE's security dashboard focuses on workload configurations, container threats like crypto mining and privilege escalation, and software vulnerabilities specific to Kubernetes environments, addressing common container security blind spots. While basic security findings are included free, accessing the vulnerability and threat widgets requires Security Command Center Premium, with a 30 day trial available (and then you pay all the monies), positioning this as a value added upsell for existing GCP customers. [00:19:01] Speaker A: I got to play around with this and it's, it's really cool. I love getting that security information sort of front and center for developers and the people actually using the platform. You know, as a security professional, we have all this information that's devoid of context, and, you know, if you're lucky, you know enough to build a detection and be able to query a workflow that's going to just fire off a ticket that no one's going to look at. And so this is, I think, putting it right in front of, you know, in the console. I think that some people, not everyone, will, you know, take the initiative and be like, oh, this is very red. I should make it not so red.
I think that some people, not everyone will, you know, take the initiative and be like, oh, this is very red. I should make it not so red. [00:19:38] Speaker D: That's what I feel like. My life is either in pipelines or in my security aspect of my job. It's like make things less angry at me. My wife used to say that my job was to make green checkboxes and, or make this taunts make my black box not be red. [00:19:56] Speaker A: Pretty good summary and presentations with fluffy clouds. That was a, that was one thing. [00:20:00] Speaker C: That I did a lot of as well. [00:20:02] Speaker D: But is it. So this is all integrated into their CPAM solution? [00:20:07] Speaker A: No, it's the Kubernetes console directly. So when you're. Oh yeah, that's cool. [00:20:13] Speaker C: Yeah. And gce, which is the Google, which is the instances themselves, which is compute. [00:20:17] Speaker A: The VM instances. Yeah and so that's, it's, it's really nice to have it there just because it's, you know that the vulnerability data and the configuration for Kubernetes is super important and it's one of those things, especially the configuration that is a complete black box to a lot of Kubernetes, you know, platform engineers and you know, working with security is, you know, it's great when, when there's time and money, but that doesn't always happen. [00:20:42] Speaker C: Google's expanding the MCP toolbox for databases to support Firestore, enabling developers to connect AI assistants directly to their NoSQL datab through natural language commands for querying, updating documents and validating security rules. Integration allows developers to perform database operations without writing code. For example, asking AI Assistant to find all users whose wishlists contain discontinued product IDs or remove specific items from multiple user documents directly from their ID or cli. 
This positions Google alongside Anthropic's MCP standard, providing a unified way for AI systems to interact with enterprise data sources. AWS and Azure haven't announced similar MCP-compatible database tooling yet. The Firestore tools support document retrieval, collection queries, document updates, and security rule validation, addressing common developer pain points like debugging data issues and testing access controls before deployments. [00:21:31] Speaker A: I mean, as someone who never wants to write SQL queries ever again, like, I love these types of things, because this is exactly how I want to interact with a database. Like, I want to ask it a question and have it spit back data, and then the data is wrong because I asked the wrong question. But that's neither here nor there. [00:21:47] Speaker C: At least you knew the data was wrong to ask a better question. [00:21:51] Speaker D: Or did you? [00:21:54] Speaker A: Well, it's better than dealing with syntax errors for three hours and then just giving up because I don't care anymore. [00:22:00] Speaker D: I feel like my only caveat here is make sure this is set up to not hit your production database, at least hit the read only replica. Because I've definitely seen people that set up these types of tools and, you know, they gave BI people and data scientists access to production data, and this is at an old day job of mine, and all of a sudden production was down. They couldn't figure out why, because some data scientist or BI person ran a terrible query and just slowed the hell out of the entire production databases. So that's one of my always fears of leveraging some of these tools, is like, how good is the SQL? How good is the, you know, query that it's writing? [00:22:41] Speaker A: Yeah, and it's difficult, right? [00:22:42] Speaker D: Probably still better than mine, let's be honest. But that's a different problem.
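The wishlist example from the summary is easy to picture in plain Python. This is not the MCP Toolbox API, just a sketch of the query logic an assistant would have to generate against Firestore-style documents; the field names are made up for illustration:

```python
# Sketch of the query an assistant might generate for:
# "find all users whose wishlists contain discontinued product IDs".
# Plain-Python stand-in; real code would use google-cloud-firestore.

def users_with_discontinued_items(users, discontinued_ids):
    """Return IDs of users whose wishlist intersects the discontinued set."""
    discontinued = set(discontinued_ids)
    return [
        user["id"]
        for user in users
        if discontinued.intersection(user.get("wishlist", []))
    ]

# Toy documents shaped like Firestore user docs (hypothetical fields).
users = [
    {"id": "u1", "wishlist": ["p1", "p9"]},
    {"id": "u2", "wishlist": ["p2"]},
    {"id": "u3"},  # no wishlist field at all
]
print(users_with_discontinued_items(users, ["p9", "p7"]))  # ['u1']
```

The point of the MCP tooling is that the assistant writes and runs this kind of filter for you from the English prompt, which is also why Matt's read-replica caveat below matters.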
[00:22:46] Speaker A: You know, how do you, you know, bulletproof your application for something where it's being interacted with in natural language, where the query is not defined yet? So this will be a new challenge for database administrators. [00:22:59] Speaker C: Google's 2025 DORA report is out, showing AI adoption among software developers has reached 90%, up 14% from last year, with developers spending a median of two hours daily using AI tools for their development tasks. Despite 80% of developers reporting productivity gains and 59% seeing improved code quality, a trust paradox exists where 30% trust AI a little or not at all, suggesting AI serves as a supportive tool rather than replacing human judgment. The report identifies seven team archetypes, from harmonious high achievers to legacy bottlenecks, revealing that AI is both a mirror and a multiplier, amplifying efficiency in cohesive organizations while exposing weaknesses in fragmented ones. Dun dun dun. Google introduced the DORA AI Capabilities Model, a blueprint for seven essential capabilities combining technical and cultural factors needed for successful AI adoption in software development organizations. While AI adoption now correlates with higher software delivery throughput, reversing last year's finding, organizations still face challenges ensuring software quality before delivery, indicating adoption alone doesn't guarantee success. So check out the full DORA report for all of the insights, which are always very interesting and definitely color some of my thoughts on some different things as well. [00:24:10] Speaker A: Yeah, I was reading an interesting article this week about a study. [00:24:15] Speaker D: I think. [00:24:16] Speaker A: It was at the Harvard Business Review, where they were talking about the return on investment on AI, and that the measurable metrics aren't there and there's.
And so it's, it's kind of interesting to me, because I also see these statistics where it's like 90% of people are using it, they're spending hours a day. And these tools aren't cheap in a lot of cases. But if there's no return on investment, like, that's, that's concerning, and, you know, so getting to the bottom of that's going to be interesting. Personally I don't, I don't agree at all. Like, I've been able to achieve much greater things in a much shorter period of time because of AI. [00:24:55] Speaker D: But are you producing? [00:24:57] Speaker A: I'm not a developer. [00:24:58] Speaker D: Production quality? No, no applications with it. And that's where I feel like the disconnect is. Like, I feel like it's really good for jump starting new projects, things along those lines, less good for full blown production systems. So like, for example, I built a Node.js app, and I've never written Node.js before in my life, you know, out of it. And I got it up and running, and it kind of reminds me of Microsoft FrontPage. It's up and running, but now if I want to tweak something, it's, okay, I can't just go into code that I know and read to make it work. Like, okay, I have to figure out now the convoluted logic, you know, of how it created this thing. Or I wrote another script that ended up being like a 4000 line script to just hit an API. It was a bash script I used to link two tools together because Terraform couldn't natively do it. And I feel like, come on. It should have been like a 500 line script. It's 2700 lines long. So yeah, it's overly verbose at times too. [00:26:05] Speaker A: And so I find myself tuning AI and giving it a lot, like, not letting it get too long of a leash. Like, so, giving it the structure I'd like to see, and, and, you know, sometimes taking a couple tries at it, because it is, it will do some crazy things sometimes.
And, you know, my favorite is, I asked for a feature change and it, yeah, like, added three new components to ensure backwards compatibility of this internal tool that's going to run on my desktop. Like, pass. [00:26:33] Speaker D: But, but that's where I kind of find like AI is really good in certain places, but if you try to load it with a 5 million line code base, it's probably gonna die on itself. So like, you gotta find how to leverage the tool for your environment, for your workflow, for your, for your tasks. And like, I definitely used it in large scale places, and I've also just been like, I don't feel like writing a script. Go write me the script that does this. And then had it make up PowerShell commands for me, you know. And it's like, just because you say, I think it was something like Get-SqlDatabaseStorageSpace, doesn't mean that command actually exists. So it kind of hallucinated there a little bit. That was like two days ago. [00:27:12] Speaker A: PowerShell results are a little weak, a little wacky. Yeah, well, it shows that there's just not enough data for the models to train on, like in GitHub and stuff. And if you look at the statistics on GitHub, like, PowerShell is not there, not there. [00:27:26] Speaker D: But that's where I think you can use it. And there's a lot of places to use it. But there's also what you were saying before. It works in some places, doesn't work in others. And I think that's what kind of the market's figuring out now, is we can gain a lot of benefits in certain places. POCs, you know, very basic general stuff. But you still need that developer to really go in and be like, okay, tweak it, and use it as a tool, not as the full development agent.
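Matt's hallucinated-cmdlet story suggests one cheap guardrail: before running AI-generated PowerShell, check each Verb-Noun token against an inventory of cmdlets that actually exist on the target machine (e.g. captured from `Get-Command`). A minimal sketch, assuming a hard-coded stand-in for that inventory; the flagged cmdlet is the made-up one from the episode:

```python
import re

# Stand-in for the machine's real cmdlet inventory; in practice you'd
# capture this from `Get-Command` output on the box that runs the script.
KNOWN_CMDLETS = {"Get-Command", "Get-ChildItem", "Invoke-Sqlcmd"}

# PowerShell cmdlets follow a Verb-Noun naming pattern.
CMDLET_PATTERN = re.compile(r"\b([A-Z][a-z]+-[A-Za-z]+)\b")

def unknown_cmdlets(script: str) -> set:
    """Return Verb-Noun tokens in the script that aren't known cmdlets."""
    return set(CMDLET_PATTERN.findall(script)) - KNOWN_CMDLETS

script = "Invoke-Sqlcmd -Query $q\nGet-SqlDatabaseStorageSpace -Name prod"
print(unknown_cmdlets(script))  # flags the made-up cmdlet
```

It won't catch wrong parameters or bad logic, but it catches the "that command doesn't exist" class of hallucination before anything runs.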
[00:27:55] Speaker A: Yeah, I want to dive into this report, because it's, you know, it's, it's one of those things. Like we talked about at the beginning of the show with the DX purchase, you know, productivity and the measurement of that, and AI's influence on that, is going to be super important as we figure out how the humans and robots are going to live together happily. And so like, DORA is a pretty good thing, because it should show, you know, one way or the other, AI adoption and its influence on CI/CD and release times. And so it'd be really interesting to see if there's that kind of correlation. Look forward to reading it. [00:28:33] Speaker C: Yeah. [00:28:33] Speaker D: I would also be curious in the long term to see how bugs are, you know, whether you end up with more or fewer bugs with companies that leverage AI. [00:28:45] Speaker C: I mean, it's interesting. One of the articles I saw from this weekend was from Harvard Business Review: AI-Generated Workslop Is Destroying Productivity. [00:28:53] Speaker A: Ah, you read it too? That's exactly the one I was talking about. [00:28:55] Speaker D: Yeah. [00:28:56] Speaker C: Um, which, you know, actually, the most interesting part of the whole article. I mean, there's a bunch of things about it. You know, when you look at the research, it's, it's fascinating, but it's a very specific period of time they, they look at, you know, for this, which is not necessarily like, you know, Claude Sonnet 3.7 or newer. It starts before those are out there. And so the tools have come a long way in the last year too, so you have to factor that into some of these things. But the, the line that struck me in this article was: as AI tools become more accessible, workers are increasingly able to quickly produce polished output, well formatted slides, long structured reports, seemingly articulate summaries of academic papers by non-experts, and usable code.
While some employees are using the ability to polish good work, others use it to create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand. The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver. Which I have definitely seen. So it definitely struck a chord with me when I read through that, and I was like, yeah, it's definitely something you have to look at. Even when I use AI to go do research, things like that, I'm being very, you know, if I'm sharing out with people, I'm like, hey, I used AI for this, number one. So you kind of caveat that. But number two, like, I normally read it and I have questions, and then I, you know, typically go back and forth with the AI multiple times before I'm satisfied that the answer properly addresses and supports its case, because if it's weak to me, I'm going to challenge it. [00:30:24] Speaker A: Yeah, I've definitely rejected PRs because it was clearly AI generated and not reviewed at all by, by the author. [00:30:33] Speaker C: Right. [00:30:33] Speaker A: And they're like, you know, can you review this PR so I can commit it in? I'm like, no, like, like, this isn't good. You have to still, just because you're using AI, your name's still attached to it. [00:30:43] Speaker D: Right. [00:30:43] Speaker A: Like, it's still your responsibility to make sure that it's. [00:30:46] Speaker C: Yeah. If you're, if you're dishing out workslop, you're still gonna get terminated for not being a great worker. Yeah. Because you're the one who's now creating all that work for everybody else anyways. Yeah, check out the DORA report. I'm definitely going to spend some more quality time with it, but it does come out this week. I wanted to get it into all of our listeners' hands.
Let's move on to Azure, who have a very Azure-heavy list of stories. Thanks, Matt. [00:31:10] Speaker D: It's not my fault. Maybe it is not my fault. [00:31:14] Speaker C: First up, Microsoft is in the news for Entra ID vulnerabilities that could have been apparently catastrophic. Security researcher Dirk-jan Mollema discovered two critical vulnerabilities in Microsoft's Entra ID, formerly Azure Active Directory, that could have allowed attackers to gain global administrative privileges across all Azure customer tenants worldwide, potentially compromising every organization's user identities, access controls, and subscription management tools. You know, just a minor issue. The vulnerabilities enabled an attacker with just a test or trial tenant to request tokens that could impersonate any user in any other tenant, allowing them to modify configurations, create admin users, and essentially achieve complete control over customer environments, a scenario that represents one of the most severe cloud security risks possible. Microsoft has presumably patched these vulnerabilities following Mollema's responsible disclosure, but the incident highlights the concentrated risk of centralized cloud identity systems, where a single vulnerability can expose millions of organizations simultaneously, unlike traditional on-premise Active Directory deployments. This discovery underscores why organizations need defense in depth strategies even when using major cloud providers, including monitoring for unusual administrative actions, implementing conditional access policies, and maintaining incident response plans that account for potential cloud provider compromises. For Azure customers, this serves as a reminder to review Entra ID security configurations, enable all available security features like Privileged Identity Management, and ensure proper logging and alerting are configured to detect. [00:32:34] Speaker D: We had a problem, we fixed the problem.
Buy more stuff from us so you don't have any problems in the future. [00:32:41] Speaker A: I don't think you have to pay for Privileged Identity Management. I think it's built in. [00:32:44] Speaker D: Yeah, you do. It's P2 licensing. [00:32:46] Speaker A: Oh, that sucks. Because it's a really cool tool. I like it, actually. It's the only thing that makes Entra ID kind of usable. [00:32:54] Speaker D: That's where, it's one of the main reasons we bought that. And it gives you better reporting for like, user audits. [00:33:01] Speaker A: Yeah, it does. [00:33:02] Speaker D: It's the two reasons why, I mean, we bought it as a company: it dramatically increased our ability to, you know, have least privilege access. [00:33:11] Speaker A: Yeah, no, I, I mean, this does highlight something that's really scary, because reviewing AD and, and security configurations of AD is terrible. And then, you know, Entra ID makes it better, but not perfect. And so it's something that's, you know, not always been done or kept up to date. For sure. [00:33:29] Speaker D: Yeah. I mean, they have some decent recommendations in there. Some of the things, like, because there's a secure score in there, I don't know if you've ever looked at it in the Azure portal. There's these. [00:33:42] Speaker A: Yeah, I know what you're talking about. [00:33:45] Speaker D: It's useful. The whole point is like that red, green, you know, it's that dashboard for people to look at and be like, oh my God, we're red. You know, what do we do, type of thing. You know, how do we make it green? It's like, I kind of find it useful there, but there's definitely things in there where I'm like, these are not actually useful. And you're telling me it's like a three-pointer, when the one that's one point seems a lot better. It will increase our security a lot more. It's harder to do, but it's much better. [00:34:14] Speaker C: Yeah. [00:34:15] Speaker A: Yeah.
I thought the recommendations were a little basic. Like don't have a bunch of people in all the admin roles. Like. [00:34:21] Speaker D: Yeah, okay, I think we got flagged for having only one person as the global admin. Really? And it's like this is a problem. [00:34:31] Speaker A: But is it like it depends on, you know, like the, the continuity options. Right. Like, it's fine if it's something that you can work with Microsoft to get in, but in some cases you can't just. [00:34:43] Speaker D: Yeah, we have other ways to get in so, you know, miss some of the other things. [00:34:47] Speaker C: But yeah, I mean, specifically the CVE says there's no customer action required to resolve because Microsoft fixed it for you so. [00:34:54] Speaker A: Oh yeah, no, this is definitely, I. [00:34:55] Speaker C: Mean like the, the recommendations they gave you are basically just like you should have logging and monitoring and you should have controls. And it's like you can do that with our tools. [00:35:02] Speaker D: Right? [00:35:03] Speaker C: We'll sell you them. But yeah, otherwise, you know, these are things that are hard for cheap for small companies to deal with. [00:35:09] Speaker A: Definitely. And it's, you know, it's totally scary that, you know, they were able to, to get this in such a way where they could get into anyone's org. Right. Not just theirs, not just break out of theirs or, or just mess with Microsoft directly. [00:35:21] Speaker C: I mean, this is maybe why you shouldn't have taken a technology you wrote in 1997 to become part of Windows 2000 and then turn it into a cloud service and not think, rethink some of these things, you know, just, just putting it out there. [00:35:34] Speaker A: But you know, then, then again then we'd have to like sort of turn away from the Okta instance from not too long ago. And you know, the consultants, I mean, everyone has. 
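Defense in depth here can be mundane. The disclosed issue involved tokens from one tenant being honored in another, so one of the few checks an application itself can make is validating the `tid` (tenant ID) claim on every token rather than trusting the issuer wholesale. A minimal sketch of that claim check, assuming an already-decoded claims dict; real code would verify the token signature first with a library like PyJWT, and the tenant GUID here is hypothetical:

```python
# Minimal claims check: reject tokens whose tenant ID claim doesn't
# match the tenant this app is configured to serve. Assumes the token
# signature was already verified; this is one layer, not a fix.

EXPECTED_TENANT = "11111111-2222-3333-4444-555555555555"  # hypothetical

def tenant_matches(claims: dict, expected_tenant: str = EXPECTED_TENANT) -> bool:
    """True only if the token carries the tenant we expect."""
    return claims.get("tid") == expected_tenant

good = {"tid": EXPECTED_TENANT, "oid": "user-1"}
bad = {"tid": "99999999-aaaa-bbbb-cccc-dddddddddddd", "oid": "user-2"}
print(tenant_matches(good), tenant_matches(bad))  # True False
```

It wouldn't have stopped a flaw inside Entra itself, but it is the kind of unusual-token detection the summary's "monitoring for unusual administrative actions" advice points at.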
[00:35:41] Speaker C: I mean, anytime you're leveraging these major SaaS vendors, you are placing some level of confidence in their abilities to write good software. [00:35:49] Speaker D: So wait, you said 1997. It looks like some of this is based on RFCs that were written as early as 1971. [00:35:59] Speaker C: Well, I mean, basic identity and authorization stuff was. [00:36:02] Speaker D: Oh, because it's LDAP. Yeah, sorry, I was skimming the article. So it's the LDAP RFC, which is what's basically. [00:36:07] Speaker C: Yeah, I was thinking more like, you know, 96, 97, right, when you're in the middle of writing Windows 2000, which is the first place they did LDAP with AD. Yeah, that's right. I'm assuming. [00:36:17] Speaker D: Yeah. Microsoft previewed AD in 1999, released with the first version of Windows 2000 Server. [00:36:24] Speaker C: Yep. I'm just saying, if you're going to build a major cloud service, maybe don't build it on the thing you built 25 years ago, before modern security. I'm just saying. Just saying. Yeah, minor, minor things. Well, Microsoft unveiled Fairwater in Wisconsin, a 315 acre AI data center with 1.2 million square feet that operates as a single supercomputer, using Nvidia GB200 servers with 72 GPUs per rack, delivering 865,000 tokens per second, positioning it as 10x more powerful than the current fastest supercomputer. The facility uses closed loop liquid cooling with zero operational water waste and a two-story rack configuration to minimize latency, while Azure's re-engineered storage can handle over 2 million read/write transactions per second per account with exabyte scale capacity. Microsoft is building identical Fairwater data centers across the US and partnering with nScale for facilities in Norway and the UK, all interconnected by an AI WAN to create a distributed supercomputer network that pools compute resources across regions.
The data center specifically targets OpenAI, Microsoft AI, and Copilot workloads, with Azure being first to deploy Nvidia GB200 at data center scale, a notable advantage over AWS and GCP. The investment represents tens of billions of dollars and positions Microsoft to offer frontier AI training capabilities that smaller cloud providers can't match, though pricing details weren't disclosed and will likely command premium prices. [00:37:48] Speaker A: Yeah, it's interesting how AI is starting the data center arms race again, right? Because it used to be electricity, then it was density, and now it's everything. But it is kind of. It is, it is neat, and I'm such a dork that I love reading these articles, because, you know, even though you're not getting a total insight into all the details, you get a little bit. And like, you know, I'm happy to see them using closed loop things, because that's a constant, you know, challenge for AI, is it's going to drink all our water. Yeah, it's fun. [00:38:20] Speaker D: I find it interesting, just like, the building of this. Like, like, how you physically build the building. And I have a bunch of friends from college that are architects, one of which actually builds data centers and has built them for multiple hyperscalers now. But you know, she's talked about like, how they modularly built it, but like, building two floors is a big deal, because like, the weight of a server rack and batteries and everything else in that rack straight down is a lot. So one, the idea of a two-story rack itself, it's just a lot of weight, and like, to minimize that, like, that's a lot. [00:38:59] Speaker A: I don't think it's physically married, because there's some pictures in the article, and it's not like a rack sitting on a rack. It looks like a giant bunch of network cables going up towards the sky, but I think the rack above is sitting on its own structure.
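Back-of-envelope on the figures from the summary: if the quoted 865,000 tokens per second is a per-rack number for the 72-GPU GB200 racks (an assumption; the summary doesn't say at what granularity it was measured), that works out to roughly 12,000 tokens per second per GPU:

```python
# Back-of-envelope from the figures in the summary. Whether 865k
# tokens/sec is per rack is an assumption, not stated explicitly.
TOKENS_PER_SEC = 865_000
GPUS_PER_RACK = 72

per_gpu = TOKENS_PER_SEC / GPUS_PER_RACK
print(round(per_gpu))  # ~12,014 tokens/sec per GPU
```

Numbers like this are only meaningful relative to a specific model and batch size, which the article doesn't give, so treat it as scale-of-magnitude only.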
[00:39:14] Speaker D: No, that's what, that's what I think it is too, but it's still its own structure, which means, like, it's more heat at the top versus the bottom. Like, there's a lot of power connect things. Like, it's a lot of interesting things. [00:39:27] Speaker A: Yep. [00:39:28] Speaker C: I love that they decided to call their WAN an AI WAN. Is there any AI in the WAN? [00:39:33] Speaker A: Nope. I was like, okay, I mean, it's for AI, right? For AI, talk to AI, and the AI that's all the way. [00:39:40] Speaker C: I mean, I hope someone starts dropping AI subsea cables, like, it's an AI deep sea cable. So yeah. [00:39:51] Speaker B: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one or three year AWS savings plan with a commitment as short as 30 days. If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace. [00:40:31] Speaker C: Azure SQL Managed Instance now offers three update policy options: Always-up-to-date for immediate access to new SQL engine features, SQL Server 2022 for fixed feature sets matching on-premise versions, and the new SQL Server 2025 policy in preview that provides database portability while including recent innovations like vector data types and JSON functions. The SQL Server 2025 policy bridges the gap between cloud innovation and enterprise requirements for regulatory compliance or contractual obligations, allowing organizations to maintain compatibility with on-premise SQL Server 2025 while benefiting from managed service capabilities.
Key technical additions in the 2025 policy include optimized locking for better concurrency, native vector database types for AI workloads, regular expression functions, the JSON data type with aggregate functions, and the ability to invoke HTTP REST endpoints directly from T-SQL. Which, what could go wrong there? More business logic in the database, not fronted by an API. Who doesn't love it? [00:41:28] Speaker A: Yeah, it is. It's interesting, the mention of, like, you know, the newer Microsoft SQL helping you stay, you know, compliant with regulatory frameworks. Like, that's so interesting. Like, I guess because of cloud access policies? Like, I'm not sure. It's kind of interesting. [00:41:45] Speaker C: Yeah, it's interesting how that ties together. I mean, there are definitely some things that they did in Azure SQL that you can't get access to in on-prem SQL. By enabling this connection together, you can get some of those advantages. They're cloud powered is my assumption. [00:42:01] Speaker D: No. So this is different, because Azure is complicated. Because Azure. [00:42:06] Speaker A: Of course it is. [00:42:07] Speaker D: You have Azure SQL, which is RDS. It's fully managed. You have Azure managed instances, which are this Azure SQL Managed Instance, which is. [00:42:15] Speaker C: SQL on a server, managed by Microsoft or managed by me? [00:42:21] Speaker D: You have access to the server, but they give you extra visibility and everything else into the SQL on that box, and they can now do the upgrades and stuff like that. So it's still a SQL Server, because, for example, you can't run SSRS on the PaaS Azure SQL, you have to run it on the managed instance SQL. [00:42:44] Speaker A: Right. [00:42:44] Speaker C: Because it's not a full. It's not a true full SQL instance. Yeah, okay. [00:42:48] Speaker D: So this story is just about the managed instance, which is not to be.
There's also a third kind that I don't remember offhand, but just to really confuse everyone. [00:43:01] Speaker C: Yeah. So basically, the things that you can get in Azure SQL Managed Instance that you can't get in SQL on-prem, by connecting these things together you can get some of those capabilities, is basically the gist. I just, I just mistakenly said the Azure SQL part. [00:43:14] Speaker D: Yes. [00:43:15] Speaker C: Okay, thanks. Thanks for. [00:43:16] Speaker D: I think. [00:43:17] Speaker C: I think so. [00:43:18] Speaker A: I don't know. [00:43:19] Speaker D: There's like an 80% chance. It's Friday, guys. Just say. [00:43:25] Speaker C: Azure has gone crazy on Kubernetes this week with a bunch of AKS announcements. First up is AKS Automatic, which delivers production-ready Kubernetes clusters with one-click deployments, removing manual configuration of node pools, networking, and security settings while maintaining full Kubernetes API compatibility and CNCF conformance. Services include automated scaling via Karpenter for nodes and built-in HPA, VPA, and KEDA for pods, plus automatic patching, Azure Monitor integration, and Microsoft Entra ID authentication configured by default. They say this makes it better than or competes with GKE Autopilot and EKS Fargate, giving you that fully managed experience of those providers on Azure. The Automatic tier includes Azure Linux nodes by default, GPU support for AI workloads, and integration with Azure's broader platform services, though pricing details aren't specified beyond the automatic tier selection during cluster creation. The service addresses the Kubernetes tax by automating day two operations like upgrades and repairs, allowing teams to deploy directly from GitHub Actions while Azure handles infrastructure management automatically. I mean, the Kubernetes tax for me is not. Yes, this is part of the tax, but the other part of the day two tax is managing, you know, Kubelet and access to Kubelet and all that.
Stuff. So you know, there's lots of things that are part of that day two tax, not just managing the host itself. [00:44:42] Speaker A: Yeah, in my day job I'm doing a whole bunch of, like, vulnerability reporting on the container structure, and I'm like, half of these containers are just the Kubernetes infrastructure. [00:44:51] Speaker D: Like, it's crazy by the time you have nginx and everything else that's in. [00:44:57] Speaker C: There, you know, Istio containers, sidecars, all the security tooling. Like, it's so much stuff just to manage it. [00:45:04] Speaker A: Nine different metrics containers for some reason, you know, like, and we have to host Prometheus, and, like, wow. [00:45:10] Speaker C: Yeah. [00:45:10] Speaker D: I always say Kubernetes is as complicated as a cloud. [00:45:14] Speaker C: I mean, it's basically a new version. [00:45:16] Speaker D: Of VMware, its own cloud. [00:45:18] Speaker A: Yeah, yeah. [00:45:20] Speaker C: One of the ones here was the Azure Linux being built into it gets you CIS Level 1 benchmarking by default, which is the only AKS-supported distribution to do that, and includes FIPS and FedRAMP compliance certification. So if you are trying to do Azure and FedRAMP, AKS Automatic is a great path for you, because it's part of that service, as long as it's been certified. So check that first. But I appreciate that they are giving you an option that meets both the FIPS and the CIS Level 1 benchmarks out of the box. [00:45:46] Speaker A: Yes, that's cool. [00:45:49] Speaker C: Databricks One is consolidating data engineering, analytics, and AI development into a single platform within Azure Databricks, addressing the common challenge of fragmented data workflows across multiple tools and services. The platform introduces unified governance across all data operations, which could help enterprises meet their compliance requirements while reducing the complexity of managing permissions and access controls.
This positions Azure Databricks competitively with all of the other cloud providers' data services, and target customers include enterprises struggling with data silos and organizations looking to accelerate their AI/ML initiatives without managing multiple platforms and governance frameworks. Pricing details aren't provided because it's in preview, but consolidation typically reduces operational overhead, though it may increase platform lock-in considerations for organizations evaluating a multi-cloud strategy. Although you can get Databricks on all the other clouds. One thing: if you are already an Azure Databricks customer, I did see a deprecation notice that when this goes GA, the legacy Databricks is dead. So if you didn't want this, you are going to get it forced on you at some point. [00:46:50] Speaker A: Yeah, well, hopefully the data and stuff can all stay. [00:46:52] Speaker C: I'm sure the data can still stay. It's the management that is going to change, from the control plane. [00:46:57] Speaker A: I mean, I'm a big fan of these control planes, and I agree getting it all consolidated in one place makes it a lot easier. I do find it, you know, it's interesting that Azure is just partnering with Databricks to build this in, you know, and offer it on Azure. It's kind of cool. [00:47:11] Speaker C: Azure has new acronym soup for you in instances. This is the HBv5-series VMs, which are launching in preview in South Central US, targeting memory bandwidth intensive HPC workloads like computational fluid dynamics, automotive simulations, and weather modeling that require extreme memory throughput and performance. These VMs represent Microsoft's latest push into specialized HPC infrastructure, competing directly with instances like the X2gd and the M3 series at Google. HBv5 instances feature AMD's latest EPYC processors with enhanced memory bandwidth capabilities, though specific technical specifications were not provided in the article.
Target customers include automotive manufacturers running crash simulations, aerospace companies modeling aerodynamics, and meteorological organizations processing weather prediction models, all bottlenecked on that memory bandwidth. [00:48:00] Speaker D: It's amazing how specialized different workloads are to different instance types. And I very rarely had to be like, okay, besides, this is web or workers or, you know, something like that. You know, be like, okay, I am so deep into this system that I need high throughput memory in order to get the best performance out of this. Like, I've done it with, like, different HPC workloads and a few other small things, but never to that level of the extreme, where it's like, I need this specialized instance to focus on that. [00:48:34] Speaker A: Yeah, I was really happy in the, in the article that they, you know, they sort of talked about what this was good for. Right? Because I wouldn't have had any clue if it wasn't for, you know, the mention of, like, you know, simulations and modeling fluid dynamics. It's like, why would you need this? It's pretty cool. [00:48:50] Speaker D: I mean, I'm most curious why they chose that region to launch it, South Central US, but that's a different story. [00:48:57] Speaker A: I bet you it's a customer. [00:48:59] Speaker C: I say there's probably a customer there. [00:49:01] Speaker D: It's 100% that. But like, what's the customer? Like, normally I feel like you can kind of gather what the customer's like. [00:49:07] Speaker C: Fluid dynamics, weather modeling, and automotive simulation. Let me think, where is the Weather Channel located? They're in Atlanta. [00:49:15] Speaker D: Yeah, but South Central isn't Atlanta. [00:49:17] Speaker C: No, I know, but it's not that far. Then I'm like, okay, who else would need potentially weather modeling? NASA might need that for the space shuttle or for launches of rockets, you know, things like that in Texas. [00:49:28] Speaker D: So it's SpaceX.
I thought it was further. [00:49:32] Speaker C: It's definitely not. I don't think it's SpaceX, but I. [00:49:34] Speaker D: Thought they were a big. [00:49:36] Speaker C: I mean, maybe they are. I don't actually know, but I think. [00:49:39] Speaker D: They were for some reason. [00:49:40] Speaker C: That's weird. Okay, their choice. You could have Amazon's right there. It's right there. Azure Functions are going to support .NET 10. If I knew what .NET 10 did, I'd probably care about this. But this is in public preview, allowing developers to leverage the latest .NET runtime improvements, including better performance and reduced memory usage, in their serverless applications. The upgrade requires updating the target framework, which is a pain in the butt. [00:50:09] Speaker D: And. [00:50:09] Speaker C: The Microsoft Azure Functions Worker SDK to version 2.0.5 or later, providing a straightforward migration path for existing .NET function projects. No, that's not straightforward. This positions Azure Functions competitively with AWS Lambda, which supports .NET 8, and Google Cloud Functions, which supports .NET Core 3.1, giving Azure a temporary advantage for .NET developers, where they already had an advantage because it's Microsoft. Enterprise customers running .NET code can now standardize on .NET 10 across their entire Azure stack, from App Service to Functions, simplifying dependency management and security patching. Likely this will lead to general availability sometime in early 2026, giving organizations time to test compatibility with their existing function code before production rollouts. [00:50:49] Speaker A: Yeah, I mean, I'm just happy to see .NET running in serverless workloads. I doubt, you know, like in my head it can't run well. But, you know. No, it's nice. [00:50:59] Speaker D: Good amount of it. Do you. [00:51:01] Speaker A: Nice. [00:51:01] Speaker C: Yeah.
The release date for .NET 10 is supposed to be November 2025, so it might be late 2025. [00:51:08] Speaker D: So at Ignite. [00:51:10] Speaker C: Yeah, most likely. A release candidate was released on September 9th, so you're already into that. [00:51:16] Speaker D: Yeah. [00:51:17] Speaker C: What's new in .NET 10? Because this is great for listeners: runtime and just-in-time improvements, including inlining, method devirtualization and stack allocations. It also includes AVX10.2 support, native AOT enhancements, improved code generation for struct arguments, and enhanced loop inversion for better optimizations. New .NET libraries, which makes sense. The .NET SDK gets updated with new .NET testing capabilities and dotnet tool exec capabilities for the CLI. .NET Aspire is coming. ASP.NET Core 10.0 includes Blazor improvements, OpenAPI enhancements and minimal API updates. And C# 14 and F#. That's my favorite one. I don't know what it does, but it's F#. That's all I got. [00:52:03] Speaker D: I don't think I understood half those things you said. [00:52:07] Speaker A: I've never even heard of that. [00:52:08] Speaker D: I want to know what the loop inversion is. [00:52:12] Speaker C: I don't know what F# is. [00:52:15] Speaker A: You're just trying to torture our listeners at this point. [00:52:17] Speaker C: It's a functional-first, open source, cross platform programming language running on the .NET platform, known for its succinct, robust and performant code. It features lightweight syntax, type inference, immutable-by-default data, and supports both functional and object oriented programming models. [00:52:37] Speaker A: It sounds like they built a version of C that runs on serverless .NET. [00:52:42] Speaker C: MAUI is coming out of this too, which is basically .NET for mobile devices plus Mac natively. So down to the desktop and, in fact, to your Macintosh.
[00:52:51] Speaker D: Wait, so improved loop inversion is just a do-while loop versus a while loop? [00:52:58] Speaker C: Cool. [00:53:00] Speaker A: What? [00:53:01] Speaker D: I'll read this later, but what? [00:53:03] Speaker A: Yeah, that's like an early in the day research thing, not a late in. [00:53:08] Speaker C: The day research thing. Yeah, you're going to have to use more research on that one. All right, let's move on before we get into more .NET and F#. Azure Monitor Container Insights now offers high scale mode in general availability, enabling higher log collection throughput for Azure Kubernetes Service clusters that generate substantial logging volumes. I mean, just by definition, Kubernetes generates a significant amount of logging volume; they're turning it up. This addresses a common pain point for enterprises running large scale AKS deployments where standard Container Insights might struggle with log ingestion rates during peak loads or debugging scenarios. Target customers include enterprises with high transaction microservice architectures, financial services running real time processing, and any AKS workload generating logs beyond standard collection limits. Microsoft hasn't detailed specific pricing changes; customers should evaluate whether the improved throughput justifies potential increased costs from higher log ingestion and storage volumes. And I'm going to tell you it does not justify it, because those logs are mostly garbage. [00:53:58] Speaker A: They're mostly garbage and, yeah, the storage and the ingest rate cost you, so you're paying for it twice. [00:54:06] Speaker D: It's the same thing as CloudWatch. It's so expensive to take logs into any of these platforms, but you got to get them somewhere, so you kind of just are stuck paying for it. [00:54:16] Speaker A: I think they should be more expensive, like, because it's not expensive enough; people don't tune their logs at the current rates.
[00:54:22] Speaker D: Developers don't care about the price though. You've had developers; they care about the price of their logging? Getting them to not tell me that they need, like, you know, 84 cores to run a simple web app is hard enough. [00:54:38] Speaker A: I mean, the minute you bill them, they care, but it's hard. Yeah, yeah. [00:54:42] Speaker C: Azure Database for PostgreSQL now supports confidential computing through hardware based trusted execution environments, or TEEs, ensuring data remains encrypted even during processing and preventing unauthorized access from cloud administrators or malicious insiders. [00:54:56] Speaker A: Right. [00:54:56] Speaker C: First question: does Microsoft SQL support confidential computing? Do either of you know? [00:55:03] Speaker A: I have no idea. [00:55:03] Speaker D: Does Microsoft SQL... real time lookup. [00:55:07] Speaker C: The feature leverages Intel SGX or AMD SEV technologies to create isolated compute environments. Customers should expect performance overhead of 10 to 20% and potential limitations on certain PostgreSQL extensions. Primary use cases include multi-tenant SaaS apps processing sensitive customer data, compliance with data residency requirements, and organizations needing to demonstrate zero trust security models to auditors. [00:55:29] Speaker D: Yes, it's called secure enclaves. So I knew that they had secure enclaves; I just didn't know that meant the same thing. [00:55:35] Speaker C: Yeah man, I knew they had enclaves too, but I just didn't know they were supporting it. [00:55:39] Speaker A: Yeah, so it's just running Microsoft SQL on an enclave, you know, because that's just basically a confidential computing server at that point, right? [00:55:48] Speaker D: Yeah, that's what my 30-second Google search and reading Microsoft documentation live has told me. So highly plausible I'm wrong here. Check the AI, check the Cloud Pod.
[00:56:03] Speaker C: The real time follow up on your improved loop inversion: it doesn't have to do with while versus do-while, it has to do with where the while condition comes into play. By moving the condition of the while to the bottom of the loop, the just-in-time compiler removes the need to branch to the top of the loop to test the condition, which improves your code layout. It also enables numerous optimizations, including loop cloning, loop unrolling and induction variable optimizations, which allows the loop to run faster. So just think about the loop as a big circle: how do you shorten the circle? By moving the condition later in the process. [00:56:35] Speaker A: There you go. [00:56:37] Speaker C: You're welcome. [00:56:38] Speaker D: Still don't understand all that, but good enough. [00:56:40] Speaker C: I mean, it was better than: it's the difference between a while true. [00:56:46] Speaker D: I also think I zoned out in the middle of it. So, you know. [00:56:49] Speaker C: Yeah, yeah. [00:56:50] Speaker A: Make code better. [00:56:52] Speaker C: Yeah. Azure Database Migration Service Hub provides a centralized dashboard for discovering, assessing and tracking SQL Server migrations to Azure, addressing the complexity of managing multiple migration projects across enterprise environments. I almost want the service. The service automatically discovers SQL Servers in your environment and provides readiness assessments, helping organizations prioritize which databases to migrate first based on dependencies and potential blockers. Or, this is going to cost you a lot of money; this is going to need really big servers. Microsoft plans to expand beyond SQL Server to support multiple RDBMS migrations, and to add real time migration tracking with status monitoring, error reporting and completion metrics directly in the dashboard. The hub experience targets enterprises consolidating data centers or modernizing their legacy SQL Server deployments.
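Circling back to the loop inversion follow-up above, here's a source-level sketch of the idea. This is illustrative only: the .NET JIT performs the transformation on machine code automatically, and the function names here are made up for the example.

```python
# Loop inversion, sketched at source level.

def sum_top_test(n):
    """Standard while loop: the condition is tested at the top, so every
    iteration branches back to the top of the loop just to re-test it."""
    total, i = 0, 0
    while i < n:
        total += i
        i += 1
    return total

def sum_inverted(n):
    """Inverted ("rotated") loop: one guard test up front, then the
    condition moves to the bottom, so the body falls straight through
    and only one conditional branch runs per iteration."""
    total, i = 0, 0
    if i < n:               # guard: skip the loop entirely if it's empty
        while True:
            total += i
            i += 1
            if i >= n:      # the while condition now lives at the bottom
                break
    return total
```

Both functions compute the same result; the payoff in a real JIT is better code layout plus enabling the downstream optimizations mentioned above, like loop cloning and unrolling.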
I mean, it's nice that this is getting improved. I want the features that aren't out yet, though. [00:57:38] Speaker A: Yeah, and it's, you know, a great play by Azure, because they have a huge advantage in the space and I think there is a desire by a lot of companies to get out of their legacy deployments. So, smart. Hurry up with the features. [00:57:54] Speaker C: Azure Managed Service for Prometheus now embeds Grafana dashboards directly into the Azure portal at no additional cost, eliminating the need to manage separate Grafana instances for basic visualization needs. The integration reduces operational overhead by providing out of the box dashboards for common Azure services while maintaining compatibility with existing Prometheus query language workflows. Target users include DevOps teams and platform engineers who need quick metric visualization without the complexity of managing dedicated Grafana infrastructure. Particularly useful for Azure native workloads, this simplifies basic monitoring scenarios. Organizations with complex visualization requirements or multi cloud deployments will still likely need a standalone Grafana instance. I also look forward to the arguments between, well, the Azure monitoring says this while the Grafana monitoring says this, and it's in the same dashboard, it's in. [00:58:38] Speaker A: The same screen and it's the same telemetry. But nope. [00:58:43] Speaker D: It's going to happen. Let's be honest. I mean, it's nice; it's one less thing of running Grafana, running your own service, securing it, everything, et cetera, et cetera. So if you're just using it for basic stuff, because you set up your simple AKS cluster and you want to have your Grafana with Prometheus and everything running there, it's just a nice, easy, simple step to, you know, get something up and running quickly. [00:59:10] Speaker C: Yeah.
[00:59:11] Speaker A: And, you know, it's this kind of removal of toil that prevents someone from just, you know, making their Grafana dashboard open on the Internet with no auth and putting it out there, which, of course, just makes it a target. [00:59:26] Speaker C: So great. [00:59:27] Speaker D: Sounds like you've seen that before? [00:59:29] Speaker A: No, no, never once. [00:59:34] Speaker C: For those of you in Europe, you're now getting at-cost data transfer from Azure for customers moving data from Azure to external endpoints via the Internet in Europe, eliminating the typical egress fees that can make multi cloud or hybrid strategies expensive. This move directly addresses vendor lock-in concerns by reducing the financial barriers to data portability, making it easier for European customers to adopt multi cloud architectures or migrate workloads between providers. The feature is limited to European regions and CSP partners initially, suggesting Microsoft is responding to EU regulatory pressure about data sovereignty and cloud provider switching costs. Now enterprise customers running hybrid workloads or needing to regularly sync large datasets between Azure and on-premise systems will see immediate cost benefits, particularly for backup, disaster recovery and data lake scenarios. I'm still hoping for one of these cloud providers to just do it for everybody, all regions, and just say, like, screw it, Europe made us do it, but we're doing it for all because it's the right thing to do. [01:00:24] Speaker D: But they're going to lose too much money on it. [01:00:26] Speaker A: Yeah, it's a huge profit center, right? [01:00:27] Speaker C: Yeah, yeah. But it would be so nice and. [01:00:31] Speaker A: It would be really nice. [01:00:32] Speaker C: I would like whatever cloud provider that was 5% better than I like them today. [01:00:38] Speaker D: What if it's Azure?
[01:00:40] Speaker C: It'd be negative 30 to negative 25, so it'd be great for them. Azure Firewall Manager has been rebranded as Network Security Hub, consolidating Azure Firewall, web application firewall and DDoS protection into a single management interface for simplified security operations. This centralization addresses a common pain point where customers had to navigate multiple portals to manage different security services, now providing unified policy management and monitoring across network security tools. Primary use cases include enterprises managing complex multi region deployments who need consistent security policies across Azure Firewall instances, WAF rules and DDoS protection. Pricing remains unchanged, so that's nice. [01:01:23] Speaker D: From my preliminary research, it's just a nice GUI update that they've done to kind of make it a little bit cleaner, it looks like. Still, you know, from my experience it's easier to manage some of these things just with Terraform anyway. But, you know, they're trying to make this better for companies at larger scale. [01:01:43] Speaker C: Yeah. [01:01:45] Speaker A: I've seen it, you know, and I remember as Google's added their services, they kind of built this model as well, and so it's becoming more and more standard, except for AWS, because they like their separate portals for everything. [01:01:59] Speaker C: In September, Microsoft released a bunch of features, over 100 new ones, to Fabric to help with engineering, analytics and AI workloads, with key additions including general availability of governance APIs, Purview data protection policies, and native support for Pandas DataFrames in user data functions that leverage Apache Arrow for improved performance. I don't know what that last part meant, but it's fine.
The new Fabric MCP server enables AI-assisted code generation directly within VS Code and GitHub Codespaces, while the open source Fabric CLI and new extensibility toolkit allow developers to build custom Fabric items in hours rather than days using Copilot-optimized starter kits. What could go wrong? Real time intelligence capabilities expand significantly with Maps visualization for geospatial data and a 10x performance boost for Activator, which now supports 10,000 events per second and direct Azure Monitor logs integration via Eventstream. Data Factory introduces new pipelines and adds 20 new connectors, including Google BigQuery and Oracle, and enables workspace level workload assignments, allowing teams to add capabilities without tenant wide changes while maintaining governance controls. I appreciate all this Fabric stuff. Fabric is Azure's Q: everything just gets dumped there. [01:03:06] Speaker A: I still don't really know all of the things that are in there. Right. Yeah. Like, where does Fabric end and their Databricks offering begin? [01:03:13] Speaker C: Right. [01:03:13] Speaker A: Because it does seem in direct competition. [01:03:16] Speaker D: Now I'm wondering if one actually links to the other just to really confuse us. [01:03:20] Speaker A: Well, it sounds like a lot of these upgrades are specifically so that it can, yeah. [01:03:25] Speaker C: And then Microsoft has apparently developed, using AI, a microfluidic cooling system that brings liquid coolant directly inside processors through vein-like channels, enabling servers to run hotter and faster through overclocking while handling spiky workloads like Teams meetings without needing excess idle capacity. The cooling system is up to 3x more effective than current cold plates at removing heat from chips' hottest spots, which can have heat density comparable to the sun's surface, and Microsoft plans to integrate this into future Azure Cobalt chips and Maia AI accelerators.
This positions Microsoft to compete with AWS and Google for high performance compute workloads, and Microsoft is making this an industry standard through partnerships, potentially enabling future 3D chip stacking architectures where coolant flows between silicon layers, a development that could significantly advance computing capabilities beyond current limitations. They've announced partnerships with Corning and Heraeus for hollow core fiber production to reduce data center latency, and with Stegra for green steel that cuts carbon emissions by 95% in data center construction, all trying to make their green energy aspirations true. [01:04:28] Speaker A: I mean, necessity, you know, is the mother of all innovation. [01:04:32] Speaker C: Right. [01:04:32] Speaker A: And so this is not only, you know, trying to offset with carbon credits, but it's also all the demand for AI and more compute and less space and less power and water. So I think it's neat to see, you know, innovations come out of that, and the way they make it sound just makes it seem like sci-fi, which is cool. [01:04:51] Speaker D: Yeah. I'm trying to figure out how the. [01:04:53] Speaker C: Picture is pretty cool. It's like, yeah, individual cells of water. [01:05:01] Speaker D: You're pumping it between layers of silicon. Like, it's kind of crazy that they're able to do that. I also just really like how on this article page I have an ad that's talking about Microsoft Azure and an ad for AWS re:Invent. [01:05:18] Speaker A: Just throw that one out there. [01:05:21] Speaker C: All right, and our final Azure story. We made it, guys. [01:05:24] Speaker D: Oh, sweet. [01:05:25] Speaker A: Thank you. [01:05:26] Speaker C: Azure Application Gateway now maintains full capacity during upgrades by automatically provisioning new gateway instances, eliminating the performance degradation that previously occurred during maintenance windows. Really?
Zero downtime upgrade capability. I'm just saying, that's great. The zero downtime upgrade capability addresses a common pain point where load balancers would operate at reduced capacity during updates, potentially causing slowdowns for high traffic apps. Enterprise customers running mission critical workloads will benefit the most, as they no longer need to schedule maintenance windows or over provision capacity to handle upgrade periods. And while the announcement doesn't specify additional costs, the automatic provisioning of temporary instances during upgrades may result in brief periods of increased compute charges, so do check your bills and check with your rep if you're unsure on that one. [01:06:07] Speaker D: I have feelings. [01:06:08] Speaker A: I don't think Matt's ever going to forgive us. No matter how much they increase the uptime of these things. [01:06:16] Speaker D: I mean, it was as simple as, you know, about two years ago they added the feature called max surge, which is, you know, when you have a scale set, you add a node and then you delete one. So here they are adding that to their app gateways; essentially, if you have 10, you would go to 11 and then you would remove one of the original ones, and they essentially are just leveraging that as part of the app gateways now. Before, they would drop to 9 and replace, which is always fun if you are close to your peak workload either way. So it's just one of those things that is crazy that they didn't have this earlier, and their solution was just to add an extra, you know, n number of nodes. But if you're also auto scaling, which, if you have an app that can handle that, you don't control your nodes, so you would just lose capacity at some point. So it's one of those quality of life improvements. [01:07:14] Speaker A: Glad they caught up, you know, to something that I've been doing for the last 10 to 15 years. Awesome.
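The max surge behavior described above, add a replacement instance before retiring an old one, can be sketched in a few lines. This is a conceptual illustration with made-up helper names, not the Azure API:

```python
# Sketch of a "max surge" rolling upgrade: capacity never drops below n,
# because a new instance is added before an old one is removed.

def rolling_upgrade(old_instances, make_new, capacity_log):
    """Upgrade a fleet one instance at a time with a surge of +1."""
    fleet = list(old_instances)
    for _ in range(len(old_instances)):
        fleet.append(make_new())        # surge: n -> n + 1
        capacity_log.append(len(fleet)) # record capacity at the peak
        fleet.pop(0)                    # retire one old instance: back to n
    return fleet

log = []
fleet = rolling_upgrade(["old-%d" % i for i in range(10)],
                        make_new=lambda: "new", capacity_log=log)
```

With 10 instances, capacity briefly surges to 11 each step and never dips below 10, unlike the old drop-and-replace behavior that briefly ran at 9.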
[01:07:21] Speaker D: I don't think the App Gateway team likes me. [01:07:24] Speaker A: Probably not. [01:07:28] Speaker C: Well, on to someone who hates all of us: Oracle is setting the standard in enterprise AI, apparently, with comprehensive AI capabilities across its cloud platform, positioning itself as an enterprise AI standard with integrated solutions spanning infrastructure to applications. Oracle's AI strategy centers on three pillars: AI infrastructure with Nvidia GPUs and an OCI Supercluster, embedded AI in all their SaaS applications, and custom AI development tools. A vertical integration play that AWS and Azure don't match, but may lock customers deeper into Oracle's ecosystem. The company claims 50-plus AI features across Oracle Cloud Applications, including supply chain optimization and financial forecasting, but specific performance metrics and customer adoption rates were not disclosed, of course, making it difficult to assess the real world impact. The OCI Data Science platform now integrates automated ML capabilities and pre-built models for common enterprise tasks, competing directly with SageMaker and Azure ML but arriving years later to market with unclear differentiation beyond Oracle Database integration. Oracle emphasizes their responsible AI with built-in governance and explainability features, addressing enterprise concerns about AI transparency, though the implementation details and how this compares to competitors' AI governance tools remain vague. The integrated approach from infrastructure to applications could simplify adoption for existing Oracle customers, but may struggle to attract enterprises already invested in hyperscaler AI platforms unless pricing is significantly competitive. And the best thing about this article when you read it is that they basically imply that they invented AI, which is the best part of the whole article. [01:08:53] Speaker A: Yeah, it really has a weird tone. Like, I agree.
Like, it's just this... and I wanted to say at first it's typical of Oracle, but it's somehow different and the same at the same time, which is, like, it's very blustery, very. [01:09:08] Speaker C: Devoid of detail. [01:09:10] Speaker A: And it feels like it's not going to be true in the end. Right? It's close, but not. [01:09:16] Speaker C: You mean like Unbreakable Linux. Yeah. [01:09:18] Speaker A: Just like that. [01:09:22] Speaker C: Yeah. I'm glad they think they set a standard. And for Oracle customers, I'm glad they probably have something. The question I would have is how many net-new Oracle customers do they get? Like, you know, does a new company sign up and say, hey, you know what, I want Oracle Financials? Do you think that happens? Or, you know, new hotels, like, I want Oracle Hospitality Suite, or their ERP solutions. I mean, is there an industry where Oracle is the preferred choice still today? Maybe Epic, or not Epic, but Cerner for healthcare, maybe. I don't know. [01:09:56] Speaker A: I mean, you see it where leadership will move and then they like their platform that they know. So maybe there's that kind of use case. But yeah, I mean, I can't imagine. [01:10:04] Speaker D: People have to die off. [01:10:06] Speaker C: Yeah. And Oracle keeps hiring new people. [01:10:10] Speaker D: There's a ton. [01:10:11] Speaker A: And they're going to get people just because of the compute and GPUs being available, you know, and. [01:10:16] Speaker C: There's, I mean, using their GPUs is not the same as using Oracle Database or Oracle applications. [01:10:22] Speaker A: So. Yeah, you know. Yeah, I wish OCI had kind of a better product strategy other than Oracle Database, kind of, you know, because it does really feel like that's what. [01:10:36] Speaker D: They know, that's their market. [01:10:38] Speaker A: That's the only thing. Yeah. Yeah. [01:10:42] Speaker C: All right, guys.
Well, that's it. We made it through. [01:10:45] Speaker A: Bye, everybody. [01:10:46] Speaker D: Bye, everyone. [01:10:50] Speaker B: And that's all for this week in cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services. While you're at it, head over to our website, where you can subscribe to our newsletter, join our Slack community, send us your feedback and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode. [01:11:22] Speaker C: But I do have an after show. Oh, sorry, I was forgetting my segue to it. Yeah, there was a great article this week by O'Reilly where they basically talked about how prompt engineering is fundamentally requirements engineering; you know, you take fundamental requirements engineering and apply it to AI interactions. The same communication challenges that have plagued software development since the 1960s now appear when working with AI models to generate code or solutions. Prompt engineering has emerged as a critical skill for cloud developers using AI tools. Determining what information to include in prompts (surrounding code, test inputs, design constraints) directly impacts output quality, similar to how requirements scope has always affected project success. The shift from static documentation to iterative refinement mirrors Agile's evolution: just as user stories replaced heavyweight specifications, prompt engineering requires continuous conversations with AI rather than single shot commands, though AI won't ask clarifying questions like human teammates. Cloud based AI services amplify traditional requirements failures: when AI generates code directly from natural language without structured syntax guardrails, small variations in problem framing can produce significantly different outputs that look plausible but fail in practice.
Organizations falling into the prompt library trap repeat 1990s template mistakes: standardized prompts can't replace the core skill of understanding and communicating intent, just as perfect requirements templates never guaranteed successful software delivery. [01:12:47] Speaker A: Yeah, I couldn't agree more. I mean, this is my life these days, exactly this. And prompt engineering has really taught me to be better, because for a long time my AI projects were just a disaster: I would ask for something and it would give me exactly what I asked for, and it had an interpretation that wasn't what I intended. And, you know, sometimes you end up too deep to take yourself out, and it's terrible. But now I've learned some tricks. I reverse it a lot; I mean, I start the prompt with, you know, interview me on what I want to achieve, and then, you know, take the conversation from there. And then from there you need to give it a lot of structure, right? Because it'll go off on tangents and it will build components in a structure that is unsupportable. But you can tailor all that, and you give things instructions, and you can port instructions from workspace to workspace these days in a lot of tools. So it's a lot easier than it used to be. But it's not quite like an addition to your team that can just go off on its own, for sure. Needs a lot of supervision. [01:14:04] Speaker C: Yeah, there's a bunch of articles about, you know, basically: we were vibe coding, and then everyone was like, vibe coding is bad, you need to be really specific on your requirements and give it very good test cases and all these things. It was like, oh, so software engineering. [01:14:21] Speaker A: Yeah, exactly. That's funny. It is sort of how it feels in a lot of ways. Right? It's new, but it's all the same again.
[01:14:35] Speaker D: Yeah. I find that even when I use it, I wind up kind of talking to it as a junior developer, almost. So like, hey, I need a script that does this, and this is kind of the logic flow I'm thinking: start here; if this, then go this way, else kind of go this way; make sure you have error handling in this area, because I know this area is going to be a problem. So kind of at that junior level, intern level. If I give it that structure, I find it's pretty good. I haven't done the interview me; I'll have to play with that one. But I find that if you, like, almost stub the code out, but do it in English, that also helps it kind of stay in line. It'll still, like, add a lot of error checking and stuff like that, but I can get it to keep to what I want. And then it's like, okay, I said to do this and you are 100% correct, you did what I told you to do, but that is not what I really meant, so go rewrite this section. [01:15:33] Speaker A: Yeah, well, and that's what the interview really gets at, right? It's not you giving it requirements and then letting it interpret what you're providing it; it's more that it asks you questions, and by it asking you questions, you realize that its interpretation is off, right? And so you can correct at that point. And we're just working on a document. [01:15:55] Speaker C: At this point, right. [01:15:56] Speaker A: It's not writing code yet. It's writing a project plan or it's writing a requirements doc, and it's a useful tool in that way, and it also, you know, helps document the project and the status. And I kick off everything with having it basically break it down into, you know, individual JIRA stories and sort of managing it that way. So, like, pretending I'm on a software development team, basically, even though it's just me.
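The "stub it out in English" structure described above, state the task, sketch the logic flow, flag the risky spots, and ask the model to interview you first, can be captured in a simple template. This is purely illustrative; the function and field names are made up and not any particular tool's API:

```python
def build_prompt(task, logic_flow, risky_areas):
    """Assemble a junior-developer-style spec prompt: state the task,
    stub the logic out in English, flag where error handling is needed,
    and ask the model to interview you before writing code."""
    lines = [f"I need a script that does: {task}", "", "Logic flow:"]
    lines += [f"  {i}. {step}" for i, step in enumerate(logic_flow, 1)]
    lines += ["", "Add error handling around:"]
    lines += [f"  - {area}" for area in risky_areas]
    lines += ["", "Before writing any code, interview me with clarifying questions."]
    return "\n".join(lines)

prompt = build_prompt(
    task="parse a CSV of servers and report any missing patches",
    logic_flow=["read the CSV",
                "if a row is malformed, skip it and log it",
                "else look up the patch level",
                "write a summary report"],
    risky_areas=["the CSV parsing", "the patch lookup call"],
)
```

The point isn't the code; it's that forcing the prompt into this shape makes you write down the branches and failure modes before the model starts generating.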
[01:16:28] Speaker D: So I do that when... I guess it depends if I'm writing a script or, like, a full blown project. I feel like that's where it varies some. [01:16:37] Speaker A: Yeah, no, I do the interview almost always, but yeah, there's a very big difference between, like, I need to go get this data right now, or I need to go parse this data, versus writing an application that's going to, you know, handle vulnerability management across a fleet of production servers. [01:16:55] Speaker C: One of the things that someone gave me a tip on was, like, instead of typing your prompt, just talk your prompt to it. [01:17:03] Speaker D: Like voice to text. [01:17:04] Speaker C: Yeah, voice to text. Because, you know, when you're typing you kind of think about, you know, being succinct and very clear, because you're writing an email or you're writing something else or IMing somebody. But if you're talking, you know, as you're talking about something you're like, oh yeah, then there's this one thing to be aware of: you know, if you do this one part of it, you might break this other functionality, so we should write a test for that. Or, you know, like, oh, I'm not going to do it right now, but I really need to think about the fact that I'm going to eventually scale this out to be multiple web servers, and so we need to make sure we don't lock ourselves into a one way door on something like that. So you can have that conversation with it and then have it ask those questions back, like Ryan was talking about in the interview approach, and the combination of the two is actually really, really powerful. [01:17:43] Speaker A: I've yet to try the voice to text just because I think I'm a poor communicator and I'm nervous. I like being able to edit. [01:17:51] Speaker C: I mean, you can still edit, like, the voice to text. [01:17:53] Speaker A: I know it's editable, but it's not quite conversational speed yet. Right.
Like, you have a conversation and it kind of goes off and you wait.

[01:18:01] Speaker C: One of the things I like in Roo Code is that I can write my prompt, and then it actually has an AI button to fix your prompt, to make it better, which is kind of cool. And that's interesting, not so much because I need it to rewrite the prompt, but because it adds details I wouldn't always have thought to include, based on software development practices and things like that. Like, oh yeah, that's a good line, I should have put that in, but I didn't think about it at the time because I was thinking about my feature, or the thing I wanted it to do.

[01:18:28] Speaker D: Yeah.

[01:18:28] Speaker C: So that's one of the cool things about Roo Code that I really like too. But yeah, again, there are so many tools right now.

[01:18:33] Speaker A: I need it for image generation. Like, I need to give the AI a thing and have it parse it, because it will not generate what I have envisioned in my head unless it goes through a couple of cycles. Because it is really funny.

[01:18:48] Speaker D: Well, image generation is hard. I feel like it is hard.

[01:18:51] Speaker C: I know. It gets easier over time. You just have to see some really good examples of how to describe the thing you want, and the detail. It's like writing a story or a book: you're trying to set a scene. Like, Matt's in the room. Well, no, Matt's actually sitting in a forest, because he has a background with the sun coming in over his head, reflecting across his bald head. The tiredness of taking care of children is represented in the droopiness of his eyes. Sorry, I'm just picking on you.

[01:19:24] Speaker D: Wow. Taking shots all over.

[01:19:27] Speaker C: You're taking some strays on that one. Sorry.
Yeah, I didn't mean for that to get quite so bad. But again, you're trying to set the scene. Or like, Ryan's in his living room with his cat harassing him at the moment, and he's leaning against a red couch, and you're trying to give all those details to it. That's how you write a book: you're trying to set a scene. You're doing the same thing when you're trying to create an image. I'm looking for a cartoon version of a man in his 40s with glasses who is bald and slightly overweight; whatever you're trying to do, you're giving it as much of that detail as possible. And the nice thing now is the models have gotten so good. Like the Nano Banana update for Google Gemini, I think it was: you can now tell it, this specific part of the image I don't like, and it'll actually edit it. Where before, you had to do such a large amount of rewriting of what you wanted, because it would always generate a new image. But now, with both the new ChatGPT models and Nano Banana, you can actually edit just the thing you need to edit. Like, oh, you misspelled "computer," please rewrite it, and it'll fix only the part where "computer" was misspelled, which is nice.

[01:20:37] Speaker A: That is cool.

[01:20:38] Speaker C: So, like, a lot of those...

[01:20:39] Speaker A: That is the biggest thing: where you like it, except for this one thing, and you ask it for a change, and it just redoes the whole thing in a slightly different way that I don't like.

[01:20:47] Speaker C: Yeah, yeah. So try out the new Nano Banana stuff on Gemini, and the new ChatGPT 5 for images. It's gotten a lot better at being able to do edits. Like, oh, I really like this, but now I want the image to be round, or I want it to be slightly different, or I need a transparent background; it'll now do those things properly.
But again, you've got to think about it like writing a story when you're trying to generate an image with AI.

[01:21:09] Speaker D: What's funny is I was doing one; we have an upcoming audit, and I was like, okay, let's make a group of people chanting "audit, audit, audit," that type of thing. I wanted to do a video, just to start our audit prep meeting with that for fun. Because, you know, this is what I think is fun in my life. And I ended up doing an image, and it put the word audit in all capitals with exclamation marks. And then I was like, no, put audit three times. It just overlapped them, slightly offset, so the word audit was stacked on top of itself. I was like, you're not wrong, but you missed the point of what I wanted.

[01:21:46] Speaker C: Yeah.

[01:21:47] Speaker D: So like you were saying, it can definitely now just edit pieces of it, which is much better.

[01:21:52] Speaker A: You just reminded me, I have to generate stickers for completion. I will be testing out the new AI models.

[01:22:02] Speaker C: Well, good luck on that. All right, gentlemen, I think we've killed this week. It's time to go enjoy our weekends. Indeed. All right, see ya.

[01:22:13] Speaker A: Have a good one.
