346: Zuckerberg Finally Finds His People, They Are All AI Agents

Episode 346 March 19, 2026 01:18:38
The Cloud Pod | Weekly AI & Cloud News on AWS, Azure & GCP

Hosted By

Jonathan Baker, Justin Brodley, Matthew Kohn, Ryan Lucas

Show Notes

Welcome to episode 346 of The Cloud Pod, where the forecast is always cloudy! Hold on to your butts, because Justin, Ryan, and Matt are in the studio today, and they’re ready to bring you all the latest in Cloud and AI news, including the usual: Meta buying social networks, Amazon responding to outages, and OpenAI giving up another version of GPT. Let’s get into it! 

Titles we almost went with this week

Follow Up

00:51 Where things stand with the Department of War 

AI Is Going Great - Or How ML Makes Money 

01:21 Introducing GPT-5.4

02:19 Justin - “There’s also been a slew of every cloud provider in the world announcing ChatGPT 5.4 is now available, and we will not be telling you about all of them, but assume that if you use a different model or different cloud, they probably have it.” 

04:33 Introducing ChatGPT for Excel and new financial data integrations

04:49 Justin - “If I were a betting man, I’d also say they’re going to have a PowerPoint version any day.” 

06:13 Meet KARL: A Faster Agent for Enterprise Knowledge, powered by custom RL

07:09 Ryan - “It's kind of a neat idea to provide sort of the pipeline there. I mean, I guess the big cloud providers are producing agent-building platforms and stuff; I wonder how much of this you can follow the path that they use for creating KARL and building your own domain-specific agent in the same way. I like the idea. Smaller model, less GPU.”

08:55 Codex Security: now in research preview

10:07 Ryan - “I wish AI wouldn’t generate all those vulnerabilities in code… but I do like that these tools are available.”  

12:40 OpenAI to acquire Promptfoo 

13:36 Justin - “It's good that this company got bought; integrating it into the models is a great stepping stone, and I look forward to seeing more red teaming agents, because I think that's an area companies really have underinvested in, and with our new cyber warfare world, it's going to become more and more important that you're doing more active red teaming.”

15:21 Introducing Kasal 

15:48 Justin - “They didn’t mention security review; I just want to call that out.” 

17:04 Code Review for Claude Code

18:15 Justin - “The COST of the review is really the biggest thing…definitely something that is a factor in all of these things.”

22:24 Meta acquires Moltbook, the AI agent social network

22:39 Justin - “We didn't really talk about Moltbook because we didn't want to talk about OpenClaw extensively, but basically, OpenClaw is a terrible way that you can run AI agents in a fully unsafe manner that accesses all of your personal data, and one of the things you could do is add a skill that would basically have it randomly post things onto MoltBook, which could include your bank accounts or security things if you're not careful in your security. And Meta buying this is just sort of the classic; it's a social network, and it could take us down, let's just take it off the market and kill it.”

Cloud Tools 

23:58 GitHub Copilot coding agent for Jira is now in public preview

24:42 Ryan - “That’s interesting, because Rovo is Atlassian’s AI bot…I’m curious about why that’s required.”  

26:09 The Pulse: Cloudflare rewrites Next.js as AI rewrites commercial open source

27:31 Ryan - “I feel like it's an awful precedent, right? Like, the whole point of open source is community collaboration, and this is directly in the face of that. Like, why would you release something open source if someone's just going to use an AI agent to create their own fork of it?”

31:58 Active defense: introducing a stateful vulnerability scanner for APIs

33:22 Ryan - “This is super cool. This is the AI-enhanced security scanning I’ve been waiting for.” 

AWS

34:43 Amazon plans 'deep dive' internal meeting to address outages

36:36 Ryan - “Hold on to your butts, but we’re going to see a lot more of this.” 

39:00 Database Savings Plans now supports Amazon OpenSearch Service and Amazon Neptune Analytics

39:34 Justin - “Finally. Thank you.” 

40:54 AWS Elastic Beanstalk now offers AI-powered environment analysis

41:55 Matt - “I will say troubleshooting Beanstalk is a pain in the butt. It just says ‘degraded’ and you’re like ‘why’? And at one point, I had an issue with Beanstalk where it needed a specific CloudWatch put metric in order to do it; it got to the point I opened a support case, and asked AWS why it wasn't working. And they're like, here's this - buried 17 pages into… so I can definitely see it being useful.”

43:13 Introducing Amazon Connect Health, Agentic AI Built for Healthcare

43:45 Justin - “This is a great example of a really purpose-built AI that has a specific use case, and I’d almost rather talk to the AI at any time of the day that can book my appointment rather than waiting for the office to open during the day when I’m busy.” 

27:58 Amazon Lightsail now offers OpenClaw, a private self-hosted AI assistant

44:46 Justin - “If you want to try it (OpenClaw) and you can’t get a Mac Mini because everyone is buying them for their OpenClaw implementations, Amazon Lightsail now supports (it).” 

47:22 Amazon OpenSearch Ingestion now supports a unified ingestion endpoint for OpenTelemetry data

47:54 Ryan - “I mean, at the ingestion layer? I don’t know. Because this is really at the logs equivalent…”

48:27 Announcing the end-of-support for the AWS Copilot CLI

49:26 Justin - “I mean, yeah, this is kind of the first step into a fully managed world of ECS, and I remember when it came out we talked about it and were like, well, this is nice, but we really want what became Amazon ECS Express, and so they kind of deprecated themselves in their own way with a better solution.”

51:04 Amazon Route 53 Global Resolver is now generally available

51:57 Ryan - “I both love and hate this. Having operated a global Anycast resolver, I know how much of a pain it is, and so I wouldn't want to set another one up, and I would gladly pay Amazon to do that. However, I don't know that they're removing the annoying parts. And you add more abstraction, I wonder, troubleshooting failed queries; that's going to be really difficult. And you have a lot more control when you control the network for these things, and so I'm very dubious about this one. But if it just works, then it'll probably be worth it.”

53:29 Automated deployments with GitHub Actions for Amazon ECS Express Mode

GCP

55:59 Introducing the Google Cloud recommended security checklist

56:52 Ryan - “So, your mileage may vary. Some of the code that they have in the solution requires really, really high privileges to run in your GCP environment, so it's one of those things where you might not be able to get that far with it unless you're administering the cloud directly. But it's definitely, I think, a lot of really good, useful things that you can then take… anything that allows people to focus on what they care about is pretty great.”

58:06 New agents for the Autonomous Network Operations framework

58:39 Justin - “This is all a lot of stuff for telcos, but it’s cool; if you’re into geeky telco things, check it out.” 

59:24 NotebookLM adds Cinematic Video Overviews

1:00:21 Justin - “A little bit pricey to replace all the YouTubers, but coming soon.” 

1:01:14 Gemini Embedding 2: Our first natively multimodal embedding model

1:02:29 Ryan - “I go back and forth on these multimodal, because I feel like there's so much bloat and we use the wrong model for so many use cases, and I feel like the multimodal is a really good way to do that. So it is interesting, I just haven't seen a use case where I would see a whole lot of benefit of being able to sort of use the multimodal model to get an answer out of an LLM that I wouldn't be able to get using other tools.”

1:03:28 Google shares Gemini updates to Docs, Sheets, Slides and Drive

1:04:21 Justin - “So if you’re in the Google Workspace space, you’ve now got basically what Copilot gave you, but better.” 

Azure

1:05:29 Azure Databricks Lakebase is Generally Available

1:07:17 Copilot Cowork: A new way of getting work done

57:31 Introducing the First Frontier Suite built on Intelligence + Trust 

1:10:54 Ryan - “This is interesting; I know, in evaluations and talking to people from different companies, when they were rolling this out originally - I think it was something like 30 or 50 bucks a user, no one wanted to pay that price. And there was a minimum number of users. So it was a large amount of money.” 

Oracle 

1:12:29 Introducing OCI’s Cost Anomaly Detection

1:12:42 Justin - “This has been at every other cloud forever, so…” 

1:13:24 Oracle Announces Fiscal Year 2026 Third Quarter Financial Results

1:14:47 Justin - “That’s a pretty good bet, so I get it. I also think Oracle is kind of lucking into the multi-cloud…because people are having to adopt Oracle cloud to get the capacity they need.”  

After Show 

57:31 Xbox surprise: Microsoft reveals 'Project Helix' as the codename of its next console 

Closing

And that is the week in the cloud! Visit our website, the home of the Cloud Pod, where you can join our newsletter, Slack team, send feedback, or ask questions at theCloudPod.net or tweet at us with the hashtag #theCloudPod


Episode Transcript

[00:00:00] Speaker A: Foreign. [00:00:08] Speaker B: Where the forecast is always cloudy. We talk weekly about all things AWS, GCP and Azure. [00:00:14] Speaker A: We are your hosts, Justin, Jonathan, Ryan and Matthew. [00:00:18] Speaker B: Episode 346, recorded for March 10, 2026: Zuckerberg finally finds his people. They are all AI agents. Good evening, Ryan and Matt. How you guys doing? [00:00:28] Speaker C: Doing good, doing good. Halfway through the week. Can't complain. [00:00:33] Speaker B: Week two of the Iran war. Things are, things are going bad for some, some people. I don't know if you guys saw the news about Stryker, but they got completely owned. Completely owned. We'll try to talk about that more next week because it's still developing. Yeah, not enough data to cover today; it was past our editorial cutoff for this week, but yeah, that one's gonna be a topic next week for sure, for sure. But yeah, that's, that's just bad news over there. But speaking of the war, you know we talked about before the war dropped that the government was basically declaring Anthropic a supply chain risk. And yesterday, as we talked about last week on the show, they have fully sued them, and Microsoft actually backed them in suing them as well, which is good. So that's now going to court. We'll keep you posted in 12 to 18 months from now; that's how these things go. [00:01:21] Speaker C: Yeah, it's going to take a while for that to work through any court system. [00:01:24] Speaker B: Oh yeah, it'll take quite a while. Let's move on to AI is how ML makes money. OpenAI is on a tear, releasing another GPT model, this time GPT 5.4. It's now been integrated into ChatGPT, the API and Codex, positioned as their most capable reasoning model to date, and it merges the coding strength of GPT 5.3 Codex with general reasoning, professional knowledge work and native computer use capabilities in a single model. 
The computer use capabilities are a notable technical step, with GPT 5.4 achieving a 75% success rate on OSWorld-Verified desktop navigation. Tool search is a practical efficiency improvement for agentic API workloads. This is something that they didn't really have before, and I ran into it when I wrote our bot for the show. On the professional work side, GPT 5.4 scores 87.3 on internal investment banking spreadsheet benchmarks and achieves 91% on BigLaw Bench for legal document work. I mean, for those of us who have lawyers in the family, I'm sure they're not happy about that news. Pricing is higher per token than GPT 5.2 in the API, though OpenAI notes that the model's token efficiency should offset costs for many workloads. There's also been a slew of every cloud provider in the world announcing ChatGPT 5.4 is now available. We will not tell you about all of them, but just assume if you use a different model or different cloud, they probably also have it. [00:02:37] Speaker A: They probably have it, yeah. Yeah, I guess that Code Red is really paying off for, you know, because they're, they're pumping these out with a lot of big improvements, so it makes sense. [00:02:47] Speaker C: It feels like almost they redid their backend. Right. Of how they train them and everything else and kind of have a much more streamlined workflow, you know, and kind of made it work a little bit better. [00:02:57] Speaker A: Yeah, it's hard to say. I mean, it's, it's funny because I don't know what it takes to train models at that scale, because it's so, it's so huge and so expensive that you can't really play around at that level. Right. So it is sort of interesting. I'd love to, you know, get behind the covers on some of these things and, you know, see how their scaling issues are, because it's going to be completely different than your normal web application. But yeah, I haven't used this yet. I hope that its performance improves. 
I know I've been kind of down on their previous GPT models. [00:03:29] Speaker B: I mean, every time a new model drops, everyone rushes like, this is the best model ever and it's so much better than Anthropic. Then Anthropic drops one, like, so much better than the last one. And like they're in like this arms race that I just, I don't care enough about. I like my tool. I like Claude Code. It's the one I've kind of picked and I'm just kind of sticking with it. And when it gets dumb, I just code a little less, then it gets smart again and then I get happier with it. And most of the time it's probably me just being bad at context or bad at my prompt, and then I just have to rethink my problem statement and then it fixes my issues. [00:03:58] Speaker C: So, Justin, you're supposed to blame the tool, not yourself. [00:04:02] Speaker B: I mean, the tool's the dumb one that messed up the context. I just have to now re-instantiate it with better context, that's all. [00:04:08] Speaker C: Again, blame the tool. It's like the problem between human and keyboard isn't the human, isn't the computer ever? It's the software, it's always the computer's fault. [00:04:17] Speaker A: Well, that's, that's why they keep the human in the loop, right? For blame. [00:04:21] Speaker B: For blame, yeah, exactly. There's someone there they can fire, because they can't fire the AI. He's going to stick around forever. Well, on top of GPT 5.4, they also announced OpenAI ChatGPT for Excel in beta, an add-in powered by, of course, GPT 5.4, that lets users build, update and analyze spreadsheet models using plain language descriptions. It preserves existing formulas and structure, asking permission before making changes, and links answers to specific cells for auditability. If I were a betting man, I'd also say they're going to have a PowerPoint version any day. But this is nice. You know, again, Claude had this a couple weeks ago, which was fine, and it's pretty good, actually. 
I've been using the Excel one quite a bit, so I'm. I might play with this one as well, just to check it out, see if it's. It's much better. But the Claude one is. I mean, I just give it a table of data and be like, hey, just come up with a bunch of charts and data analysis that you think I'd be interested in. It just like, comes up with all kinds of things. I'm like, this is pretty good. That's pretty cool. You don't have to do a lot of extra work or thinking about it even. You're just like, hey, here's a data model. Here's the data. Give me some good viewpoints. And it just produces graphs and pictures and things. It doesn't do pivot tables. I don't know if the, the ChatGPT one does. That's probably one area. But I don't like pivot tables anyway, so it's fine. [00:05:31] Speaker C: Ooh, I like my pivot tables. [00:05:33] Speaker B: I like using pivot tables. [00:05:35] Speaker A: I don't like making pivot tables. [00:05:38] Speaker C: No. But I like to use them to look at stuff in a different way. And I'm really upset with Claude Code and even just straight the Excel plugin, because it can't handle that. [00:05:48] Speaker B: Yeah, it does not handle it yet, I hope. I assume it's going to come though. It's just pivot tables are a computer science problem that no one really understands, and even AI can't figure out. That's how you know. [00:05:58] Speaker A: Is Excel a database? Yeah. The world may never know. [00:06:01] Speaker B: We'll never know how much Access really exists inside of Excel. All right, we'll meet KARL. KARL is Databricks' new knowledge agent with reinforcement learning. KARL is a custom model built using RL technologies to handle grounded reasoning tasks like document search, fact finding and multi-step reasoning across your enterprise data sources. 
KARL was trained with a few thousand GPU hours using entirely synthetic data, and in internal testing it matched or outperformed frontier proprietary models on inference cost, latency and response quality simultaneously. The core technical challenge it was trying to fix is hard-to-verify tasks, where there is no single correct answer, making reward learning signal design particularly difficult compared to domains like math or code where correctness is easier to measure. Databricks is now offering a custom RL private preview backed by serverless GPU compute, allowing enterprises to use the same RL pipeline that produced KARL to build domain-specific, cost-optimized versions of their own high volume agents. So if you need a math agent, this might be a tool to check out. [00:07:03] Speaker A: It's kind of a neat, neat idea to provide sort of the pipeline there. I mean, I guess, you know, like the big cloud providers are producing like agent-building platforms and stuff, but it is sort of, I wonder how much of this is you can follow sort of the path that they use for creating KARL and sort of building your own domain-specific agent in the same way. Because I like the idea: smaller model, less GPU, more grounded reasoning tasks. [00:07:34] Speaker B: I mean, that's the thing is small language models I think are going to be a big deal as you get into specialized domains more and more. So it makes sense; you don't need these big massive models for a very defined problem statement or a problem domain. [00:07:47] Speaker C: Yeah, you're going to see I think more customized models like this over time. Like you just said, like the healthcare model, the medical one, the math, you know, et cetera, et cetera. [00:07:56] Speaker B: I mean, I think this is debatable too, because like the medical one makes sense because there's a lot of words and things in medical that are not things that we use in normal English day to day. 
And there's very specific ways and things that are done in medical, whereas and same thing with legal. I think legal is probably the other one that has a lot of custom things. But beyond that like most other things, like I know when early on Google was talking about building a bunch of customized models for like finance and for others and then they kind of abandon that because everything can be done, you know, through the, the main model process. So I think it's a question of inference cost and if inference cost keeps dropping then your need for some of the specialization in the general model is not necessary and then you just go for SLM for those very specific domains. [00:08:36] Speaker A: Yeah, it's gotta be different enough to make it worth it. [00:08:39] Speaker B: All right. Codec Security is in research preview from OpenAI as well. Again, they had a busy week. Formerly known as Aardvark, now available to ChatGPT Pro, Enterprise Business and EDU customers by Codex Web with free usage for the first month. The tool functions as an agentic application security scanner that builds a project specific threat model to identify and prioritize vulnerabilities with context or fixes. The performance metrics from the beta are notable. False positive rates dropped over 50% over reported severity, findings held more than 90% and noise were cut by 84% on some repositories over the last 30 days. The scam More than 1.2 million commits surfacing, 792 critical and 10,561 high severity findings. The critical issue is appearing under 0.1% of scanned commits. The tool uses a sandbox validation environment to pressure test findings before surfacing them and can generate working proof of concepts when configured with a project specific runtime environment. Basel learns from user feedback on finding severity to refine its threat model over time. 
Codec Security has already produced real world results in open source with 14 cvs assigned across products including OpenSSH, new TLS, GOGS, PHP and Chromium. OpenAI is launching codecs for open source software offering free ChatGPT Pro and Plus accounts plus codec security across to open source maintainers which that's nice. Thanks. [00:09:50] Speaker A: It is nice. I wish AI wouldn't generate all of those vulnerabilities in code. But like I don't understand why there's such a high prevalence of that in AI generated code, but I do like [00:10:02] Speaker B: that these tools are available. [00:10:03] Speaker C: I think I saw some stat the other day that it's like one in four, you know, coding sessions produced from AI produces at least one vulnerability. Which doesn't surprise me if you also think about it because you know, I was working on like a terraform thing the other day and it was like I'm like use the latest terraform and it doesn't know about the Terraform provider. Whatever 3x for 4x for Azure, you know, it was about 4.60, not 4.84 which came out last week. You know. So if you take that same assumption for you know, Python libraries or Node JS libraries or anything along those lines, it's always going to grab these old [00:10:39] Speaker A: versions to no spell well, but it's, it's I've seen it grab stuff that is well older than the release date of the model, for example. [00:10:48] Speaker C: Yeah. [00:10:48] Speaker A: And because it is just a predictive text engine, you know, on steroids. And so it is sort of, it has that ability where it's pulling older package names quite a bit, you know, and so stuff that's, you know, been patched for quite a while and it's, it's a, you know, one of those things that I think is the industry is going to focus now is coding. AI coding is becoming much more normalized and if not mandatory, you know, figuring out how we harden pipelines much like, you know, we did with SLDC pipelines and static code analysis. 
We're going to have to figure out how to do that directly in the ides and really help developers sort of add these things in and then add instructions that tell these environments to always, you know, don't hallucinate, don't make this assumption, you know, always use the most current [00:11:37] Speaker C: Isn't that what the whole Shift Left. I say that in quotes and nobody can see me besides Ryan and Justin here, you know, philosophy was and you know, I know Sneak has added in a bunch of stuff like where it automatically now will just like add things to your repository, which isn't at all annoying at times to kind of do that same type of check. So I think a lot of the tools are there. It's up to the business to implement those tools. [00:12:01] Speaker A: Yeah, well, yeah, I, I, I see my value as a security engineer in Shift Left is to provide those guardrails and toolings that make Shift Left easier. And so like that's, that's sort of the, the idea of if the tools [00:12:12] Speaker B: aren't going to do it, then we [00:12:13] Speaker A: have a specialized team that's going to do it. [00:12:16] Speaker B: Speaking of specialized teams, they're also releasing acquiring a company called PromptFu, which honestly is the best name ever as it was an AI security platform used by over 25% of the Fortune 500 companies with plans to integrate its technology directly into into OpenAI Frontier, the company's enterprise platform for building AI agents. Prompt FU's core capabilities include automated red teaming and security testing for LLM applications, specifically targeting risks like prompt injections, jailbreaks, data leaks, tool misuse and out of policy agent behaviors. These will become native features within Frontier rather than separate tools. The acquisition addresses a practical gap for enterprise AI deployments. 
Systematic ways to test agent behavior for production, maintain audit trails and meet governance and compliance requirements as AI agents connect to real data and business systems Bronfu also maintains a widely used open source CLI and library on GitHub. And OpenAI has stated it will continue developing the open source project alongside the integrated enterprise capabilities. So it's good, I think it's good that this company got bought. Integrated into the models is a great stepping stone and I look forward to seeing more red teaming agents because I think that's an area companies really have underinvested. And with our new cyber warfare world, it's going to become more and more important that you're doing more active red teaming. [00:13:27] Speaker A: Yeah, absolutely. I mean this is very, very important. [00:13:30] Speaker C: Right now there are like a whole slew of like open source projects I think, like Stanford I looked at and there's a few other ones that have come out of it which are open source AI bot testing where you kind of give it your source code and it does, you know, source code analysis at that level, but you can also give it your URL and does like a red team pen testing, you know, and you can integrate those into your pipelines also. So I feel like there's a whole world of automated, you know, pen testing coming out there. I know there's a couple specialized companies that do exist that do it, but I feel like we're seeing it now move into like AI automated pen testing. [00:14:07] Speaker A: Yeah, there's kind of two flavors, right? There's using AI to generate sort of your red teaming for your traditional application tests and then there's a, you know, suite of products I'm seeing for actually hitting the LLMs directly and making sure that they can't, you know, change the instructions or get proprietary data out of them. So it's, it's definitely a gap. Companies have to solve that themselves with internal processes right now. 
So it's good to see the B products because I would rather buy versus build if I can and if I could afford it, I would say the [00:14:41] Speaker C: affording part might be the killer, definitely. [00:14:46] Speaker B: Databricks is releasing Casal, an open source visual platform for building multi agent AI workflows without writing orchestration code. Users can drag and drop agents on canvas or describe workflows conversationally and Cazale generates the underlying CREWAI based Python code automatically. Cazale runs natively on Databricks apps with built in out of the box authentication, SQLite or lake based persistence and ML flow tracing integration. Many teams can move from visual design to production plan with a minimal additional configuration. He didn't mention security review. I just want to call that out like I think Social Security side of this Too. The platform supports both sequential and hierarchical agent modes or hierarchical workflows, including manager agent coordinating specialized sub agents. Useful for tasks like generating custom specific sales visitations by combining product and customer data pipelines. Observability is handled at two layers. Business users see execution timelines and workflow status in the causal front end, while AI engineers can use NL flow tracing to debug LLM calls and agent behavior. At the technical level, workflows built in causal can be exported as Python code for further customization, and reusable plans can be registered in a shared catalog. And this is really what I always said about low code. There's always that final mile where you needed a little bit of Python code or you needed a little bit of something else that really killed low code. And so AI is definitely going to fill that gap for a lot of people and I think so we continue to see. 
[00:16:01] Speaker A: Yeah, I mean it's interesting because I think that you know this is a lot, a lot of the these services, you know, need that last bit of execution, you know, the code layer, but then they also need some sort of running environment and you know, the ability to integrate with external systems. So it sounds like databricks has at least solved half of that with providing and running it on their existing platform, which is cool. Yep. [00:16:25] Speaker B: Anthropic is launching code review for Claude code and research preview for team and enterprise plans using a multi agent system that dispatches parallel agents to find bugs, filter past positives and rank issues by severity delivering results as a single summary. Comments plus inline annotations on each PR Internal metrics show the system increase substantive review comments from 16% to 54% of PRs are anthropic with large PRs over a thousand lines receiving findings 84% of the time averaging seven and a half issues. Less than 1% of the findings marked incorrect by engineers. Reviews scale dynamically with PR complexity averaging around 20 minutes per review and are built on a token usage at roughly $15 to $25 per review. Making this though be more expensive than the existing open source cloud code GitHub action which I have used which remains available as a lighter weight alternative. Practical example from Truenas shows the system surfacing a pre existing type of mismatch bug in adjacent code that was silently wiping out encryption key cache on every sync a kind of latent issue outside the direct chain set. The hangman viewers typically would not investigate until they saw a lot of logs or other problems happened. The system intentionally does not approve PR keeping humans in the decision loop While admins on teams and enterprise plans retain controls over spend and usage. This is a depth focus, supplemental human review rather than a replacement. 
Now I mean the cost of the review is really the biggest thing. But I mean like when you think about like how much does it cost for a principal engineer to go read a pr, quite a bit. So you know, definitely something that is a factor in all of these things. [00:17:50] Speaker A: So yeah, if you were offsetting that senior engineer reviewing the code maybe. But I mean that's the thing. Like this is going to be a different type of review and it's going to go deep like I said, which is like those types of variable expansion things are really difficult for a human to find. But I, yeah, so I don't know, like I think it would find stuff for sure. I don't know if I want to pay that much money per review and it would have to be a rework of, you know, the SLDC process in order to make that cost effective to make sure that you weren't reviewing every single change. [00:18:24] Speaker C: Yeah, I'm just picturing like the one line like oh, this is in supposed to be a float for like the most simple example I can think of, you know, and that's that $25, like yeah, it's not going to go for well with my CFO. [00:18:37] Speaker A: So 67 issues to fix. [00:18:40] Speaker B: I mean, I mean the amount of money, if you're using Anthropic in a big way, $25, it's probably not that noticeable unless you're doing a lot of pr. So I mean I definitely, I question if this price stands long term, but I do. The open source version is quite good. I've actually used it in some of my projects where I've had a plugin and it reviews my prs and because I'm using COD code to write them, it has really good positive things to say about most PRs. But I think this is a really sound PR. Like yeah, thank you because you wrote it. [00:19:08] Speaker C: Yeah. [00:19:09] Speaker B: But I do appreciate, I do appreciate the open source one. So I'm curious if this one's that much better. 
I like the idea that there are multiple agents taking different perspectives on it, versus the open source one, which is just kind of a senior engineer review. But, you know, $25 is never going to happen on my personal projects; I'm not going to spend that kind of money for a PR review. In a corporate environment, though, I could see this as money well spent. Especially if you're Amazon, by the way, with all of your AI foibles. [00:19:41] Speaker A: It is interesting that they put the $15 to $25 per review out there. If they had just kept it abstracted at the token level, I wouldn't understand how much it costs, just like anything else, and I wouldn't even notice. I still don't know how tokens are billed. [00:19:55] Speaker C: I feel like somebody did that math specifically to show you what it's going to cost. I mean, I feel like I've also kind of done this, not to the level that they've done it, but essentially in some of my projects I say, okay, before you commit the code, just like a pre-commit hook, run it through review. And I have four or five different agents with different personas doing a kind of review, like security, senior engineer, skeptical engineer, and I walk down a few paths in order to get those different views on it and then iterate over each response. Yeah, but nothing like $25 an agent, though. [00:20:30] Speaker A: Oh, well, that's what I'm wondering, actually. I'm laughing because I've been looking at token burn and stuff like that, but for all I know it is $25 per review. [00:20:40] Speaker B: Yeah. Something that explains why your bill's so high, Ryan. I get it. [00:20:43] Speaker A: It would explain that. [00:20:45] Speaker C: I kind of like the idea of also having different models slash companies, for lack of a better term, review the code.
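The multi-persona pre-commit review described above could be sketched roughly like this. Everything here is illustrative: `ask_model` is a stub standing in for a real LLM API call, and the persona prompts are invented examples, not anyone's actual configuration.

```python
# Sketch of a multi-persona pre-commit review: run the same diff past
# several reviewer "personas" and collect each perspective separately.

PERSONAS = {
    "security": "Review this diff for injection, secrets, and authorization issues.",
    "senior_engineer": "Review this diff for correctness and maintainability.",
    "skeptic": "Assume this diff is broken; find the failure modes.",
}

def ask_model(system_prompt: str, diff: str) -> str:
    # Placeholder: a real version would call an LLM API here with
    # `system_prompt` as the system message and `diff` as the user message.
    return f"reviewed {len(diff)} chars; no blocking issues found"

def review(diff: str) -> dict:
    """Return one review comment per persona for the given diff."""
    return {name: ask_model(prompt, diff) for name, prompt in PERSONAS.items()}

findings = review("--- a/app.py\n+++ b/app.py\n+price = int(user_input)")
for persona, comment in findings.items():
    print(persona, "->", comment)
```

Iterating over each persona's response, as described, would just mean feeding `findings` back into another model call before allowing the commit.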
So a lot of times what I do is I write in Claude Code, but then I have GitHub Copilot handle the actual code review, and therefore I get two different views on the same thing, because the GitHub Copilot agent is actually, I think, pretty decent in that respect. So I feel like I get two different views: programming in one, having the other do the security review. And I always find something. I swear, if you send it through any of them, you'll always get it to find some issue, and then I feed that back to Claude to have a loop that way too. So I like the different personas, and also the different companies on the back end, because I think it adds a lot of value. [00:21:36] Speaker B: Agreed. Meta is acquiring Moltbook, an AI agent social network built as a Reddit-style platform where every participant is an AI agent owned by a human, with no direct human membership. The founders will join Meta Superintelligence Labs; deal terms were not disclosed. We didn't really talk about Moltbook before because we didn't want to talk about OpenClaw extensively. But basically, OpenClaw is a terrible way that you can run AI agents in a completely unsafe manner that accesses all of your personal data. And one of the things you could do is add a skill that would basically have it randomly post things onto Moltbook, which could include your bank accounts or security details if you're not careful with your security. And Meta buying this is just sort of the classic "it's a social network and it could take us down, so let's just take it out of the market and kill it." So congratulations on buying something I don't know how you monetize or do anything with, and this really feels like an acqui-hire that you wanted for some reason I don't fully understand.
[00:22:28] Speaker A: Yeah, I mean, their user base on Facebook's got to be drying up, so you've got to replace it somehow. May as well be AI agents, right? Yeah. [00:22:38] Speaker B: Well, they did talk about Moltbook's always-on directory approach for connecting agents as a novel development, suggesting the acquisition is focused on agent discovery and coordination infrastructure rather than on the social network concept itself. So yeah, maybe that's exactly what it is. They need to figure out different ways for AI to engage with their social networks so they don't lose money. But yeah, that's the news there. Zuckerberg finally has friends. I'm super happy for him. [00:23:05] Speaker A: He's got a place he can hang out, call home. [00:23:08] Speaker B: GitHub Copilot's coding agent now integrates directly with Jira Cloud, allowing teams to assign Jira issues to Copilot and receive an AI-generated draft pull request in the connected GitHub repo without leaving their existing workflow. The agent works asynchronously and autonomously, analyzing issue descriptions and comments for context, making the code changes, and posting status updates back to Jira, including asking clarifying questions when needed. This integration targets common repetitive tasks like bug fixes and documentation updates, and it respects existing pull request review and approval rules, meaning teams do not need to change their governance process. It requires installing two marketplace apps, one from Atlassian and one from GitHub, and notably requires Jira Cloud with Rovo enabled alongside an active GitHub Copilot coding agent subscription, so there are meaningful prerequisite costs. [00:23:52] Speaker A: That's interesting, because Rovo is Atlassian's AI bot, correct? So I'm curious why that's required. [00:23:59] Speaker B: Because they need to increase their revenues.
[00:24:04] Speaker A: Yeah, I guess that does make sense. [00:24:07] Speaker B: Yeah. [00:24:07] Speaker A: I mean, previously this was possible with GitHub issues, but a lot of teams are using Jira for their day to day, so I kind of like the idea of meeting teams where they're at. [00:24:17] Speaker B: I've been doing something similar on a side project that I've been working on. I use Linear to track the work, and I have Claude regularly checking the Linear queue, and if it sees something, it'll grab it and triage it for me, and when I get back to my computer I can go look at what it says. These patterns are becoming more and more common, where you can basically turn yourself into a 24/7 coding machine that you don't actually have to be there for, which is nice. And then you just go back and review thousands and thousands of lines of code that you have to understand and read through. So, pluses and minuses. But I do appreciate this Jira integration. I wonder if it will also integrate with Confluence, where probably all your documentation and coding standards and architecture should be located, to also give it meaningful context. But maybe that'll come in a future release. [00:25:00] Speaker A: Maybe so. I'm still using text files like a Neanderthal. [00:25:05] Speaker B: I like text files. I like Markdown. I'm a fan. [00:25:08] Speaker A: I have like a bunch of stories in Markdown. Yep. [00:25:12] Speaker B: Well, this next story is not really about the story itself as much as a larger question I have for both of you. Cloudflare released Vinext, a complete rewrite of Next.js that swaps out Vercel's proprietary Turbopack build system for the standard Vite build tool, allowing Next.js applications to deploy to Cloudflare Workers with a single command and producing client bundles up to 57% smaller.
The project was completed by one engineer in a week using approximately $1,100 in AI tokens via the OpenCode agent and Claude Opus 4.5, reducing what would traditionally be years of engineering work. Today, though, the result is mostly experimental and not yet battle tested at scale. A key practical concern is that Vinext covers 94% of the Next.js API surface at roughly 67,000 lines of code versus Next.js's 194,000, meaning edge cases and security auditing remain outstanding work before production use at any meaningful traffic level. Cloudflare also shipped a migration agent skill that integrates with tools like Claude Code, Cursor, and Codex, letting developers run a single command to migrate an existing Next.js project over, handling compatibility checks, dependency updates, and config generation automatically. So, you know, okay, cool, you spent $1,100 rewriting an open source project you were already getting for free. Congratulations, you played yourself. But I'm more concerned about the idea of people taking open source software and then using an AI tool to basically rewrite it to get around licensing restrictions. And I was curious what you guys both felt about that aspect of this. [00:26:39] Speaker A: I mean, I feel like it's an awful precedent, right? The whole point of open source is community collaboration, and this flies directly in the face of that. Why would you release something open source if someone's just going to use an AI agent to create their own fork of it? [00:26:55] Speaker C: It's not even forking it, it's rewriting it from scratch, so there's no overlap beyond the same underlying concepts. It's like saying, okay, let me go rewrite Apache just because I want a new version of Apache 2. [00:27:10] Speaker B: I mean, how much of it is actually... so here's the question too, knowing the LLMs were probably trained on the open source code they already ingested.
Like, how much of this is actually unique code? I would love to see a third-party assessment of, hey, this thing they built, how different is it from what Next.js already is? Is it truly, line for line, different in just enough ways to be considered unique? Or is this complete BS that's going to blow up in their face in court? Which is kind of what I hope. I like Cloudflare a lot as a company, so it bugs me that they would even be building tools around this concept. Like, cool hackathon project, that was a neat idea. But to then commercialize it? Oh, it's so dirty. [00:27:52] Speaker A: That's the problem I have with it. Yeah. And it will be an interesting legal case if someone brings that up, just because it probably is trained on that data. It's included in the model because it is open source, so we know it's public, we know it's been trained on. So it's a little bit shady for sure. [00:28:09] Speaker B: Just a little. [00:28:10] Speaker A: And I don't understand the motivation. [00:28:13] Speaker C: It's probably the license associated with it. If it's GPLv3 or MIT, and I hate trying to understand all the different licenses, there are restrictions with each license, so they probably want to do something with it that is in violation of that license. [00:28:30] Speaker A: Yeah, no, I just want to know the specifics. I know it's something along those lines, but there are a lot of ways in which you can come to agreements on licensing for using open source software, if that's part of it or other things. And so this kind of feels gross. [00:28:46] Speaker C: I mean, what's going to be interesting is, you know, I have to answer security questionnaires at my day job.
And one of the things that I've seen multiple times is: are you using standard SSL/TLS libraries, or have you custom-written them? And I'm sure if I ever answered that we custom-wrote them, someone's going to throw a bunch of red flags, like, well, does it need to be audited, et cetera, et cetera. I'm waiting for someone to be like, well, we like OpenSSL, but we are upset by this one thing, and therefore we're going to rewrite it from scratch without any of the legacy stuff and only support TLS 1.3. And it would be better because A, [00:29:19] Speaker B: B, and C. I mean, unless you're a Microsoft writing IIS or some other thing, writing your own libraries at that scale is just crazy to me. I know you get into these use cases, like, everyone uses F5s and you think, oh cool, F5 is the enterprise standard, and then you learn Meta used to use F5 and they had to write their own custom load balancer, and you're like, whoa, that's crazy, I never thought about that. But the scale Facebook is dealing with, where they got to the point of building their own dev team to build a custom load balancer, is not a scale that anyone typically runs into unless you're Meta or a FAANG company. So some of these things, like building your own SSL library, are a terrible plan for most people, most of the time, always. Unless you have a very specific reason, like you're trying to build a new web server and you want to divorce yourself from legacy patterns, and that's a clear decision you're making. But an enterprise software company asking another company to fill out a security assessment questionnaire with that question is just sort of silly to me. That's a laughable question. Seriously, I get so many of those these days, and some of the questions are just getting worse and worse.
I'm like, are they using AI to write these? Because they're just getting dumb. The questions are getting worse. [00:30:30] Speaker A: They are. Yes, they are. [00:30:31] Speaker C: The answer is yes. And then they're having AI read them on the other end, right? No one actually read the answers to start with, so now they're going to process them through AI. [00:30:39] Speaker B: I mean, if they get AI to actually read them, that's probably an improvement, because I feel like they never read them before. So if an AI is going to at least read them and summarize what the risks are, maybe that's better than what you've dealt with in the past. But all right. So Cloudflare: on the one hand, terrible company, don't do this Next.js rewrite thing, you guys are jerks; and on the other hand, oh, this is really cool, with their next announcement. They've launched a beta web and API vulnerability scanner focused initially on BOLA, broken object level authorization, which is the top threat in the OWASP API Top 10. Unlike WAF rules that catch syntax-based attacks, BOLA involves valid, authenticated requests that violate business logic, making them invisible to traditional defenses. The scanner is stateful, meaning it builds an API call graph from your OpenAPI spec and chains requests together logically, creating resources as an owner and then attempting to access them as an attacker. This solves a core limitation of legacy DAST tools that evaluate each request in isolation and miss authorization flaws that span multiple API calls. To handle ambiguous or inconsistent OpenAPI schemas, the scanner uses Cloudflare Workers AI running OpenAI's GPT-OSS 120B model with structured outputs to infer data dependencies between endpoints automatically. This removes the manual configuration burden that typically makes DAST tools slow to deploy.
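The owner-then-attacker chaining idea can be sketched in miniature. This is a toy illustration, not Cloudflare's implementation: the spec shape (`produces`/`consumes` annotations) is invented to stand in for the data dependencies the real scanner infers from an OpenAPI document.

```python
# Toy sketch of stateful BOLA test planning: pair each endpoint that
# produces a resource ID with the endpoints that consume that ID, then
# plan a chain that creates the resource as user A and reads it as user B.
# The spec format below is invented for illustration.

SPEC = {
    "POST /invoices": {"produces": "invoice_id"},
    "GET /invoices/{invoice_id}": {"consumes": "invoice_id"},
    "GET /invoices/{invoice_id}/pdf": {"consumes": "invoice_id"},
}

def plan_bola_chains(spec: dict) -> list:
    """For every producer/consumer pair, emit an owner-create ->
    attacker-read request chain that a scanner would then execute."""
    chains = []
    for producer, pmeta in spec.items():
        resource = pmeta.get("produces")
        if not resource:
            continue
        for consumer, cmeta in spec.items():
            if cmeta.get("consumes") == resource:
                chains.append([
                    f"as owner:    {producer}",
                    f"as attacker: {consumer}  (expect 403/404)",
                ])
    return chains

for chain in plan_bola_chains(SPEC):
    print(chain)
```

A request that succeeds on the "as attacker" step, despite belonging to a different owner, is exactly the business-logic flaw a single-request DAST tool would miss.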
Credential security is handled through HashiCorp Vault's Transit secrets engine, where credentials are encrypted immediately on submission and decryption is only permitted for the specific Rust worker executing the test, a notable design choice given that the scanner by definition needs access to valid API credentials. The scanner is available now in open beta for API Shield customers via API, allowing teams to trigger scans and pull results into CI/CD pipelines or security dashboards. Cloudflare plans to extend it to other OWASP Top 10 threats like SQL injection and cross-site scripting in future releases. This is super cool. [00:32:23] Speaker A: This is the AI-enhanced security scanning I've been waiting for, right? These are incredibly difficult flaws to find, and having it build that model, and the fact that they're chaining together requests to view them in that contextual landscape, is amazing to me. I can't wait to see what things like this solve and what they discover, because I'm sure it's out there. [00:32:46] Speaker C: See, I'm going to view it from the cynical perspective. I'm more terrified of what a hacker is going to do with this, chaining requests together in the same exact way. So hopefully we get these tools available to us before we all get hacked and everyone has a bad day. [00:33:05] Speaker A: Well, hackers are absolutely going to use LLM models to make attacks on publicly available APIs, no doubt. So it becomes even more important to have these scans available, because while chaining this together to break some sort of resource contract might be kind of difficult for a human programmer, it's going to be really easy for an AI-based system to do. [00:33:30] Speaker B: Let's move on to AWS. Amazon is not admitting they're having a problem with AI, but they're having a problem with AI. We talked a couple weeks ago about outages potentially caused by AI.
They then denied it, and then it leaked that there was an internal meeting on Amazon's retail side after they experienced four Sev-1 outages in a single week, including a six-hour checkout and account access failure on March 5, prompting an internal deep dive meeting led by SVP David Treadwell to review availability posture. The internal document initially cited Gen AI-assisted changes as a contributing factor to a trend of incidents since Q3, but that reference was removed before the meeting, and Amazon later clarified that only one incident involved AI. Allegedly. Amazon is implementing new safeguards requiring additional review of Gen AI-assisted production changes, with Treadwell acknowledging that best practices on generative AI usage in production environments have not yet been fully established. A separate December AWS outage was linked to the Kiro AI coding tool, though Amazon attributed that incident to user error. And with Amazon reaching $200 billion in capex this year while simultaneously reducing its workforce by tens of thousands, reliability of AI-assisted development workflows becomes a practical concern for any organization. So this is the flip side: these things can be dangerous. It's great that you're using all these AI tools, it's clear that you're using Claude, you're doing these things, but make sure you're adding the right checks and balances to your SDLC. Like, I have hooks on all of my Git commits, so when the AI does dumb things, it gets caught in the hook before it's even committed to source control. Things like secret scanning, things like checking whether you updated the packages but didn't update package-lock.json. Things like that, which are just very common mistakes that'll burn you every time, are now being managed and handled appropriately through the pipeline I have for my own personal projects. At enterprise scale like Amazon, you should have way more than that, and humans still possibly in the flow. [00:35:22] Speaker A: Yeah, I mean, hold on to your butts, we're going to see a lot more of this, right? If Amazon's running into this now, we're going to see smaller enterprises hitting it very soon, because it's the same problem. The practices around AI-assisted and AI-generated code are in their infancy. We're all just struggling to hold on to this train that's moving a million miles an hour. So supporting code that a human wouldn't have put in place, with no human who authored it, and then figuring out what it is and how it's affecting this production outage, is going to be fun times. [00:36:00] Speaker C: Yeah. I mean, it's going to mean putting more and more checks into your pipelines, which is going to upset people because it's going to slow down deployments and everything else. But you need that level of detail to make sure, and not just for a side hackathon project, that there are no secrets in it. Do you have the right level of unit tests in place? Is it going through an integration environment that tests not just in your little sandbox, but within your larger suite of systems, which then has its own level of tests? One of the things I was working on was spinning up containers, and for some of my tests I needed multiple of these containers available. When I ran locally, I launched about 27 containers or whatever it was. So I had different tests at different levels and built out a different triage set based on each level.
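The pre-commit guardrails mentioned a moment ago, secret scanning and lockfile consistency, can be sketched as a small check script. This is a minimal illustration, assuming a hook that receives the staged diff text and the set of changed filenames; the secret patterns shown are two common examples, not a complete ruleset.

```python
import re

# Sketch of two pre-commit guardrails: scan staged text for obvious
# secrets, and require that a package.json edit comes with a matching
# package-lock.json edit. Patterns are illustrative, not exhaustive.

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"), # PEM private key header
]

def find_secrets(diff_text: str) -> list:
    """Return every secret-looking match found in the staged diff."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(diff_text)]

def lockfile_consistent(changed_files: set) -> bool:
    """If package.json changed, the lockfile must have changed too."""
    if "package.json" in changed_files:
        return "package-lock.json" in changed_files
    return True

# Example run: a diff with a fake AWS-style key and a lockfile mismatch.
print(find_secrets("key = AKIAABCDEFGHIJKLMNOP"))
print(lockfile_consistent({"package.json"}))
```

Wired into `.git/hooks/pre-commit`, a nonzero exit on either check blocks the commit before the AI's mistake ever reaches the repo.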
So in theory the code could get out of my local commit, but by the time it hit the GitHub Actions run, it did that next level of testing. And in a large enterprise you probably don't stop there; you need an integration and QA environment to run these larger-scale tests. [00:37:12] Speaker A: Yeah, I think it's going to become more important to get some of those things like chaos engineering, and the ability to do canary deployments that scale slowly while accepting load, in a way that allows testing some of the more sophisticated patterns as we roll these things out. Because it's going to free up our time, but I think we also have to add a lot more sophistication if we're going to move faster. [00:37:36] Speaker B: Agreed. Meanwhile, Amazon is giving you database savings plans for Amazon OpenSearch Service and Amazon Neptune Analytics, offering up to 35% savings with a one-year commitment and no upfront payment required. The plan automatically applies across serverless and provisioned instances regardless of engine, instance family, size, or region, so customers can switch instance types, like moving from an m7i.large to a c8g.2xlarge.search, without losing their discount. The expansion is useful for organizations running search or graph analytics workloads at scale, since Neptune Analytics and OpenSearch can carry substantial hourly costs that benefit from committed-use pricing. So yes, finally, thank you. This one took a while. For those of you who don't know what Neptune is, it's a graph database. If you're using anything in the GraphQL space, which I don't really understand, a lot of people love them. [00:38:25] Speaker A: Yeah, I mean, this is the one example I have of using large instances in my personal experience.
And I can see this being super important, so not being able to move instance types without losing your discount could really handcuff you. I'm glad to see this. And I always like savings plans for things, because I'm not that great at doing the analysis up front; sometimes I make decisions that need to be changed later, and I don't like being roped in. [00:38:51] Speaker C: I mean, it just amazes me how long it's taken. Savings plans are things people have been asking for for years. Then they finally released them, but only for some services and not others, so they were just making money while these instances sat there and ran forever. So I assume they held back to keep making money, and now people are getting pressured to cut budgets and be more economical with the current economy, so they're giving people more tools for that. [00:39:24] Speaker B: Well, AWS Elastic Beanstalk is now integrating with Bedrock to provide AI-powered analysis of environment health issues, automatically collecting events, instance health data, and logs to generate step-by-step troubleshooting recommendations without manual log reviews. The feature is triggered from the Elastic Beanstalk console by an AI analysis button when environment health reaches Warning, Degraded, or Severe status, and is also accessible programmatically through the existing RequestEnvironmentInfo and RetrieveEnvironmentInfo CLI and API operations. This is a practical addition for teams managing Beanstalk environments who want to reduce mean time to resolution, particularly useful for developers who may not have deep operational expertise in diagnosing platform-level issues. Availability is limited to regions where both Elastic Beanstalk and Amazon Bedrock are supported, so teams in regions without Bedrock coverage will not have access.
And AWS does not publish specific pricing details for this feature beyond standard Beanstalk and Bedrock usage costs. So for the thousands of you who apparently use Elastic Beanstalk, dozens of us, this is a great feature. [00:40:23] Speaker C: I will say, troubleshooting Beanstalk is a pain in the butt. It just says Degraded, and you're like, why? At one point I had an issue with Beanstalk where I needed a specific CloudWatch put-metric call in order to fix it. It got to the point where I opened a support case and asked AWS why it wasn't working, and they were like, here's the answer, buried 17 pages in. So I could definitely see this being useful. I'm a little worried about pricing as you work down these paths, but, you know, AI and pricing always pull in opposite directions. [00:40:54] Speaker A: Yeah, I've long had an issue with Beanstalk just because of the level of abstraction it has, and like you said, there are issues you can have that aren't easy to surface. But trying to reconcile that with AI and the introduction of this, it's like, do I care anymore if the AI agent's supporting it? Huh, maybe this is a solution I could use. [00:41:13] Speaker C: I mean, I have security issues with the way it's built. SSH open to the world, 443 and 80 open to the world by default. That's more my issue with it. [00:41:25] Speaker A: It's from a previous age. [00:41:26] Speaker B: Yeah, yeah, for sure. [00:41:28] Speaker C: It's from EC2 Classic. [00:41:30] Speaker B: Yeah. [00:41:31] Speaker A: Are we going to get this feature in Lightsail, I think is the question, right? [00:41:34] Speaker B: That'd be cool. [00:41:35] Speaker A: Yeah.
[00:41:36] Speaker B: Speaking of other Amazon AI tools, they have released Amazon Connect Health as generally available, offering five purpose-built AI agents targeting healthcare administrative workflows, including patient verification, appointment scheduling, ambient documentation, patient insights, and medical coding with ICD-10 and CPT code generation, which is all important for medical insurance reasons. The service is HIPAA eligible and integrates natively with Amazon Connect, allowing contact center and point-of-care workflows to be configured in minutes rather than months, which is a notable deployment speed advantage for healthcare IT teams. And that's just cool. I don't really care about healthcare because I'm not in that space, but this is a great example of a purpose-built AI with a very specific use case, and I'd almost rather talk to an AI that can book my patient appointment at any time of day versus waiting for the office to be open and remembering to call when I'm busy. Yeah. [00:42:24] Speaker A: And something that already has context, right? So you don't have to explain all the things. [00:42:30] Speaker B: Well, I need to talk to the doctor about the appointment I had last week and the new medication you put me on. And they're like, what medication did they put you on? You're like, you have my chart right there. Yeah, right, yeah. [00:42:40] Speaker A: This is the third time I've been in here, you know that context is important. And I've seen it advance just in medical applications, so this would be cool. Agreed. [00:42:51] Speaker B: If you didn't take my warning seriously earlier about OpenClaw and how dangerous it is, and you do want to play with it, like, I'm not saying you shouldn't. I have played with it and it's cool.
But I also didn't give it anything that I care about, because basically my rule of thumb is: if it's something you wouldn't want exposed in a hack, don't put it in OpenClaw. But if you want to try it and you can't get a Mac mini, because everyone's buying them for their OpenClaw implementations, Amazon Lightsail now supports deploying an OpenClaw self-hosted AI assistant that runs on your own Lightsail instance, giving you a private alternative to cloud-based AI services where data stays within your own infrastructure. The offering includes several built-in security features out of the box, including sandboxed agent sessions, one-click HTTPS without manual TLS setup, device pairing authentication, and automatic configuration snapshots, reducing the typical operational overhead of self-hosting AI tools. Amazon Bedrock serves as the default model provider, which ties it directly into the broader AWS AI ecosystem, but users can swap models or connect to messaging platforms like Slack, Telegram, WhatsApp, and Discord for different workflows. Pricing follows standard Lightsail instance pricing rather than a separate AI-specific cost structure, which makes it appealing for small teams and developers who want predictable costs, and the feature is available across all 15 AWS regions that have Lightsail. So there you go. I would say that the fact that it ties everything to Bedrock is a good way to get a really, really expensive bill. Though it says the price is predictable, the model usage is API-based, so take that with a grain of salt. At least when people are using their Anthropic $200 Max plans, they hit a wall. [00:44:21] Speaker A: They can run out of tokens. Yeah. [00:44:22] Speaker B: And they can run out of tokens. In the Bedrock world, they won't run out of tokens and you'll just pay through the nose. [00:44:28] Speaker A: Yeah, not unless you set up budgets, right, where it shuts it down.
[00:44:30] Speaker B: Exactly. So be cautious. [00:44:34] Speaker A: Yeah, I mean, I don't know. If you're going to do OpenClaw, I would prefer it be on one of these hosted platforms, just so you get the benefits of the shared security model. I don't believe in running OpenClaw on a Mac mini in my environment with my iCloud credentials, you know? Sweet Jesus, that sounds terrible. Great, it can just hit every stupid IoT device on my network and do stupid things there. Like, pass, pass, pass, pass, pass. [00:45:00] Speaker C: I'm more confused why you have your IoT network able to talk to everything else. [00:45:03] Speaker A: It's separate. I was just simplifying, you know. It's separate. [00:45:06] Speaker B: Yeah. [00:45:07] Speaker C: But even within the network, mine is set up so the things can't talk to each other, because I don't trust things. [00:45:11] Speaker A: Mine too. I was just offering an example. [00:45:16] Speaker B: Someday in the after show you guys can talk about your home network setups. [00:45:20] Speaker A: Yeah. [00:45:20] Speaker C: I just got mad one time when my Alexa decided to find my printer and tell me it was low on ink, and I immediately blocked everything from being able to talk to each other. Immediately, within 30 minutes of it printing out a sheet of paper saying "Alexa has printed this." And I was like, what is going on? I'm not okay with this. [00:45:37] Speaker A: Yeah. [00:45:39] Speaker B: Amazon OpenSearch Ingestion now accepts logs, metrics, and traces through a single unified pipeline endpoint, eliminating the previous requirement to run three separate pipelines for each OpenTelemetry signal type. The consolidation reduces operational overhead around access control, monitoring, and lifecycle management, which translates to lower costs for teams running telemetry at scale.
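The unified-endpoint idea can be illustrated with a toy dispatcher: one entry point that routes OpenTelemetry-style payloads to per-signal processing queues, so logs, metrics, and traces share an endpoint without sharing a queue. The payload shape here is invented for illustration and is not the OpenSearch Ingestion API.

```python
from collections import defaultdict

# Toy sketch of a unified ingestion endpoint: one submit() call accepts
# all three OpenTelemetry signal types and routes each payload to its
# own per-signal queue for downstream processing.

class UnifiedIngest:
    SIGNALS = {"logs", "metrics", "traces"}

    def __init__(self):
        self.queues = defaultdict(list)  # signal type -> queued payloads

    def submit(self, payload: dict) -> str:
        signal = payload.get("signal")
        if signal not in self.SIGNALS:
            raise ValueError(f"unknown signal type: {signal!r}")
        self.queues[signal].append(payload)
        return signal

pipeline = UnifiedIngest()
pipeline.submit({"signal": "logs", "body": "GET /health 200"})
pipeline.submit({"signal": "metrics", "name": "cpu.util", "value": 0.42})
pipeline.submit({"signal": "traces", "span": "checkout"})
print({k: len(v) for k, v in pipeline.queues.items()})
```

Keeping the queues separate behind the shared endpoint is the design choice that matters at scale: a flood of one signal type fills only its own queue instead of starving the others.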
Now, having run Elasticsearch, I will tell you that you probably will still want separate ingestion points for logs, metrics, and traces as you scale up. But as an early POC, this is a great optimization. [00:46:11] Speaker A: I mean, at the ingestion layer? [00:46:13] Speaker B: I don't know. [00:46:13] Speaker A: Like, you know, because this is really the Logstash equivalent, right, where you're parsing and doing ETL. [00:46:19] Speaker B: Yeah, but even there, if you run into backlogs, you don't want a backlog of logs preventing your metrics and traces from getting into the data. So yes, you're right, it shouldn't be bad, but I've seen Elasticsearch blow up in many fun ways at scale. [00:46:35] Speaker A: Your indexes are still going to blow up even with a unified endpoint. [00:46:39] Speaker B: Yeah. So just be cautious. That's all I can say. [00:46:42] Speaker A: Yeah, for sure. [00:46:45] Speaker B: Well, in 2020 Amazon released the AWS Copilot CLI, and today they're announcing it's reaching end of support on June 12 this year, meaning it will no longer receive new features or security updates, though it remains available as an open source project on GitHub. AWS is recommending two primary migration paths: Amazon ECS Express Mode, for teams wanting a fast, opinionated path to production with automatic ALB, TLS, and auto scaling provisioning, or AWS CDK L3 constructs, for teams needing fine-grained infrastructure control and familiar programming languages. ECS Express Mode is the closest functional replacement for Copilot's most common patterns, such as a shared Application Load Balancer across up to 25 services, and it eliminates the need to learn a custom manifest format.
Teams migrating worker services, backend services, and scheduled jobs have specific CDK construct equivalents available, including QueueProcessingFargateService for SQS-based workloads and ScheduledFargateTask for cron-based jobs. Since Copilot uses standard CloudFormation under the hood, teams can also simply adopt the existing generated stacks and manage them directly, which represents the lowest-effort migration option for teams not ready to switch tooling. I mean, yeah, this was kind of the first step into a fully managed world of ECS. And I remember, maybe when it came out, we talked about it and it was like, well, this is nice, but we really want what became Amazon ECS Express. And so they kind of deprecated themselves, in their own way, with a better solution. [00:48:01] Speaker A: And you know, a lot of this was simplifying that deployment pattern, because if you're developing an application, maybe you don't want to deal with your container orchestration; you abstracted it with these CLI commands. And now, with AI enhancement, it's just not that difficult anymore to ignore the parts of the SDLC that you want to. So yeah, I don't remember this coming out at all. I probably wouldn't have liked it, but I didn't know about it until today. [00:48:28] Speaker B: Again, I'm pretty sure we went over it. You were either not there, or you were there and you were like, I need to learn more about this, but I hate it on the surface. [00:48:34] Speaker A: Yeah, I mean, if it's been more than 15 minutes, I don't remember. [00:48:36] Speaker B: So, I mean, 2020. It came out in July 2020. I definitely don't remember it either; it suffered from the fact that none of us cared about anything at that point other than surviving the pandemic. So, you know, it's definitely a change. [00:48:48] Speaker C: I was just surprised that Amazon even came out with something called Copilot.
[00:48:52] Speaker B: Well, this predated Microsoft Copilot. This is 2020. [00:48:58] Speaker A: Does it predate GitHub Copilot? [00:49:00] Speaker B: Maybe. I think it does. Matt's doing real-time follow-up. [00:49:05] Speaker C: How did they not sue? October 2021. [00:49:08] Speaker B: They probably didn't trademark it. [00:49:09] Speaker C: Technical preview was June 29, 2021 for GitHub Copilot. [00:49:13] Speaker B: So just under a year later. So yeah, I don't know. Amazon Route 53 Global Resolver is now generally available across 30 AWS regions, expanding from the 11-region preview shown at re:Invent 2025, with support for both IPv4 and IPv6 DNS query traffic from any location. The service functions as an Internet-reachable anycast DNS resolver, allowing authorized clients in an organization to resolve both public Internet domains and private Route 53 hosted zones without being tied to a specific network location. Security filtering is a core capability, blocking malicious domains, DNS tunneling, and domain generation algorithms, and now, with general availability, dictionary DGA threats, alongside centralized query logging for visibility across your org. This positions Global Resolver as a managed alternative to running your own DNS resolver infrastructure for distributed or remote workforces, reducing operational overhead while centralizing DNS policy enforcement. You can try this out for 30 days free as a new customer; pricing details are available on the Route 53 Global Resolver service page. [00:50:10] Speaker A: I both love and hate this. A global anycast resolver... I know how much of a pain that is to run, so I wouldn't want to set another one up, and I would gladly pay Amazon to do it. However, I don't know that they're removing the annoying parts, and you add more abstraction.
I wonder about troubleshooting, you know, failed queries; that's going to be really difficult, and you have a lot more control when you control the network for these things. And so I'm very dubious about this one. But you know, if it just works, then it's probably worth it. [00:50:45] Speaker B: I mean, they may not have fixed all the sharp edges yet, but I assume those would be future enhancements; you could say, hey, this is great, but here's the problems I still have with your solution, and they would at least take it under advisement. So you're right, it probably doesn't solve all your pain points from ever having done this. But I have more hope now that it gets fixed in a more programmatic way, because it's now a service. [00:51:09] Speaker A: Yeah, no, it's definitely true. And DNS is very rarely going to be your core capability. [00:51:13] Speaker B: Right. [00:51:13] Speaker A: Of whatever business you're in. [00:51:15] Speaker B: Yeah. I mean, if you know someone who knows BIND inside and out, they probably work for Amazon on the Route 53 teams. It's hard to find people who really know BIND these days. And those of us who did know it have tried to forget all of it. [00:51:28] Speaker C: We purposely forgot it. [00:51:29] Speaker A: I drank all of that knowledge away. Yeah, yeah, yeah. [00:51:33] Speaker B: Our final Amazon story for the week: AWS is publishing a walkthrough for connecting GitHub Actions to Amazon ECS Express Mode, automating the full pipeline from code commit to container deployment, including image builds, ECR pushes, and service updates, without manual coordination. The integration uses OIDC for authentication instead of stored AWS credentials, meaning GitHub Actions receives temporary credentials that expire after each workflow run, which reduces the risk surface compared to long-lived access keys sitting in your repository.
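One small but load-bearing detail of pipelines like this is the image-tagging convention: tag each container image with the short commit SHA so every deployment maps to an immutable, traceable build. A minimal sketch, assuming a hypothetical ECR registry and repo name:

```python
# Hedged sketch of short-SHA image tagging. The registry URI and repo name
# below are made-up examples, not a real account.

def image_uri(registry: str, repo: str, commit_sha: str) -> str:
    """Build an ECR-style image URI tagged with the first 7 chars of the SHA."""
    short_sha = commit_sha[:7]
    return f"{registry}/{repo}:{short_sha}"

uri = image_uri(
    "123456789012.dkr.ecr.us-east-1.amazonaws.com",  # hypothetical registry
    "my-service",                                    # hypothetical repo
    "9fceb02d0ae598e95dc970b74767f19372d61af8",      # example full commit SHA
)
# Rolling back is then just redeploying an earlier tag from the
# deployment history, since each tag is immutable.
```

Because the tag is derived from the commit, you can go from a running task straight back to the exact source revision that produced it.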
ECS Express Mode handles the infrastructure heavy lifting, automatically provisioning an ALB, target groups, health checks, auto scaling based on CPU, and security groups, so teams get a production-ready stack from minimal workflow configuration. Image tagging using the first seven characters of the Git commit SHA gives teams precise version traceability and a straightforward rollback path by referencing a specific immutable image in the ECS deployment history. Costs are usage-based, covering ECS Fargate tasks, ECR storage, and data transfer, with no GitHub Actions charges for public repos. The estimated setup time is 20 to 30 minutes, making this a relatively low-friction starting point. I'm really intrigued by this. I have GitHub Actions that deploy containers, and they work just fine and are great. But if I could offload a bunch of this to this service... and I'm actually already interested in potentially moving to ECS Express Mode for other reasons, so I'll be checking this one out. I'll get back to you guys when I test it. [00:52:49] Speaker A: I've written this pipeline so many times. The exact same thing, with the SHA characters, everything about it, which is why this is fantastic, right? If I never had to do it again, I would be very happy, because it's not a new problem to solve. It's one of those toil things you have to do when you're developing a new containerized app: you need to get it out there and have the deployment pipeline. So these are great. I love it. [00:53:13] Speaker C: And as of December it does support Spot. I think I remember originally it didn't support Spot. So that also helps a lot, because that's why I hadn't used it. [00:53:23] Speaker A: I mean, I guess with Express Mode you're probably getting a different price on the hosting if you select Spot. But you're abstracted away from the entire node, aren't you? [00:53:32] Speaker C: Yeah, but I don't want to pay for, you know, a node.
I'd rather pay for the Spot node for my little home project. I don't really care if it goes down for a few hours. [00:53:41] Speaker A: I'm just surprised they let you. I'd assume the whole thing would run on Spot. [00:53:45] Speaker C: It wasn't originally. December 18th. [00:53:47] Speaker B: They would take the cost. [00:53:48] Speaker C: Yeah, somewhere in the month of December, when, you know, we were catching up on re:Invent, Microsoft conferences, and, erm, predictions, we either talked about or missed this one. [00:53:57] Speaker A: Yeah, cool. [00:53:59] Speaker B: We move on to GCP. They have gifts for you, Ryan. They are publishing a recommended security checklist, at a web URL I'm not going to say out loud, featuring 60 curated controls across six domains, including authentication, data protection, and network security, organized into basic, intermediate, and advanced tiers. The checklist is directly motivated by data from the 2025 Google Cloud Threat Horizons report, which found weak credentials and misconfigurations account for nearly 76% of cloud compromises. That's a big number. Can you believe it? [00:54:29] Speaker A: In this day, it's still that big of a problem. [00:54:31] Speaker B: Yeah. A companion Terraform repository on GitHub provides deployable code for the controls, moving the checklist beyond documentation into something teams can act on immediately and consistently. The checklist is free to use and aligns with the open Minimum Viable Secure Product framework, meaning organizations can cross-reference it against existing compliance or vendor-neutral security standards they may already be tracking. [00:54:50] Speaker A: So your mileage may vary. Some of the code in the solution requires really, really high privileges to run in your GCP environment. So it's one of those things where you might not be able to get that far with it unless you're administering the cloud directly.
But there are definitely, I think, a lot of really good, useful things you can take from it. It's just Terraform code in a repo, so you can pick and choose the parts that you want to use and apply those directly. And I like that; anything that allows automation for security and lets people focus on what they care about is pretty great. It will be interesting to run this on, you know, our day job environment and see what it comes up with. [00:55:38] Speaker C: I mean, Ryan, don't you always... [00:55:39] Speaker A: Hopefully nothing, because I've done such an amazing job. [00:55:42] Speaker C: Don't you always run as the highest privileged admin user? [00:55:45] Speaker B: Come on, I just run as global admin all the time, right? That's what everyone does. [00:55:48] Speaker A: You guys. Not today. It's too soon. [00:55:52] Speaker B: Yeah, we'll talk about that next week. [00:55:55] Speaker C: Yeah. [00:55:58] Speaker B: RIP Stryker, that's all you. Yeah, look at the news. A new Agents for Autonomous Network Operations framework is coming out from Google Cloud. The two new components, the autonomous data steward and the core network VoLTE agent, V-O-L-T-E, are both built on Gemini and targeted at telecom operators managing complex network infrastructure. The autonomous data steward addresses a core scaling problem by using a zero-copy architecture with Dataplex Universal Catalog to store metadata pointers instead of duplicating data sets. And the VoLTE agent builds on the data steward's foundations to monitor voice quality metrics like call setup success rates and mean opinion scores, correlate SIP and Diameter signaling data for root cause analysis, and recommend corrective actions. This is all a lot of stuff for telcos, but it's cool. If you're into geeky telco things, this is something to check out. [00:56:44] Speaker C: I only really understood about half of this.
[00:56:47] Speaker A: Yeah, I was trying to figure it out, because I didn't quite understand it on a first read. But I guess that's because I'm not from a telco background. [00:56:54] Speaker B: Yeah. Basically, think about managing massive network sites and having to do call tracking, and then if call tracking is falling down, you know, because you're not getting calls connected, how do you address that with different metrics and KPIs and automation? So there's quite a bit of cool automation there. Just good patterns, in general, that you might be able to apply to other things. [00:57:14] Speaker A: That's cool. [00:57:15] Speaker B: NotebookLM has finally decided to stop going after podcasters and now goes after YouTubers, so thank you for this. Google NotebookLM's new cinematic video overviews move beyond static narrated slides to generate fluid animations and detailed visuals from user-provided sources, using a combination of Gemini 3 and Veo 3 models working together. Gemini functions as the creative director in this pipeline, handling narrative structure, visual style selection, format decisions, and self-refinement passes to maintain consistency across the generated video. This is a consumer-facing AI feature rather than a direct GCP infrastructure offering, but it demonstrates practical multimodal orchestration that GCP customers building their own AI pipelines may find instructive. Availability is currently limited to English-language users on web and mobile who subscribe to Google AI Ultra, which is priced at $250 per month. Which makes perfect sense, because the few times I've used Veo, it's cost me about $45 to make 12 30-second clips. The primary use case centers on education and knowledge synthesis: users can transform documents, research, or other sources into video summaries. So again, a little bit pricey to replace all the YouTubers, but coming soon. [00:58:17] Speaker A: Keep an eye on that one.
Yeah, I was going to see if I could run this against all the data that I'm collecting for AI security, but not anymore. So you will have to trawl through all my data sources in my notebook and chat with the bot like old school, because I refuse to pay for it going forward. [00:58:36] Speaker C: Just send the bill to the CFO. They won't care too much. [00:58:40] Speaker A: No, Justin knows where I live. [00:58:42] Speaker B: I do. Stay away, Ryan. [00:58:50] Speaker C: That's why I live on the other coast, so you'd have to take a flight to come after me. [00:58:55] Speaker B: I, unfortunately, am in driving distance of both Jonathan and Ryan, and so any shenanigans they're up to, I can go stop as quick as possible. Gemini Embeddings 2, Google's first natively multimodal embedding model, is now in public preview via the Gemini API and Vertex AI. Built on the Gemini architecture, it maps text, images, videos up to 120 seconds, audio, and PDFs into a single unified embedding space across 100-plus languages. A notable technical detail is that audio is embedded natively without requiring intermediate transcription, which removes a common pipeline step that previously added latency and potential accuracy loss in multimodal flows. For those of you wondering: embedding is all about getting vector scores, and we use it, for example, to search all of our Cloud Pod show notes and show history. That's what we built into Bolt. So if you're in our chat room, you can go ask Bolt, hey, when were we talking about spot instances? Matt accidentally asked that against the show notes earlier, while I was talking, and it came back and told us that yes, in episode 337 we covered it. Apparently Ryan was quite enthusiastic about it, per the show notes. Of course I was. But I didn't ask, you know, who was the host that day. So maybe, Matt, that was an episode you missed.
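The Bolt show-notes search described here is the classic embedding pattern: embed every note once, embed the question at query time, and rank notes by cosine similarity. A toy sketch with made-up 3-dimensional vectors (a real embedding model returns hundreds or thousands of dimensions, and the episode names here are invented for illustration):

```python
import math

# Toy sketch of embedding-based retrieval over show notes.
# The vectors and note names below are fabricated for illustration only.

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend each show note was embedded once, offline.
notes = {
    "ep337-spot-instances":  [0.9, 0.1, 0.0],
    "ep340-dns-resolver":    [0.1, 0.9, 0.1],
    "ep342-copilot-pricing": [0.0, 0.2, 0.9],
}

# Pretend embedding of the question "when did we talk about spot instances?"
query = [0.8, 0.2, 0.1]

# Nearest neighbor by cosine similarity is the retrieved note.
best = max(notes, key=lambda k: cosine(query, notes[k]))
```

With a natively multimodal model, the same pattern extends to audio clips of the episodes themselves, since everything lands in one embedding space.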
[01:00:13] Speaker C: Now I want to ask the question while we're live and see what happens. [01:00:16] Speaker A: Yeah, this is kind of interesting. I go back and forth on these multimodal models, because I feel like there's so much bloat, and we use the wrong model for so many use cases, and I feel like multimodal is a really good way around that. So it is interesting. I just haven't seen a use case where I'd get a whole lot of benefit from using a multimodal model to get an answer out of an LLM that I couldn't get using other tools, I guess. I don't know. I feel like maybe I'm a Luddite on this. I know people are really into it, so I feel like there's something I don't understand. [01:00:54] Speaker B: Bolt did confirm that Matt was on that episode. [01:00:58] Speaker C: Thanks for calling me out. [01:01:01] Speaker A: We can't lie about anything anymore. [01:01:02] Speaker B: I know. The bot does not lie. He's on top of it. Our last story for Google is all about Google Workspace. They've Gemini-fied all of Docs, Sheets, Slides, and Drive, so now you get all the benefits of Gemini in all of the Google solutions. In Docs, features include style matching across a document and format matching from a reference file, so Gemini can populate a travel itinerary template using flight and hotel details, for example. Sheets gets a "fill with Gemini" capability that lets users drag down a column and have Gemini populate cells with real-time web data or summarize content, which is cool. Drive gains an AI overview feature in search results that summarizes relevant file contents, with citations, before a user even opens the document, which I've used a couple of times already. [01:01:41] Speaker C: Wow. [01:01:42] Speaker B: Which is pretty cool. On the spreadsheet side, it's...
It's reached a 70.48% success rate on the public SpreadsheetBench benchmark, which tests AI models on real-world spreadsheet editing tasks. It does not do pivot tables as far as I can tell, though I need to test it a little more. Again, pivot tables are mad science that no AI can figure out. But those are cool. So if you're in the Google Workspace world, you've got basically what Copilot gave you, but much better than Microsoft's Copilot, which is still terrible. [01:02:09] Speaker A: Well, you do have to be on Ultra or Pro. [01:02:12] Speaker B: Yeah, that's true. You do have to pay a small fortune for both of those. But if you're already using it for Gemini CLI, if you're using that for your coding projects, then you already have access to those things. [01:02:21] Speaker A: Probably already in there. Yeah, I mean, it is cool. I think this is just going to be table stakes, right, for all of our tools. We're going to need it. We're going to be useless without these AI tools in our tool set. Like, I'm already becoming useless. [01:02:34] Speaker C: Going to be? I think I already am, half the time. Even with emails and things, I'm like: you are a pain in the butt; polish this and make me sound more formal. Thank you. [01:02:45] Speaker A: Which is what I want, right? I just want general instructions for every email. Like, did I say something offensive this time? You know, maybe don't send that yet. That's what I want. [01:02:56] Speaker C: Automatically send to trash, and don't send when I tell people that they're morons. Gotta check. [01:03:01] Speaker A: Redirect. [01:03:02] Speaker B: Back at you. [01:03:03] Speaker A: Yeah. What does this sound like to you? Oh, whoops. [01:03:08] Speaker B: Azure is our next segment here, so we can finish up on that.
Azure Databricks Lakebase is now generally available as a managed serverless Postgres offering that stores operational data directly in lakehouse storage. [01:03:20] Speaker A: Cool. [01:03:20] Speaker B: Glad to have another database available, but it's Postgres. [01:03:25] Speaker A: It's not a new database, and it's Databricks. [01:03:26] Speaker B: It's both. It's the best of both worlds. [01:03:28] Speaker A: It's perfectly fine. [01:03:29] Speaker B: It's great. [01:03:30] Speaker A: Yeah. [01:03:31] Speaker B: I mean, I don't think it's bad, I just... [01:03:33] Speaker A: No, it's just so confusing these days when trying to pick and choose tools. [01:03:37] Speaker B: And like, where do you store it? Do you store it in a blob, in some other open or proprietary format, and then use something like Spark to query it? Or do you put it into a data warehouse? Then you can... [01:03:50] Speaker A: Use something like Iceberg. Make sure it's agnostic. [01:03:53] Speaker B: Yeah, and then feed it into a RAG model, into large language model systems. Like, "data is the new currency" is definitely the case, and it's also costing a lot of money for what it provides. Oh yeah. So definitely crazy. [01:04:08] Speaker C: I was going to say, it's not where you store it, it's how many places you store it. That's really the key here. How many places can you store the same data and not get upset? [01:04:18] Speaker B: Well, it's not very sustainable to store the data in multiple places, as the EU will tell you. So data sustainability says you shouldn't have to replicate it, which is why Iceberg is such a great option, because it's agnostic across all these solutions, but it has its own complexities because it's agnostic. So, you know, there's lots of things here. [01:04:35] Speaker A: But then I have to store it in multiple places to, say, comply with GDPR if I've got a global customer base.
And then I have to process the data in separate places, and then correlate the results, and then I have to go have a drink. [01:04:48] Speaker B: Microsoft is releasing Copilot Cowork. And if that triggered a weird flinch in your mind, that's because they combined Copilot, which is Microsoft's AI thing, with Cowork, which is Anthropic's. And you go, huh. I can tell you this is a new Microsoft 365 feature that moves Copilot beyond answering questions to actually executing multi-step work tasks, such as rescheduling calendar conflicts, building meeting packets, and coordinating product launch assets across Outlook, Teams, and Excel. And this is through a partnership with Anthropic. [01:05:17] Speaker A: So Anthropic, not OpenAI? What happened? [01:05:20] Speaker B: I know. The feature is powered by Work IQ, which pulls signals from across Microsoft 365 apps to give Copilot contextual understanding of your work before taking action, with user-controlled checkpoints to approve, pause, or modify tasks. The technical detail is that Cowork integrates Claude from Anthropic alongside Microsoft's own models, reflecting a multi-model approach where Copilot selects the most appropriate model for any given task rather than relying on a single provider. Enterprise governance is built in by default, with identity, permissions, and compliance policies, and Cowork is currently in research preview with limited customers and will expand to the Frontier program in late March. So yeah, this is basically all the things we've been raving about with Cowork for the last few weeks, built into Copilot, which is a win-win in my book. [01:05:58] Speaker A: Which I might be able to use in my corporate environment for the day job, which is exactly what I want. [01:06:03] Speaker B: Exactly. [01:06:03] Speaker C: Yeah, I want to get rid of all the annoying stuff I do and just let it do it for me. [01:06:07] Speaker B: Now here's the problem. Here's where the rub comes in.
[01:06:10] Speaker A: How much? [01:06:11] Speaker B: Yeah, I knew it. If you're familiar with Microsoft 365 and the subscription levels, there's E3 and E5, and those are the two most common licensing levels you as a user are probably on. You probably don't know which one you are unless you're in IT, but if you have things like Defender and Intune on your phones and the Entra Suite and all that, that's typically part of the E5 bundle; and if you don't have those things and just have Outlook and Teams, you're probably on the E3 bundle. Just to give you a rough guess. Again, this is the wildly complex wild west of Microsoft licensing, so what you're actually on, I can't tell you, but that's the general rule of thumb. E5 used to be the top of the line, and over the last couple of years, since E5 came out, they've basically been upcharging you into Copilot licensing on an individual, product-by-product SKU basis. No more. They will now bundle all of it into a new bundle called Microsoft 365 E7, the Frontier Suite, available starting May 1st at $99 per user, bundling Microsoft 365 E5, Microsoft 365 Copilot, and the new Agent 365 into a single SKU that includes Entra Suite, Defender, Intune, and Purview capabilities. Agent 365 will also be generally available May 1 at $15 per user; it functions as a control plane for AI agents, giving IT and security teams a single interface to observe, govern, and secure agents across organizations. Microsoft reports visibility into over 500,000 internal agents as Customer Zero, generating 65,000 daily responses over the past 28 days. Wave 3 of Microsoft 365 Copilot introduces model diversity by adding Anthropic's Claude to the mainline chat alongside OpenAI models. It includes the research preview of Cowork we just talked about, and the concept of Work IQ is central to the announcement.
It positions Microsoft Copilot as differentiated from generic model-plus-connector solutions by embedding organizational context about how people work, who they work with, and what content they use. Adoption metrics cited include paid Copilot seats growing over 160% year over year, daily active usage up 10 times, and the number of customers deploying more than 35,000 seats tripling year over year. If you have Microsoft stock, you should hold onto it. If you do not have Microsoft stock, you should probably think about buying it, because as this rolls out over the next year and customers start renewing their Microsoft 365 licenses, they will take the bundle, because a lot of them are paying a lot of extra money for these Copilot capabilities today. [01:08:27] Speaker A: Yeah, but they're also spending money like hand over fist. [01:08:29] Speaker C: So I don't know. [01:08:30] Speaker A: But this is interesting, because I know, at least from evaluations and talking to people at a couple of different companies when they were rolling this out, originally I think it was something like 30 or 50 bucks a user, and no one wanted to pay that price. [01:08:44] Speaker C: 30 bucks per person, with like X number minimum too, because that was the other piece of it. You had to have like 100 minimum or something. [01:08:50] Speaker A: Oh yeah, right. There was a minimum number of users; it was a large amount. And now they're just tripling that, and so that's crazy. [01:08:59] Speaker B: Well, I'm sure they'll offer you discounts for the first couple of years and people will get into it. It's a long-term thing. Yeah. [01:09:07] Speaker A: I mean, my big gripe was that this was the thing I wanted. This is where I thought AI, especially in the early days, was going to make the biggest impact in my life.
And the fact that it was so expensive made me really angry, because then I couldn't have it directly interact with my email and all that stuff. Now there are other solutions for that, but largely because I thought Microsoft missed the boat on pricing. So, interesting. [01:09:31] Speaker B: We'll see how E7 adoption happens. I'm sure it'll be interesting, if not enlightening, to my budget later. [01:09:40] Speaker A: All of a sudden I was thinking about how else you could determine which level you're on. Like, how angry your IT department is. [01:09:46] Speaker B: Exactly. Right. [01:09:47] Speaker A: Like if they're like, oh, I'll help you fix something, they're on E3. [01:09:51] Speaker C: If they're like, talk to AI, you're on E7. [01:09:55] Speaker A: If they're like, I'm not turning anything on, ever, they're on E5. [01:09:59] Speaker B: Yep. And finally, I've got a couple of Oracle stories for us to wrap up the week. First up, Oracle is launching OCI Cost Anomaly Detection as a no-cost feature that uses machine learning to monitor daily cloud spend across all services and regions, alerting users when costs deviate from forecasted baselines. I mean, this has been in every other cloud forever, so welcome to the party. [01:10:21] Speaker A: Yeah, OCI users, get used to spam email about how you've got an anomaly in your billing, and no one will do anything about it unless it's you. Hopefully you'll do something about it. [01:10:32] Speaker C: I love when I get mine that are like, you're 0.02% higher, and I'm like, cool, is that bandwidth? Like, provide me some insight into it, not just that there's an anomaly. [01:10:44] Speaker B: Yeah. [01:10:45] Speaker A: Also, I don't care if it's an overspend of $122 in my product application. [01:10:52] Speaker B: Yeah, I need a threshold for my anomaly, one that I care about. Right. It's definitely a challenge.
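The threshold the hosts are wishing for here is easy to express: only flag a day's cost when it deviates from the forecast by both a percentage and an absolute dollar amount, so a $0.02 blip never pages anyone. A minimal sketch, with arbitrary illustrative thresholds (10% and $25) rather than anything any provider actually uses:

```python
# Hedged sketch of a "threshold I actually care about" anomaly check.
# The 10% / $25 defaults are made up for illustration.

def is_anomalous(actual: float, forecast: float,
                 pct_threshold: float = 0.10,
                 abs_floor: float = 25.0) -> bool:
    """Flag only deviations that clear BOTH a relative and an absolute bar."""
    deviation = abs(actual - forecast)
    return deviation >= abs_floor and deviation >= forecast * pct_threshold

# A tiny blip on a $100 forecast stays quiet; a $150 day does not.
quiet = is_anomalous(100.02, 100.0)
loud = is_anomalous(150.0, 100.0)
```

Requiring both conditions is the point: the percentage bar filters noise on small bills, and the dollar floor filters noise on large ones.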
And then, because Oracle announces their earnings at such weird timings, we'll talk about Oracle earnings here without the noise. You guys don't have to cover your ears. [01:11:06] Speaker C: Oh, thank God. I had moved over to the mute button. [01:11:09] Speaker B: Oracle reported Q3 fiscal 2026 total revenue of $17.2 billion, up 22% year over year, with cloud revenue specifically hitting $8.9 billion, a 44% increase, marking the first quarter in over 15 years where both organic revenue and non-GAAP EPS grew at 20% or more simultaneously. The remaining performance obligations figure of $553 billion, up 325% from last year, is the headline number worth squeezing. Oracle notes most of its growth comes from large-scale AI contracts, funded either through customer prepayments for GPU purchases or customer-supplied hardware, which is a notably different model than traditional cloud commitments. Oracle raised $30 billion in debt and equity financing within days of announcing a $50 billion capital raise program, with the proceeds tied to funding infrastructure for AI training and inferencing capabilities. Oracle is openly stating it has restructured product development teams into smaller groups due to AI code generation tools, framing this as a cost reduction and productivity improvement for SaaS development, but the workforce implications of building more software with fewer people deserve some attention; look back at Amazon on that one. Oracle raised fiscal year 2027 total revenue guidance up to $90 billion from prior estimates, while maintaining fiscal year 2026 guidance at $67 billion, suggesting Oracle is betting heavily that AI infrastructure demand will remain supply-constrained and that its cloud positioning will capture a meaningful share of the future spending. [01:12:24] Speaker A: Yeah, well, I think that's a pretty good bet, so it makes sense to me.
I also think that, you know, Oracle's kind of leaning into the multicloud, avoid-lock-in angle because of the GPU shortage. Like, people are having to adopt Oracle Cloud to get the capacity they need, which may not be a bad thing. As much as I kind of dislike Oracle as a company, I do feel like diversification of our cloud hosting providers is a good thing.

[01:12:52] Speaker B: Agreed. Well, gentlemen, that is it for another fantastic week here in cloud and AI land. I hope you guys have a good week, and we'll talk to you next time.

[01:13:01] Speaker A: All right, bye, everybody.

[01:13:03] Speaker C: See you next week.

[01:13:07] Speaker A: And that's all for this week in cloud. Head over to our [email protected], where you can subscribe to our newsletter, join our Slack community, send us your feedback, and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.

Speaker B: We do have a quick after show this week. For those of you who pay attention to gaming, you might be aware that Xbox recently replaced Phil Spencer, who has been at Microsoft for 38 years, with a new Xbox CEO, Asha Sharma. The announcement is notable given persistent industry speculation that Microsoft might exit the console hardware business entirely. But no: after the CEO change, Project Helix is the codename of its next console, which would be interesting against the current RAM shortage, because Xboxes use a ton of RAM, and there's also the question of how much cloud is going to get pulled into the next Xbox. As a person who wishes he gamed more than he does, and who has an Xbox, I have sort of felt like Xbox lost its mojo in the last few years. So I'm curious to see how this transitions over the next few years. But I'm excited; a new console is always exciting if you're in the gaming world. I just wish I played more. I do.

[01:14:29] Speaker A: Yeah.
[01:14:29] Speaker B: It's just one of those things I don't get a chance to do.

[01:14:31] Speaker A: A lot of it is kind of strange, because console gaming of any sort, whether it's Xbox or PlayStation, seems to be declining. A lot of it is still PC gaming, and that's still on the Windows platform, so that's good for Microsoft.

[01:14:46] Speaker B: I mean, PC gaming has been big ever since VR became a thing.

[01:14:49] Speaker A: Yeah. Mac for a little while tried to make some notable inroads with the CPU architecture change, but that seems to have gone out the window.

[01:14:58] Speaker C: Didn't they have their own VR headset that I haven't seen anybody ever use?

[01:15:03] Speaker B: Yes, the PS5 does have a VR headset...

[01:15:05] Speaker A: ...that no one uses. Yeah, that I've never once seen anyone use.

[01:15:11] Speaker B: Yeah. I definitely think console gaming has degraded some, but kids love console gaming, so it's one of those things. My first gaming was on Nintendo back in the day, then I moved into PC gaming, and then I got back to Xbox when Halo came out, because Halo was awesome, and I played that quite a bit. But in general, you're right: the console market is contracting pretty meaningfully, at least for Xbox; PS5 has held kind of steady. And so this kind of goes back to why you need a new CEO and a new mindset around Xbox, I think. They moved away from console exclusives, which was a big driver of the business. Plus, the games they were doing exclusively for Xbox were terrible, generally, with the exception of Halo. And all the good games are ending up on PlayStation. You know, things like... God, what is that show with Pedro Pascal? The Last of Us.
[01:16:03] Speaker A: The Last of Us, yeah.

[01:16:04] Speaker B: You know, so some of those games, FNAF, et cetera, those are all things coming out on the PS5 side that have been driving pretty aggressive adoption over there for quite a while. So it'll be interesting to see if they can actually make this thing interesting. It's also a way for them to potentially bring AI to your living room. That's one of the things: they had the Kinect a while back. Remember, it was how you got a camera into your living room for the first time, to be able to play Kinect games.

[01:16:26] Speaker C: I thought those were fun.

[01:16:26] Speaker B: I thought they were cool too. But, you know, people got a little skeeved out about having a camera, from a privacy perspective.

[01:16:33] Speaker A: It followed you around the room.

[01:16:34] Speaker C: Yeah, yeah.

[01:16:35] Speaker B: But I do think Xbox was the...

[01:16:36] Speaker A: ...first sort of media server in a lot of people's living rooms, you know, for digital media, and that's become sort of commonplace. So yeah, there's definitely some opportunity there for them, which would be cool to see. And then, yeah, who knows what's going to happen with the AI world in that space. There's a huge opportunity there; I just don't even know what it would look like.

[01:16:55] Speaker B: It's interesting, because some of the rumors about this Helix are that it's going to be a premium product, a thousand to fifteen hundred dollars. I'm like, I'm not sure I'm going to buy a gaming machine at that pricing. I think the current Xbox is $799.99; it's pretty pricey, and I didn't buy it for myself. It was given to me as a gift for Christmas. Well, I guess I technically gave it to myself, from a dollar perspective, but my wife and my friends all gave it to me as part of a Christmas gift. So it's definitely expensive.
And so I don't know if a twelve or fifteen hundred dollar box makes any sense. And they talked about doing a subscription for a while too, where you could subscribe to your Xbox. I'm like, oh good, another subscription payment, just like everything else in my life.

[01:17:37] Speaker A: So maybe before they do that, they should reach out to Apple and see how the Vision Pro is doing.

[01:17:42] Speaker B: I mean, if the Vision Pro were a subscription device, I might actually have bought one.

[01:17:48] Speaker A: Well, right, but you'd probably still have to pay thousands of dollars. That's always how this works.

[01:17:54] Speaker B: Exactly. All right, well, let's keep an eye on Xbox. I'm curious. Anything else? Matt, we didn't talk about this; I know you added it to the show notes because you were interested.

[01:18:02] Speaker C: I'm not actually that interested. I'm a terrible gamer; never really been into it. But I think it's always fun to see where these things are going.

[01:18:10] Speaker B: Well, a lot of times things lead in the console market and then follow into computing, and sometimes they go the other way. So we'll see what happens in this case. But if they could put some kind of custom AI into your gaming, you know, custom levels and stuff, there are some really cool ideas you could come up with that might justify the premium price. But the games have to be perfect.

[01:18:30] Speaker A: They have to be really good. Yeah.

[01:18:32] Speaker B: All right, gentlemen, talk to you next week.

[01:18:35] Speaker A: All right, see you. Bye.
