326: Oracle Discovers the Dark Side (And Finally Has Cookies)

Episode 326 October 23, 2025 00:50:54
The Cloud Pod



Hosted By

Jonathan Baker Justin Brodley Matthew Kohn Ryan Lucas

Show Notes

Welcome to episode 326 of The Cloud Pod, where the forecast is always cloudy! Justin and Ryan are your guides to all things cloud and AI this week! We’ve got news from SonicWall (and it’s not great), a host of goodbyes to say over at AWS, Oracle (finally) joins the dark side, and even Slurm – and you don’t even need to ride on a creepy river to experience it. Let’s get started! 

Titles we almost went with this week

General News 

01:24 SonicWall: Firewall configs stolen for all cloud backup customers

02:36 Justin – “You know, providing your own encryption keys is also good; not allowing your SaaS vendor to have the encryption key is a positive thing to do. There’s all kinds of ways to protect your data in the cloud when you’re leveraging a SaaS service.”

04:43 Take this rob and shove it! Salesforce issues stern retort to ransomware extort

06:31 Ryan – “I do also really like Salesforce’s response, just because I feel like the ransomware has gotten a little out of hand, and I think a lot of companies are quietly paying these ransoms, which has only made the attacks skyrocket. So making a big public show of saying we’re not going to pay for this is a good idea.”

AI is Going Great – Or How ML Makes Money 

07:06 Introducing AgentKit

09:03 Codex Now Generally Available

09:48 Ryan – “I don’t know why, but something about having it available in Slack to boss it around sort of rubs me the wrong way. I feel like it’s the poor new college grad joining the team – who gets delegated all the crap jobs.”

10:14 Introducing the Gemini 2.5 Computer Use model

11:48 Ryan – “I think this is the type of thing that really is going to get AI to be as big as the agentic model in general; having it be able to understand clicks and UIs and operate on people’s behalf is going to open up just a ton of use cases for it.”
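The observe-act loop these computer-use models run can be sketched as follows. This is a hedged illustration only: the `Action`, `stub_model`, and `run_agent` names are invented for the sketch and are not Gemini's actual API; the model call is stubbed out entirely.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type", or "done"
    payload: str = ""  # e.g. an element to click or text to type

def stub_model(screenshot: str, history: list[Action]) -> Action:
    # Hypothetical stand-in for a computer-use model call:
    # click the submit button once, then report the task done.
    if not any(a.kind == "click" for a in history):
        return Action("click", "submit-button")
    return Action("done")

def run_agent(take_screenshot, execute, model, max_steps: int = 10) -> list[Action]:
    """Observe-act loop: screenshot -> model picks an action -> execute -> repeat."""
    history: list[Action] = []
    for _ in range(max_steps):
        action = model(take_screenshot(), history)
        if action.kind == "done":
            break
        execute(action)
        history.append(action)
    return history
```

The real service adds the pieces that make this safe at scale: per-step safety validation before `execute`, and user confirmation gates for sensitive actions.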

AWS

12:35 AWS Service Availability Change Announcement

13:53 Ryan – “It’s interesting, because I was a heavy user of CodeGuru and CodeCatalyst for a while, so the announcement I got as a customer was a lot less friendly than maintenance mode. It was like, your stuff’s going to end. So I don’t know if it’s true across all these services, but I know with at least those two. I did not get one for Glacier – because I also have a ton of stuff in Glacier, because I’m cheap.” 

17:01 AWS Direct Connect announces 100G expansion in Kansas City, MO

18:07 AWS IAM Identity Center now supports customer-managed KMS keys for encryption at rest | AWS News Blog

18:52 Justin – “Incorrect setup can disrupt Identity Center operations; revoking your encryption key might be bad for your access to your cloud. So be careful with this one.”
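For illustration, the KMS side of this might look like the following. This is a sketch under assumptions, not the full Identity Center configuration: the alias name is hypothetical, and the key policy still has to grant Identity Center permission to use the key.

```shell
# Create a symmetric customer-managed KMS key in the same
# account and Region as the Identity Center instance (required).
aws kms create-key \
  --description "CMK for IAM Identity Center encryption at rest" \
  --key-spec SYMMETRIC_DEFAULT \
  --key-usage ENCRYPT_DECRYPT

# Optional: a friendly alias (hypothetical name) and automatic
# rotation, so key material is rotated without manual key swaps.
aws kms create-alias \
  --alias-name alias/identity-center-cmk \
  --target-key-id <key-id-from-create-key-output>
aws kms enable-key-rotation --key-id <key-id-from-create-key-output>
```

Disabling this key, scheduling its deletion, or breaking its key policy is exactly the “revoking your encryption key” failure mode the quote warns about.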

19:28 New general-purpose Amazon EC2 M8a instances are now available | AWS News Blog

20:01 Ryan – “That’s a big one! I still don’t have a use case for it.” 


20:09 Announcing Amazon Quick Suite: your agentic teammate for answering questions and taking action | AWS News Blog

22:13 Justin – “This is a confusing product. It’s doing a lot of things, probably kind of poorly.” 

23:13 AWS Strengthens AI Security by Hiring Ex-DataStax CEO As New VP – Business Insider

26:03 Justin – “Also, DataStax was bought by IBM – and everyone knows that anything bought by IBM will be killed mercilessly.” 

26:50 Amazon Bedrock AgentCore is now generally available

28:17 Ryan – “This really, to me, seems like a full app; this is a core component where instead of doing development, you’re just taking AI agents, putting them together, and giving them tasks. Then, the eight-hour runtime is crazy. It feels like it’s getting warmer in here just reading that.”

28:49 AWS’ Custom Chip Now Powers Most of Its Key AI Cloud Service — The Information

29:39 Ryan – “Explains all the Oracle and Azure Nvidia announcements.” 

30:16 Introducing Amazon EBS Volume Clones: Create instant copies of your EBS volumes | AWS News Blog

32:06 Running Slurm on Amazon EKS with Slinky | Containers

GCP

33:09 Introducing Gemini Enterprise | Google Cloud Blog

35:01 Justin – “I think both Azure and Amazon have similar problems; they are rushing so fast to make products, that they’re creating the same products over and over again, just with slightly different limitations or use cases.” 

36:05 Introducing LLM-Evalkit | Google Cloud Blog

37:09 Ryan – “Reading through this announcement, it’s solving a problem I had – but I didn’t know I had.” 

38:17 Announcing enhancements to Google Cloud NetApp Volumes | Google Cloud Blog

40:30 Justin – “I have a specific workload that needs storage, that’s shared across boxes, and iSCSI is a great option for that, in addition to other methods you could use that I’m currently using, which have some sharp edges. So I’m definitely going to do some price calculation models. This might be good, because Google has multi-writer files, like EBS-type solutions, but does not have the performance that I need quite yet.”

Azure

41:08 GitHub Will Prioritize Migrating to Azure Over Feature Development – The New Stack

43:17 Ryan – “I just hope the service stays up; it’s so disruptive to my day job when GitHub has issues.” 

43:33 Microsoft 365 services fall over in North America • The Register

44:17 Introducing Microsoft Agent Framework | Microsoft Azure Blog

44:48 Justin – “We continue to be in a world of confusion around Agentic and out of control of Agentic things.” 

45:54 NVIDIA GB300 NVL72: Next-generation AI infrastructure at scale | Microsoft Azure Blog

47:24 Ryan – “Pricing isn’t disclosed because it’s the GDP of a small country.” 

48:05 Generally Available: CLI command for migration from Availability Sets and basic load balancer on AKS 

49:01 Ryan – “This is why you drag your feet on getting off of everything.” 

Oracle

49:12 Announcing Dark Mode For The OCI Console

Closing

And that is the week in the cloud! Visit our website, the home of the Cloud Pod, where you can join our newsletter, Slack team, send feedback, or ask questions at theCloudPod.net or tweet at us with the hashtag #theCloudPod



Episode Transcript

[00:00:00] Speaker A: Foreign. [00:00:06] Speaker B: Welcome to the cloud pod where the forecast is always cloudy. We talk weekly about all things aws, GCP and Azure. [00:00:14] Speaker C: We are your hosts, Justin, Jonathan, Ryan and Matthew. [00:00:18] Speaker A: Episode 326 recorded for October 14, 2025. Oracle discovers the dark side finally has cookies. Good evening, Ryan. How you doing? [00:00:28] Speaker C: I'm doing well, Doing well. Good, good. [00:00:31] Speaker A: How'd you guys do last week while I was in Alaska enjoying myself? [00:00:35] Speaker C: Well, I don't remember. I was so tired as we. I think we discussed probably to exhaustion, pun intended on the show. But yeah, I think me and Matt, we pulled out what we could. Hopefully isn't entertaining. If nothing else, just to make fun of us for being semi coherent. [00:00:50] Speaker A: I mean, I'm just proud that you guys get it done. Like, it's really. It's. You know, it's not that I have high hopes or. No, I do. I think you guys do a good job without me, but I mean, just in general, I just proud that you guys get it. You guys could. It takes me enough time to get you guys coordinated as it is to have you guys do it without me. I'm always happy about. But Matt, Matt does help out. Yeah. Yeah. Because if it was up to me, we all know you, Peter and Jonathan, there was no hope, no chance. But yeah, Matt. Matt's a little bit more you. Like he cares. Not that. Not that you and Jonathan did this. [00:01:19] Speaker C: No, it's just organized and has executive function. It's not just like completely scattered and running around going, oh, no, it's time for. [00:01:26] Speaker A: Yeah. [00:01:26] Speaker C: Which is how I usually am right before we record. [00:01:29] Speaker A: Agreed. All right, well, let's get into some news for this week. 
First up, apparently SonicWall has confirmed that all customers using their cloud backup service had their firewall configuration files exposed in a breach, expanding their initial estimate from 5% of their customers to 100% of their cloud backup users. That's a big difference. [00:01:48] Speaker C: Oh. [00:01:49] Speaker A: The exposed backup files contain AES-256 encrypted credentials and configuration data, which could include MFA seeds for TOTP authentication, potentially explaining recent Akira ransomware attacks that bypassed MFA. SonicWall requires affected customers to reset all credentials, including local user passwords, TOTP codes, which I guess is. What is that? What is a TOTP code? Security man. [00:02:15] Speaker C: Like temporary one time password. [00:02:18] Speaker A: Time based. One time password. Okay. Acronyms. Security. [00:02:21] Speaker C: I was so close. [00:02:22] Speaker A: Yeah, yeah. VPN shared secrets, API keys and authentication tokens across their entire infrastructure. This incident highlights a fundamental security risk of cloud based configuration backups, where sensitive credentials are stored centrally, making them attractive targets for attackers. And the breach demonstrates why WebAuthn passkeys offer a superior security architecture, since they don't rely on shared secrets that can be stolen from backups or servers. I mean, also, you know, providing your own encryption keys, you know, is also good. And not allowing your SaaS vendor to have that encryption key is a positive thing to do. There's all kinds of ways to protect your data in the cloud when you're leveraging a SaaS service. I would have higher expectations of a company like SonicWall. Even though I don't really respect SonicWall as a vendor for firewalls, small businesses do highly rely on them, and I appreciate that you would expect this to work and not be hacked. [00:03:14] Speaker C: Yeah, I mean, this looks bad, right?
Like, I, I'm not, I've never used SonicWall. I'm not familiar with it, but it's, you know, having cloud backups, you know, as a service is definitely something you see across a lot of different things. And this definitely highlights the risk of that. [00:03:30] Speaker A: Right. [00:03:30] Speaker C: It's, it's a convenience feature for sure, because then you're not managing your own backups and dealing with all that. But it's quite the lucrative target, which is scary. Yeah, this isn't the type of thing you want stolen. [00:03:44] Speaker A: No, not at all. They've actually branched out quite a bit since I last checked out SonicWall. They were, you know, again, like when I was in the networking space, SonicWall was kind of a big deal. They got bought a few times or merged with other companies. But it looks like they have a next gen firewall, so they're keeping up with Palo Alto on the marketing; something called a hybrid mesh firewall. No idea what that is. Then they have SD-WAN, and they have network security management. They have hosted email security solutions and on-prem email security for your spam filtering. They've got switches and access points, so they must have merged with somebody who does wireless networking. And then they've got security service edge for VPNs, basically, and managed XDR solutions. So yeah, good job, SonicWall. Do all those services back up to their cloud, or just the firewall? Because some of those you also don't want your data to be exposed on. Yeah. [00:04:33] Speaker C: I mean, yeah. I mean, the article points out VPN shared secrets as something specific, so it would make sense. Maybe it's everything. [00:04:44] Speaker A: All right. Our next article comes from the Register, so I apologize for the British headline. I don't understand, and Jonathan's not here. Take this, rob, and shove it.
Salesforce issues a stern retort to ransomware extort. Basically, Salesforce is refusing to pay ransomware demands from criminals claiming to have stolen nearly 1 billion customer records, stating they will not engage, negotiate with or pay any extortion demand. The firm stance sets a precedent for how major SaaS vendors handle ransomware attacks. The stolen data appears to be from previous breaches rather than a new intrusion, specifically from when ShinyHunters compromised the Salesloft Drift application earlier this year. Attackers used stolen OAuth tokens to access multiple companies' Salesforce instances. The incident highlights the security risk of third party integrations in cloud environments, as the breach originated through a compromised integration app rather than Salesforce's core platform. This demonstrates how supply chain vulnerabilities can expose customer data across multiple organizations. Scattered Lapsus$ Hunters set an October 10 deadline for payment and offered $10 in Bitcoin to anyone willing to harass executives of affected companies. This unusual tactic shows evolving extortion methods beyond traditional ransomware encryption. And Salesforce maintains there's no indication their platform has been compromised and no known vulnerabilities in their technologies were exploited. Which is kind of them passing the buck a little bit, because again, yes, it is a third party integration into their solution that was compromised, which is what exposed these OAuth tokens. But they're still a partner of Salesforce. They grant them partner rights, so I feel like they have some responsibility there potentially. [00:06:12] Speaker C: Yeah, I mean it's, it's interesting because it's, it's frustrating because it's if, if you're Salesforce, someone with effectively a valid, you know, OAuth token accessed, you know, a customer instance.
And so it's from their standpoint, like if, if you, if you, if a customer loses a password or, you know, they expose that, you know, there's not much that Salesforce can do. So I do kind of get that. I do also really like Salesforce's response just because I, I feel like the ransomware has gotten a little out of hand and I think a lot of companies are quiet, quietly sort of paying these ransoms and which has only just made the, the attacks just skyrocket. So making a big public show of, of, you know, saying we're not going to pay for this is, is a good, good idea. [00:07:02] Speaker A: Agree. Moving on to AI is how ML makes money. This week, OpenAI has back with a couple of new things for us. So first up is Agent Kit provides a framework for building and managing AI agents with simplified deployment and customization options, addressing the growing need for autonomous AI systems in Cloud environments. This new tool integrates with existing OpenAI technologies and supports multiple programming languages, enabling you, the developer, to create agents that can interact with various cloud services and APIs without extensive infrastructure setup. Agent Kit's architecture allows for efficient agent lifecycle management, including deployment monitoring and behavior customization, which could reduce operational overhead for your business running AI workloads at scale. Key use cases include automated customer service agents, data processing pipelines, and intelligent workflow automation that can adapt to changing conditions in your cloud. Native app development matters for cloud practitioners as it potentially lowers the barrier to entry for implementing sophisticated agents while providing the scalability and reliability expected enterprise cloud deployments. No mention of security though, so minor, minor complaints. 
[00:08:04] Speaker C: Yeah, I mean I haven't used any of these types of tools, you know, because I haven't really set up a complex multiple agent workflow for anything, you know, in my personal life or in my day to day job. But I, you know, it is kind of becoming cumbersome in how many places you have to set up and configure MCP servers for everything. So I do understand, I do like that we're getting sort of that ecosystem, you know, getting fleshed out for sort of management. I do think that, you know, as much as I abhor no code, I do feel like this is one of those areas where a lot of people could really benefit from setting up agents in a no code sort of solution with AI. So I, I do like that a whole lot. I'll have to play with this and see if I can provide any like real feedback, but it seems cool. [00:08:59] Speaker A: Yeah, I think it has potential to be very interesting. If you're a big fan of OpenAI's Codex, it's now generally available, offering you a GPT-5-based model that's fine-tuned specifically for code generation and understanding across multiple programming languages. It has several new features for the GA. First, a new Slack integration to delegate tasks or ask questions to Codex directly from a team channel or thread, just like a coworker. And yes, I'll also be ignoring you, Codex, when you message me a DM during a meeting asking me to do something for you. A Codex SDK to embed the same agent that powers Codex CLI into your own workflows, tools and apps for state of the art performance on GPT-5-Codex without more tuning. And new admin tools with environment controls and monitoring analytics dashboards for your ChatGPT workspace admins to have more control. [00:09:45] Speaker C: I don't know why, but something about the, the, you know, having it available in Slack to boss it around just sort of rubs me the wrong way. Like it's gonna.
I feel like it's the poor new college grad joining the team who gets just delegated all the crap jobs. Yeah, still anthropomorphizing AI, I guess, but it's pretty funny indeed. [00:10:10] Speaker A: And then the final gift this week. Sorry, this one's from Gemini, not from OpenAI. Google is giving you the Gemini 2.5 computer use model via the Gemini API, enabling your AI agent to interact with graphical user interfaces through clicking, typing and scrolling actions. Available to you in Google AI Studio and Vertex AI for developers to build your automation agent. The model operates in a loop, using screenshots and action history to navigate web pages and applications, outperforming the competitors on web and mobile control benchmarks while maintaining the lowest latency among tested solutions. Built-in safety features include per-step safety validation and system instructions to prevent high-risk actions like bypassing captchas or compromising security, with developers able to require user confirmation for sensitive operations. Early adopters include Google teams using it for UI testing and workflow automation, with the model already powering Project Mariner, Firebase Testing Agent and AI mode in search, which is nice. So thanks, Google. [00:11:06] Speaker C: Yeah, this is seemingly really powerful. I went down the rabbit hole a little bit on this, trying to figure out how I could, you know, get started with it, and it's a little cumbersome and not very clear. But the main announcement is really just the sort of specialization of the model, having it be very finely tuned towards understanding UIs. The demos look pretty cool. There's a couple really simple ones. My favorite was the simple 2048 game, like having the AI agent basically play that using arrow commands, which is kind of cool. I think this is the type of thing that really is going to get AI from.
You know, like it's going to be as big as just the agentic model in general. I think having it be able to click and understand UIs and operate on people's behalf is going to open up just a ton of use cases for it, which is neat. I know I have a ton of things where, you know, if I could never look at a UI again, it'd be awesome, as I have two screens open with UIs on both of them just to record the podcast. [00:12:13] Speaker A: Indeed. I'm starting to get more of a tan from the multiple monitors I now have on my desk; I'm blinded by light. All right, well, Amazon has bad news for you, or maybe for you if you're using certain services: they're moving 19 services to maintenance mode starting on November 7, 2025, in just a few short weeks. These services include ones that surprised me a bit. Glacier is on the list. Yeah. What now? Some other ones that jumped out to me: the .NET modernization tools, which seems like something people will be doing a lot of these days with AI, but maybe it's being deprecated for an AI Q version of that. The Fraud Detector, the CodeGuru Reviewer, the Amazon Cloud Directory, S3 Object Lambda, the Amazon Web Access client for PCoIP, which I don't even know what that is. So, yeah, goodbye, good riddance. AWS Application Discovery Service, CodeCatalyst, HealthOmics variant and annotation stores, IoT SiteWise Edge data processing pack and SiteWise Monitor for IoT, the Mainframe Modernization service, the Migration Hub, and then Snowball Edge Compute Optimized and Edge Storage Optimized, AWS Systems Manager Change Manager and Incident Manager, as well as AWS Thinkbox Deadline 10, which was a tool for people in the media industry. So, that's interesting and a little bit surprising.
[00:13:36] Speaker C: It's interesting, because, you know, I was a heavy user of CodeGuru and CodeCatalyst for a while, and so the announcement I got as a customer was a lot less friendly than maintenance mode. It was like, your stuff's going to end. So I don't, I don't know if it's true across all these services, but I know with at least those two. I did not get one for Glacier, because I also have a ton of stuff in Glacier. I'm cheap. [00:14:01] Speaker A: Well, I mean, this is the one thing about maintenance mode: they do say current customers can continue using the service while exploring alternative solutions. They're not saying it's being sunset per se, but they are definitely, you know, making it appear that it's not something they want new customers to have, which is interesting. So I guess they just think that other S3 solutions they've given are low enough cost that this doesn't make sense anymore. It is sort of interesting. I would love to know more of the details of why they think this one should go away. [00:14:32] Speaker C: Yeah, I mean, Glacier, I imagine it's, it's very specific to that. [00:14:36] Speaker A: Right. [00:14:36] Speaker C: I'm sure the hardware in the back end and sort of the mechanisms for rehydrating data are probably antiquated. I'll look at cost comparisons, because I do have a whole bunch of data, you know, vault workloads that I have just for backups for years that I've, you know, hopefully will never need. But maybe it is just cheaper to have that in some sort of low-resiliency, you know, S3. But the notification for CodeCatalyst was not maintenance mode; it was, the service is ending. So like, it's kind of funny. Yeah, that one's. [00:15:09] Speaker A: I mean, so maintenance mode doesn't say the service is going away, but the ones they are killing they're actually sunsetting.
Amazon Thinspace, Amazon Lookout for Equipment, AWS IoT Greengrass V1 and AWS Proton are all entering sunset, for which they recommend reviewing the migration documentation right away. And then the AWS Mainframe Modernization application testing service is no longer available as of October 7th. So there's a couple here that are a little more serious than the maintenance mode ones. But yeah, you know, Amazon continues to be cutting costs, and some of these make sense. Like, again, CodeGuru Reviewer, potentially the Fraud Detector, maybe parts of CodeCatalyst. Some of these things I can see, like, oh, that's going to go into, you know, a Q feature, or an AI feature is going to replace it, because it's probably some ML-backed thing that AI can now do a better job of. Interesting, the Change Manager and Incident Manager: I just saw a press release for Incident Manager in the federal space this week, so that's, you know, sort of interesting too. Some of these seem a little bit unclear, because they still have customers using these things, and they have some value in some use cases. [00:16:16] Speaker C: Yeah, I mean, it is funny. Amazon for the longest time didn't kill anything, and now, now they are starting to kill things. You know, it'll be. They're not quite at killed-by-Google levels, you know, yet, but it's getting there, all right. [00:16:30] Speaker A: Moving along on the AWS side, apparently AWS Direct Connect now offers 100 Gbps dedicated connections with MACsec encryption in the Netrality KC1 data center in Kansas City. I know we wouldn't normally talk about this article, since these come out all the time, but our former co-host Peter is located in Kansas City. And this really only brings up the question: what's Peter up to? Any speculation? [00:16:53] Speaker C: And why does he need all that bandwidth? [00:16:55] Speaker A: Right? Exactly. I mean, he already has Google Fiber. He's already living the three-gigabit-per-second dream.
But you know, 100 gigs, you know, it's quite a bit. [00:17:05] Speaker C: That's a lot. [00:17:06] Speaker A: Yeah. [00:17:07] Speaker C: I mean, I guess for Direct Connect, you know, you're typically not running personal workloads, I guess. [00:17:12] Speaker A: I mean, we don't know that. You could have 100 gigs for your house if you really wanted to. I mean, it'd cost you a lot of money, but you could do it. I had friends when I was in school who had T1s at their house. I thought they were so cool. [00:17:27] Speaker C: Yeah, I remember those days. [00:17:30] Speaker A: Yeah. So Peter, let us know what you're up to. Bitcoin mining? What is it? In your retirement? All right. AWS IAM Identity Center now supports customer-managed KMS keys for encrypting identity data at rest, giving organizations in regulated industries full control over the encryption key lifecycle, including creation, rotation and deletion. This addresses compliance requirements for customers who previously could only use AWS-owned keys. The feature requires symmetric KMS keys in the same AWS account and Region as the Identity Center instance, with multi-Region keys recommended for future flexibility. Not all AWS managed applications currently support this with Identity Center, though, so be aware of that and make sure you read the documentation before you go turn this on widely and then break access to all the things you care about and need. Standard AWS KMS pricing applies for key storage and API usage, while Identity Center remains free, of course. And key considerations include the critical nature of proper permission configuration. Incorrect setup can disrupt Identity Center operations; like, revoking your encryption key might be bad for your access to your cloud. So be careful with this one. [00:18:30] Speaker C: Yeah, I mean, if you mess up this one, it's going to be very difficult to restore. [00:18:35] Speaker A: Right? It's going to be a bad day.
[00:18:36] Speaker C: Yeah, but I do think it's good. You know, especially as we were talking about SonicWall having all those centralized backups and passwords and everything stolen, having the ability to control encryption and revoke the key is a good thing. [00:18:56] Speaker A: Amazon's launching the M8a instance. I just, I remember the M3 when it first came out. It's powered by the 5th gen AMD EPYC Turin processor, so they're up to 30% better performance and 19% better price performance than the M7a instance for general purpose workloads. The new instances feature 45% more memory bandwidth, a 50% improvement in networking bandwidth (75 gigabits per second) and EBS bandwidth of 60 gigabits per second, making them suitable for financial applications, gaming, databases and SAP-certified enterprise workloads. Always like a new instance. [00:19:27] Speaker C: Yeah, that's a big one. I still don't have a use case for it. [00:19:33] Speaker A: I can introduce you to some SAP people who have lots of use cases for it. Amazon continues to muddy the waters of their agentic AI branding with the new Amazon Quick Suite, your agentic teammate for answering questions and taking action. Quick Suite combines AI-powered research, business intelligence and automation into a single workspace, eliminating the need to switch between multiple applications for data gathering and analysis. The service includes Quick Research for comprehensive analysis across enterprise and external sources, Quick Sight for natural language BI queries, and Quick Flows and Quick Automate for process automations. The Quick Index serves as the foundational knowledge layer, creating a unified searchable repository across databases, documents and applications that powers AI responses throughout the suite. This addresses the common enterprise challenge of fragmented data sources by consolidating everything from S3, Snowflake, Google Drive and SharePoint into one intelligent knowledge base.
Okay, so wait, there were already multiple services that do this. So this is the third of these, I think, unless Amazon's killed one of them that I've forgotten about, because they have Q for Business, which does this. And then wasn't it Kendra, something like that? [00:20:41] Speaker C: I thought Kendra was the Cody thing. [00:20:44] Speaker A: No, Amazon Kendra is the enterprise search engine. And then there was maybe one other one out there too, but yeah, they all had connectors for different things. So that's. So we have a third one now. The automation capabilities are split between Quick Flows for business users' natural language workflows and Quick Automate for technical teams' complex multi-department processes of approvals, routing and system integrations. So forms and workflows. Both tools generate workflows from simple descriptions, but Quick Automate handles enterprise-scale processes like customer onboarding with advanced orchestration and monitoring. And existing Amazon QuickSight customers will be automatically upgraded to Quick Suite with all current BI capabilities preserved under the QuickSight branding, maintaining the same data access controls and user permissions. So now it's doing reporting. And then finally, the service also introduces Spaces for contextual data organization and custom chat agents that can be configured for specific departments or use cases. So it's a bit of Agentspace too. So yeah, this is a confusing product. Yeah, doing a lot of things, probably kind of poorly. [00:21:43] Speaker C: Well, and you know, like, it has the same flaws as QuickSight, where it's got a dedicated portal and, you know, separate sort of authentication from the rest of your Amazon ecosystem. And I, you know, I had to reread the article and then also dive into some product docs, because I was really confused about what it is, what it does, and why it was needed.
And it really does feel like, if you're into QuickSight, this is a, like a, a one-stop shop kind of for all the business data, and I guess they're trying to sell this on its own. I guess, like, I don't know, I. [00:22:23] Speaker A: I assume it's maybe like Agentspace, where they're trying to sell it on its own, but it's. [00:22:28] Speaker C: It could be. [00:22:29] Speaker A: Yeah. Yeah. Well, Amazon has made a hire this week for someone to be the new VP of Security Services and Observability, reporting directly to CEO Matt Garman, to strengthen security offerings as AWS expands its AI business. And he is Chet Kapoor, former DataStax CEO. So we hired the database guy to be the new head of Security Services and Observability. Okay, don't understand that fully, but sure. Kapoor brings experience from DataStax, where he led Astra DB development and integrated real-time AI capabilities, positioning him to address the security challenges of increasingly complex cloud deployments. Don't think they know what security and databases are? The role consolidates leadership of the security services, governance and operations portfolios under one executive, with teams from Jirenhouse, Nandini Romani, Georgia Sataris and Brad Marshall now reporting to Kapoor directly. This hire also follows recent AWS leadership changes, including the departures of VP of AI Matt Wood and VP of Generative AI Vasi Philomin, signaling AWS is focusing on strengthening AI security expertise. And Kapoor will work alongside AWS CISO Amy Herzog to develop security and ML services that address what Garman describes as changing requirements driven by AI adoption. So, any speculation on this one? I have some. Do you? I do. I do. [00:23:46] Speaker C: This is confusing. [00:23:47] Speaker A: Yeah. I mean I don't.
So what I think is actually being said here is that both Azure and Google have invested heavily in SIEM and SOAR, and they feel threatened that Security Command Center and the like are available on those clouds while Amazon has no service competing with either of them — the Azure one, or Chronicle, which is Google's. Right? And so by hiring a guy who's really big on data from DataStax, which is going to be big-data led, and putting him in charge of security services and observability, Amazon is going to try to build out a competitive product to both Chronicle and Security Command Center. That's how I read this. [00:24:31] Speaker C: You are 100% right now that you say it out loud, because all the security tooling for a long time has just been about big data and how do you mine the lake. So this does make a ton of sense now that you say that. [00:24:46] Speaker A: Yeah. It's weird that you hire a data guy, until you realize, oh well, to build a proper threat modeling system, and SIEM and SOAR and all of the AI tooling you want for a SOC team, you need a lot of data analysis. So I think that's why it makes sense. It just happens that he was a CEO — the only weird part of this is how much was he actually doing developing it? I don't know, but definitely an interesting hire. I would expect, not this year, but maybe next year, to potentially see some re:Invent announcements. So mark that on your sheet. Also, DataStax was bought by IBM, and everyone knows that anything bought by IBM will eventually be killed. So yeah, this is probably a much better choice for Chet. So I'm looking forward to seeing what they roll out. [00:25:32] Speaker C: Yeah, no, I mean, that is interesting. As I've migrated into the security space, it's been really funny how much observability and big data it is. [00:25:46] Speaker A: Right.
[00:25:46] Speaker C: Because it really is just the same things with a very particular lens. All a SIEM is, is a giant search indexer, and has been for a while. That's why Splunk is so popular in the SIEM space. So makes sense. [00:26:00] Speaker A: Agreed. [00:26:04] Speaker B: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one- or three-year AWS savings plan with a commitment as short as 30 days. If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace. [00:26:43] Speaker A: Amazon Bedrock AgentCore is providing you a new managed platform for building and deploying AI agents that can execute for up to 8 hours with complete session isolation, supporting any framework like CrewAI, LangGraph, or LlamaIndex, and any model inside or outside Amazon Bedrock. The service includes five core components: Runtime for execution, Memory for state management, Gateway for tool integration via MCP, Identity for OAuth and IAM authorization, and Observability with CloudWatch dashboards and OpenTelemetry compatibility for monitoring agents in production. AgentCore enables agents to communicate with each other through the agent-to-agent protocol and securely act on behalf of users with identity-aware authorization, making it suitable for enterprise automation scenarios that require extended execution times and complex tool interactions.
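Those five components map onto a fairly simple mental model. Here is a toy sketch of that shape in plain Python — every class and method name below is illustrative, not the actual AgentCore SDK:

```python
# Illustrative sketch only: a toy model of the five AgentCore pieces
# (Runtime, Memory, Gateway, Identity, Observability). All names here
# are hypothetical stand-ins, not the real Bedrock API.

class Memory:
    """State management: a per-session key/value store."""
    def __init__(self):
        self._state = {}
    def remember(self, key, value):
        self._state[key] = value
    def recall(self, key):
        return self._state.get(key)

class Gateway:
    """Tool integration: dispatch tool calls by name (stand-in for MCP)."""
    def __init__(self, tools):
        self._tools = tools
    def call(self, name, *args):
        return self._tools[name](*args)

class Identity:
    """Authorization: check the acting user before any tool runs."""
    def __init__(self, allowed_users):
        self._allowed = set(allowed_users)
    def authorize(self, user):
        if user not in self._allowed:
            raise PermissionError(user)

class Runtime:
    """Execution: one isolated session tying the pieces together."""
    def __init__(self, memory, gateway, identity, log):
        self.memory, self.gateway = memory, gateway
        self.identity, self.log = identity, log  # log ~ observability trace
    def run(self, user, tool, *args):
        self.identity.authorize(user)            # identity check first
        result = self.gateway.call(tool, *args)  # then the tool call
        self.memory.remember("last_result", result)
        self.log.append((user, tool, result))    # every call is traced
        return result

log = []  # observability: a trace of every tool invocation
session = Runtime(Memory(), Gateway({"add": lambda a, b: a + b}),
                  Identity({"alice"}), log)
print(session.run("alice", "add", 2, 3))  # 5
```

The point of the sketch is just the separation of concerns: the tool call only happens after the identity check, and every invocation lands in both session memory and the trace.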
The platform eliminates infrastructure management while providing enterprise features like VPC support, AWS PrivateLink, and CloudFormation templates, with consumption-based pricing and no upfront costs across nine AWS regions. Integrations with observability tools like Datadog, Dynatrace, and LangSmith allow teams to monitor agent performance using their current toolchain, while the self-managed memory strategy gives developers control over how agents store and process all of their relevant information. [00:27:53] Speaker C: Yeah, this is one of those articles where I read it and I'm like, oh, I've fallen behind on AI again. I don't know how that happened, but so fast. I guess it's been 15 minutes since I felt like I was caught up. This really seems like just a full app, you know, like these are core components. Instead of doing development, you're just taking AI agents, putting them together, and giving them tasks. And the eight-hour runtime is crazy. [00:28:28] Speaker A: Yeah, eight hours. [00:28:29] Speaker C: Feels like it's getting warmer in here just reading that, for sure. [00:28:35] Speaker A: Well, luckily they're transitioning to lower-cost, lower-power chips to help make it not so hot. They have now announced that Amazon has transitioned the majority of its AI inference workloads to its custom Inferentia chips, marking a significant shift away from Nvidia GPUs for production AI services. The move demonstrates AWS's commitment to vertical integration and cost optimization in the AI infrastructure space. Inferentia chips now handle most inference tasks for services like Amazon Bedrock, SageMaker, and internal AI features across AWS products. The custom silicon strategy allows AWS to reduce dependency on expensive third-party GPUs while potentially offering customers lower-cost AI inference options.
The shift represents a broader industry trend where cloud providers develop custom chips to reduce third-party dependency and control costs. AWS can now optimize the entire stack from silicon to software for specific AI workloads, similar to Apple's approach with its M-series chips. So, nice. [00:29:29] Speaker C: Yeah, you know, it explains all the Oracle and Azure Nvidia announcements. But yeah, this feels like a natural move, and maybe it's completely necessary given how hard it's been to get GPU capacity for so long. Amazon's fortunate to have the scale to dive into that space and just make their own. And of course they can tailor it and make it best suited to their use case, so it's going to be purpose-fit and more efficient that way. [00:30:07] Speaker A: Amazon is continuing in their quest to make your data recovery or clone faster than ever with Amazon EBS volume clones, which enable instant point-in-time copies of encrypted EBS volumes within the same availability zone through a single API call, eliminating the previous multi-step process of creating snapshots in S3 and then new volumes. Clone volumes are available within seconds with single-digit millisecond latency, though performance during initialization is limited to the lowest of the 3,000 IOPS / 125 MBps baseline, the source volume's performance, or the target volume's performance. This feature targets development and testing workflows where teams need quick access to production data copies, but it complements rather than replaces EBS snapshots, which remain the recommended backup solution with 11 nines of durability in S3. Pricing includes a one-time fee per gigabyte of source volume data at initiation, plus standard EBS charges for the new volume. The main cost governance note: cloned volumes persist independently until manually deleted.
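That pricing model (one-time per-GB initiation fee plus ongoing storage for as long as the clone lives) is easy to reason about with a back-of-the-envelope calculation. The rates below are placeholders for illustration, not published AWS pricing:

```python
# Rough cost model for an EBS volume clone: a one-time per-GB
# initiation fee plus standard EBS storage charges while the clone
# exists. Both rates are hypothetical placeholders.

CLONE_FEE_PER_GB = 0.021         # assumed one-time initiation fee ($/GB)
GP3_STORAGE_PER_GB_MONTH = 0.08  # assumed gp3 storage rate ($/GB-month)

def clone_cost(source_gb, months_retained):
    """Total cost of one clone: initiation fee + storage while it lives."""
    one_time = source_gb * CLONE_FEE_PER_GB
    recurring = source_gb * GP3_STORAGE_PER_GB_MONTH * months_retained
    return round(one_time + recurring, 2)

# A 500 GB production copy kept around for a two-week test cycle:
print(clone_cost(500, 0.5))  # 30.5
```

The `months_retained` term is the one the cost governance note is warning about: because clones persist until manually deleted, a forgotten clone keeps accruing the recurring part indefinitely.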
The feature currently requires encrypted volumes and operates only within the same availability zone, supporting all EBS volume types across AWS commercial regions and select Local Zones. [00:31:10] Speaker C: And this is just so neat for those use cases where you want to create an army of similar or identical machines for testing, or for any kind of workload — even AI workloads, I can imagine. Maybe you'd want to separate this out, but. [00:31:31] Speaker A: If. [00:31:31] Speaker C: You're doing smaller stuff. So it's kind of cool that it's instant now. It's been getting more near-real-time over the years, so kind of neat. Very cool. [00:31:45] Speaker A: And in the most interesting announcement of the week, you can now run Slurm on Amazon EKS with Slinky. Slinky is an open source project that lets you run the Slurm workload manager inside Amazon EKS, enabling organizations to manage both traditional HPC batch jobs and modern Kubernetes workloads on the same infrastructure. They're just making these words up at this point. I don't think this is a real announcement. [00:32:11] Speaker C: Yeah, one of those. We were doing our pre-read and I was like, I have nothing on this one. Like, I get it. But it's just funny, all the different words and all the different things. And I get it — it's Kubernetes, batch jobs — but they're named funny. [00:32:30] Speaker A: Yeah, we've laughed at Slurm a few times in the past, but Slurm with Slinky, I was like, oh, okay. It seems like Slurm would disrupt Slinky's ability to operate. It's slimy, right? At least in my mind, Slurm is a slimy thing. [00:32:45] Speaker C: Oh yeah. [00:32:46] Speaker A: All right, well. Cool. Awesome.
GCP is introducing Gemini Enterprise this week as a unified AI platform that combines Gemini models, no-code agent building, pre-built agents, data connectors for Google Workspace and Microsoft 365, and centralized governance through a single chat interface. This positions Google as offering a complete AI stack rather than just models or toolkits like their competitors. The platform includes notable integrations with Microsoft 365 and SharePoint environments while offering enhanced features when paired with Google Workspace, including new multimodal agents for video creation and real-time speech translation in Google Meet. This cross-platform approach differentiates it from more siloed offerings. Google introduces next-generation conversational agents with low-code visual builders supporting 40-plus languages, and the announcement includes developer tools like the Gemini CLI, as well as agent interoperability across A2A, AP2, and Model Context Protocol systems. And again — how is this different than Agentspace? I mean, it's probably going to cost more. [00:33:42] Speaker C: What's my. [00:33:44] Speaker A: That's a likely answer. Yes. [00:33:47] Speaker C: I mean, it's got more functionality than Agentspace. But yeah, I had the same reaction — tried to read through this and go through the documentation just to understand what it is and what it offers that's new, and it does just seem to be a consolidation, I guess. [00:34:03] Speaker A: I mean, it's basically like: okay, so we have Copilot or Claude Code or the Gemini CLI, and then you typically will have things like ChatGPT or Claude AI or the Gemini website, or if you're using Google Workspace you have access to Gemini. And so I assume this Enterprise is a more fully featured, more integratable version of what you get when you go to the website. And then Agentspace is this other thing, which is really LM Studio, or is that what it's called?
It's one of the other products, basically, as an enterprise offering. And so it's confusing. And again, I think Google's not alone in this problem — both Azure and Amazon have similar problems in that they are rushing so fast to make products that they're creating the same product over and over again in some ways, but with slightly different implementations or use cases, and they're trying to target a very large market. But I think it just causes confusion. [00:35:00] Speaker C: Yeah, I mean, Google and Amazon have both rebranded several times, and there's definitely a lot of duplication, and it's very unclear what these are. But I also feel like this is the market a little bit too, where businesses are scrambling just as much to have answers for AI strategy and how you're integrating AI into your workloads to increase efficiency and the whole thing. So it's not only the people coming up with the service — I think the customers are also tripping over themselves trying to throw money at these problems, and that's part of why we're getting these products. [00:35:42] Speaker A: Yeah, I agree. Well, for those of you who want lots of LLMs, a problem that comes out of it is that you need to evaluate them to determine which one is best. And so Google's giving you a new tool for that: LLM Eval Kit, an open source framework that centralizes prompt engineering workflows on Vertex AI, replacing the current fragmented approach of managing prompts across multiple documents and consoles. The tool shifts prompt development from subjective testing to data-driven iteration by requiring teams to define specific problems, create test data sets, and establish concrete metrics for measuring LLM performance.
LLM Eval Kit features a no-code interface designed to democratize prompt engineering, allowing non-technical team members like product managers and UX writers to contribute to the development process. The framework integrates directly with Vertex AI SDKs and provides versioning, benchmarking, and performance tracking capabilities in a single app. I mean, do I have something to evaluate the Eval Kit's ability to do this? I'm relying on a lot of these things. [00:36:47] Speaker C: Yeah, it's kind of interesting. Reading through this announcement, it's solving a problem I had but didn't know I had, which is kind of neat, right? That's the best type of release for me — oh, this is going to make my life better, and I didn't know that it was bad. In the day job, trying to coordinate across multiple people on, is this working better? Is this working for you? I'm called in to do a lot of security analysis on whether things are secure enough for accessing data as you're creating these AI interfaces, and being able to consolidate all that in one place, and then also have it directly linked to the metrics right next to it, is super neat. I like that a lot, just because I do have documents where prompts are just stored in places, and the measurement is all anecdotal — whatever I remember. So I like tracking it. [00:37:48] Speaker A: So I'll probably play around with this. [00:37:49] Speaker C: And see if I can make it work. [00:37:52] Speaker A: Nice. Well, I look forward to hearing your report.
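The "data-driven iteration" idea above is worth making concrete: run each prompt variant against a labeled test set and score it with a defined metric instead of eyeballing outputs. Here is a minimal sketch of that loop — the `model` below is a stub standing in for a real LLM call, and all names are illustrative, not the Eval Kit API:

```python
# Minimal prompt-eval harness: score each prompt template against a
# labeled test set with a concrete metric (exact match). The model is
# a deterministic stub, not an actual LLM.

def exact_match(prediction, expected):
    return prediction.strip().lower() == expected.strip().lower()

def evaluate(model, prompt_template, test_set):
    """Score one prompt variant over a dataset; returns accuracy in [0, 1]."""
    hits = sum(
        exact_match(model(prompt_template.format(**case)), case["expected"])
        for case in test_set
    )
    return hits / len(test_set)

# Stub model: answers "Capital of X?" style prompts by looking at the
# last word. A real harness would call an LLM endpoint here.
def model(prompt):
    capitals = {"France": "Paris", "Japan": "Tokyo", "Peru": "Lima"}
    country = prompt.rsplit(" ", 1)[-1].rstrip("?")
    return capitals.get(country, "unknown")

test_set = [
    {"country": "France", "expected": "Paris"},
    {"country": "Japan", "expected": "Tokyo"},
    {"country": "Peru", "expected": "Lima"},
]

for template in ["Capital of {country}?", "Describe {country} briefly"]:
    print(template, evaluate(model, template, test_set))
```

Even this toy version shows the payoff: two prompt variants get comparable scores on the same data, which replaces the "whatever I remember" anecdotal comparison described above.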
Well, we laughed mightily when Azure got a SAN, and now today we get to laugh with Google as they also get a SAN. Google Cloud NetApp Volumes, or GCNV for short, now supports iSCSI block storage alongside file storage, enabling enterprises to migrate SAN workloads to GCP without architectural changes. The service delivers up to 5 gigabytes per second of throughput and 160,000 IOPS per volume, with independent scaling of capacity, throughput, and IOPS. NetApp FlexCache provides local read caches of remote volumes for distributed teams and hybrid cloud environments, allowing organizations to access shared datasets with local-like performance across regions, which is great for your enterprise NAS workloads. And the service integrates with Gemini Enterprise as a data store for RAG applications, allowing organizations to ground AI models in their secure enterprise data without complex ETL processes — the data remains governed within NetApp Volumes while being accessible for search and inference workloads. Auto-tiering automatically moves cold data to lower-cost storage at $0.03 per gigabyte for the Flex service level, with configurable thresholds from 2 to 183 days. Large capacity volumes now scale from 15 terabytes to 3 petabytes with over 21 gigabytes per second of sustained throughput per volume for HPC and AI workloads, and NetApp SnapMirror is available, enabling replication between on-premises NetApp systems and Google Cloud with zero RPO and near-zero RTO, positioning GCP competitively against AWS FSx for NetApp ONTAP and Azure NetApp Files for enterprise storage migrations. Now, the Azure SAN is not NetApp, and Amazon FSx for NetApp does not support iSCSI as far as I'm aware. So yes, similar but different. Hmm. [00:39:32] Speaker C: I thought the Azure SAN was NetApp, or maybe they just had a Marketplace offering and I'm confusing the two. [00:39:39] Speaker A: I do not think it is.
I will double check right now, because that's the kind of quality that I want to bring. Yeah — it's Azure Elastic SAN, and there's no mention of the word NetApp anywhere on this page, and I can tell you that if it was backed by NetApp, it would be everywhere. [00:39:55] Speaker C: Yeah, it absolutely would. [00:39:58] Speaker A: Yeah. [00:39:58] Speaker C: I mean, I'm still sort of confounded by the need for an iSCSI service. [00:40:03] Speaker A: I literally looked at this today and said I might want this. So I have a specific workload that needs storage that's shared across boxes, and iSCSI is a great option for that. [00:40:12] Speaker C: Yeah, it is. [00:40:13] Speaker A: In addition to other methods you could use — which I'm currently using — that have some sharp edges. So I'm definitely going to do some price calculations; this might be good, because Google has a multi-writer, EBS-type solution, but it does not have the performance that I need quite yet, and they seem to be unwilling to commit to a date when they're going to have the performance I want. So in the short term this might be a great option. I have to run some numbers. [00:40:40] Speaker C: Interesting. [00:40:43] Speaker A: All right, moving on to Azure. We recently reported on CEO Thomas Dohmke's departure from GitHub, and that was the kind of shift suggesting that, for the first time since the 2018 purchase, Microsoft is trying to take direct control of GitHub. This news article really seems to confirm it, as GitHub is apparently migrating its entire infrastructure from its Virginia data center to Azure within the next 24 months, with teams being asked to delay future development to focus on this migration due to capacity constraints from AI and Copilot workloads. So basically the answer is: we don't have enough capacity for all your new fancy shiny stuff, so move to Azure so we can get you more capacity to do those fun shiny things.
And I guess, other than GitHub Actions and AI, I don't really know what future development is happening in GitHub these days, but okay. [00:41:32] Speaker C: Well, I imagine they've got a whole lot of AI features they want to add, and that's driving capacity requirements. [00:41:38] Speaker A: Right. [00:41:39] Speaker C: So yeah, I was surprised it took this long. I remember when Microsoft bought LinkedIn — the move to bring that all into Azure was very fast after that deal closed, so it's surprising that GitHub got away with it for this long. I don't know much about GitHub's tech stack and their own data center, but hopefully this move goes. [00:42:02] Speaker A: Fairly smoothly, because apparently they have some challenges. These include migrating the GitHub MySQL clusters that run on bare metal servers to Azure, which some employees worry could lead to more outages during the transition period, which is cool. This also positions Azure to capture one of the world's largest developer platforms as a flagship customer — you've got to eat your own dog food. And we know from Matt that running on Azure can be somewhat painful at times. But yeah, it's interesting. I'm sure this won't be an easy transition for them, but I'm sure they'll write a lot of blog posts about it when they're done. I remember when they moved Hotmail from Linux to Windows, how many press articles they wrote about that. So this is going to have case studies out the wazoo when it's done. Yeah, definitely. [00:42:48] Speaker C: I just hope the service stays up, because it's so disruptive to my day when GitHub has issues. [00:42:58] Speaker A: I mean, they might want to think twice, though, because Microsoft 365, another prominent customer of Azure, experienced an over-an-hour outage on October 9th caused by a misconfigured network.
Of course it was. That outage affected all services including Teams — which, please, take down Teams anytime — highlighting the fragility of centralized cloud services when configuration errors occur. The incident followed another one where an Azure Kubernetes crash took down Azure Front Door, which I think we talked about a couple weeks ago. So yeah, great time to move to Azure with stability in mind. [00:43:29] Speaker C: Yeah, yeah. I mean, every cloud and service has their issues, but it's not great when you get a whole bunch of announcements like this. [00:43:46] Speaker A: Microsoft is introducing the Microsoft Agent Framework, which converges the AutoGen research project with Semantic Kernel into a unified open source SDK for orchestrating multi-agent AI systems, addressing the fragmentation challenges of the 80% of enterprises now using agent-based AI, according to PwC. So here's another one where they're giving us a way to manage agents, but then they also have Foundry, which we've talked about in the past as well — and this does hand off to Foundry quite a bit, as the framework enables developers to build locally, then deploy to Azure AI Foundry. But again, we continue to be in a world of confusion around agentic tooling and how to control agentic things, and there's lots of technology and lots of acronym soup that you get to learn if this is your world. All the cloud providers are trying to give you lots of tools to make this easier, while right now, I think, causing confusion. I hope the dust will eventually settle and make these clearer.
[00:44:37] Speaker C: I mean, I think it just goes to show: you've got all these low-code, no-code solutions where people are stringing together prompts to form agents, and then having the agents talk to agents, and then trying to make that work at scale — or trying to even share that across the rest of the business — and I think that's super difficult. And so, while I don't think a lot of these tools are necessarily the right solution, because I don't feel like they're leveraging enough of the lessons we've learned with software development and specifically the software development lifecycle, hopefully, as you said, the dust will settle and we'll get the right solution. [00:45:15] Speaker A: Out of this eventually. Well, if you are in need of GPUs — lots and lots of GPUs — Microsoft's deploying the first production cluster with over 4,600 Nvidia GB300 NVL72 systems featuring the Blackwell Ultra GPU, enabling AI model training in weeks instead of months and supporting models with hundreds of trillions of parameters. This positions Azure as the first cloud provider to deliver Blackwell Ultra at scale for production workloads. Each ND GB300 v6 VM rack contains 72 GPUs with 130 terabytes per second of NVLink bandwidth and 37 terabytes of fast memory, delivering up to 1,440 petaflops of FP4 performance — and I don't know what any of those words mean. The system uses 800 gigabit-per-second Nvidia Quantum-X800 InfiniBand for cross-rack connectivity, doubling the bandwidth of previous GB200 systems. The infrastructure targets frontier AI workloads, including reasoning models, agentic AI systems, and multimodal generative AI, with OpenAI already using these clusters for training and deploying their largest models.
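For anyone who, like us, doesn't immediately know what those words mean: the per-rack numbers quoted above can be reduced to per-GPU figures with simple division. Only the rack-level specs come from the announcement; the per-GPU values below are just arithmetic on them:

```python
# Quick sanity math on the quoted GB300 NVL72 rack specs: 72 GPUs,
# 1,440 PFLOPS of FP4, and 130 TB/s of NVLink bandwidth per rack.
# The derived per-GPU figures are our own division, not Azure's.

GPUS_PER_RACK = 72
RACK_FP4_PFLOPS = 1_440
RACK_NVLINK_TB_S = 130

per_gpu_pflops = RACK_FP4_PFLOPS / GPUS_PER_RACK
per_gpu_nvlink_tb_s = RACK_NVLINK_TB_S / GPUS_PER_RACK

print(per_gpu_pflops)                  # 20.0 PFLOPS of FP4 per GPU
print(round(per_gpu_nvlink_tb_s, 2))   # ~1.81 TB/s of NVLink per GPU
```

So each Blackwell Ultra GPU in the rack contributes roughly 20 petaflops of low-precision FP4 compute, which is why the cluster-level numbers get so large so fast.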
Of course they are. Azure implemented custom cooling systems using standalone heat exchangers and new power distribution models to handle the high energy density requirements of these dense GPU clusters, and the co-engineered software stack optimizes storage, orchestration, and scheduling for supercomputing scale. Pricing wasn't disclosed, shockingly enough; the scale and specialized nature of these VMs suggests they'll target enterprise customers and AI research organizations requiring cutting-edge performance for training trillion-parameter models. Azure plans to deploy hundreds of thousands of Blackwell Ultra GPUs globally. [00:46:48] Speaker C: Yeah, I mean, pricing isn't disclosed because it's the GDP of a small country. [00:46:52] Speaker A: Yeah, we're talking: how many shares of equity in your company can we give you, in combination with the hundreds of millions of dollars in equity and revenue that we need to also generate from you for this? Yeah — a small-GDP country is probably too small. Still too small. [00:47:11] Speaker C: Yeah. I do hope they release more tech specs around the custom cooling. [00:47:16] Speaker A: I don't know. [00:47:16] Speaker C: As a nerd, I love that kind of thing, and I think AI is going to drive a lot of innovation there, because it has to. So it'd be fun, for sure. [00:47:29] Speaker A: Azure is introducing a single CLI command to migrate AKS clusters from deprecated availability sets to virtual machine scale sets before the September 2025 deadline. Well, if that's you — thank you? The automated migration upgrades clusters from basic load balancers to standard load balancers, providing improved reliability, zone redundancy, and support for up to 1,000 nodes compared to the basic tier's 100-node limit. Well, I guess that's nice. So yeah, if you need to do that, you now have a tool. [00:47:54] Speaker C: Wait, is it a September 2025 deadline? [00:47:58] Speaker A: Thanks a lot.
[00:48:01] Speaker C: Calendar genius. But. [00:48:05] Speaker A: Yeah, availability sets and the basic load balancer were deprecated at the end of September. But if you're still on that deprecated, unsupported thing, you now have a tool. So, you know, thanks, I guess. [00:48:13] Speaker C: I guess. Yeah, it's two weeks too late. [00:48:16] Speaker A: I guess all the people who've been fighting this battle for the last year, migrating all their stuff, are like: at the last possible minute, you give us a tool? [00:48:24] Speaker C: Right. That's what you get for dragging your feet on getting off of everything. [00:48:29] Speaker A: Indeed. I do have an Oracle story for us this week. Oracle's joining the dark mode club — welcome! — with the OCI console now following all their peers: AWS in 2017, Azure in 2019, and GCP in 2020. The basic UI feature that took surprisingly long is now available to you in OCI, so you can now toggle between light and dark themes in the console, which reduces eye strain and improves battery life on devices. Yeah, about time. [00:48:56] Speaker C: Yeah. [00:48:58] Speaker A: It also persists across browser sessions. So that's nice — that's what I would expect. And they're calling this a welcome quality-of-life improvement for developers working late hours. No, I don't work late hours, but I'd like to work in the dark. So yes, if I ever remember my password to log into the Oracle console, where I do have an account, I will be very pleased that I can now toggle the blinding white light of the Oracle console into dark mode on the dark side. So thank you, Oracle, for that. [00:49:26] Speaker C: Yeah, it is really funny. I do keep some things, like Google Docs for instance, in the non-dark mode, but it is jarring whenever you switch. [00:49:40] Speaker A: Yeah, yeah. Likewise — I keep the email body itself with the white background, because you know how many emails understand that you might have a background other than white? Not very many. So that I keep in white, but everything else in my Outlook is dark, for that same reason. It's just sort of funny. But thanks, Oracle — always appreciated. All right, Ryan, we made it. Woohoo. Woohoo. All right, sir, I will talk to you next week here in the cloud, and hopefully we'll get Matt or Jonathan back next week as well, so it's not just the two of us. Maybe. Yeah, maybe. We'll see. You never know. All right, later. [00:50:17] Speaker C: Bye, everybody. [00:50:21] Speaker B: And that's all for this week in cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services. While you're at it, head over to our site, thecloudpod.net, where you can subscribe to our newsletter, join our Slack community, send us your feedback, and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.
