276: New from AWS - Elastic Commute - Flex Your Way to an Empty Office

Episode 276 October 01, 2024 01:10:14
tcp.fm

Show Notes

Welcome to episode 276 of The Cloud Pod, where the forecast is always cloudy! This week, our hosts Justin, Matthew, and Jonathan do a speedrun of OpenWorld news; talk about energy needs and the totally-not-controversial decision to reopen Three Mile Island; cover a “managed” exodus from cloud and Kubernetes news; and discuss Amazon’s RTO mandate, which we’re calling “Elastic Commute.” All this and more, right now on The Cloud Pod. 

Titles we almost went with this week:

A big thanks to this week’s sponsor:

We’re sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You’ve come to the right place! Send us an email or hit us up on our Slack channel for more info. 

General News

01:08 IBM acquires Kubernetes cost optimization startup Kubecost 

02:26 Justin – “…so Kubecost lives inside of Kubernetes, and basically has the ability to see how much CPU and how much memory they’re using, then calculate basically the price of the EC2 broken down into the different pods and services.”
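The per-pod breakdown Justin describes can be sketched roughly as follows. This is a simplified illustration with made-up numbers, not Kubecost’s actual algorithm: split a node’s hourly price across pods in proportion to the CPU and memory each pod uses.

```python
# Simplified illustration of per-pod cost allocation (not Kubecost's actual
# algorithm): split a node's hourly price across pods in proportion to the
# CPU and memory each pod uses, weighting the two dimensions equally.

def allocate_node_cost(node_price_per_hour, node_cpu, node_mem_gb, pods):
    """pods: list of dicts with 'name', 'cpu' (cores), and 'mem_gb' used."""
    costs = {}
    for pod in pods:
        cpu_share = pod["cpu"] / node_cpu
        mem_share = pod["mem_gb"] / node_mem_gb
        # Average the CPU and memory shares for a simple 50/50 weighting.
        costs[pod["name"]] = node_price_per_hour * (cpu_share + mem_share) / 2
    return costs

# Hypothetical 4 vCPU / 16 GB node at $0.40/hour running two pods.
pods = [
    {"name": "web", "cpu": 2.0, "mem_gb": 4.0},
    {"name": "worker", "cpu": 1.0, "mem_gb": 8.0},
]
print(allocate_node_cost(0.40, 4.0, 16.0, pods))
```

Real tools also account for requests versus actual usage, idle capacity, and shared cluster overhead, but a proportional split like this is the core of breaking an EC2 bill down per pod and service.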

AI Is Going Great – Or How ML Makes All Its Money

05:03 Introducing OpenAI o1-preview 

07:12 Jonathan – “I have not played with the o1-preview. I’ve been all in on Claude lately. I’ve been playing around with different system prompts to promote the whole chain of thought thing. I know OpenAI says the reasoning engine is not just a chain of thought under the hood. But I’m curious to know what it was you asked it, and I’ll run your prompts through what I’ve got. Because I do a similar thing where I evaluate what was asked and then sort of almost fan out ideas from the central topic. In a way, just having other ideas be present as tokens in the context window gives the LLM the opportunity to kind of explore more options in the answers that it gives. And so, yeah, it’ll be interesting.”

AWS

09:03 AWS Claims Customers are Packing Bags and Heading Back On-Prem 

10:41 Matthew – “I wouldn’t say it’s as aggressive as they claim, but I’ve definitely started to see more articles, and I’ve talked with a few companies in the last couple of years that are really evaluating whether their cloud moves were the right moves and whether to move back or not. And the other piece of it is these companies either are using highly specialized workloads that don’t really fit the cloud, or they’re large enough that it makes sense to keep them running. But for the majority of customers doing a simple app, the cloud makes more sense.”

16:21 Message from CEO Andy Jassy: Strengthening our culture and teams 

18:43 Justin – “I don’t know how well you can innovate and do the right things for your customers if you lose all of your senior talent to attrition. So I’m definitely a little concerned that what I would call ’25 may be the lost year for Amazon.”

19:02 Jonathan – “…they may have had that culture before, but then the pandemic happened and people realize that things didn’t have to be that way and things could be different and they see the benefits. And I don’t think he’s going to make the change that he thinks he is by doing this. I think it’ll demotivate people. You can’t force culture change through policy. That’s not what the culture is. Culture is the result of all the things that you do, including those policies.”

25:29 Amazon RDS for MySQL zero-ETL integration with Amazon Redshift, now generally available, enables near real-time analytics.

26:12 Jonathan – “What’s more painful is having somebody click it in a console and then lose it and then have no commit history to refer back to if they need to rebuild it again. So at least it’s a manageable artifact.”

26:54 AWS Welcomes the OpenSearch Software Foundation

29:54 AWS shuts down DeepComposer, its MIDI keyboard for AI music 

30:49 Matthew – “It’s so funny to look back and think that DeepComposer was five years ago. They had AI in the palm of their hands and let it go.”

37:25 Amazon S3 Express One Zone now supports AWS KMS with customer managed keys  

37:58 Now available: Graviton4-powered memory-optimized Amazon EC2 X8g instances

38:33 Justin – “I think the limitation on CPU was 64 or 96 before, so this is doubling or tripling the number of CPUs too. But typically Graviton runs so well that I don’t see the CPU being my problem; it’s really when I want to run a database in memory.”

GCP

39:37 Safer by default: Automate access control with Sensitive Data Protection and conditional IAM

40:57 Justin – “I would hope this is something you wouldn’t necessarily use for machine accounts or service to service accounts. This to me is a person who’s getting this type of access. This is where you care about the primitives and the context and those things. And this is a context that you are caring about based on the data sensitivity and the context is important to the end user, not necessarily to the machine.”

41:26 How to prevent account takeovers with new certificate-based access 

42:21 Matthew – “It’s a great level up, though, to protect yourself, because you see all these articles online of, like, somebody got in and 12 things went wrong, or in someone’s personal account somebody launched $30,000 worth of Bitcoin miners. So a really good level up to see.”

42:43 Announcing expanded CIEM support to reduce multi cloud risk in Security Command Center

Azure

42:43 Introducing o1: OpenAI’s new reasoning model series for developers and enterprises on Azure 

44:36 Jonathan – “A model garden. It sounds so beautiful until you realize it’s just a concrete building that uses millions of gallons of water a day.”

44:49 Azure Public IPs are now zone-redundant by default

45:20 Matthew – “So when I started in Azure, I realized that these weren’t set up. If you try to attach a non-multizonal IP address to a multizonal service, it just yells at you. So to me, this is like EIPs, which are all multizonal by default; you don’t even think about what zone, so you don’t have to think about it. Here, you used to have to think about it, and then there was no migration path to say, hey, take this single-zone IP address and move it to be multi-zone. Even if you charged me more for it, there was nothing. So you would have to completely change your IP address, and we all know customers never whitelist specific IP addresses. That never caused problems. You make that change, never.”

46:53 Microsoft and Oracle enhance Oracle Database@Azure with data and AI integration 

47:36 Jonathan – “Out of all the companies who build Oracle Database and resell it across the cloud providers, Oracle is most definitely the industry leader.”

47:57 Advanced Container Networking Services: Enhancing security and observability in AKS   

48:26 Microsoft Deal Will Reopen Three Mile Island Nuclear Plant to Power AI

49:39 Justin – “And they’re willing to wait for it till 2028. So they have expectations that not only is this plausible and something they can get the Nuclear Energy Commission to approve, but that they will still have AI dominating this much of their power consumption that they need 800 megawatts.”

50:29 Elastic pools for Azure SQL Database Hyperscale now Generally Available!

51:02 Justin – “Yeah, I mean, it’s no different than back in the day when you would take all your VMs on-prem and say, OK cool, we have 100 gigabytes of memory, I’m going to allocate 200 gigabytes of memory across all your servers and hope that not all of them blow up at once, because you know your workloads. So now you’re able to do this with Hyperscale, which is equivalent to Aurora but with the Microsoft SQL engine, and it also gets rid of the increased storage price, and they’ve gotten rid of the SQL licensing.”
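The on-prem analogy in Justin’s quote is just memory overcommit: promise the databases in the pool more memory in total than physically exists, betting they won’t all peak at once. A toy illustration with assumed numbers:

```python
# Toy illustration (assumed numbers) of the overcommit idea behind elastic
# pools: total memory promised to databases divided by what the pool has.

def overcommit_ratio(pool_mem_gb, allocations_gb):
    """Return how many times over the pool's memory has been promised."""
    return sum(allocations_gb) / pool_mem_gb

# 100 GB pool, 200 GB promised across four databases -> 2.0x overcommitted.
print(overcommit_ratio(100, [50, 50, 40, 60]))
```

The bet only pays off when workloads are uncorrelated; if every database peaks at the same hour, a 2x overcommit means contention, which is why you have to "know your workloads" before pooling them.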

Oracle

Hold onto your butts – it’s time for OpenWorld news. 

55:12 Introducing the best platform for AI workloads – OCI Kubernetes Engine (OKE)

55:28 Announcing Oracle Cloud Guard Container Security

55:38 Enhanced monitoring in OKE with Logging Analytics 

55:50 OCI Kubernetes Engine supports OpenID Connect (OIDC)

55:55 Simplify operations with OCI Kubernetes Engine (OKE) add-ons  

Please note – someone write down the date and time – Jonathan is impressed with something from Oracle. 

56:49 Announcing OCI Streaming with Apache Kafka, in limited availability

57:00 OCI Database with PostgreSQL release new features

57:09 Streamline your IT management with OCI Resource Analytics

57:23  Announcing GPU support for Oracle Machine Learning Notebooks on Autonomous Database

57:30 Announcing Oracle Code Assist beta and NetSuite SuiteScript support

58:19  Announcing private IP address support for OCI Object Storage using private endpoints

58:48 OCI Fleet Application Management is now generally available – simplifying full-stack patching and compliance management at scale

59:15 Building storage systems for the future: The OCI roadmap

1:00:23 Introducing the new standardized OCI Landing Zones framework for an even easier onboarding to OCI 

Accelerating your zero trust journey on OCI with zero trust landing zone

1:01:20 Oracle Expands Multi Cloud Capabilities with AWS, Google Cloud, and Microsoft Azure

1:01:45 Oracle Offers First Zettascale Cloud Computing Cluster

1:04:49 Oracle Introduces an AI-centric Generative Development Infrastructure for Enterprises

1:06:49 Jonathan – “I can see how there’s value in things like, you know, document stores, reference information, technical decisions, a way of organizing the structure of projects so that the developer can better use the tool to reach the end goal. So I actually think this is probably a really good product aimed at helping organize and shepherd the whole process through. Because, sure, you can sit down in front of ChatGPT and ask it to write some code, but with a limited context window you have to keep copying stuff out or restarting the chat, and you have to keep referring back to original design documents, which is kind of cumbersome. So solving the usability of these systems to actually deliver applications is great, and I wish them well with it. I’d really like to play with it.”

Closing

And that is the week in the cloud! Visit our website, the home of The Cloud Pod, where you can join our newsletter or Slack team, send feedback, or ask questions at theCloudPod.net, or tweet at us with the hashtag #theCloudPod.


Episode Transcript

[00:00:07] Speaker A: Welcome to The Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP, and Azure. [00:00:14] Speaker B: We are your hosts, Justin, Jonathan, Ryan, and Matthew. Episode 276, recorded for the weeks of September 17 and 24: New from AWS, Elastic Commute. Flex your way to an empty office. Good evening, Jonathan and Matt. How's it going? [00:00:32] Speaker A: It's going good. [00:00:33] Speaker B: Yeah. So I was supposed to record with Jonathan on Friday last week, and for once in a long time I failed. Completely failed. Now, Jonathan didn't text me either, so I sort of put it on Jonathan slightly, although I'll take full blame. But he messaged me on Slack, and I ignored the notification on my phone and then saw it several hours later, after he had given up on me when I was supposed to record with him. So sorry about that, Jonathan. I'll try to be better in the future. [00:01:00] Speaker A: That's all right. Secretly I was hoping that you weren't going to show up anyway, so it all worked out pretty well. [00:01:05] Speaker C: Now I have to join you at 09:00 on a Friday. Thanks, guys. [00:01:10] Speaker B: Yeah, well, things happen. It's been two weeks, and one of them was Oracle OpenWorld, so we have a lot to get through. Let's jump right into it. First up, IBM is acquiring Kubernetes cost optimization startup Kubecost. I think IBM is quickly becoming the place where cloud cost companies go to be assimilated, die, or be reborn into something worse than they were originally, as I think they now own five different solutions, including big ones like Apptio, Turbonomic, and Instana, which they bought over the last few years, now with the addition of Kubecost, a FinOps startup that helps teams monitor and optimize their Kubernetes cluster costs. They are focused on efficiency and ultimately lowering your cost. 
Kubecost gives them access to the open source project OpenCost, a vendor-neutral project that forms a core part of Kubecost's commercial offerings. OpenCost is part of the Cloud Native Computing Foundation; IBM, I guess, gets the management oversight of it. Kubecost is expected to be integrated into IBM's FinOps suite of Cloudability and Turbonomic, and there's also speculation that it might make its way into the OpenShift product, which would make the most sense to me of any place they could put it. [00:02:19] Speaker A: What am I missing exactly that's specific to Kubernetes for cost optimization, rather than any other kind of cloud cost optimization? Pods? [00:02:28] Speaker B: How do you price that? [00:02:30] Speaker C: It's inside the cluster. [00:02:32] Speaker B: Yeah, it's all inside the house. No, it's basically that most of the FinOps tools are giving you the cost of the EC2 instances, or the virtual machines that run the nodes. But if you want to understand how much of the node is being used by this service or that service or this pod, that isn't easily extracted in most of the cloud cost management tools. Kubecost lives inside of Kubernetes and basically has the ability to see how much CPU and how much memory they're using, and then calculate basically the price of the EC2 broken down into the different pods and services. [00:03:06] Speaker A: Okay, that's pretty useful. I guess it's not just reducing costs, but it's cost awareness, perhaps, if you divide cost between different services running on the same cluster. [00:03:16] Speaker B: Correct. [00:03:16] Speaker C: Yeah, it's just tagging the pods and then pulling that metadata out, so it's able to read all of it at that level versus the EC2 or any of the other services' level. [00:03:27] Speaker A: I mean, it'd be nice if that was built into the platform completely rather than being a separate tool. 
[00:03:33] Speaker B: Yeah, I think that was the idea of donating OpenCost to the Cloud Native Computing Foundation: the goal was that it would get pulled into Kubernetes core, and at least the APIs would be there. Maybe you'd still need to do some magic to expose them, like you do with things like Orca or, no, Aqua. The Aqua APIs that they put in originally for container security. They contributed that part of it to the open source project, but then to leverage those APIs, you needed a tool like Orca or Aqua or any of the other container security companies that have now come to be. Cool. [00:04:06] Speaker A: I think IBM is kind of becoming the holding company for all kinds of things right now. [00:04:10] Speaker B: They've always been the holding company for all kinds of things. It's just that they've arrived at the cloud era and they're buying things. [00:04:16] Speaker C: So it'd be interesting if they integrated it with, like, Terraform somehow too. Kind of made it into that whole ecosystem. [00:04:24] Speaker B: I mean, they could, since, you know, the HashiCorp acquisition hasn't closed yet, but they are about to own it. So, yeah, there's a Terraform cost plugin that gives you kind of an idea of what your Terraform plan will cost you when it applies, which is kind of neat. This could extend into that for Kubernetes. [00:04:43] Speaker C: Yeah, I played with that plugin once. It was kind of cool. It was really early days when I used it. I had it set up on my team for my day job just because I was curious; it was a fun tool to set up, but I haven't actually played with it in the last couple of years. 
This allows them to reason through complex tasks and solve harder problems than previous models in science, coding and math chat JPT is releasing the first with the OpenAI Zero one preview, which they expect to ship regular updates and improvements to and alongside this release. They're considering evaluations for the next updates, which are in development. In chat GPT's test, they said the model performed similar to a PhD student on benchmark tasks in physics, chemistry and biology, and it also excels in math and coding. So basically the international mathematics Olympiad, or the IMO, basically had GPT 4.0 correctly solve only 13% of the problems, while the reasoning model scored an 83% on the same test. As part of the development of these models, OpenAI has come up with some new safety training approaches that harness the reasoning capabilities to make them adhere to safety and alignment guidelines. And one way they measure this is by testing how well the model continues its safety rules after a user bypasses them through jailbreaking one of their hardest tests. GPT 40 scores 22 out of 100, whereas the zero one preview scored 84. So I had a chance to play with this over the last couple of weeks and I was asking some coding ideas and some samples of things. And so the code that chat GPT 40 provided to me didn't work on first attempt, it failed. So Python compile error. And when I ran the same thing through the reasoning engine, I got a much better outcome and actually ran and did what I tell it to do. I also was using it to create some images, some meme generation as I typically use AI for. It's my number one use case. It was able to take my prompt suggestions and turn into a much more compelling answer that I was then able to take to Gemini and tell it to draw an image for me that looked pretty darn good and I was very impressed with it as well. So yeah, this is an interesting idea. 
And yeah, if you slow down the AI just a little bit and you say, why don't you double check your work? It'll give you better answers. Amazingly enough. [00:07:11] Speaker A: I have not played video on preview. I've been all in on claude lately been playing around with different system prompts to promote the whole the idea of the chain of thought thing. So I know openly. I say the reasoning engine is not just chain of thought under the hood, but I'm curious to know what it was. You asked it and I'll run your prompts through what I've got because I do a similar thing where I evaluate what was asked and then almost fan out ideas from the central topic. In a way, just having other ideas be present as tokens in the context window gives the LLM the opportunity to kind of explore more options in the answers that it gives. And so, yeah, I'll be interesting. We should do like a joint dinner thing, I think, to compare the results of that. [00:08:09] Speaker B: Yeah, I'm down for it. I'll send you over my prompts, you send me over some of yours and I'll run it through my chat GPT trial and you can run it through your clod and we'll see how it turns out. [00:08:18] Speaker A: Cool. Yeah, I did some pretty cool stuff. You know, kids in school, math practice online, stuff like that. There's extra math for them. I don't know if your kids did extra math. I expect they did. My son absolutely hates extra math. He hates the timer. He's grown to detest the whole thing. I was like, okay, let's see if we can build a single page web app that basically does the same thing, which is just like single digit multiplication or division practice in JavaScript using AI and like four or five sentences with the chain of thought prompt that I have. And we've got a working math fact practice app with literally 15 seconds. It's absolutely astounding. [00:08:58] Speaker B: It's pretty darn cool. [00:08:59] Speaker C: Amazing. [00:09:00] Speaker A: Yeah. 
[00:09:03] Speaker B: All right, well, we'll come back to you when I get back from my vacation. We'll figure that out. We'll do some experimentation. Let's move on to AWS. So AWS had something to say to the UK watchdog and the Competition and Markets Authority in the UK, and that is that Amazon customers were packing their bags and heading back on Prem, facing stiff competition from on premise infrastructure, which is an about face. After saying that all workloads would eventually move to the cloud, AdA listed several examples of customers returning to their data centers. And AV said building a data center requires significant effort. So the fact that customers are doing it highlights the level of flexibility that they have and the attractiveness of moving back to on premises. AWS points that 29% of all cloud customers surveyed a switch back to an on premise service for at least one workload, which is not just AWS, it is also Google and Azure. And this is a convenient lawyer case to basically argue why you're not a monopoly. But I don't know how true it is because again, even when I scour the Twitters or the mastodons looking for people who are moving back on Prem, there's really only a few key use cases that people repeatedly point back to. That's DHI and basecamp and then some of the early days with Dropbox, which is a completely different use case that I think has a volume of scale that does make more sense on premise. But again, I think for your most common applications and your most common things, I don't know that it's quite as aggressive as they're saying it is here in this particular conversation. [00:10:36] Speaker C: I wouldnt say its place aggressive, but ive definitely started seeing more articles and ive talked with a few companies in the last couple of years that are really evaluating whether their cloud moves were the right moves and whether to move back or not. 
And the other piece of it is these companies either are using highly specialized, highly specialized workloads that dont really fit the cloud or they're large enough that makes sense to keep them running. But the majority of customers, if you're doing a simple app, the cloud makes more sense. I still think I'm biased, I mean. [00:11:13] Speaker B: I do, of course we're biased or you're listening to a cloud pod podcast. But I do think there are use cases that don't make sense on cloud. And I think the reality is that Amazon did a lot of damage early on by saying everything should eventually move to the cloud and cave this pretense that everything should. And I think the right answer is most of your scalable things that can move should be considered moving, but your databases, your mainframes, there's maybe not as much compelling of a reason to do that. You don't get the cost advantages, they're typically pets, they don't get elasticity. I think there's definitely a different way to think about some of these apps. I think that's why we're probably going to see hybrid continue for a while. I think multi cloud makes sense because okay, well I can run Oracle and OCi and I can save a bunch of money by doing that. And I get the best oracle environment I can get because they're experts. I can go to Amazon and I can get the best web server tier and some of the best tools around how to manage a web server farm at scale and how to do that dynamically and efficiently because that's what they're really good at. Or I have a really good AI ML workload. Maybe you're talking to Azure, maybe you're talking to Google, but there's different clouds have different areas where they honed in their special sweet sauce. And so I think that's why hybrid or multicloud became such a popular move for companies, was that while my windows workloads, so I put them on Azure, I have licensing advantage. If I put my non Windows workloads on Amazon, I get all the other advantages of that. 
And so I think that's what we've seen with multicloud. I think that's why also hybrid is probably here for the long term for some companies. [00:12:49] Speaker A: I think we're going to. So the quotes in the article, things Amazon said was eight years ago and things were very different eight years ago, they didn't have the same number of customers or the same types of customers. And so when they say things like if you want to go multi cloud, then you'll end up with the lowest common denominator, I think that was a reasonable assertion to make back then because that's what people thought multi cloud was, was finding the cheapest computer any moment in time or finding the cheapest storage. These massive use cases like Dropbox or basecamp, I dont think we even considered those eight years ago as being something to worry about in a way. I dont know, its interesting that theyre having to backpedal on some of that advice really well. [00:13:37] Speaker B: I think they were forced to because they chose a position in a market that was very aggressive and their competitors said youre being a monopoly and you're basically eliminating people's choice by saying this is the only option. And so they've been forced into a corner by these marketing tactics they took. And I think that's part of the challenge they've had. And now we'll talk about, we talked about a couple weeks ago, Amazon and Oracle signing a partnership to run to connect AWS to Oracle Cloud. And so now they're hearing what their customers are saying, which is that we don't want one cloud. We want the ability to be flexible and have options and choice. [00:14:15] Speaker C: I mean, to kind of reiterate what you said in a different way, you know, one of my things I've always said is use the right tool in the right place at the right time, you know, so leverage the different clouds for what? For different workloads, you know, and if they are isolated workloads if you don't have a monolith. 
But you can, you know, you're a company that's acquired other companies or you have it like the right workloads in the right locations. So that for things that make sense for you, your mainframe, like you said, unless if you're going to completely transform or modernize it, leave it in your data center, it's going to be easier to maintain and cheaper where your SQL and your.nEt comma Microsoft probably has some advantages, like you've said. So like if you kind of and take your workloads and if they are isolated and you're large enough to be able to manage multiple clouds, because that's the other piece of it is actually having the right team to manage the clouds. Definitely. I agree with what they're saying here. I don't think to the scale that they want to say it to, but still a fact. [00:15:18] Speaker B: Yeah, I definitely think in the economic climate that we're in right now, I think a lot of companies are asking themselves was this cloud cost thing the right way to go? And I think that's probably a factor. But also a lot of these AI use cases that we're talking about that all the Wall street people are excited about investing in companies for, you couldn't do that as easily on the, on prem and you can't get the GPU's on Prem like this. And so, you know, again, there's advantages to all the models and I think it's about figuring out what the right balance for any company is. [00:15:49] Speaker A: Yeah, I think they were still right, though. I think everything will end up in cloud. It may not be a public cloud. I think it's still, it's still the right mindset about service oriented delivery of infrastructure, regardless of whether it's AWS or your own data center that you run as a cloud. [00:16:09] Speaker B: Well, if you've enjoyed your time working from home a few days a week and going into your office two or three days a week, those days are over. If you're an Amazon employee, Andy Jassy says it's time to return to the office full time. 
Jassy wrote this in a memo he sent out on September 16. And since that came out, the Internet has basically imploded of Amazon employees looking for new work. Jassy, though, says he feels good about the progress they are making across stores, AWS and advertising, as well as prime video expansion and investment areas like Genai Cooper and healthcare. And several other areas he says are evolving nicely. But he talks about when he started the company 27 years ago, he had a plan to stay for a few years, but that the unique culture of Amazon has really made him stay. And really he rationalized with that. It's a key part of how the success of Amazon has been driven is by their culture. And so he says, the S team, which for those of you who don't know the S team is basically the executive team that manages all of Amazon's businesses, wants Amazon to operate like the world's largest startup, which means it has a passion for constantly inventing for customers, a strong urgency for big opportunities, high ownership and fast decision making and scrappiness. As part of those questions, the S team have been thinking about whether they have the right organizational structure to drive the ownership and speed they desire, and two, whether they are set to up to invent, collaborate, and be connected to each other and the culture to deliver the absolute best for the customers in the business. And they concluded they could do better on both. And to do this, they decided that they have too much management and this is slowing down and causing bureaucracy in the business. And to solve this, they plan to increase the ratio of individual contributors to managers by 15% by the end of Q 120. 25. With fewer managers, they will remove layers and flatten the organization. And it's done well. It will improve the ability to move fast, clarify and invigorate a sense of ownership and drive decision making closer to the front lines, where it most impacts customers. 
And he points out that they have created needless bureaucracy across the business. And to solve that, he's created a bureaucratic mailbox where you can send all your bureaucratic complaints to. And he says he'll read it. Bullshit. But someone will read it and report to him, I'm sure. The controversial part is that the return to return to the office is five days a week. They want to return to the pre pandemic days when being out of the office was an exception. And they will bring back assigned desks in the US because they know many of their employees will need to make accommodations. The new way of working will start on January 2, 2025. And, you know, I don't know how well you can innovate and do the right things for your customers if you lose all of your senior talent to attrition. So I'm definitely a little concerned about maybe what I would call 25, maybe the last year for Amazon. [00:18:47] Speaker A: Yeah, they may have had that culture before, but then the pandemic happened, and people, people realize that things didn't have to be that way. And things could be different and they see the benefits. And you know, I don't think he's going to make the change that he thinks he is by doing this. I think it'll demotivate people. You can't force culture change through policy. That's not what the culture is. Culture is the result of all the things that you do, including those policies. It emerges. It's not like you have a manifesto that says this is our company culture, although many people do. [00:19:21] Speaker B: Well, I mean Amazon has the leadership principles which basically are, they do basically their definition of a culture. Again, I don't know how much of that culture actually exists in the hearts and minds of Amazonians versus the S team, but they definitely hire for it. They definitely look for people who meet those leadership principles. 
And so you are biased to potentially get people who will help support and foster the culture that you want in the business. I agree with you: once you've seen a different way of working, things change, and going back to the old way isn't necessarily the right decision. [00:19:58] Speaker A: Yeah, in my opinion they should try to build a way to make people want to go back to the office rather than mandating it. Incentivize people at work, give them cool projects to work on, projects that you need to get together as a team for, for some reason. But with all the cost cutting, there's no domestic or international travel for people. People aren't going to conferences. People are working in Seattle, but their team's somewhere else in the country; they can't go and see them in person, but now they've got to go and sit at a desk in an office in Seattle instead of in the comfort of their homes? It's all very strange. It's not a good assessment of the best way to solve the problem that they have. [00:20:44] Speaker C: I'll play devil's advocate here in some ways. There are certain things I do believe work better in person, like kicking off a project, finalizing projects, handing things off, things like that that do work fundamentally better in person, because you can get people in the room. And let's be honest: even as we do the podcast, how many times have you guys gotten distracted and done other things? And that's outside of work. Or at work, when you're stuck in six hours of meetings a day, how much are you actually paying attention to said meeting versus doing 16 other things you could be doing? So I do believe that getting out of the eight hours of straight meetings at home is useful. I just don't necessarily know that mandating five days a week does it. 
But at some point, if you do believe that getting people in the office 100% is the only way to do it, you've got to pull the ripcord, and as a business be ready to re-implement, which it kind of sounds like they are. Because I've seen companies where they're like, hey, everyone, come back into the office every day, but we don't have enough desk space. And oh, by the way, you're going to be hotel desking, so you can't leave anything at the office. There are no comforts as there used to be, where you have a desk, you have your tea or coffee at your desk, you have whatever. And the other thing I've seen, which Justin read here, is that they're giving people advance notice. I have a friend of a friend who was literally told, oh, next week you're to come into the office five days a week. Okay, cool. But their daycare pickup is at 5:00, and that was the advantage of remote work: they can go get their child at 5:00 because they can leave at 4:45. That was their goal. So by giving some notice, I feel like they're trying to do it in a way that should have some level of success. [00:22:48] Speaker B: Yeah, I definitely think they're being somewhat generous to allow people plenty of opportunity, three months, to figure out new childcare situations, but it's still a disruption. And then there are all the people who were hired remotely during the pandemic who can't move to an office location. Are they now sitting in a situation where they're potentially out of a job in six months through some layoff because they're not getting the face time of being in the office? You hire these people under the pretense that they're remote, and now you're changing the direction of the company. That's also bad. And those people aren't bad just because they live in Nebraska. They're perfectly capable people, probably, who do really good work. 
And they're now in a situation where they either move from Nebraska, a very cheap place to live, to a very expensive place like Seattle or New York or one of the other hubs of Amazon Web Services, or wait to get laid off because they're not performing at the level expected of them because they're not in the office. That's a tough scenario. [00:23:48] Speaker C: And there are a few very good DevOps people in Nebraska, too. [00:23:52] Speaker B: Yeah, so we'll see. It's been a bit of a brain drain of talent. I've been watching people on Twitter leaving Amazon, announcing it, and getting retweeted and so on, so I suspect we'll see more of that in the next three to four months. And the people who are leaving are the people who can get jobs at Google and Facebook and other companies, Oracle Cloud and all the other cloud providers. So those companies are all probably going to get better, while Amazon may struggle to retain top talent or get new talent of the same caliber. We'll see how it pans out for them. I'm sure it'll be a case study in a business book sometime in the future. [00:24:30] Speaker C: I mean, the question is, if Amazon's moving over to this, you can kind of see a lot of other companies moving over to this too, so people can't just bounce around. And how much higher than normal is attrition right now because of these changes? [00:24:44] Speaker B: Yeah, I think it's definitely higher from them forcing it last year, for those people that didn't leave right away. It'll be interesting if this causes other companies to go back to five days a week. I think some companies might be considering it now, but they're going to watch Amazon and see how it goes first before they pull the trigger. All right, well, let's talk about more fun news than people leaving Amazon. Amazon RDS for MySQL zero-ETL integration with Amazon Redshift is now generally available. For those of you who love managing ETLs, this is bad news for you. 
But for those of you who hate them, zero-ETL will basically make it easier to unify your data across your applications and data sources for holistic insights and break down your data silos. This release also includes new features such as data filtering, support for multiple integrations, and the ability to configure a zero-ETL integration in your AWS CloudFormation template directly, doing all your stuff as infrastructure as code. Get your data from MySQL to Redshift automatically. [00:25:41] Speaker C: Want to know what sounds painful? Writing an ETL job in a CloudFormation template. [00:25:48] Speaker A: What's more painful is having somebody click it in a console and then lose it, and then have no commit history to refer back to if they need to rebuild it again. So at least it's a manageable artifact. [00:26:01] Speaker C: You're not wrong, but I still don't think I want to see a JSON ETL template in my life. [00:26:08] Speaker B: Other than specifying your filtering and some of the other parameters, I don't know how much of the CloudFormation is actually configuring the job versus defining the data that you'd like from your MySQL to end up in Redshift. I hope Amazon handles the mappings and the transformation layer, but I don't know; I haven't looked at the code, so we'll see. AWS is transferring OpenSearch to the OpenSearch Software Foundation, a community-driven initiative under the Linux Foundation. This follows the leadership expansion of the project shared earlier this year and now ends Amazon's dominating control of the OpenSearch project, which was based on Elasticsearch. Going back to open source, as Elastic did a few weeks ago, I think is a good move in general, and I think OpenSearch is continuing to be the dominant player in the Elasticsearch community at this point; it's been pretty successful. We're seeing similar things with Valkey being pretty heavily adopted after the Redis licensing changes. 
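Circling back to the zero-ETL CloudFormation discussion above, here is a rough sketch of what that infrastructure-as-code declaration could look like, expressed as a Python dict mirroring a template. The `AWS::RDS::Integration` resource type and its property names are my best recollection of the relevant resource, and the ARNs, names, and filter expression are made-up placeholders, so treat this as a sketch rather than a copy-paste template.

```python
# Hypothetical sketch of a zero-ETL integration declared as infrastructure
# as code (a CloudFormation template body written as a Python dict).
# Resource type and property names are best-effort recollection; the ARNs
# and the data-filter expression are placeholders.
zero_etl_template = {
    "Resources": {
        "MySqlToRedshiftIntegration": {
            "Type": "AWS::RDS::Integration",
            "Properties": {
                "IntegrationName": "orders-to-redshift",
                "SourceArn": "arn:aws:rds:us-east-1:111122223333:db:orders-mysql",
                "TargetArn": "arn:aws:redshift:us-east-1:111122223333:namespace:example",
                # Data filtering: replicate only the schemas/tables you want.
                "DataFilter": "include: orders.*.*",
            },
        }
    }
}

# The template lives in version control, so nobody "clicks it in a console
# and then loses it" with no commit history to rebuild from.
print(sorted(zero_etl_template["Resources"]))
```

As the hosts note, the template mostly defines which data flows where; the mappings and transformation layer are presumably handled by the service itself.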
So it's an interesting case study, because I think the relicensing worked for smaller open source projects like Voldemort, which probably weren't getting a lot of community adoption anyway; I think it worked out less well for Elastic and for Redis. So we'll continue to keep an eye on this. [00:27:20] Speaker A: Yeah, I think the only thing they really have left is support services. [00:27:25] Speaker B: I mean, there are definitely some proprietary things that Elasticsearch has been building around AI and some other stuff around security that maybe are not in the OpenSearch product. But again, I don't know that I want to build AI on top of that. [00:27:41] Speaker A: Yeah, I know there are RAG use cases for Elasticsearch because it's a great vector store, but that can be added to OpenSearch too. It's not patentable, so I think they're going to struggle in the long run. [00:27:57] Speaker C: Now, have any of the other cloud providers jumped on supporting OpenSearch? [00:28:03] Speaker B: All of them? I mean, other than Google, which I think is on Elastic, but OCI adopted OpenSearch. I thought Azure did as well. [00:28:11] Speaker A: Google is adopting Valkey at least. [00:28:14] Speaker B: Yeah, Google's on Valkey. [00:28:17] Speaker C: Yeah, I mean, Microsoft backed Redis right when it came out; I was like, ooh, that was an interesting move. It looks like there's third-party OpenSearch, but it's not directly an Azure managed service; it's all done through partners. [00:28:33] Speaker A: I guess you've got to think about the level of service they're actually offering you in that case, though. Are they giving you a Redis or Elasticsearch machine image and then it's your responsibility to spin those up and configure them? Or is it a fully managed service? [00:28:49] Speaker C: Azure has a fully managed Redis service for you: backups, same thing as ElastiCache, where they'll manage all the pieces, scaling up and down, whatever you need. 
OpenSearch looks like, from 30 seconds of real-time feedback, it's just third parties that built stuff and sell it through the Azure store. But I'll happily be wrong. [00:29:17] Speaker B: Well, Jonathan, I know you do a lot of AI music with your DeepComposer, but I'm sorry to tell you that Amazon has announced they're killing DeepComposer, its AI-powered keyboard experiment that allowed you to combine music with an online service to create your fantastic raps. The DeepComposer project just reached its five-year milestone; the physical MIDI keyboard and AI service let users compose songs with the help of generative AI. You have until September 17, 2025, so you at least have a year to still use your thing and download your data stored there, and the service will end on that date. AWS also announced that the DeepRacer League is ending this year, which makes me kind of sad, and I assume that DeepRacers will also no longer be available in the Amazon store. So buy them now while you still have a chance, before they're completely gone. The league is dying; I assume the service will die shortly after that. [00:30:11] Speaker A: It's so funny to look back and think that DeepComposer was five years ago, right? They had AI in the palm of their hands and let it go. [00:30:27] Speaker B: I mean, in fairness, Google and Amazon were on the verge. They just didn't get it done. And then ChatGPT took all the attention away. [00:30:38] Speaker A: Yeah. [00:30:39] Speaker C: Because they made it available to the general public versus a niche group. [00:30:44] Speaker B: Yeah, even before ChatGPT I'd been hearing, oh, AI is going to be a game changer, it's so amazing. And it was like, okay, but no one was really showing what they were saying was amazing and game-changing. 
But, you know, Amazon had models that were similar, and there was that guy who got fired because he said the AI was sentient and he was in love with it or something. All kinds of weird stories were coming out before ChatGPT, and now that you've seen it, you're like, oh yeah, I can see how that happened. Makes a lot more sense now. [00:31:13] Speaker A: Yeah. If you think about these models before the guardrails get put in place, and before the fine-tuning that absolutely makes sure they never present any kind of unique persona to you and are most definitely a chatbot, nothing more, I can see how somebody in that position a couple of years ago could have really easily been influenced. I'm not even going to say deceived, because I think the jury's still out on the ethics of some of this stuff. But yeah, it's very easy to see how somebody could be led to that conclusion. [00:31:48] Speaker C: It's also just another service that AWS has deprecated this year; they had a couple earlier this year. It feels like they're finally, after many years, saying, no, we need to turn off these services that are not making us money or giving us any value back. I'm curious to see the list of deprecated services. [00:32:19] Speaker B: I'm curious what criteria they're looking at. Is it revenue, is it margin, is it usage? We'll talk about it on next week's show, but there was something that was just killed that they replaced with a much better service, and people were like, I'm so annoyed they killed this, I was using it. And I'm like, well, why didn't you move to the new thing, which is better? Like, so much better? 
So I think there are good v1s or MVPs that they shipped that got some adoption and some interest, but then they realized that to really build the next version, like with Aurora Serverless v1, you had to change the model. And people are upset about it because the contract sort of changed, but it's a better product now, even though it's more expensive in the case of Aurora Serverless v2. These things happen, and I think the amount of time they give you to migrate off these things is long enough that it's not the end of the world in my book. Someone was asking, is Amazon at risk of being the next Google? And I don't think they are, because Google killed products that people loved, like Reader, and mostly consumer products too, by the way. A lot of consumer stuff. Reader is the one everyone always points to: well, I loved Reader, it was great, it was amazing. And it's like, yeah, and people filled the void. And RSS has still been a dying thing for a decade now; it's still not as popular as it used to be. And then there are things Google does like deprecating their APIs really quickly. If you're using an API, they give you 30 days' notice that it's being deprecated in favor of the v2 or v3 API. That's not deprecating a service, but it causes a lot of disruption to your business. So I think Google's reputation is one built over ten years of destroying things that people use and causing pain, whereas Amazon is trying to be thoughtful about it. Probably no one has created a DeepComposer song in six months to a year; I forgot about the service a year after it was announced, so it's probably been even longer. The usage is probably very low, and they're not saying it's dying tomorrow; they're giving you another year. 
Yeah, some chance to download your stuff. That feels like plenty of notice. [00:34:28] Speaker C: Which is also one of their quicker ones. Like, ultimate was, you can keep using it for another two or three years; it was a multi-year thing. And you can still actually sign up for it if you're in the same organization. So they still are keeping some of these things around, especially what I'd call the more business-targeted ones, versus DeepComposer, which is probably less business-targeted. [00:34:58] Speaker A: Yeah, I'm almost certain that absolutely no business is using DeepComposer to make any money, and nobody is using DeepRacer to make any money. They were almost developer advocacy tools; they were there to drive interest in SageMaker and the other tools available at the time. The fact that they launched them as services to begin with is really weird. It could just have been, hey, we're doing this thing on the side, we'll call it Amazon DeepRacer. But the fact that they named it as a service means now they have to kill it as a service, instead of it just being this sort of ephemeral thing which happened and then went away again to drive developer interest in their tools, which is kind of strange. So killing these things doesn't bother me; people have moved past this. I mean, you can go on the web and simulate DeepRacer for free with brilliant.org and things like that, and AI classes. And the composing sucked to begin with, so there's no loss there. [00:36:01] Speaker B: It was never very good. [00:36:01] Speaker A: Yeah, it was not. [00:36:03] Speaker B: I mean, that's the thing. There are other competitive tools to make AI music that exist in the world today, alternative services that will do that. But yeah, you're kind of right. 
Calling it a service was probably the mistake, because these really were just example applications that they built that they thought were cool, to show you potential capabilities. So maybe this is where a "labs" brand makes sense: hey, we're creating new lab products called DeepComposer and DeepRacer, and these are fun things we're doing to show you how to use the technology in a fun and interesting way that isn't just coding the world's next CRM with AI. All right, well, Amazon S3 Express One Zone now supports AWS KMS with customer managed keys, so you can encrypt your data server-side with your own key. Thanks. [00:36:51] Speaker C: Yay. [00:36:52] Speaker B: Appreciate it. [00:36:53] Speaker C: This is one of those things you thought just existed, right? [00:36:56] Speaker B: Why does it matter if it's one zone or two zones? Whatever, it's there now; I don't have to worry about why it didn't exist before. One Zone is not a thing I use; it's a very specific use case, and you shouldn't be using it for data you care about, so why do you need to encrypt data you don't care about? And One Zone already raises a red flag for me for other reasons. Now available: Graviton4-powered memory-optimized Amazon EC2 X8g instances. The Graviton4-powered memory-optimized X8g instances are available in ten virtual sizes and two bare-metal sizes, with up to three terabytes of DDR5 memory and up to 192 vCPUs. Glad to see some additional Graviton options, especially on the memory side. That was one of the areas where I always felt the Graviton chips were a bit limited: memory. [00:37:45] Speaker C: And now you can have three terabytes. [00:37:47] Speaker B: Yeah, I think the limitation on CPU was 64 or 96 before, so this is a doubling or tripling of the number of CPUs too. Typically Graviton runs so well that I don't see the CPU being my problem; it's really when I want to run a database in memory. But yeah, good to see. 
Glad to see it. I think we're going to get Graviton 5 this year. Or do you think they're going to take another year? [00:38:11] Speaker A: No, I don't think so. I don't think TSMC has got any capacity to make Graviton 5s. [00:38:18] Speaker C: Don't they build them in house now? Or are they just designing them in house? [00:38:23] Speaker B: They design them in house, but yeah, TSMC. I mean, TSMC is making most of the chips for everybody. Intel's trying to build their foundry, but that's not going so well, and we'll talk about that next week; they might get bought. It should be just a crazy story, but that's next week. Stay tuned for that. Moving on to Google Cloud. This week they're bringing you "safer by default": automated access control with Sensitive Data Protection and conditional IAM. Google Cloud Sensitive Data Protection can automatically discover sensitive data assets and attach tags to your data assets based on their sensitivity, things like username, email address, phone number, Social Security number. Using IAM conditions, you can grant or deny access to the data based on the presence or absence of a sensitivity-level tag key or tag value. There are several use cases, including automating access controls across various supported resources based on those attributes and classifications, restricting access to supported resources like Cloud Storage, BigQuery, and Cloud SQL until those resources are profiled and classified by Sensitive Data Protection, and changing access to a resource automatically as the data sensitivity level for that resource changes. So I have access to this data until someone enters a credit card number into a field they shouldn't have, and now I can't see it, and it's flagged as an item that needs to be addressed by our SOC team. 
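To make the tag-conditional flow just described concrete, here is a toy sketch of the decision logic. This is not Google's actual IAM condition engine; the `sensitivity` tag name and the clearance model are invented for illustration.

```python
# Toy sketch of tag-conditional access: deny until the resource has been
# profiled, then gate on the sensitivity tag. Tag names and the clearance
# model are made up; real GCP IAM conditions evaluate tag keys/values
# attached by Sensitive Data Protection.
def allowed(principal_clearances: set, resource_tags: dict) -> bool:
    """Grant access only if the caller is cleared for the resource's
    sensitivity tag; unprofiled (untagged) resources are denied by default."""
    sensitivity = resource_tags.get("sensitivity")
    if sensitivity is None:
        return False  # not yet profiled by sensitive-data discovery
    return sensitivity in principal_clearances

# A resource gets re-tagged "high" when a credit card number shows up in a
# field, and access flips off automatically, no policy edit required:
print(allowed({"low", "medium"}, {"sensitivity": "medium"}))  # True
print(allowed({"low", "medium"}, {"sensitivity": "high"}))    # False
print(allowed({"low", "medium"}, {}))                         # False (unprofiled)
```

The deny-until-profiled default is the interesting design choice here: new data stores stay locked until classification has run, which is exactly the "restrict access until profiled" use case from the announcement.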
[00:39:44] Speaker A: For example. I can see it being super useful for a lot of data engineering work, but the risk of something like this in production would be kind of a nightmare. Some customer accidentally puts something that looks like a credit card number in a text field, or a phone number that looks like a Social Security number, and suddenly your app quits working because it can't read the data from the data store anymore. [00:40:09] Speaker B: I would hope this is something you wouldn't necessarily use for machine accounts or service accounts. To me this is for a person who's getting this type of access. This is where you care about the principals and the contexts, and the context you care about here is the data sensitivity. The context is important to an end user, not necessarily to the machine. [00:40:31] Speaker A: No, it's great. [00:40:32] Speaker B: Yeah, I like it quite a bit. Glad to see it. They are also giving you the ability to prevent account takeovers with new certificate-based access. Stolen credentials are of course one of the most common attack vectors used by attackers to gain unauthorized access to user accounts and steal your information. Google is now providing certificate-based access in the IAM portfolio to help combat stolen credentials, cookie theft, and accidental credential loss. Certificate-based access uses mutual TLS to ensure that the user's credentials are bound to a device certificate before authorizing access to the cloud resource. CBA provides strong protection, requiring an X.509 certificate as a device identifier, and verifies devices with user context for every access request to the cloud resource. Even if an attacker compromises user credentials, account access remains blocked, as they don't have the corresponding certificate. I mean, that's a lot of words to just basically say "we support X.509 certificates," but again, appreciated. 
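As a toy illustration of the credential-binding idea above: a stolen password alone gets an attacker nothing without the enrolled device certificate. All names and the fingerprint scheme here are invented for the sketch; real certificate-based access does this binding inside the mutual-TLS handshake with X.509 certificates, not with a lookup table.

```python
import hashlib

# Hypothetical enrollment store: user -> SHA-256 fingerprint of the device
# certificate enrolled for that user. (Invented for illustration.)
ENROLLED_CERT_FINGERPRINTS = {
    "alice": hashlib.sha256(b"alice-laptop-cert").hexdigest(),
}

def authorize(user: str, password_ok: bool, presented_cert: bytes) -> bool:
    """Both factors must hold: valid credential AND the enrolled device cert.
    A compromised password on an unenrolled machine is rejected."""
    fingerprint = hashlib.sha256(presented_cert).hexdigest()
    return password_ok and ENROLLED_CERT_FINGERPRINTS.get(user) == fingerprint

print(authorize("alice", True, b"alice-laptop-cert"))  # True: credential + device
print(authorize("alice", True, b"attacker-machine"))   # False: stolen credential only
print(authorize("alice", False, b"alice-laptop-cert")) # False: device without credential
```

The point the hosts make holds in the sketch: every access request is checked against both the user credential and the device identity, so credential theft alone no longer opens the account.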
[00:41:22] Speaker A: Yeah, MFA with X.509 certificates sounds great. [00:41:25] Speaker B: Yeah, another factor of identification to protect your users and your systems. [00:41:32] Speaker C: It's a great level-up to protect things, though, because you see all these articles online of, oh, somebody got in and twelve things went wrong, or in someone's personal account somebody launched $30,000 worth of bitcoin miners. A really good level-up to see. [00:41:49] Speaker A: Yep. [00:41:51] Speaker B: And then finally from Google, they're announcing expanded CIEM support to reduce multi-cloud risk in Security Command Center. Identities can be a major source of cloud risk when they are not properly managed. Compromised credentials are frequently used to gain unauthorized access to cloud environments, and over-broad permissions often magnify that risk, since many users and service accounts are granted access to cloud services and assets beyond the required scope. This means that if one credential is stolen or abused, companies may be at risk of data exfiltration and resource compromise. To make this easier, Google is integrating Cloud Infrastructure Entitlement Management, or CIEM, into Security Command Center, their multi-cloud security and risk tool, and announcing the general availability of expanded CIEM support for additional clouds and identity providers, including AWS, Entra ID, and Okta. This is all about ensuring your entitlement is the right entitlement for you as the end user. [00:42:39] Speaker C: So does your CIEM data go into your SIEM? [00:42:42] Speaker B: I assume so, yes. A terrible name: CIEM to SIEM. It also plugs into CNAPP. There are actually a couple of companies that build this as its own service on the entitlement side; I was just learning about this last week. So I'm curious to see how Google continues to expand it. The first version is still a little limited. 
I think what it really provides you from an identity perspective is mostly identification of risk. Moving on to Azure. Of course, OpenAI has o1 reasoning, and so now Azure OpenAI has o1 reasoning, and those models are available to you in the model garden, because Azure and OpenAI have that close, tight partnership. You can now use the o1 series to enable complex coding, math, reasoning, brainstorming, and comparative analysis capabilities, setting a new benchmark for AI-powered solutions. The model garden. [00:43:43] Speaker A: It sounds so beautiful. Until you realize it's just a concrete building that uses millions of gallons of water a day. [00:43:51] Speaker B: Yes, it does. Apparently Azure is making their public IPs zone-redundant by default. This means that unless you specifically select a single zone when deploying your Microsoft Azure standard public IPs, they will automatically be zone-redundant without any extra steps on your part. Zone redundancy comes at no extra cost, and a zone-redundant IP is created in three zones for a region and can survive any single-zone failure, improving the resiliency of your application using the public IP. And I can just see the BGP nightmares of something like this. [00:44:23] Speaker C: Well, no, think about it. From when I started in Azure, I didn't realize that these weren't set up this way. If you go try to attach a non-multi-zonal IP address to a multi-zonal service, it just yells at you. So to me this is like EIPs: they're all multi-zonal by default, you don't even think about what zone... wait, now that I'm saying it out loud, EIPs are, aren't they? [00:44:50] Speaker A: Yeah, yeah. [00:44:52] Speaker C: So you don't have to think about it, where here you used to have to think about it. And there was no migration path to say, hey, take this single-zone IP address and move it to be multi-zone; even if you charged me more for it, there was nothing. 
So you would have to completely change your IP address, and we all know customers never whitelist specific IP addresses; those never cause problems. [00:45:13] Speaker B: Never do that. They never have firewalls that don't support DNS for firewall rules. Never happens. But my point on the BGP was their comment that a zone-redundant IP is created in three zones for a region. And I was just thinking, how do you know which one is active at any time? And if the networking between the sites is how you do the health check to activate the public IP, and that fails, then both public IPs go active and BGP basically goes, what the hell's happening? All these IPs are now active in multiple zones in the region. [00:45:47] Speaker C: No, they just take down the whole region at once. [00:45:49] Speaker B: That's probably what happens. That's probably why Azure has so many outages. Makes sense now. All right. Microsoft and Oracle are enhancing Oracle Database at Azure. This "Database at Azure," "Database at AWS," "Database at GCP" is terrible naming, terrible marketing; I don't know who came up with it. You guys should all be fired. Oracle Database at Azure is getting those updates from OpenWorld, which we'll talk more about in a bit. They're getting Fabric integration, so you can load all your Oracle data into Azure's very expensive Fabric overlay for doing your data analysis and data display with Power BI; integration with Sentinel and compliance certifications to provide "industry leading" security and compliance (industry leading, that's a stretch); and plans to expand to a total of 21 primary regions, each with at least two availability zones and support for Oracle's Maximum Availability Architecture. [00:46:36] Speaker A: Out of all the companies who build Oracle Database and resell it to cloud providers, Oracle is most definitely the industry leader. 
[00:46:45] Speaker B: Well, I agree, but they're not integrating with Oracle's industry-leading Sentinel; they're integrating into Azure Sentinel. So yeah, okay, I get it. The Microsoft Azure container team is giving you some gifts this week. Following the success of advanced network observability, which provided deep insights into network traffic within AKS clusters, they're now introducing fully qualified domain name filtering as a new security feature, which also tells you that, until now, container network policies required IP addresses and didn't support DNS, Matt. [00:47:18] Speaker C: Still baffles me. [00:47:19] Speaker B: Still baffles you. So, Microsoft's going nuclear, guys. [00:47:26] Speaker A: That's great. [00:47:26] Speaker B: I know. They signed a deal to restart a power plant on Three Mile Island in Pennsylvania to power their rapidly expanding data center footprint for AI. The plan is for TMI Unit 1, which was shut down in 2019 for economic reasons, to be reopened and producing energy for Microsoft by 2028. For those of you who know Three Mile Island as the place where a reactor partially melted down in 1979, that was a different reactor from TMI Unit 1; I believe that was TMI Unit 2. TMI Unit 1 generated more than 800 megawatts of power when it was running in 2019, and Constellation Energy, the plant owner, is retrofitting the system to provide that power. I somehow expect protests from anti-nuclear folks at some point around this announcement. But guess what: all of that AI and those GPUs need power, and if you also want to hit your green energy carbon targets, nuclear is a pretty good way to do it. [00:48:22] Speaker A: Yeah, I'm surprised. [00:48:24] Speaker C: It's more amazing that they're going to grab all 800 MW themselves. [00:48:29] Speaker B: Yeah, by themselves. [00:48:30] Speaker C: Like, Microsoft's just saying, we need this much minimum. 
That's pretty much where they're at, and they're willing to pay for it no matter what. [00:48:37] Speaker B: And they're willing to wait for it till 2028. So they have expectations that not only is this plausible and something they can get the Nuclear Regulatory Commission to approve, but that AI will still dominate this much of their power consumption, that they'll need 800 MW. [00:48:52] Speaker C: Either that, or they're hoping that the nuclear commission says no, and then they can say, well, we tried to go green, which makes them look good too. [00:49:03] Speaker A: I think if they were smart, they'd build a data center right next to the nuclear power plant. [00:49:06] Speaker B: Yeah, I was partially surprised they are not building a data center next to it. But we did talk about... was that Azure? There was somebody building data centers close to some power-generating facilities. Maybe that was Amazon; I don't remember. Someone we talked about recently. All right, Matt, this is your story. Azure is announcing the GA of Azure SQL Database Hyperscale elastic pools, which is a terrible Azure-esque name. They say that while you may start with a standalone Hyperscale database, chances are that as your fleet of databases grows, you'll want to optimize price and performance across a set of Hyperscale databases. These elastic pools offer the convenience of pooling resources like CPU, memory, and I/O while ensuring strong security isolation between those databases. So buy in bulk, allocate where you want the resources to go, and you'll save some money, which I guess is the point. [00:49:58] Speaker C: Yeah, I mean, it's no different than back in the day when you would take all your VMs on-prem and say, okay, cool, we have 100 GB of memory; I'm going to allocate 200 GB of memory across all your servers and hope that not all of them blow up at once, because you know your workloads. 
So now you're able to do this with Hyperscale, which is equivalent to Aurora but actually with the Microsoft SQL engine. They've increased the storage price, but they've gotten rid of the SQL licensing. So it's now trying to be more comparable to just MySQL in general at that tier. So it's interesting to see Microsoft kind of backing down from the license game and everything else to get with their cloud-native SQL database. [00:50:49] Speaker B: Yeah, the other example here they're showing: you have eight databases with four vCores each, which would basically cost you $4,267 a month, and you're typically not using 100% of your CPU — because if you did that on SQL Server, you'd have a bad time. Basically, by combining those eight database servers into an elastic pool with ten vCores, you're only spending $1,699 a month, and you're able to dynamically allocate those resources between the eight databases. It's virtualization of SQL in the cloud. [00:51:17] Speaker C: Basically, they've had it for normal SQL Server, their managed SQL, for a while. It just took them a pretty long time to add it to Hyperscale, which has been GA'd for a while. [00:51:28] Speaker A: On the back end it sounds like placement groups, not dedicated hardware for VMs. It sounds like you've got dedicated hypervisors running the database services virtually. Oh, they're sharing — they must be sharing. So this is pretty much what people were doing on VMware 20 years ago. [00:51:57] Speaker B: Yeah, yeah, but in the cloud. [00:51:58] Speaker C: But that's one of the negative parts of the cloud, that you can't overprovision — that's where with containers and Kubernetes you can overprovision. So here is a way to actually overprovision with SQL. But with Hyperscale you're also killing the Microsoft licensing.
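The pooling arithmetic discussed above can be sketched as a quick back-of-the-envelope calculation. This is a hypothetical sketch: the per-vCore rate is inferred from the episode's example numbers, not taken from Microsoft's actual price list.

```python
# Back-of-the-envelope math for the elastic-pool example from the episode.
# The per-vCore rate is derived from the quoted figures, not published pricing.

STANDALONE_MONTHLY = 4267.0  # eight standalone 4-vCore Hyperscale databases
DATABASES = 8
VCORES_PER_DB = 4

# Effective monthly cost per dedicated vCore (~$133)
per_vcore = STANDALONE_MONTHLY / (DATABASES * VCORES_PER_DB)

# Pool the fleet behind 10 shared vCores instead of 32 dedicated ones,
# betting that the databases don't all peak at the same time.
POOL_VCORES = 10
pooled_monthly = per_vcore * POOL_VCORES

savings = STANDALONE_MONTHLY - pooled_monthly
print(f"standalone: ${STANDALONE_MONTHLY:,.0f}/mo")
print(f"pooled:     ${pooled_monthly:,.0f}/mo ({savings / STANDALONE_MONTHLY:.0%} saved)")
```

The saving comes entirely from oversubscription: the pool works only if the combined peak demand of the eight databases stays under the shared vCore ceiling.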
So just moving from Business Critical — where your Standard SQL is $1 and your Business Critical is $4-plus — now you're probably at about $3 or so with the same compute that you were spending $4 for, by killing off the licensing. So if you can, move horizontally, and it gets rid of all the storage limitations. Same with RDS — it used to have that like 4-terabyte limit; this is like 100 terabytes at least. You can tell I deal with this a little bit in my day job, so I've read a little bit and know a little bit about this, which is why I was excited for it — I've been waiting for like six months for them to GA this. [00:52:56] Speaker A: Yeah, it sounds good. It sounds valuable. I can't help but feel like they're solving a problem that they created themselves, though. [00:53:05] Speaker B: 100%. [00:53:07] Speaker C: Well, it's 25 years of SQL. [00:53:10] Speaker A: That's a nice way to put it. [00:53:12] Speaker C: SQL, 25 years ago — they screwed it up. [00:53:14] Speaker B: They screwed you as a customer. Now they're screwing you as a customer less. That's what's happening. [00:53:18] Speaker A: Yeah, thank you. [00:53:19] Speaker C: Yeah, I mean, the big one's getting rid of the licensing and them trying to compete with the standard open source SQL platforms now. [00:53:30] Speaker B: Screwing your customers is an excellent segue to talk about Oracle OpenWorld. So there was a lot at OpenWorld. I've already filtered this down quite a bit before I even put it in the show notes, and the guys still said it's a lot. So I'm trying to save their sanity a little bit. We won't go too deep into all of these, but I'll try to combine a bunch of things together here and give you the CliffsNotes version. And if you really care about Oracle that much, you can check it out in the show notes.
But Oracle Kubernetes Engine, or OKE — okey, okey — has a bunch of features, including that it is the new best platform to run AI workloads, meaning you can connect your Kubernetes engine to GPU-based VMs. Cool. They're giving you new container security with a limited beta release of a container governance model through Oracle Cloud Guard container security; they'll tell you if your containers are insecure, running things they shouldn't be, or whatever. They're giving you enhanced monitoring for OKE with logging analytics — basically they now send your Kubernetes logs to the OCI Logging service, where you pay for every log it generates. Enjoy paying that bill. They now support OpenID Connect in Oracle Kubernetes Engine, which is always appreciated. And they're making it simpler to add OCI Kubernetes Engine add-ons, including things like the Kubernetes cluster autoscaler, the Istio service mesh, the OCI native ingress controller, and the Kubernetes metrics server. That is all the OKE features announced at OpenWorld — quite a few. [00:54:55] Speaker C: Okey dokey. [00:54:57] Speaker A: That's cool. Actually, the virtualization of GPUs is really valuable. I don't think anyone else is doing it as well as Oracle are. I know Nvidia only just released the open source version of their GPU virtualization software in the last week or so. That's actually really impressive from Oracle. [00:55:19] Speaker B: Well, mark that down somewhere — Jonathan was impressed with Oracle. [00:55:23] Speaker C: You know, I don't think those two words have ever been said in the same sentence. At least not in a positive manner. [00:55:30] Speaker B: All right, one other thing: if you need a message bus that isn't Oracle's proprietary one, they now support Kafka as a managed service. And if you like to keep your enemies closer than your friends.
OCI Database with PostgreSQL is now available to you with several new features, including version 13, 14, and 15 support as well as extension support for Postgres. You can also get asset and resource management for your OCI environment with their new centralized inventory and advanced troubleshooting, and glean insights from the OCI resource dashboard, which is just a pretty dashboard for someone in security to know what your resources are. They are now supporting GPUs in the Oracle Machine Learning notebooks — these are Jupyter notebooks, so now you can get access to a GPU directly from that. For those of you trying to code NetSuite — which, I don't know if that's anybody on the podcast; it's definitely not me, and I will never do that — you'll be pleased to know that Oracle Code Assist will now help you write not only JavaScript but also NetSuite SuiteScript, in addition to Python, Rust, Ruby, Go, PL/SQL, C#, and C, all available to you directly in the plugin for your IDE of choice. [00:56:39] Speaker C: Did not know NetSuite was an Oracle product. [00:56:42] Speaker B: It was bought many years ago at this point, and no one has risen from the ashes to create the next big ERP SaaS. I guess Workday is trying, but there's really not anybody that is similar to NetSuite. For your OCI object storage, you can now use private endpoints with private IP addresses, because that's going to be fun to manage. I'd like to pass on that, but if you need that support and security for your highly compliant workloads, like HIPAA, you do need this ability to make sure that none of your traffic crosses the public Internet to get to your object storage. [00:57:20] Speaker C: Just like an S3 endpoint. It is? Yep, got it. Making sure I wasn't missing something. [00:57:27] Speaker B: And a terribly named service called OCI Fleet Application Manager is now generally available.
And you'd think this has to do with managing your applications — and you'd be wrong. It has to do with centralized management and patch operations across your entire cloud stack that makes up an application. So it allows you to patch and manage the stacks for your application fleet, not manage your applications. So I see what you did there, Oracle, and I don't like it. They also now give you high-performance multi-point attached storage for all of your training operations on OCI storage. So if you're trying to do big, large ML training, they now give you up to 40 gigabits per second of throughput to that. They have several other improvements coming, like an HDFS connector for Hadoop, because people are still using that, apparently. They'll also be scaling their storage 10x, and they'll be giving you Lustre file storage, object storage support for multiple checksums, file storage with a 30-minute RPO, and block volume support for customer-managed keys for cross-region replication, all coming soon. [00:58:34] Speaker C: That RPO doesn't feel that good. [00:58:38] Speaker B: I thought it was not great either, but they did say that's the initial launch; they're going to try to speed up from there. [00:58:44] Speaker C: Okay. I know many companies that have an RPO of like 15 minutes. [00:58:50] Speaker A: I mean, if the use case is AI training data shared amongst massive training clusters, then 30 minutes is perfectly fine. [00:58:58] Speaker B: Yeah. Oracle's building out standardized OCI landing zones and OCI zero trust landing zones. So this gives you the ability to rapidly onboard your Oracle Cloud journey with standard basic things like IAM, KMS, Cloud Guard, vulnerability scanning, bastions, logging, events notifications, and security zones all configured for you, as well as enabling ZTNA for your applications and workloads.
This also includes OCI Access Governance, OCI zero trust packet routing, networking enhancements with next-gen firewalls like Fortinet's FortiGate, and observability enhancements, as well as a very handy reference architecture for learning how to do all your things. This is all based on CISA and the UK government's National Cyber Security Centre guidelines for secure cloud computing, so you can get a much faster start to your OCI journey. [00:59:51] Speaker C: Cool. [00:59:51] Speaker A: Everyone should be doing this. [00:59:52] Speaker B: Yes, agreed. Oracle has expanded all of their multi-cloud partnerships, including AWS, Google Cloud, and Microsoft Azure. We talked about AWS, but Oracle Database@Azure is now in six Azure data centers, with that increasing to 15 very soon. And Oracle Database@Google Cloud is now GA in four Google Cloud regions, with expansion to additional regions in progress. So if you need your Oracle database in your cloud of choice, it's now available to you. Oracle is announcing the first zettascale cloud computing cluster, accelerated by the Nvidia Blackwell platform. OCI is now taking orders for the largest AI supercomputer in the cloud, available with up to 131,072 Nvidia Blackwell GPUs. And I just had to ask: was this built for Elon first? [01:00:42] Speaker A: Sounds like an xAI thing to me. [01:00:43] Speaker B: Yep. [01:00:44] Speaker C: What's the cost per second? [01:00:47] Speaker B: I don't know — they didn't release pricing, but the 131,072 Nvidia Blackwell GPUs deliver 2.4 zettaFLOPS of peak performance. The maximum scale of an OCI Supercluster offers more than three times as many GPUs as the Frontier supercomputer and more than six times that of other hyperscalers. They also support the H100 and H200 Tensor Core GPUs in addition to the Nvidia Blackwell GPU family. There's a quote here from Mahesh Thiagarajan, executive vice president of Oracle Cloud Infrastructure.
We have one of the broadest AI infrastructure offerings and are supporting customers that are running some of the most demanding AI workloads in the cloud. With Oracle's distributed cloud, customers have the flexibility to deploy cloud and AI services wherever they choose, while preserving the highest levels of data and AI sovereignty. That's a big server. [01:01:34] Speaker A: When they say taking orders, I assume they're building one of these things and you can rent time on it. [01:01:39] Speaker B: That is what I would take it as, too. So they originally were building this for Elon. Then Elon got mad and said, I'm gonna go build my own thing. And they were like, well, we're already building it — the GPUs are coming — let's figure out who else wants this. That's what this feels like to me. [01:01:53] Speaker A: Wow, that's insane. I saw a graph the other day — it was actually a YouTube video I was watching — where Bishop was talking about the limits of training, and how there seems to be this limit of how much compute you have to put into reducing the errors in these large language models. And the way they measure how much compute is required for training is petaflop-hours or petaflop-days. And this is zettaFLOPS. [01:02:21] Speaker B: What, a thousand petaflops? [01:02:22] Speaker A: Yeah, another thousand on top of that. [01:02:24] Speaker B: That's a lot. [01:02:25] Speaker A: I think it was like 150 days or 100 — don't quote me on it, I'm going to get something wrong — like 150 or 180 days' worth of compute time to train GPT-4. So something like this — obviously there are other constraints than just floating point operations, like memory access and everything else that goes into it — but with something like this, we could be looking at training a GPT-like system in weeks instead of months or years. [01:02:57] Speaker C: Yeah, if you're fully utilizing it. [01:03:00] Speaker A: Yeah.
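The "weeks instead of months" point above is easy to sanity-check with rough arithmetic. Every input below is a hypothetical round number — the training FLOP budget and utilization are assumptions for illustration, not disclosed figures for any real model or for the OCI cluster.

```python
# Rough illustration of how a zettascale cluster compresses training time.
# All inputs are hypothetical round numbers, not disclosed figures.

PEAK_FLOPS = 2.4e21          # 2.4 zettaFLOPS, the quoted peak of the OCI supercluster
TRAINING_FLOP_BUDGET = 2e25  # assumed total FLOPs for a large GPT-class training run
UTILIZATION = 0.35           # assumed fraction of peak actually sustained

seconds = TRAINING_FLOP_BUDGET / (PEAK_FLOPS * UTILIZATION)
print(f"~{seconds / 3600:.1f} hours of wall-clock training time")
```

Under those assumptions the run finishes in well under a day of wall-clock time at full cluster scale — which is exactly why memory bandwidth, interconnect, and checkpointing, rather than raw FLOPS, become the real constraints at this size.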
I imagine governments will be interested in renting this as well. [01:03:07] Speaker B: Yes. Maybe three-letter government agencies. [01:03:11] Speaker A: I would think so, yeah. Or Palantir may want to run some time on it, too. [01:03:16] Speaker B: All right. And our last Oracle OpenWorld announcement — I got you through it pretty quickly, guys. [01:03:20] Speaker A: Thank you. [01:03:21] Speaker C: We appreciate that. [01:03:22] Speaker B: You're welcome. Oracle announced Generative Development, or GenDev, for enterprises, a groundbreaking AI-centric application development infrastructure which provides innovative development technologies that enable developers to rapidly generate sophisticated applications and make it easy for applications to use AI-powered natural language interfaces and human-centric data. GenDev combines technologies in Oracle Database 23ai, including JSON Relational Duality views, vector search, and APEX, to facilitate development using generative AI. Some of the features of this include Oracle Autonomous Database Select AI with RAG and other enhancements, which are basically around broad support for LLMs, leveraging things like Gemini, Anthropic Claude, and Hugging Face LLMs; Autonomous Database Nvidia GPU support, allowing you to access Nvidia GPUs to accelerate performance of certain AI data operations; Data Studio enhancements; Graph Studio enhancements; Autonomous Database for Developers container images; and Autonomous Database Select AI synthetic data creation — which is all a bunch of gobbledygook to me. So I'm hoping the quote from Juan Loaiza, executive vice president of mission-critical database technologies at Oracle, will help us determine what this is. Just as paved roads had to be built for us to get the full benefit of cars, we have to change the application development infrastructure to get the full benefit of AI app generation.
GenDev enables developers to harness AI to generate modular, evolvable enterprise applications that are understandable and safe, and users can interact with data and applications using natural language and find data based on its semantic content. Oracle Database 23ai provides the AI-centric infrastructure needed to dramatically accelerate generative development for enterprise apps. Well, he didn't help. I think I can summarize this down to: this is a lot of good marketing for nothing. [01:05:05] Speaker A: I don't know. I don't know about nothing. I mean, there's using GPTs and Claudes and things to write code prompt by prompt, and then there's building a system whose purpose is to enable developers to build an application. So I can see how there's value in things like document stores, reference information, technical decisions — a way of organizing the structure of projects so that a developer can better use a tool to reach the end goal. So I actually think this is probably a really good product aimed at helping organize and shepherd the whole process through. Because, I mean, sure, you can sit down in front of ChatGPT and ask it to write some code, but with a limited context window you have to keep copying stuff out or restarting the chat, and you have to keep referring back to original design documents, which is cumbersome. So solving the usability of these systems to actually deliver applications is great, and I wish them well with it. I'd really like to play with it. [01:06:08] Speaker B: Yeah, it might not be all marketing, but it definitely feels like: how do we give you Oracle Database 23ai and then give you something that makes you more locked into an Oracle database than ever before with your new application development? That's why I felt it was a little marketing. [01:06:24] Speaker A: Yeah, that's fair. I mean, I'd take it without the Oracle database kind of requirement — I assume it's not a requirement.
I'm sure you could write any code you like, but yeah. [01:06:36] Speaker B: Well, that's it for a massive show once again this week, guys. Any last-minute thoughts about Oracle OpenWorld before we head out? I'm glad I didn't attend; I would have been very bored. [01:06:49] Speaker A: Are you doing re:Invent this year or no? [01:06:51] Speaker B: I am not going to do re:Invent this year. If you saw my travel schedule between now and the end of November, you wouldn't go to re:Invent either. [01:07:01] Speaker A: I wasn't interested in going this year even if it was an option. [01:07:06] Speaker B: I actually really enjoyed Google Next the last two years; it's been nice to see a different perspective. We should definitely talk about whether we're going to do a live stream during the re:Invent keynotes again this year, because I think that was fun, number one, and number two, I don't have to be there for that — I can be at my house. [01:07:21] Speaker A: Yep. [01:07:22] Speaker B: And live streaming it is fun. So yeah, I think that's definitely a possibility for us. We'll have the new CEO on the main stage this year, I assume. I assume Werner hasn't quit yet from going back to the office five days a week, so I assume he'll be there by then, and it'll be good. [01:07:39] Speaker A: Yeah, I'm just concerned it's going to be AI, AI, everything. [01:07:44] Speaker B: I worry too. But I do feel like there is sort of some noise — actually, I'll take it back. Let's go back to Oracle real quick. I was sort of pleased that it wasn't all AI. Like, there were hardware things in there. The zettascale thing — I guess it's to train AI models, but still, it's hardware. Some of the things they did around security for the Kubernetes engine, and the Kubernetes enhancements — yes, some of them support AI, but there was more than just AI from that perspective. Oracle OpenWorld had the least AI-focused keynotes I've seen yet. I appreciated that.
Maybe AWS and Google or Microsoft will take the hint that, you know, the AI stuff is cool, but you can be doing cool hardware things that we can use for use cases other than just AI. [01:08:34] Speaker A: Yeah. [01:08:35] Speaker C: Or is it that they're so far behind that they're still catching up on the basics? [01:08:39] Speaker B: I mean, that's a possibility too, but that's the one that means we're gonna get nothing but AI at re:Invent, so I'm rejecting that idea. [01:08:45] Speaker C: I would also like to reject that idea. But, you know... Yeah, I think Microsoft Ignite's coming up first, so that might be a good insight into what AWS is going to do, too. [01:08:58] Speaker B: Yeah. When is it — when is Ignite? [01:09:00] Speaker C: November. [01:09:01] Speaker B: Mid-November, right? [01:09:04] Speaker C: The 18th, I think, or so. [01:09:08] Speaker B: Yeah, November 19 through the 21st. So it's basically — well, it's in Chicago, and it sold out. [01:09:15] Speaker C: Yeah, it actually sold out pretty quickly. [01:09:19] Speaker B: AI is a big topic, so, you know. But yeah, it'll be interesting to see — it won't be enough to change the direction of re:Invent; it's only two weeks before re:Invent, basically, and one of those weeks in between is Thanksgiving. [01:09:34] Speaker C: Yeah, I feel like they all kind of march in the same direction. [01:09:39] Speaker B: Yep. All right, guys, well, we will see you next week here at The Cloud Pod. [01:09:44] Speaker A: See you later. [01:09:45] Speaker C: Yeah. [01:09:49] Speaker B: And that is the week in cloud. Check out our website, the home of The Cloud Pod, where you can join our newsletter and Slack team, send feedback, or ask [email protected], or tweet us with the hashtag #thecloudpod.
