275: I SQream, You SQream, We All SQream for AI Ice Cream

Episode 275 September 18, 2024 00:47:03
275: I SQream, You SQream, We All SQream for  AI Ice Cream
tcp.fm
275: I SQream, You SQream, We All SQream for AI Ice Cream

Sep 18 2024 | 00:47:03

/

Show Notes

Welcome to episode 275 of The Cloud Pod, where the forecast is always cloudy! Justin, Matthew and Ryan are awake and ready to bring you all the latest and greatest in cloud news, including SQream, a new partnership between OCI and AWS (yes, really) Azure Linux, and a lot of updates over at AWS. Get comfy and we’ll see you all in the cloud! 

Titles we almost went with this week:

A big thanks to this week’s sponsor:

We’re sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You’ve come to the right place! Send us an email or hit us up on our slack channel for more info. 

AWS

00:28 Stability AI’s best image generating models now in Amazon Bedrock 

02:46 Justin – “I do notice more and more that, you get it, you get the typical product shot on Amazon, but then like they’ll insert the product into different backgrounds and scenes. Like, it’s a, it’s a lamp and all of a sudden it’s on a thing and they’re like, Hmm, that doesn’t look like a real photo though. It looks like AI. So you do notice it more and more.”

04:13 AWS Network Load Balancer now supports configurable TCP idle timeout AWS Gateway Load Balancer now supports configurable TCP idle timeout

04:53 Ryan – “Yeah, we’ve all worked at that company with that one ancient app that, you know, couldn’t handle retries.

05:44 AWS Fault Injection Service introduces additional safety control

06:22 Ryan – “ …in my head I immediately went to like, something bad happened that caused this feature to exist. Like, I feel bad for whoever that was. Because you know it wasn’t good.”

07:14 Use Apache Spark on Amazon EMR Serverless directly from Amazon Sagemaker Studio 

07:40 Ryan – “Yeah, is it the query that’s terrible or the underlying data? The world may never know. Or both. It’s both.”

07:57 Bedrock Agents on Sonnet 3.5  

08:32 Justin – “It’s just an AI bot you put onto your Slack team that, you know, answers questions based on data you’ve fed it basically. Yeah. Agents is really just a chat interface to an AI of some kind that you’ve fed data to.”

08:58 Amazon WorkSpaces Pools now allows you to bring your Windows 10 or 11 licenses

09:28 Ryan – “I doubt they’re talking about a single user. I think it’s like if you’re an IT department, you have to manage both..”

10:45 Amazon ECS now supports AWS Graviton-based Spot compute with AWS 

Fargate

11:13 Ryan – “All this means is that they finally got their inventory up on Graviton hardware in the data centers where they can start allowing it to work.”

12:33 AWS GenAI Lofts 

14:36 Justin – “I think it’s nice to be able to go someplace and get, you know, A) talk to people who are trying to do the same thing you’re trying to do. And number two, if they don’t know, then you can ask the expert who’s there and you can, then he can get the answer for you. Because they’re the experts and they have access to the product managers and different things.

15:31 Amazon MSK enhances cross-cluster replication with support for identical topic names

15:56 Ryan – “I’m sure people have just been working around this with application config, based on where the workload is hosted.”

17:22 Amazon SageMaker HyperPod introduces Amazon EKS support 

18:00 Ryan – “Historically these, types of jobs haven’t been really designed with resilience, right? It’s like, it could have a failure and then you have to restart a job or a series of jobs. going to take hours to complete. So it is kind of nice to see this…but it is kind of funny.”

GCP

18:41 Google named a leader in the Forrester Wave: AI/ML Platforms, Q3 2024

20:20 Justin – “Apparently Google is the best positioned hyperscaler for AI. Take that Azure.”

20:55 Matthew – “Okay, so C3AI, I haven’t actually done any research, but their stock symbol is just AI. I think they win… just hands down they win. Like game over, everyone else should just not be on the leaderboard.”

22:00 BigQuery and Anthropic’s Claude: A powerful combination for data-driven insights  

20:27 Justin – “If Jonathan were here – and not sleeping / napping – he would tell you that cloud’s pretty darn good. And so, this is actually pretty nice to get an alternative that’s pretty decent to Gemini, to give you some additional BigQuery options for your summarization and advanced logging analytics. Apparently.”

23:50 Cut through the noise with new log scopes for Cloud Observability  

24:35 Ryan – “ …that second one is the one I’m most interested in just because it’s, you know, for all kinds of reasons, we’ve separated workloads out and put them into different projects and for blast radius and security concerns and all those things, but it becomes much more challenging to sort of correlate a transaction through many, many different services spread out through multiple projects. And so there’s sort of two ways you tackle that. One is just re-consolidate all the logs together, and that can get expensive and generate this condition where you’re sorting through a whole bunch of noise. Or it’s like you just look it up everywhere and you manually construct it back together, which just doesn’t work and no one does. That’s what we used to do when all the logs were on server hard disks. So this is really neat to be able to tag them all together, really, and then search on them from that tag, which I think is pretty neat.”

25:59 Introducing backup vaults for cyber resilience and simplified Compute 

Engine backups

26:26 Ryan – “Yeah, I mean, the backup policy is specifically when VMs are created is definitely something that, you know, I would like to see more features in that direction.”

Azure

28:31 Azure CLI docker container base Linux image is now Azure Linux

30:05 Justin – “…it’s a supply chain problem. It’s – how do you tell the government that you’re sure that nothing in your, you know, in your Linux operating system is compromised by a third party nation state? The answer is, well, we own all of the source and we build our own version of Linux from that source and we review it all. And that’s how you solve this problem.”

33:45 General availability of Prompt Shields in Azure AI Content Safety and Azure 

OpenAI Service

34:15 GA release of Protected Material Detection in Azure AI Content Safety and 

Azure OpenAI Service

34:33 Ryan – “I mean, it’s not really for its accuracy. It’s about the mitigation of risk when you get sued. Like, you can say, well, I tried, I turned all the checkboxes… I do think these kinds of features… will be in every product eventually.”

37:02 M-Series announcements – GA of Mv3 High Memory and details on Mv3 

Very High Memory virtual machines   

Oracle

39:00 Breaking boundaries in ML development: SQream on OCI 

40:46 Ryan – “I also think that every one of their claims is complete nonsense. I, cause it’s Oracle and it’s like, there’s no way.”

42:11 Oracle and Amazon Web Services Announce Strategic Partnership

44:47 Matthew- “Half of these features already existed between just RDS Oracle and AWS I feel like, and the other half just use are a good way to kill all your EDP pricing – EDP that you have to finish by the end of the year.”

Closing

And that is the week in the cloud! Visit our website, the home of the Cloud Pod where you can join our newsletter, slack team, send feedback or ask questions at theCloud Pod.net or tweet at us with hashtag #theCloudPod

View Full Transcript

Episode Transcript

[00:00:07] Speaker A: Welcome to the cloud pod, where the forecast is always cloudy. We talk weekly about all things AWs, GCP, and Azure. We are your hosts, Justin, Jonathan, Ryan, and Matthew. [00:00:18] Speaker B: Episode 275 recorded for the week of September 10, 2024. I scream, you scream. We all scream for AI ice cream. Good evening. [00:00:28] Speaker C: Is it delicious? [00:00:29] Speaker B: I don't know. It tastes like hopes and dreams of AI startups, so possibly not. I don't know. All right, well, we got a busy show this week, so we should jump right into it. First up, AWS. Once again, if you are like the cloud pod host and you like to generate lots of funny images and memes using AI, which is our primary use case here, luckily, AWS has your back this week with the latest image generation capability. With three models from stability AI, the stable image Ultra, which produces the highest quality photorealistic outputs, perfect for professional print media and large format applications. The stable image ultra excels at rendering exceptional detail and realism. The stable diffusion three large strikes a balance between generation speed and output quality. Ideal for creating high volume, high quality digital assets for websites, newsletters, and marketing materials, and stable image core, optimized for fast and affordable image generation. Great for rapidly iterating on concepts during ideation. A couple details with these the ultra has 16 billion parameters, the large has 8 billion, and the stable inch core has 2.6 billion. Parameters available to inputs for all three of these are text. The stable diffusion three large also accept an image. So if you're trying to give it an image to base your new image off of, like Ryan, for example, is a profile photo you might use for that. You could pass that into it and then have it modify Ryan into becoming a, you know, Frankenstein type creature because it is Halloween season, you know, very soon. So one of the key improvements for stable image ultra and stable diffusion three large compared to stable diffusion Excel, which is the prior version, is text quality and generated images with fewer errors in spelling. A typography, which all I can say is, thank you, goodness, because that's my number one pet peeve, is trying to add text to generate an image and have it literally take the word I gave you and then misspell it. [00:02:17] Speaker A: Yeah. [00:02:18] Speaker B: So I appreciate that that has been improved. [00:02:22] Speaker A: What do they mean by, like, high volume, high quality digital assets? Like, are they. Is the expectation you're going to, like, generate a different one for, like, so many requests like, that seems kind of a crazy. [00:02:34] Speaker C: Yeah, maybe you want something for like, you know, the web page versus a billboard, you know, so you want more pixels in one versus the other. [00:02:43] Speaker B: Well, I could see maybe perhaps in the Amazon world where you would have, you know, you're uploading a large number of skus that you'd like to get AI images generated for very quickly that you might want to be able to do them in high volume and get high quality assets out of it. Yeah, I do notice more and more that you get the typical product shot on Amazon, but then they'll insert the product into different backgrounds and scenes like, oh, it's a lamp and all of a sudden it's on a thing. And that doesn't look like real photo, though, looks like AI. So you do notice it more and more. [00:03:17] Speaker A: You know, I do find it funny every time we're making it generate a picture of something silly or one of our co hosts, you know, we'll have to do one for Jonathan to shame him properly. [00:03:30] Speaker B: From his nap that he took 20 minutes before the show. He's like, I take a quick nap and then lost to us forever. [00:03:36] Speaker A: It's gone forever. [00:03:38] Speaker B: Mattress God sickle. [00:03:42] Speaker A: I do laugh the most I use AI for is that. So it's try not to think about all the hundreds of thousands of servers turning over. [00:03:53] Speaker B: Yeah, you think about all the carbon emitting to the atmosphere to generate that image. You're like, well, there's these things that aren't so great. [00:04:02] Speaker C: It's like the olden days when there was like every Google search was x number of CO2 emissions into the atmosphere and now you're like, every AI is twelve x that and we still do it every day for fun. [00:04:15] Speaker B: I see you, Amazon trying to get two press releases for basically the same darn thing. Not today, sir, not today. Both the AWS network load balancer and the Gateway load balancer have received configurable tcp timeouts. And they're so identical in fact, that you could just do a simple words replacement for this press release. They both have a fixed value previously of 350 seconds, which could cause apparently tcp handshake retries for long live traffic flows on the network load balancer or on the gateway load balancer could cause other disruptions to your application. But now you can vary between 60 seconds and 6000 seconds with the default remaining at 350 for backwards compatibility. Again, like 6000 seconds seems like a really long time. But you do you. [00:04:59] Speaker A: We'Ve all worked at that company with that one ancient app that, you know, couldn't handle. [00:05:05] Speaker B: Yep. Retries. 100 minutes though. That's quite a, quite a while. [00:05:12] Speaker A: Yeah, yeah, it sure is. [00:05:15] Speaker C: The one thing it really does show you if you didn't know in advance was the network load balancer. Sorry, the gateway load balancer is built off the network load balancer. So at least it's in some ways Amazon eating their own dog food and you know, actually building stuff on top of other services. [00:05:31] Speaker B: Yeah, but some product manager got two press releases out of this and was like, yes, I'm two press releases closer to my goal for the year for basically the same darn thing. [00:05:41] Speaker A: Hopefully it's also two bullet points on their end of the year review. [00:05:45] Speaker B: No, no, it's going to be two full okrs, I'm sure. The AWS fault injection service now provides additional safety control with a safety lever that when engaged, stops all running experiments and prevents new experiments from starting. You can also prevent fault injection during certain time periods such as sales events or product launches or in response to application help alarms. I was thinking about this, I was like, my operational side says, yes, I'm going to set that to not run any fault injection at night so it doesn't wake me up when it fails. Then my executive side said no, no, no, you can't do that at night, you can't do it during the day and impact production. So my two halves of my operational side and my executive side are conflict on this one. But I appreciate the capability. [00:06:26] Speaker A: Yeah, I was in my head, I immediately went to like oh, something bad happened that caused this feature to exist. Like oh, I feel bad for whoever that was because, you know, it wasn't good. [00:06:38] Speaker C: Yeah, that's for my head. [00:06:40] Speaker B: But too, hopefully it was Amazon during testing for prime day and not a customer and then I won't feel so bad about it. [00:06:46] Speaker A: That's true. Prime day went off without a hitch. [00:06:48] Speaker C: So yeah, I'm just imagining someone being like cool, let's enable this. And not really actually looking at what they enabled and taking down production and like screaming at Azure AWS for taking down an ECS cluster or something. [00:07:04] Speaker B: You type out something in the terraform code and created 3000 copies of the same injection attempt and then you're causing so stop all of them. Yeah, but I do appreciate it. You can now run petabyte scale data analytics and machine learning on EMR serverless direct from Sagemaker studio notebooks. Serverless automatically provisions and scales the needed resources to take care of your terrible query, allowing you to focus on data models without having to configure, optimize, tune or manage your cluster, which is very appreciated by me because anytime I get into doing heavy duty spark and EMR work I'm just like oh yeah, data big data. Kill me now. [00:07:42] Speaker A: Yeah. Is it the query that's terrible or the underlying data or both? Or both. [00:07:49] Speaker B: That's Schrodinger's query. You have no idea. I give a good show. Title. [00:07:55] Speaker A: Yeah. [00:07:57] Speaker B: Agents for Amazon Bedrock enable developers to create and generative AI based applications that can complete complex tasks for a wide range of use cases and deliver answers based on compatible company knowledge sources using Sonnet 3.5 and the bedrock agent rag capabilities. [00:08:13] Speaker C: Yay. [00:08:14] Speaker A: Yeah. Not having using bedrock or bedrock agents like I'm a little like cool. What? You know, but I'm trying to figure out like what the equivalent is. But I imagine it's just an AI. [00:08:30] Speaker B: Bot you put onto your slack team that answers questions based on data you fed it basically. [00:08:35] Speaker A: All right. So it's not agents. [00:08:37] Speaker B: Agents is really just a chat interface to AI of some kind that you've fed to. [00:08:41] Speaker A: Okay. [00:08:42] Speaker B: All right. [00:08:43] Speaker A: Yeah. So it's cool because I was you. [00:08:46] Speaker B: Three weeks ago so I had to do some googling and like some research. [00:08:52] Speaker C: That's why we keep you around. [00:08:53] Speaker A: Yeah. [00:08:54] Speaker B: Thanks. Appreciate it. If you're leveraging Amazon workspace pools powered by either Windows ten or eleven, you can now bring your own license assuming you meet certain Microsoft requirements which include handling over buckets of money to support your eligible M 365 apps for enterprise. Providing a consistent desktop experience to their users when they switch between on premise and virtual desktops, which is I never worked in a place where I had a virtual desktop and an on premise desktop that I could switch between but for the same use case. But I have one for Dev, that's a different thing. [00:09:25] Speaker A: I doubt they're talking about a single user. I think it's like if you're an IT department you have to manage both. [00:09:30] Speaker B: Yeah, that makes sense. [00:09:30] Speaker C: Yeah. [00:09:31] Speaker A: Yeah. [00:09:31] Speaker B: Makes more sense than my interpretation, so. [00:09:33] Speaker C: And just, just the fact that like they're still adding, you know, all these licensing things shows how complicated Microsoft licensing is and how you're, I'm sure violating it even if you're actually trying not to. [00:09:45] Speaker B: But that's the thing about Microsoft licensing, you never really know if you're violating it or not. You like you think you got it right, but you're, that audit paperwork comes, you're still like, oh no. Even though I think I did it completely right. I'm not sure. Cause something changed that you didn't know about, which is also the worst. [00:10:01] Speaker C: I almost feel like there needs to be like I am try. Does that count? Like give me 20% off this. [00:10:08] Speaker B: Well normally that's like I think I think when you get caught, and I mean I've never been audited that way and I don't ever hope to be. But you know, if you, if you are in that situation and you get, you know, I think you get a chance to try to remediate it and address the issue before it becomes a big problem. I don't know. As long as you're willing to pay money to Microsoft, I'm sure they'll make it go away. It's basically the net of that conversation. [00:10:29] Speaker C: Microsoft, Oracle. Sorry, I got the rough name. [00:10:31] Speaker B: Yeah. Yes, either one is fine. And then, you know, Oracle case will throw in OCI credits for you and say, oh, you're now an OCI customer. [00:10:38] Speaker A: Woohoo. [00:10:39] Speaker B: Woohoo. Amazon ECS now supports AWS graviton based compute with AWS Fargate spot. This capability helps you run fault tolerant arm based applications up to a 70% discount compared to fargate prices. And like now we're really getting complicated. So now we have Fargate, which is only runs occasionally with your task on top of a spot instance that can only run sometimes on a graviton, which is a proprietary chip from Amazon. So I appreciate this, but man, it's turtles all the way down. [00:11:09] Speaker A: I mean all this means is that they finally got their inventory up on graviton hardware in the data centers where they can start. [00:11:16] Speaker B: I was actually thinking about that the other day and I was like, it's interesting because all these new gpu based servers have all these processors that they attach to it, 192 vcpus, but you get 100 h 100 or a 100 or whatever Nvidia chip of the day it is. And I'm wondering like I have at spots a lot of those compute nodes that just are doing nothing because everyone's using the gpu and not using the cpu on those boxes. So thanks. Thanks Nvidia and AI craze, you've really helped out the spot market for us. [00:11:45] Speaker A: I appreciate that. I mean I haven't seen the spot market go down enough to make that make sense, but hopefully I have seen. [00:11:53] Speaker C: The spot market be a little bit tighter where I've definitely lost more things now than I have in the past just because I think they're also running the regions a little bit tighter. You know, like the airlines not flying half empty planes anymore, they are 95% full. [00:12:10] Speaker B: No matter whatever their use, their previous threshold was. You know, we want to have this percentage of spare capacity that was, that was chopped down just a little bit to tighten up capex, I'm sure. [00:12:20] Speaker A: Yeah, well I mean, it just never really came back after the supply chain issues, right? [00:12:26] Speaker B: AWS announced about a couple months ago. Jenny, I lost and I kept meaning to talk about it with you guys. So, pre pandemic, in the before times when Amazon cared about lift and shift data center migrations, you could go to the Amazon loft in San Francisco. I think there's one in Seattle, maybe one in New York as well, where you go and you could talk to an expert. They held community events, startup events, and it was really nice thing to get one on one assistance on anything you're trying to do in the cloud project. And then during pandemic, they all disappeared. But they've been brought back sort of with the genai loft. These are not permanent lofts though. They're just pop up events typically hosted out of Amazon offices. Currently there's one in San Francisco and Sao Paulo, Brazil, both London, Paris and Seoul, opening up in October through the month of November. So you can get out there to learn all your AI things. And these are pretty cool. Actually, I was looking at it yesterday when I was prepping this part of the article for us in San Francisco, which goes through the end of September. They have a two day Genai bootcamp that they're doing there. They've got a startup event coming up really quick. So if you're looking for some community around cloud and genai efforts, check out one of these lofts. If they're coming to a city near you, I assume we'll hear about new ones for next year coming from re invent. If this was successful. If not, then we'll never hear about it again. But appreciate that it's an option. [00:13:42] Speaker A: Today I learned that the previous lofts weren't a, were permanent. I thought they were just sort of pop up events in the same way, just in a more dedicated space. [00:13:53] Speaker B: I mean, they were definitely leased to Amazon for at least a year. So I mean, I think most of them were there for multiple years. They moved the one in San Francisco from one place to another at one point. But like, the one that's in Amazon right now is in 525 market, which is their primary AWS office in San Francisco. So they're taking advantage of the office space is not being filled by employees. [00:14:14] Speaker A: Yeah, I just thought it was like, I didn't realize you could walk in like and get. I thought it was just for those events. That's pretty crazy. That's cool. They should do that more. [00:14:24] Speaker B: They should. I think they're. I think Google and Azure should get on it too, because I think it's nice to be able to go someplace and get, you know, a talk to people who are trying to do the same thing you're trying to do. And number two, if they don't know, then you can ask the expertise who's there and then he can get the answer for you because they're the experts and they have access to the product managers and different things. And so the Amazon pop up loft was a lot of fun, especially when I worked in the city, because it wasn't that far from my office. And so you could go over there after work, they had a beer. You could talk to a person about what you were trying to figure out all day at work that wasn't working. Like, I'm trying to do this thing and it doesn't work and what am I doing wrong? And they'd be like, oh, you need to do this or that. It was handy. Amazon MSK Replicator now supports a new configuration that enables you to preserve original Kafka topic names while replicating streaming data across Amazon. Managed streaming for Kafka clusters. Amazon MSK Replicator is a feature of Amazon MSK that lets you reliably replicate data across MSK clusters, the same or different regions with just a few clicks. And the fact that you couldn't use the same topic name between clusters in different regions, that's bad. [00:15:25] Speaker A: Yeah. [00:15:26] Speaker B: So glad they fixed this particular problem. Don't know how you missed the memo on that because that's the number one use case of Miramaker between different regions. [00:15:34] Speaker C: I just shut down a little bit when you said Miramaker. Cried a little bit on the inside. [00:15:40] Speaker A: Yeah, this is one of those things that they announced it and you're like, oh, you couldn't use them. So, you know, I'm sure people have just been working around this with, like, application config based off of, you know, where the workload's hosted. [00:15:52] Speaker B: Yeah, you like, do a lookup. Like if you're looking into the AZ region, do az dash topic name or topic name? Dash az. I'm sure people had terrible workarounds for this. [00:16:00] Speaker C: I've done that work around so many times with so many different, like, bucket names and instance names and everything to make things more cross region and not hard coded and stuff. [00:16:11] Speaker B: When you realize the bucket, the bucket namespace is global and not just your company, it does, does burn you pretty quickly. [00:16:18] Speaker C: Luckily, there's very few services that are global like that. [00:16:22] Speaker B: Yeah, but, yeah, yeah, you're not going to get away with a bucket called public. That's not going to happen for you. [00:16:29] Speaker C: I kind of want to try that now. Like, try, like, bucket one. Nice public bucket one. [00:16:35] Speaker A: Just do it in a for loop and increment one. See? See, what number is the first one? [00:16:40] Speaker B: If you had really good bucket names, you think you could auction them off, like, sell them, like, domain names, coordinate. [00:16:45] Speaker C: Deletion, and just tell them to retry every minute. You know, most of the time you can get them back within, like, a couple minutes. But I have seen it take up to, like, a day for it to reallocate the name. [00:16:56] Speaker B: Yeah, maybe you could just support the transfer. Different account. Might be easier, safer. [00:17:00] Speaker C: Well, then you end up with the issue that the guy had a few months ago where he used the name that everybody uses and got charged. [00:17:06] Speaker B: That's right. The guy who. [00:17:08] Speaker C: Yeah, charged the boatload. [00:17:10] Speaker B: AWS is announcing that EKS is now supported in Amazon Sagemaker hyperpods. This purpose built infrastructure is engineering with resilience at its core for foundational model development. This allows customers to orchestrate hyperpod clusters using eks, combining the power of kubernetes with hyperpods resilient environment designed for training those large models. This was announced, apparently in two years ago at re invent. And I went in one year and out the other because I did not remember what hyperpods were. Saved my life. But it's great that you built something super resilient for foundation model development. Couldn't build that for something else. Huh. [00:17:46] Speaker A: It's interesting because, yeah, you know, historically, these. These type of jobs haven't been really designed with resilience. Right. It's so. It's like you could have a failure, then you have to restart a job or a series of jobs. It's gonna take hours to complete. So it is kind of nice to see this, but it is kind of funny. I mean, although I. I hope that this was caused by a whole bunch of customer interruption. Sounds bad. [00:18:14] Speaker C: Where are they going to run? [00:18:16] Speaker B: Spot your foundational model and spot what could go wrong. [00:18:20] Speaker C: What could go wrong? [00:18:22] Speaker A: Yeah, perfect. [00:18:24] Speaker B: Right. Let's move on to GCP. First up, Google was apparently named a leader in the Forester wave, which is cool. And I was looking at this how they. What Forester had to say about them, and then I noticed the number one leader in this Forester wave is Palantir. And I had to ask the question, should we be concerned that Palantir is number one? According to the Forrester wave, Palantir apparently has one of the strongest offerings in the AI ML, space vision and roadmap to create a platform that brings together humans and machines and a joint decision making model, which sounds bad. Box. [00:19:01] Speaker A: It's more confusing to me because I just don't hear about them ever. And so it's in the AI ML space. Like, that's even crazy to me. [00:19:10] Speaker B: Like, maybe it's because we're not politically aligned to their customer base. Like maybe. Maybe. I don't know. [00:19:17] Speaker A: I. Yeah, it could also be that they're the ones paying for this, you know, because Gardner is sort of. [00:19:22] Speaker B: This is Forrester, though. [00:19:23] Speaker A: Forester or, sorry, Forrester. Oh, you're right. This is Forrester, not Gardner. I'm confused. [00:19:27] Speaker B: Yeah, I mean, you pay for Forester too, so you're right. In context. [00:19:30] Speaker A: Yeah. [00:19:32] Speaker C: Who's c three AI? [00:19:36] Speaker B: C three AI. [00:19:37] Speaker C: They're in the leader group too, and I've never even heard of them. [00:19:40] Speaker B: Yeah, at least I've heard of them. Google is the leader. Databricks in the leader, and SAS was all the leader. And then strong performers. IBM do take you aws. Microsoft and Datarobot. And then contenders, Cloudera, Altair, domino Data, h two a h two o AI, which I've never heard of. And so, yeah, this is a bunch of companies I never heard of in a lot of ways, but in the area, Google, which is what we're talking about, apparently Google is the best positioned hyperscaler for AI. Take that, Azure. Google Vertex. AI is thoughtfully designed to simplify access to Google's portfolio of AI infrastructure. At planet scale AI models and complementary data services, the company continues to outpace competitors and AI innovation, especially in Gen AI. No one tell Gemini that. And has a strong roadmap to expand tooling for multirole AI teams. So, interesting. Check that out if you want to. There's a link to a reprint of the entire forester wave if you want to read all the details, which we cannot talk about here because they're copyrighted. But good to know that there's a leader quadrant for this. [00:20:40] Speaker C: Okay, so C three AI, I haven't actually done any research, but their stock symbol is just AI. I think they win. They winden, like, game over. Everyone else should just not be in leaderboard. [00:20:52] Speaker B: I'm like, what did the ipO? C three AI hasn't been that big, so we're that long. December 2020. Wow, that's. That's early. That's an early investor right there. If you got into that one, if. [00:21:09] Speaker C: You look at it, they IPO'D, and we're at like 150 now. They're at like twenties to 30. [00:21:14] Speaker B: So maybe you lost your shorts. [00:21:16] Speaker C: Yeah, not really the best time to invest in them. [00:21:19] Speaker B: They opened at 42 a share, so on their IPO, and it looks like. [00:21:24] Speaker C: For a couple of months they were in like the 150 ish range, 130 range, and then just downhill after. [00:21:32] Speaker A: Perfect. [00:21:33] Speaker B: That's great. Again, I don't know anything they do, so apparently it doesn't hit the market. Google Cloud is extending their own open platform with the preview of Bigquery's new integration with anthropic cloud models on vertex Aihdeendeh that connects to your data in bigquery with powerful intelligence capabilities of cloud models. Bigquery's integration with anthropic cloud models allows organization reimagined data driven decision making and boost productivity across a variety of tasks, including things like analyzing log data, which is the worst use case of AI marketing optimization, document summarization and content localization. And if Jonathan was here not sleeping and napping, he would tell you that cloud's pretty darn good. And so this is actually pretty nice to get an alternative that's pretty decent to Gemini to give you some additional bigquery options for your summarization and advanced log analytics, apparently. [00:22:20] Speaker A: Yeah, he's been talking that up quite a bit, and we were going over a couple of things that he's been playing with just at his house, and it's pretty great, some of the results he's getting and some of the things that he's finding new and interesting ways to use it on, which is kind of neat. So it's definitely something I definitely look forward to trying and I keep getting inspired by some of that and then my laziness takes over. [00:22:47] Speaker B: You mean your other thousand things you have to do? [00:22:50] Speaker C: Yeah, laziness, 10,000 things doing roughly the same. [00:22:55] Speaker B: Yeah, well, it's the laziness of hobby projects. It's the like, we're all good workers, we just, we have no time because we're, you know, to do any of the fun, lazy projects we'd like to do. [00:23:05] Speaker A: I have eleven in flight projects and I started writing them down because I was realizing that I can't start one without finishing another one and it was just getting out of control. And so eleven is a lot. [00:23:18] Speaker B: My favorite Jason Calicanis quote, starting as easy finishing is hard. All right, for those of you who are using GCP observability tooling, they're introducing log scopes for cloud logging, a significant advancement in managing analyzing your orgs logs, so you don't need that fancy cloth thing. Log scopes are a named collection of logs of interest within the same or different projects. There are groups of log views that can control and grant functions to a subset of logs in a log bucket and combined with the metric scopes. Log scopes let you define a set of correlated telemetry for your application that can then be used for fast troubleshooting or referencing of insights. Some example use cases to help you understand how to use this. Correlating metrics of logs from the same application when an organization uses a centralized log storage architecture, or use case number two, correlating metrics logs for isolated environments such as development, staging and production across various projects. [00:24:09] Speaker A: Yeah, that second one is the one I'm most interested in just because it's, you know, for all kinds of reasons, we've separated workloads out and put them into different projects and for last radius and security concerns and all those things. But it becomes much more challenging to sort of correlate a transaction through many, many different services spread out through multiple projects. And so there's sort of two ways you tackle that. One is reconsolidate all the logs together and that can get expensive and sort of generate this condition where you're sorting through a whole bunch of noise. Or it's like you just look it up everywhere and you sort of manually construct it back together, which just doesn't work and no one does. That's what we used to do when all the logs were on server hard disks. So this is really neat to be able to sort of, sort of tag them all together really, and then sort of search on them from that tag, which I think is pretty neat. [00:25:09] Speaker C: Or you could go the terrible route and use elasticsearch. How you felt that, right? [00:25:13] Speaker B: You had to bring an athlete. [00:25:14] Speaker C: I had to do it. It was too easy. [00:25:16] Speaker A: It's just me really. It's just. Why would you do that? Like we were having a good time. [00:25:22] Speaker C: I'm sad because we started so late. That noise. [00:25:30] Speaker B: All right, move on to happier things. Google is enhancing Google Cloud backup NDR services with some new capabilities this week. First, there's a new backup vault storage feature which delivers immutable and indelible backup, securing your backups against tampering and unauthorized deletion, which is basically word for don't let your hackers delete your backups. A centralized backup management experience which delivers a fully managed end to end solution, making data protection effortless and supporting direct integration into resource management flows. And finally integration with compute engine vm creation experience, empowering application owners to apply backup policies when vms are initially created. All good quality of life improvements. [00:26:09] Speaker A: The backup policy specifically when vms are created is definitely something that, you know, I would like to see more features in that direction. Typically your backup policies tend to be just a big hammer approach to everything and that gets expensive and difficult to manage. If you have to audit on the compliance of your backups then it becomes more complicated there. And so separating that out to where each application business owner can distinguish what their policy is and what their application actually needs I think is great. [00:26:42] Speaker C: And even the first item about like the storage and everything, you know, AWS came out with a few weeks ago and similar model but it really just helps a lot of the compliance and audit and also trusting that, you know, your backups are there, they're not going to be touched and you are going to have them. So in an event that somebody does hose your entire account this still will exist. [00:27:02] Speaker A: I'm still very afraid of using the indelible backups just because like what happens when I screw it up and I always screw it up like, and I always have to start over? [00:27:12] Speaker B: Well I mean looking at the, looking at the screenshots of the feature it looks like you can indelible, in fact you can say I don't want to be able to delete them for 14 days or 365 days, whatever the number is. So just wait till, you know, after the number of days you set and then you can delete it. [00:27:26] Speaker A: I don't know if you've met me but remembering to do things 14 days later, that's not going to fit. [00:27:30] Speaker B: Yeah, that's true. Eleven projects. Eleven projects you said earlier. [00:27:34] Speaker A: Yeah, exactly. That's down from like 20 some. [00:27:38] Speaker C: I feel like we need to give you like ten postus notes or something and just say these are the only post notes you can have. You can erase items on it, can't add more. [00:27:47] Speaker B: We try to get them in just three projects but that never work out either. Not because of him, because of me, but that's fine. All right, Azure. For those of you using the ClI, Azure is about to break something for you. With version 2.64 of the Azure Cli, the base Linux division of Azure Cli is now going to be Azure Linux and no longer Alpine Linux. There's no impact to your AZ commands, but shell commands specific to Alpine will now break things like APK which you might have needed to install modules, and GitHub actions that use specific alpine components or commands. You also have to trust that Microsoft Azure Linux is secure and as great as Alpine which I have serious doubts about. [00:28:29] Speaker A: Yeah, that's like the whole reason Alpine exists is because it's so lightweight and easier to manage so that then you use Apk to tailor it exactly what you need. Doesn't have it otherwise. And I'm not familiar with Azure Linux but I doubt it. Really doubt it. [00:28:50] Speaker B: Yeah it's basically, it's a slim version of Linux as well. So it is like similar, it is very similar to Alpine. But again you have to change your GitHub actions or you have to go change your commands. You run in the shell script to non alpine ones now and it breaks at this 264 version. So I'm not sure how the upgrade of that happens if that's something I have to manually do. But I try to pin that version to 263 so I never have to deal with Azure C. This new Linux version. [00:29:16] Speaker C: Yeah, I mean I just don't understand why they tried to, you know, build their own, another version of their own lakes. They already had their version. Like let's use what the community has versus build our own in this case. [00:29:28] Speaker B: Oh I can tell you why. It's a supply chain problem. It's how they, you know, how do you tell the government that you're sure that nothing in your, in your Linux operating system is compromised by a third party nation state? And the answer is. Well we own all of the source and we build our own version of Linux from that source and we review it all and that's how you solve this problem. [00:29:46] Speaker A: I mean there's so many ways that you have to validate that. With all the salsa, three checks for all the upstream dependencies. [00:29:54] Speaker B: You don't think Microsoft could do that? [00:29:55] Speaker A: I bet could do it or does do it? [00:29:59] Speaker B: No, two different questions. I think they could do it. [00:30:03] Speaker A: I think everyone could do it. But it's hard and annoying. [00:30:07] Speaker C: Do you trust them to do it? Third question. [00:30:10] Speaker A: Well I think it's more about their trust, right? Like they're not trying. It's nothing really the consumer trust because it's the same sort of problem if you're running on Alpine. It's more of what Azure can attest to. [00:30:22] Speaker B: Isn't red hat have a small version of Linux too? [00:30:26] Speaker A: Yeah, most of them do now because Alpine got so popular. Right. And then Docker. [00:30:31] Speaker B: Alpine's based on Ubuntu or Debian. I don't. [00:30:35] Speaker A: Yeah it's been a while but I think it's Debian. [00:30:39] Speaker B: I think it's Debian too. Well good luck to you Matt, as they break your Cli. Look forward to hear complaints on that. [00:30:47] Speaker C: It's not the first thing they're gonna break. Alpine's based on Gentoo. [00:30:51] Speaker B: Oh, Gentoo, really? [00:30:53] Speaker C: I like Gen two. I'm a little bit of a masculine, but I like Gen two. It was the first Linux distro I learned. You want to talk about learning Linux the hard way? Oh, that is, yeah. My first internship, I was given a computer. It said, go install Gen two from a stage three bundle. Good luck. I learned a lot real fast. [00:31:15] Speaker B: I think my first one was Linux mandrake. I had a bottle of CD at Fry's. [00:31:20] Speaker A: It's cool. [00:31:21] Speaker B: Brought it home, installed a bunch of stuff. Didn't work. Had to figure it out, hack the kernel and make my drivers work. It's terrible. [00:31:29] Speaker C: Welcome to Gen two. If you're really, if you don't use Jenny Pearl. [00:31:33] Speaker B: I basically all Linux back in the nineties. Washington was difficult. [00:31:37] Speaker C: Yes. [00:31:38] Speaker B: Unless you had a computer perfectly built with exact specifications. You're going to run Linux on it with drivers like you were your host. [00:31:46] Speaker A: You're going to have to put some sweat equity into that for sure. [00:31:50] Speaker B: It's good that mistakes were hiring people like, hey, you think you're good at computers? Good. Get Linux working on this box that has uncompatible hardware. Good luck to you. [00:31:57] Speaker A: And the only passing answer is no. [00:32:00] Speaker B: Yeah. [00:32:04] Speaker C: It was just installing all the flags on Gen two. And if you missed a flag, you to recompile everything. And then, God forbid you wanted to recapile open, what was it? Open office. Back in the day before they became Libra office that took at least 12 hours by itself. So you just kissed everything? You were doing it by tool. [00:32:25] Speaker B: Good productivity play. Well done. Gen two. [00:32:27] Speaker C: Yes, it's compiling. I'm just thinking of the XKCD. [00:32:31] Speaker A: You guys got a lot farther than I did. I was just trying to do like, Wi Fi Nick drivers. That was where I got stuck. [00:32:38] Speaker B: Yeah, Wi Fi Nick drivers is a pretty common one. Uh, the other one that video car drivers typically get. You messed up with X and X windows, that was always a disaster. That's why I learned to love the command line and VM and vim. I'm like, screw the UE. No one needs that crap. [00:32:51] Speaker C: No one needs it. Why do you want to try to figure out how to make x work? Oh, you want two screensh really? Good luck now. [00:32:58] Speaker B: Yeah, screw that noise. I had to figure out the mathematics of my screen. Like how many pixels wide. [00:33:04] Speaker A: Yeah. Nope. [00:33:06] Speaker B: All right, let's move on to other things. Azure AI content safety has two announcements for us this week. First is the general ability of prompt shields in the Azure AI content safety and Azure OpenAI service, which was introduced in 2024. Prompt shields seamlessly integrate with Azure OpenAI service content filters and are available in Azure AI content safety, providing a robust defense against different types of prompt injection attacks and by leveraging advanced machine learning algorithms and natural language processing, prompt shields effectively identify and mitigate potential threats. User prompts and third party data and the second announcement this week is the inclusion of a filter that detects potentially violated copyrighted data. Many customers and end users are apprehensive about the risk of ip infringement from AI. And to address this, the feature specifically targets model completions and scans for matches against an index of third party text content to detect the usage of third party text content, including songs, news articles, recipes, and selected web content, which is appreciated. But like, man, every day so much content gets created on the web. I don't know how you keep that up to date. [00:34:09] Speaker A: Oh, I mean, it's not really for its accuracy. It's about the mitigation of risk when you get sued because like you can say, well, I tried, I turned all. [00:34:19] Speaker C: The checkbox and that's a lot of what this is, even the first one, the prompt injection, you know, as my day job has added some LLM features to our product or features are based on LLM, there's a lot of questions from internal people, external people, clients, et cetera, that are all wondering how are we securing, what are we doing around this? To make sure that, you know, we have proper controls in place so be able to then point your finger. Saying we're using all the Microsoft features that they have, you know, is going to be a really nice way to say we're doing everything we can. It's evolving so fast, there's only so much we can do type of thing. [00:35:02] Speaker A: Yeah, no, I mean, I do think that these types of features where it's built in, like, are going to be, it'll be in every product eventually, because I do think it's necessary. Right. It's, it'll be like, you know, managed web, web access, firewall rules, right? You can write your own rules, but then you can also subscribe and sort of use the default ones and which do catch a ton of stuff. And so like, you think about, you know, the early days of AI when everyone was trying to get out of the AI sort of model, like, you. [00:35:32] Speaker B: Know, everyone was trying to break the model. [00:35:34] Speaker A: Yeah, break the model. Or, you know, like get it to tell you data that it shouldn't have and that kind of thing. And so like having this sort of be click box and just a configuration item is great, but it doesn't, you know, it doesn't replace security, it just, it's a, an augmented accelerator, it's an add on. [00:35:51] Speaker C: You still have to do your general due diligence to try to make it protect it yourself. And this is a good add on. It's a belt and suspenders model. And pricing of it's not great, but not horrible. [00:36:04] Speaker B: Yeah, I mean, Cloudflare, we talked about when they announced their similar prompt shield capability in their solution. So it's good that this is coming into the cloud native side too. But again, you saw the Cloudflare option too. If you need multiple providers and multiple AI's are using the backend that can. [00:36:19] Speaker A: Filter all of that. [00:36:21] Speaker B: Microsoft has released the third version of the m series. Powered by fourth generation Intel Xeon processors or Sapphire Rapids across the board, these high memory VMS give customers faster insights, more uptime, lower total cost ownership, and improved horizon performance for the most demanded workloads. And what were those? You might ask? SAP HAna, of course, because who else needs that much? Memory systems can scale workloads from six terabytes to 16 terabytes of memory with up to 40% throughput over the mv. Two high memory, the old one, you get 416 vcpus with six terabytes of memory and a max of 64 data disks for the smallest. And the largest configuration includes 832 vcpus and 16 terabytes of memory. Again, all that spot instance capacity right there for you. [00:37:01] Speaker A: Yeah, exactly. [00:37:05] Speaker C: One day I'm just going to launch one of these 30 terabyte servers just for fun. [00:37:13] Speaker B: Interest your last day? [00:37:15] Speaker A: Yeah, it's going to be like, it would be so anti climatic because it'll just run and, you know, no lights will dim, there's been like noises and then you'll try to use it and it'll just work and you'll be like. Because, yeah, getting a workload that can stress test this is, is hard, right? Like it's, you can build stuff for sure. SAP has done a great job, but uh, yeah, I, it's annoying. [00:37:40] Speaker C: I just so remember when the X one instances were launched, I think they were like two terabytes or something. And I was like, oh my God, and now this is like 30 terabytes of memory. It's like its own level of crazy. [00:37:53] Speaker A: I mean, especially since I spent the last like ten years really chasing developer teams going, you don't really need 8gb. [00:38:00] Speaker C: Yeah, until you launch windows, but you. Besides that, you don't really need 8gb. Please don't use a gigabyte. [00:38:08] Speaker B: All right, I have Oracle stories this week. Oracle, I know, I know. Oracle says that now is an exciting time to be developing AI and ML solutions. With investors and customers expecting AI and ML innovation at a dizzying pace, companies struggle moving from AI proof of concepts to production workloads, with the issue quite often being the efficient handling and preparing of massive amounts of data, a critical step that bottlenecks everything else in the development process. So Oracle is pleased to share breakthrough technologies like scream on OCI to improve the outcomes of outcomes. By transforming legacy processes, by accelerating the data preparation and reducing development cycles by over 90%, these advancements organizations can streamline their data workflows and expedite AI deployments, ultimately enabling them to achieve the strategic objectives more effectively. They talk about data preparation being something very labor intensive, but manual processes are time consuming, prone to errors, and often require multiple iterations, from manual scripting for data collection to painstaking efforts and data cleaning, and complex custom scripting for integrating and transferring disparate datasets. I clearly have seen my data model. Manual processes can lead to significant delays. So scream on OCI dramatically impacts these tasks, streamlining and on the process by leveraging GPU accelerated technology, as well as giving data scientists ability to quickly experiment with different feature sets and validate their effectiveness faster. Scream on OCI revolutionizes your team dynamics by enhancing collaboration, boosting morale and productivity, and optimizing human resource allocation, which I love to be considered a human resource allocation. Really appreciate that. Scream also optimizes your hardware utilization, leading to reduced operational costs. [00:39:41] Speaker A: We should have you read that again. And every time you say scream we. [00:39:43] Speaker B: Go ah, yeah, how do you say it? SQ, like SQL and then ream. So scream. [00:39:50] Speaker C: I think it's scream. Yeah, I think I got it. [00:39:52] Speaker B: Yeah. [00:39:52] Speaker C: I just really struggled to take this article seriously because every time you said scream about, you just wanted my head and wanted to go, oh yeah. [00:40:01] Speaker A: I also think that every one of their claims is complete nonsense, like 100% because it's Oracle and it's like there's no way. [00:40:09] Speaker B: Sometimes you choose a topic for the, you know, you choose a show topic on here, you just have the scream opportunity for the show title of the show. So, yeah, I get it. Yeah, it was, yeah, it's a lot. Buzzword bingo. [00:40:19] Speaker A: But, well, I mean, it's just they, I so like so many of their press releases in Oracle, like they're, you know, they're unbreakable Linux, they go. They have a very, I don't know, theme about them where it's like there's, and the promise is always there and then it never comes through. Like, I just, I've never really been a big fan of the way that they advertise their service, the way they talk about their service and then the reality of using them is sort of lackluster. [00:40:46] Speaker C: 90%, Ryan, over 90%. You read the rest of it, just over. Anything over 90% just feels not accurate. [00:40:54] Speaker A: These labor intensive manual processes that are all time consuming and prone to errors. Yeah. You just make an application magically do that. No. [00:41:03] Speaker B: All these things built an Oracle database. I mean, this is a way to get your data out of Oracle database into other ML things faster. That's a whole. [00:41:10] Speaker A: Is. [00:41:10] Speaker B: Yeah, but that makes sense. The thing about Oracle is everything goes back to the Oracle database somewhere. So they have to somehow get it back to Oracle so they can get more licensing out of you. That's the key. Well, open world is actually happening this week and they dropped a ton of announcements today that I said we're not going to cover those this week because I don't have that kind of patience to go through them. I appreciate that, but sometimes a story is so important that we must talk about it immediately. And that story happens in todaydegh, folks. Hell has not frozen over, nor are pigs flying, but Oracle and AWS have announced the launch of Oracle database at AWS, a new offering that allows customers to access Oracle autonomous database services within AWS cloud. Oracle database at AWS will provide customers with a unified experience between OCI and AWS, offering a simplified database administration, billing and unified customer support case system. Additional customers will be able to seamlessly connect enterprise data in their Oracle database to apps running on EC two AWS analytical services or AI ML services, including bedrock direct access to Oracle X data database services on AWS, including Oracle autonomous database on dedicated infrastructure and workloads running on RAC clusters. Oracle database AWS allows customers to bring together all their enterprise data to break through innovations which I am kind of, you know, like, I guess the days of Jassy poking the finger at Oracle on stage are over officially now, I do have a couple quotes here, one from Larry Ellison and one from Matt Garmin, CEO at AWS. Larry says, we are seeing huge demand for customers that want to use multiple clouds to meet this demand and give customers the choice and flexibility they want. Amazon and Oracle are seamlessly connecting AWS services with the very latest Oracle database technology, including the Oracle autonomous database. With Oracle cloud infrastructure deployed inside of AWS data centers. We can provide customers with the best possible database and network performance. And Matt Garmin had to say as far back as 2008, customers could run their Oracle workloads in the cloud. And since then, many of the world's largest and most security sensitive organizations have chosen to deploy their Oracle software on the this new, deeper partnership will provide Oracle database services within AWS to allow customers to take advantage of the flexibility, reliability and scalability of the world's most widely adopted cloud alongside enterprise software that they rely on. And that was in the Oracle press release, by the way, that they got away with saying that is impressive. Customers can also benefit from other things from the Oracle database at AWS offering, including zero ETL integration between the Oracle database and AWS analytics services, flexible options to simplify and accelerate migrating your Oracle database to the cloud. Simplify procurement experience via the AWS marketplace that enables customers to purchase Oracle database services directly through the marketplace and including bring your own licensing and discount programs such as Oracle support rewards, a fully unified support experience for both AWS and Oracle, as well as guidance through reference architectures and seamless integration with the Amazon simple storage service. S three for an easy and secure way to perform database backups and restores and to aid with your doctor. [00:43:56] Speaker A: Wow. [00:43:57] Speaker C: After these features already existed between just rds, Oracle and AWS, I feel like. [00:44:03] Speaker B: That'S true, they did. [00:44:04] Speaker C: And the other half just use a good way to kill all your EDP pricing, EDP that you have to finish by the end of the year. [00:44:11] Speaker A: Well, I assume that contract that Amazon had made with Oracle back and forever ago had just expired. So that's what's driving this or was like, because this is a, like, you know, why would AWS do this? [00:44:24] Speaker B: First question, I mean, it's a rev share situation. They get, you know, you get basically customers run AWS. You want to run their Oracle workloads beyond just the Oracle database. And that could be people using Oracle financials or marketo or other products they own, not marketing product. But you know, they can now get access to those things because they're now on a supported version of Oracle, because Oracle didn't necessarily certify those products on Oracle rds. So that's what you're getting here is basically you're getting a fully supported Oracle offering if you're buying other Oracle application software things and Oracle is going to take care of it for you, which is even better. [00:44:57] Speaker A: Oh, don't get me wrong, I understand why customers are going to love this, right? Because this has been a challenge because so many services are backed by Oracle databases and support is always a nightmare if you're not running on supported thing and then even if you are running your, your Oracle workload in OCi, you're trying to use AWS services for it, you're paying for egress and all this, all this craziness. And so this is fantastic, you know, for the, for the customer set. I'm just still like good on Amazon because they're, you know, they have everything to lose in this and, you know, hopefully, I don't think it's going to be like a revenue gaining. They are going to make customers happy, which is cool. [00:45:36] Speaker B: Very good. All right, that's it for this week. Like I mentioned, next week, tune in for a bunch of oracle stuff. I assume this morning they dropped like 35 announcements and I was like, yeah, that's not happening on record day. And then I assume they'll have a bunch more tomorrow and Wednesday before Oracle Open world closes out this week at reinvent or, sorry, this week there's actually a 01:00 interview with Laurie and Matt on stage. So I'm going to check that out for next week and then I'll be able to tell you what they said on stage and if they played nice or nothing. Be interesting to see. [00:46:08] Speaker C: I feel like it's got to be pre written and they just have to like recite what they were told. [00:46:13] Speaker A: Yeah, yeah. I'm sure it'll be just fine. There's no surprises on stage. [00:46:17] Speaker B: Yeah, it's like, it's a fireside chat, right? So I mean, they can say whatever they want to, right? [00:46:21] Speaker A: Uh huh. Sure. [00:46:22] Speaker B: That's what I believe. [00:46:24] Speaker A: Yeah. [00:46:24] Speaker B: I also could sell you oil from country. Anyways, that's it. I talk to you guys next week here at the cloud podest. [00:46:33] Speaker A: Bye bye, everybody. [00:46:38] Speaker B: And that is the week in cloud. Check out our website, the home of the cloud pod, where you can join our newsletter slack team. Send feedback or ask [email protected]. or tweet us with the hashtag hash poundthecloudpod.

Other Episodes

Episode 160

April 15, 2022 00:50:59
Episode Cover

160: The Cloud Pod Goes Fishing on Google BigLake

Google Biglake takes the feature of the week with the ability to federate data from multiple data lakes. On The Cloud Pod this week,...

Listen

Episode 75

June 17, 2020 00:38:20
Episode Cover

Episode 75: The Cloud Pod Deletes Everything (But Keeps Copies)

Your co-hosts announce parity with the leading cloud-computing podcast hosts on this week’s episode of The Cloud Pod. A big thanks to this week’s...

Listen

Episode 117

May 20, 2021 00:44:55
Episode Cover

117: Justin is out, Peter’s distracted by his parents, Jonathan is just British and Ryan is probably tipsy…. But we had one job and we’re recording!

This week on The Cloud Pod, Justin is away so the rest of the team has taken the opportunity to throw him under the...

Listen