[00:00:07] Speaker A: Welcome to the cloud pod, where the forecast is always cloudy. We talk weekly about all things aws, gcp, and azure.
[00:00:14] Speaker B: We are your hosts, Justin, Jonathan, Ryan, and Matthew.
[00:00:18] Speaker C: Episode 294, recorded for the week of February 25, 2025. Ding, Chime is dead.
[00:00:25] Speaker B: Yes.
[00:00:26] Speaker C: Good evening, Matt and Ryan.
[00:00:28] Speaker D: Sorry, I get too excited by the show title.
[00:00:30] Speaker B: Yeah.
[00:00:30] Speaker D: Good evening, good morning, good afternoon, and good night.
[00:00:33] Speaker C: Yeah, it's morning for me because I'm in beautiful Bangalore, so we'll see how this works out. That's the first time I've ever recorded the show from India, and Internet in India normally works. Okay, we'll see if it works out this time.
Yeah. Yeah. While you guys are ready to go to bed and have a drink, I have to go to an office, so I can't drink today, which is just a bummer.
[00:00:53] Speaker B: Lame.
[00:00:55] Speaker C: Lame.
[00:00:57] Speaker B: But you could.
[00:00:58] Speaker C: All right.
I mean, there's nothing that stops me personally, you know, but.
Although I do think they locked the bar at the hotel, so I don't know if they even have access until later, but. And there's no mini bar in this room.
[00:01:13] Speaker B: Yeah.
[00:01:13] Speaker C: Sadness all around.
[00:01:14] Speaker B: Yeah.
[00:01:16] Speaker C: I already gave my. I already brought booze for my manager here, and I already gave it to him, so I have that booze. It's all bad.
[00:01:24] Speaker D: You really have to plan better.
[00:01:25] Speaker C: I know, I know. Yeah. First time recording from India, because I tried to do it the last time I came to India and I slept through the alarm, and you guys recorded without me, as you should have.
[00:01:36] Speaker D: Which is impressive. Let's start with that. We actually recorded without you.
[00:01:41] Speaker C: And you guys have gotten much better at it. I mean, I used to make a lot of fun of you guys about it in the early days, but now it's because.
[00:01:47] Speaker B: Matt corrals us. It really is.
[00:01:49] Speaker C: That's really what it is. When it was Peter, Jonathan, and Ryan, there was no hope whatsoever. And then we brought Matt in, and Matt, you know, has principles and, you know, cares about the executive functioning of the show. Yeah, exactly.
[00:02:04] Speaker D: Well, now I just keep messaging them, and at one point, I'm just gonna write a bot that sends Ryan a text message every five minutes. Hey, are you coming? Hey, are you coming? Hey, you're coming.
[00:02:13] Speaker C: Yeah. That is. Yeah. Pinging them in our Slack channel, saying, like, hey, are we recording tonight? And then you hear nothing for 12 hours. You're like, I know, they're off on some other efforts. They're just not answering me.
And then we had recording times with Matt, and I start texting people or.
[00:02:31] Speaker D: Calling. Welcome to how the sausage is made, everyone.
[00:02:35] Speaker C: Yeah, exactly. It's fun, but it's always good to talk to you guys. I always like talking about tech with the three of us. Well, four of us when Jonathan's here, but. All right, well, we do have some tech to talk about, so let's get into it. First up, AI is going great.
So basically it's been a little while since we checked in with our former OpenAI employees who've left to build their own startups, and there were a couple of articles this week. Ilya Sutskever's startup is in talks to raise financing at a $30 billion valuation. That's a lot of money.
And so he basically has a startup called Safe Superintelligence. They want to raise a billion dollars in a round that could value the startup at $30 billion. The company has yet to release a product, so, you know, buy on the rumor. But based on the name, I would say that they're probably working on something around superintelligence, perhaps, or AGI, if you would. And I imagine we'll be hearing more about that probably later this year. They're still going to need funding before they launch, because it's probably going to need a lot of GPUs.
[00:03:38] Speaker B: It's so nuts to me they can raise that much when it's just an idea. It doesn't have to have any proof or any kind of POC. Like, it's crazy.
[00:03:50] Speaker C: I mean, you're buying on your hope that Ilya is a genius and can deliver something amazing. But yeah, it does feel a little dot-com-y, like, you know, this company makes no money, but we're going to give you a ton of money on a dream that we can deliver groceries to people over the Internet.
[00:04:11] Speaker B: Exactly. I mean, I, I get some amount of funding.
[00:04:15] Speaker C: Right?
[00:04:15] Speaker B: Like, I get that 30 billion.
[00:04:17] Speaker D: Yeah, no, it's, it's only, it's only a.
[00:04:20] Speaker C: Valued at 30.
[00:04:21] Speaker B: It's. Yeah, but still a billion.
[00:04:22] Speaker D: One billion.
[00:04:23] Speaker B: Yeah, sorry, one.
[00:04:24] Speaker D: Well, that's what's crazy. I mean, the problem is, is the. You know, even with the cloud and everything, you know, the whole point and the advantage of the cloud years ago was like the barrier to entry for a new company to start and to scale was low. Right. We spin up what we need and we kind of go from here. And now we're back to the old model of like, well, we need a whole data center in order to run these things. So we need, you know, a billion dollars in GPUs in order to, to go start a business. And it's like you kind of, you know, pendulum is swung back to okay, we need a large investment to start which is like kind of anti cloud but you know, is what it is.
[00:05:00] Speaker C: That's literally what the dot-com era was. We have these really good ideas for these disruptive things, but it's going to take a lot of money and infrastructure, because that's how we built apps in those days and the cloud didn't exist. And so, yeah, you wonder if maybe something similar will happen in the AI world with GPUs at some point, where we come up with better models for how to—
[00:05:19] Speaker B: A new version of scaling.
[00:05:21] Speaker D: Yeah, I feel like they need to get it to be able to train on normal CPUs. But obviously I know that's a lot harder and very far away. But if they could do that, then you could get back into, okay, it's cheaper to start these things.
[00:05:38] Speaker C: Yeah, I mean, you would think it's not really in Nvidia's best interest to make them cheaper to make, because then they'd have to lower the prices and the revenue they're making, even though that would make margins better. So their goal would be to keep the price really high and then make them cheaper to produce so they can make more margin. So it's.
But we'll see. You know, it would be interesting if Intel could ever get their act together, or any other chip manufacturer, to compete with TSMC and some of these others and Nvidia. It would be kind of nice. It's feeling a little bit like a duopoly at this point.
[00:06:13] Speaker D: I feel like ARM is going to have to be where it's going to come out of, you know — some sort of custom-built ARM processor.
[00:06:20] Speaker C: I mean, with Trainium we're seeing the other, you know, stuff that's built custom to do these specific GPU things. So yeah, that is how it will get, you know, taken care of. But still, the GPU processing is not anywhere close to what ARM can do.
[00:06:36] Speaker D: No, but Trainium I felt like maybe I'm wrong. I'm doing this from memory here. Felt like there was like trainium v1 or v2 that was like five years ago and then there was nothing for years. And then like there was like a V2 like last year or something that came out. Like there was a very long gap between.
[00:06:52] Speaker C: I don't think. I mean, I realize why you feel that way, like it was five years, but I don't think it was quite that long. It was like three years ago, and then like 18 months later, Trainium 2 came out. Okay.
[00:07:02] Speaker D: All right, well, my life's timelines are about to get well.
[00:07:05] Speaker C: You're having children. You know, it skews all your sense of time.
[00:07:09] Speaker D: Don't worry, it's all going to get completely skewed again. I'm going to be babbling nonsense in a few months. It'll be fun.
[00:07:14] Speaker B: Yeah.
[00:07:15] Speaker C: Sleep deprivation is a crazy drug.
[00:07:17] Speaker B: It really is.
[00:07:20] Speaker C: All right, well, our other OpenAI hero, Mira Murati, has confirmed the worst-kept secret in AI: that she has a lab she's working on called Thinking Machines Lab, which again I assume makes thinking machine AI, or AGI. Murati has apparently lured away two-thirds of her team from OpenAI to join her in her new venture. And we'll wait to hear about their future funding of, you know, bajillions of dollars, and their product when they launch — sometime next year would be my guess.
[00:07:49] Speaker B: I'm just going to be upset if they can get that domain name. That's an awesome name. I like it.
[00:07:53] Speaker C: Thinking Machines Lab. I mean, yeah, that's a big one. There was a game manufacturer that had a similar name. They made a game called The Incredible Machine back in the 90s. Whoever owns that artifact of a company probably owns that domain. Yeah, well, we talked about it maybe a few weeks ago, that Anthropic was starting to feel like they needed to get some stuff in gear, and they're here with two announcements this week. First is Claude 3.7 Sonnet, their most intelligent model to date and the first hybrid reasoning model on the market. Claude 3.7 Sonnet can produce near-instant responses or extended step-by-step thinking that is made visible to the user. API users also have fine-grained control over how long the model can think for, and Claude 3.7 shows particular strength in coding and front-end web development, which, thank goodness, because I'm a terrible front-end web developer, so I definitely appreciate that.
In addition to the new model, they've introduced a command-line tool for agentic coding called Claude Code. Claude Code is available as a limited research preview and enables developers to delegate substantial engineering tasks directly from the terminal. Although I'd really like this to be in VS Code, folks. The extended thinking model is not available in the free tier of Anthropic subscriptions, but all other paid plans are covered, as well as through your various cloud providers — you can access those, and we'll talk about those when we hit the cloud provider sections. I did download Claude Code yesterday, thinking I was going to try to build something. I then quickly hit a wall saying, oh, we have too many users, you get put on the waitlist. And then while I was sleeping, I got added in. So right as we were doing show notes, I did my first app, which was basically giving it instructions to create me a Golang app with a login page. Then once you're logged in, I'd like to see a list of blog posts. And it said okay, and it did a lot of work and wrote a lot of code and created directory structures for me at the command line, and then produced an application and instructions on how to start my app, which is cool. So, you know, basically navigate to this directory we created it in, do a go mod download to download dependencies, and then go run main.go. And then here's the web URL you should use to access the test app. And then, where it's a little bit cringe, it goes: here's the login credentials for your new app, admin and password. Always good.
And then, yeah, that's what it did. And I didn't give it what blog posts to put in the thing, but it made up a bunch about Go: Getting Started with Go, Web Development in Go, and Building RESTful APIs in Go. And they're not very good articles, but it got the gist of what I wanted, which I was pretty happy with. So I could now take this, I can keep talking with Claude, and I can get through a bunch of questions and things. My only complaint about Claude Code is that it required me to install the poison of Node.js on my laptop, which made me mad.
And so now I have Node.js, which, if I get into JS development again, I'm going to scream. So anyways, Node.js is on my laptop so I can run this Claude tool, and hopefully that doesn't drive me crazy. But there you go.
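For anyone curious what that generated scaffolding roughly looked like, here's a minimal, hypothetical sketch of a Go app with a hard-coded login and a static blog list — not the actual code Claude Code produced, just the general shape of it:

```go
// main.go - a rough, hypothetical sketch of the kind of app Claude Code
// scaffolded: a hard-coded login (admin/password) and a static blog list.
// Not the actual generated code.
package main

import (
	"fmt"
	"html/template"
	"log"
	"net/http"
)

var posts = []string{
	"Getting Started with Go",
	"Web Development in Go",
	"Building RESTful APIs in Go",
}

var loginPage = template.Must(template.New("login").Parse(`
<form method="POST" action="/login">
  <input name="user" placeholder="user">
  <input name="pass" type="password" placeholder="password">
  <button>Log in</button>
</form>`))

var postsPage = template.Must(template.New("posts").Parse(`
<h1>Blog Posts</h1><ul>{{range .}}<li>{{.}}</li>{{end}}</ul>`))

func main() {
	// Login form.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		loginPage.Execute(w, nil)
	})
	// Demo-only credentials, exactly the kind of thing the tool spit out.
	http.HandleFunc("/login", func(w http.ResponseWriter, r *http.Request) {
		if r.FormValue("user") == "admin" && r.FormValue("pass") == "password" {
			http.SetCookie(w, &http.Cookie{Name: "session", Value: "ok"})
			http.Redirect(w, r, "/posts", http.StatusSeeOther)
			return
		}
		http.Error(w, "invalid credentials", http.StatusUnauthorized)
	})
	// Blog list, gated on the toy session cookie.
	http.HandleFunc("/posts", func(w http.ResponseWriter, r *http.Request) {
		if c, err := r.Cookie("session"); err != nil || c.Value != "ok" {
			http.Redirect(w, r, "/", http.StatusSeeOther)
			return
		}
		postsPage.Execute(w, posts)
	})
	fmt.Println("listening on http://localhost:8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Run it with go run main.go and hit http://localhost:8080 — the same flow the generated instructions described.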
[00:10:48] Speaker D: I was about to say: just go to a Docker container with Node.js to do it versus installing it locally.
[00:10:56] Speaker C: Yeah, I was thinking the same, but then I was like, yeah, just brew install, you know, Node.js and hope for the best. And it worked out just fine. I had a similar thought, but then I was like, how much time do I want to spend on this quick experiment? Plus, I wasn't really sure how it would authenticate to Claude to get to the API, and, like, is that going to be a problem inside of a container? And it actually wasn't a big deal. But yeah, I will probably move it to a container so I don't have to do this.
[00:11:24] Speaker B: Pretty cool, you know, experience though. Like, I like that. This is the first I'm hearing about it, and, you know, I'd love to not use a UI or a purpose-built app. And while I would also like VS Code native integration, because I've become completely addicted to that, it's nice to have just sort of a quick, shoot-a-prompt-off, get-an-answer type response. I like it.
[00:11:51] Speaker C: Yeah. I assume this is, again, experimental. There's a waitlist, so when you try using it the first time you'll get put on the waitlist. Although it only took me a day to get off the waitlist, so that's a win.
Yeah, I suspect this is: we're trying it out, we're seeing how it works, and then we'll start building native plugins. But I mean, if you want something built into VS Code, Cline is still fantastic and plugs right into Anthropic and other APIs that you can set it up with. And that's actually how I did the Terraform project I've talked about here on the show in the past — refactoring the Terraform for the website was all through Cline using the Anthropic backend.
[00:12:25] Speaker B: Ah, well, I will check that out.
[00:12:28] Speaker C: Yeah, so that's—
[00:12:29] Speaker D: Yeah, I'll have to use that one.
[00:12:30] Speaker C: Yeah, Cline — that's C-L-I-N-E — is the plugin that I used. And then, yeah, you literally just plug in your API keys, you know, throw 20 bucks at Anthropic to get some tokens, and then, yeah, I was able to do a bunch of stuff before I ran out of tokens at 20 bucks. So yeah, your experience may vary, but AI is great.
Definitely. I can see how it makes good coders better and bad coders worse. Yes.
And your ability to be a good debugger is going to be the make-or-break for you in the AI coding world. Because that's where I think most of the stuff is. Like, okay, it gave me 90%, 95% of what I needed, but it's not quite right. And then you read some weird errors, and you can go back to Anthropic and say, hey, I got this error when I tried to run this part of the app, and Anthropic will help you figure it out. But also, if you can debug it, you can just fix it yourself, because a lot of times it's just very simple things, like, oh, it didn't pass the variable into this part of the application because it forgot — again, Anthropic's breaking these things down into chunks. And because it's doing this chunking of work, it will, you know, lose a dependency or lose the context sometimes, so you have to remind it of those things, is what I've seen typically.
And so I think again, if you can debug, you can see that, oh, very clearly it's missing its dependency because it didn't get inherited properly. And so you can fix it in, like, three seconds because I know how to code, whereas if you don't, you're relying on that back and forth with Claude, hoping that you can figure it out with Claude, which I think is why you can get lazy as a developer quickly.
[00:14:02] Speaker D: Yeah, I still use it a lot for just, like, starting general scripts or getting me, like, you know, 80% of the way there. Particularly since I'm not as familiar with the Azure API still. So I'm like, hey, write me a PowerShell script that, you know, does A, B, C, D. And, you know, I found that just walking it through what the logic is really helps get it to a point. So I've definitely had it be like, hey, grab this variable, make this API call — and then you go to run the code and it just plainly made it up still. So, you know, I still get a good amount of that, but that's kind of the way I've leveraged it a lot in, like, the programming. I haven't done full-on, like, software development — like, you know, go write me a web app around this. I have a project or two in mind that one of my coworkers and I want to do for fun, you know, for the day job. But, you know, I haven't had time to sit down and play with it yet.
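For reference, the "write me a starter script" workflow described here boils down to a single call to the Anthropic Messages API. This is a minimal sketch using plain net/http in Go; the model ID, prompt text, and the ANTHROPIC_API_KEY environment variable are assumptions you'd adjust:

```go
// ask_claude.go - a minimal sketch (not an official SDK example) of hitting
// the Anthropic Messages API to ask for a starter script. The model ID and
// prompt are placeholders; swap in current values.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Build the request body: one user message asking for a script.
	body, _ := json.Marshal(map[string]any{
		"model":      "claude-3-7-sonnet-20250219", // check the current model ID
		"max_tokens": 1024,
		"messages": []map[string]string{
			{"role": "user", "content": "Write me a script that lists unattached disks in my cloud account. Walk through the logic step by step."},
		},
	})

	req, _ := http.NewRequest("POST", "https://api.anthropic.com/v1/messages", bytes.NewReader(body))
	req.Header.Set("x-api-key", os.Getenv("ANTHROPIC_API_KEY"))
	req.Header.Set("anthropic-version", "2023-06-01")
	req.Header.Set("content-type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print the raw JSON response; the generated script lives in the
	// first content block's "text" field.
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```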
[00:14:56] Speaker C: All right, let's move on to aws. And it's time for a funeral, boys.
Amazon has a very cryptic celebration of life. Celebration.
[00:15:06] Speaker D: Yeah, or death. One of the two.
[00:15:10] Speaker C: And a very cryptically titled blog post, "Update on Support for Amazon Chime." They have announced that Amazon has decided to end support for their Amazon Chime service, including business calling features, effective February 20, 2026. And Amazon Chime will no longer accept new customers starting February 19, which was the day this dropped. You can continue to use it as an existing customer for meetings through September 26th, and you can delete your data prior to that day so you don't have to worry about your data privacy. For those of you using the Amazon Chime SDK, which is what powers Slack Huddles, for those of you who use Huddles, that service will not change. So they're going to continue to maintain the SDK and those APIs around the backend service. Amazon provides you a few options to replace Chime, including their own Amazon Wickr service or offerings from AWS partners such as Zoom, Webex and Slack.
[00:15:59] Speaker B: It's funny because all my conversations with AWS are still on Chime. So it's kind of funny to me. But you know, like, we'll see. Sure, that'll change.
[00:16:09] Speaker C: Well, I was reading some chatter in some of the cloud AWS Slack rooms, you know, where you can hang out and talk to people who do Amazon things. And, you know, many of them are saying, like, oh yeah, we have Teams now and we have Zoom and we have Slack. But the default choice for any meeting is still Chime, is what they were saying, until some point in the future. But apparently Zoom is what they're going to move to as the default meeting choice, and then they are still going to encourage them to use Huddles for smaller meetings and standups.
[00:16:39] Speaker B: That makes more sense.
[00:16:40] Speaker D: I was surprised at how short of a timeline this was, because I feel like CodeCommit and some of the other ones were multi-year — and maybe that's just from memory — but, like, one year, if you're fully integrated into the solution, doesn't feel like a long time to migrate as a business. Or, you know, no one actually uses it, so who cares?
[00:16:58] Speaker C: I mean, I think, I mean I think Amazon is definitely the biggest user of Chime. I've yet to ever run into another company using Chime other than Amazon.
So you know, it's probably a lot of smaller companies, you know, or SMB type companies that are maybe using it because they were like, I already have an Amazon account, let's use this Chime thing and it'll be fine because I only do one call a month or something.
But I do also think, in general, we're seeing faster deprecations being announced from all the cloud providers because they're trying to cut costs. And there's two things. One, if they tell you it's not going to be retired for three years, then there's no motivation for you today to do it until three years from now; then you forget, and then all of a sudden Amazon's chasing you down going, like, hey, we told you three years ago we're deprecating this thing, but you're still using it. And then you're like, oh, there's no possible way I can get out of this thing. Versus they tell you a year, and if nine months from now you're not ready, them extending it a quarter or two quarters isn't a big deal — compared to them trying to deprecate something for three years while still maintaining a team doing security patches and all these other things. I think that's what you're seeing: cloud providers realizing long-winded periods of time are probably too long, and if they can shorten it down and then give extensions for the few customers who need it, it's a better model.
[00:18:15] Speaker D: You say that, but I got an email notification from Azure that some service is getting deprecated in 2030. And I said, cool, that seems like a future Matt's problem, not current Matt's.
[00:18:29] Speaker C: I mean, that even sounds like a future, future "person who works at my job" problem.
[00:18:34] Speaker D: Right?
That's kind of where I was. I was like, this isn't my problem. This is going to be someone else's by then.
[00:18:43] Speaker C: I know you're right. You never know. Might still be there. But I always think about like, typically my lifetime and job is five to six years and then I move on to the next challenge. And so yeah, like thinking about 2030 problem right now. Like, I don't know, but maybe, maybe I'll worry about it. Maybe I probably won't.
All right, Amazon ECS is increasing the CPU limit for ECS tasks — and this is in the "are you sure containers are the right solution?" question of the day. They now support up to 192 vCPUs for ECS tasks deployed on EC2 instances, an increase from the previous 10 vCPU limit. This enhancement allows customers to more effectively manage resource allocation on larger Amazon EC2 instances. Now, the 10 vCPU limit did feel a little tight, so I appreciate that they were going to increase the number of CPUs beyond 10 — but to 192 is bold. So that's a bold move.
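To put the new ceiling in concrete terms, here's a rough, hypothetical sketch of registering an EC2-launch-type ECS task definition that asks for 192 vCPUs (ECS counts CPU in units of 1024 per vCPU, so 192 x 1024 = 196,608) using the aws-sdk-go-v2 ECS client. The family name, container image, and memory value are placeholders:

```go
// register_big_task.go - a rough sketch of asking for the new 192 vCPU
// ceiling on an EC2-launch-type ECS task definition. Values are illustrative.
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ecs"
	"github.com/aws/aws-sdk-go-v2/service/ecs/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	client := ecs.NewFromConfig(cfg)
	out, err := client.RegisterTaskDefinition(ctx, &ecs.RegisterTaskDefinitionInput{
		Family:                  aws.String("giant-task"), // hypothetical family name
		Cpu:                     aws.String("196608"),     // 192 vCPUs * 1024 CPU units
		Memory:                  aws.String("786432"),     // illustrative; size to your instance
		RequiresCompatibilities: []types.Compatibility{types.CompatibilityEc2},
		ContainerDefinitions: []types.ContainerDefinition{{
			Name:      aws.String("app"),
			Image:     aws.String("public.ecr.aws/docker/library/nginx:latest"),
			Essential: aws.Bool(true),
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("registered %s", aws.ToString(out.TaskDefinition.TaskDefinitionArn))
}
```

Whether you should ever schedule a single task that big is, of course, the question the segment title is asking.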
[00:19:38] Speaker D: Go big or go home, Justin.
[00:19:40] Speaker B: Yeah, like, you know, I guess maybe for, you know, Windows workloads you can get all 192 vCPUs and have your image be 5 gigs and just use up all the memory too. Why not? Just, you know, cool.
[00:19:55] Speaker C: I mean, I think I know what it's for. Your appreciation for Microsoft Windows being able to use this is misplaced. And I'm pretty sure Matt knows exactly where these are going to be used.
[00:20:07] Speaker D: I think that it's for Justin's project of getting SQL to run on.
[00:20:11] Speaker C: Oh, on Container.
[00:20:13] Speaker D: Yeah, that's why they built this for Justin.
[00:20:15] Speaker C: The right. The right answer is AI, guys. But okay, yeah, SQL for my project too is fine. I'll take that one too. Yeah, fine.
[00:20:21] Speaker D: Honestly, I don't know that I ever ran into the 10 vCPU limit, which kind of surprises me.
[00:20:26] Speaker C: Because you're a sane person, and you're like, oh, I just need more tasks, and each of them is going to have 10 CPUs, and you scale out horizontally because you're not a sadist.
[00:20:36] Speaker D: I mean, questionable. I run Azure. Just saying.
[00:20:40] Speaker C: Well, I mean, if you had your choice and you were not at a job that made a bad choice, you would have chosen not-Azure. I mean, I'm running, I'm supporting GCP. Did I choose that? No. Would I choose it again? Probably not. But here we are.
All right, guess what, guys. Amazon is going to support Anthropic's Claude 3.7 Sonnet — and they already do, in both Bedrock and Q Developer. I mean, within minutes of the announcement, they had already dropped this. On the blog post I saw Corey Quinn — what is he doing, Mastodon-ing? They call it tooting — tooting about how all the documentation still references 3.5 even though they've just announced 3.7. Like, you know, you've got to rush these things out. You know, you don't want to lose market share.
[00:21:30] Speaker D: I mean, it took them a matter of minutes. Come on, guys, what's taking you so long?
[00:21:34] Speaker B: If only there were, like, some sort of, like, smart computer process that you could ask a question, like, hey, go update all our documentation and all these sources. I wonder if you had a thing that could do that.
[00:21:50] Speaker C: Yeah, I don't know what that would be. That's so weird.
[00:21:53] Speaker D: Grep, sed and awk.
[00:21:56] Speaker C: Yeah, sed and awk — right, 100%.
[00:22:02] Speaker B: I knew it. Matt is AI.
[00:22:05] Speaker C: That's it. We figured it out. We cracked AI. It's just sed, awk and, you know, grep. And what's the other one? The really weird syntax thing we always joke about — all the LLMs built on top of regex. Regex. Thank you. I need my coffee, man.
[00:22:23] Speaker D: Yeah, yeah, it's like 6am for yourself, you know.
[00:22:26] Speaker C: You're good. Yes. It's regex. It's regex, sed and awk. That's what it is. That's all. That's all the AIs.
[00:22:31] Speaker D: Maybe a couple of cuts and splits, but, you know, you're good. Cut — that cuts off, like, characters.
[00:22:38] Speaker C: Yeah, yeah.
AWS Network Firewall introduces automated domain lists and insights, which is one less thing I have to use Athena for, which is always a victory in my book. AWS Network Firewall now offers automated domain lists and insights, a feature that enhances visibility into network traffic and simplifies rule configuration. The capability analyzes HTTP and HTTPS traffic from the last 30 days and provides insights into the frequency of access to domains, allowing quick rule creation based on observed network traffic patterns.
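As a rough illustration of where those observed domains end up, here's a hypothetical sketch of turning a handful of domains from the traffic insights into a stateful domain-list rule group with the aws-sdk-go-v2 Network Firewall client. The domain names, capacity, and rule group name are placeholders, and this shows the general API shape rather than the new console workflow itself:

```go
// allowlist_rule_group.go - a hypothetical sketch of turning observed domains
// into a Network Firewall stateful domain-list rule group. Values are placeholders.
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/networkfirewall"
	"github.com/aws/aws-sdk-go-v2/service/networkfirewall/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	client := networkfirewall.NewFromConfig(cfg)
	_, err = client.CreateRuleGroup(ctx, &networkfirewall.CreateRuleGroupInput{
		RuleGroupName: aws.String("observed-domains-allowlist"), // placeholder name
		Type:          types.RuleGroupTypeStateful,
		Capacity:      aws.Int32(100),
		RuleGroup: &types.RuleGroup{
			RulesSource: &types.RulesSource{
				RulesSourceList: &types.RulesSourceList{
					// Domains you saw in the traffic insights; with ALLOWLIST
					// semantics, everything else gets denied.
					Targets:            []string{".example.com", ".github.com"},
					TargetTypes:        []types.TargetType{types.TargetTypeTlsSni, types.TargetTypeHttpHost},
					GeneratedRulesType: types.GeneratedRulesTypeAllowlist,
				},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("rule group created")
}
```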
[00:23:07] Speaker B: It's funny, because when they rolled out Network Firewall without this feature, I'd become real spoiled. And so when it didn't have this, I was like, ah, how am I supposed to use this? I gotta go, like, compile all my traffic to figure out what's going on. Like, boo. And so, yeah, this is great, because compiling these data sets and running your queries is a chore. And typically that's all you want, right? You just want to be able to very quickly sort of say, this is what's coming in, and answer a question and move on.
[00:23:40] Speaker C: Yeah, and I have a love hate with Athena. Like every time I have to use it, I get mad and then I get it to work and I'm like, this is so cool. And then I go like, I don't want to do that again. Yeah, no, I don't know what it is about Athena. I just can't quite get the syntax down in my brain of like, how to structure it properly because it's like a pseudo SQL, but it's not SQL. But then it's also Spark and you're like trying to. I don't know, I can't quite wrap my head around it. I don't know if that's just a me problem.
[00:24:09] Speaker D: If only there were a tool you could talk to and just say in English what you want, and it would spit out an answer in—
[00:24:16] Speaker B: Athena language. But it'll spit out the wrong answer.
[00:24:20] Speaker C: Yeah, no, I tried it with Q.
[00:24:22] Speaker D: Like, no, no, I did say Q. Yeah, yeah.
[00:24:27] Speaker C: With other tools, you're correct, you can try it. But the one that's embedded into the console, which I'm supposed to use per Amazon, does not do well. Now, the stuff they've added into BigQuery on Google is actually that good. Like, I've definitely simplified my BigQuery world because I can use Gemini to create enough scaffolding that I can just tweak it to do what I need to do. Or a lot of times it just gets it right out of the box from what I said in English, and I'm like, oh yeah, that's exactly what I needed. Or it's something minor, like the way it's summarized — I don't quite like it, or the group by is wrong, or something very easy. But yeah, I can't say the same for Athena and Bedrock and Q. Nope.
[00:25:04] Speaker B: Yeah, I've had the same experience and really where I need AI is like partitioning the data set, like at the generation time because it's always the wrong way around for me.
[00:25:13] Speaker C: That's really where I need. I need you to look at the CSV file and tell me the best way to put it into Athena, because I have not done enough for that. Yeah.
[00:25:21] Speaker B: And just responds back, no, exactly, you're doing it wrong.
[00:25:26] Speaker C: The reason why Q is so messed up is the same reason why I mess it up by default. Because the things that I want to do are very simple things, like, oh, I want to go look at the ALB access logs and pull out some IP addresses that are doing something bad on my website. Right. So I have a very clear ask. But when you go look at the Amazon documentation that powers Q, even for setting up the data source for an ALB log file, there are four different ways, four different instructions, and three of them are wrong, because they've changed the ALB output so many times and not updated the documentation to match.
[00:26:00] Speaker B: Yeah. So you go, yeah, it's. It's every single time. And that's part of the frustration thing is you find these easy buttons where it's like, oh, just copy this table schema into your thing and generate it that way. And it's all wrong.
[00:26:12] Speaker C: That's the problem: Q is using the same data that I am to figure out what that table schema should be. And so it's also getting the wrong one, because it has no concept that this is wrong.
[00:26:19] Speaker B: Yeah.
[00:26:22] Speaker C: All right, let's go to GCP world. Google is announcing the general availability of Cloud DNS routing policies with public IP health checking, which provides the automated, health-aware traffic management that you need to build resilient apps no matter where your workloads reside. Running on multiple clouds can often lead to fragmented traffic management strategies, and Cloud DNS now lets you intelligently route traffic across multiple cloud providers based on application health from that single interface. Cloud DNS supports a variety of routing policies, including weighted round robin, geolocation, and failover, giving you the flexibility to tailor your traffic management strategy to your specific needs and do health checking. So it's great.
[00:26:58] Speaker B: I mean, so maybe you can take your Kubernetes workload and actually spread it across multiple clouds and serve from all clouds with a solution like this. Because, you know, it's always that sort of edge case where the rubber meets the road, and you run into these weird things trying to serve from multi-cloud. But this is a big step towards that.
[00:27:19] Speaker C: I'm sure.
[00:27:19] Speaker B: There's other edge cases that I'm not thinking about. I know there's a ton of operability concerns but this is kind of neat.
[00:27:26] Speaker D: Doesn't Route 53 already do this? They already have weighted. They already have geolocation there. You have failover.
[00:27:31] Speaker C: I don't know — inside of AWS?
[00:27:33] Speaker B: Yeah, I think if it's AWS. But I don't think you can point it at a GCP workload and have it fail over.
[00:27:40] Speaker D: You can definitely do it based on IP addresses. I know you can do some of it.
[00:27:44] Speaker B: You can configure the IP address statically in Route 53 and have it do that, because I've done that for data centers.
[00:27:51] Speaker D: But I don't know — you might be right. I feel like you can definitely do, like, geolocation and failover. Now I'm gonna go check this. I'm pretty sure you've been able to do this for a while now.
[00:28:02] Speaker B: It. It's very possible that I just haven't looked, you know, because it's been a while.
[00:28:06] Speaker D: Yeah. Like, I'm thinking, like, five to six years, because I think I did something like this for, like, a migration to—
[00:28:13] Speaker C: I mean, you definitely could do it — like, you definitely were able to do it when you were doing, like, ECS on-prem or some of the other things. That's how you would health check to your on-premises environment if you're using that. So you're right, it might have some capabilities of it. I just don't recall either, though.
[00:28:30] Speaker D: You know, I also was looking at Azure DNS the other day. I was like, it's missing so many features.
[00:28:38] Speaker C: Isn't that, like, your daily routine with Azure? Like, oh, it's missing so many things that everyone else has. Oh, this is also missing so many things. But then you get weird things like Azure SAN. Because that's what I want in the cloud — a big mess of SAN. Yeah.
[00:28:52] Speaker B: Wow. That's what a lot of people want. So they can do Windows File share just right on top of it and not have to manage it.
[00:28:57] Speaker D: Even though there's a Windows file share service that you could just use. That's not, that's not a SAN.
[00:29:02] Speaker B: Oh no, you got to run your, you got to run your own Windows file server.
[00:29:06] Speaker C: Hey, I took my people and I told them to be cloud native, and they know that they need a SAN, and so now when they Google on the Azure website, they'll find Azure SAN. And now they're like, okay, we have what we need. Before, they would look for Azure SAN and they wouldn't get anything, and they'd tell you, oh, I can't move it to Azure because they don't have a storage service — because they don't know what to Google for.
[00:29:25] Speaker D: I don't know why the cloud providers that we all work with like us. We just make fun of them all day.
[00:29:32] Speaker C: You think they like us? That's bold. I mean, I'm pretty sure Amazon likes us much better than they like Corey Quinn.
[00:29:40] Speaker B: Yeah, well that's.
[00:29:41] Speaker C: Yeah, we're nowhere near as mean as Corey is to them.
[00:29:46] Speaker B: If they know who we are, I would be impressed, but I doubt.
[00:29:52] Speaker C: Yeah, very true.
[00:29:56] Speaker A: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one- or three-year AWS savings plan with a commitment as short as 30 days. If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace.
[00:30:36] Speaker C: All right. Google is releasing quantum-safe digital signatures — FIPS 204 and FIPS 205 compliant signatures — in Cloud KMS for software-based keys, available to you in preview. They're also sharing their high-level view of their post-quantum strategy for Google Cloud encryption products, including Cloud KMS and Cloud HSM. Their goal is to ensure that both Cloud KMS and Cloud HSM are quantum safe, and they want to do that by offering software and hardware support for standardized quantum-safe algorithms; supporting migration paths for existing keys, protocols and customer workloads to adopt a post-quantum world; quantum-proofing Google's underlying core infrastructure; as well as analyzing the security and performance of PQC algorithms and implementations, and contributing technical comments to PQC advocacy efforts, standardization bodies and government organizations like ISO. So glad to see this. Google's a little late to the party, because I believe AWS already has post-quantum libraries available for you to use today. So I'm glad to see they also now have that.
[00:31:31] Speaker B: Oh, I didn't know because I was excited by this because I thought it was the first sort of key management service that was offering this.
[00:31:38] Speaker C: No, what I can tell you is I don't know that Amazon's KMS is using post-quantum. I know they support it in the HSM, and I know they have a bunch of research and data they produced and put out there for post-quantum key stuff. I would have to double-check on the KMS part. Yeah, so you might be correct on that.
[00:31:56] Speaker B: I don't know because that's the part that I got really excited is that because I don't want to know anything about quantum safe encryption. I've never wanted to know anything about the mathematics behind encryption. I just want to know that the stuff is safe and mathematically really hard to descramble.
[00:32:13] Speaker C: Perfect.
[00:32:14] Speaker B: That's all I need to know. And so like using a cloud service for that and preparing for the new world where nothing is safe. Like I think that's a fantastic option.
[00:32:25] Speaker D: As you're saying all this, I might have opened the FIPS 205 PDF on NIST.gov. Don't do that. It goes through and it's literally all the math behind the idea. I'm like, I didn't even know that these were real letters and numbers. Yeah, it is 50 — sorry, 61 — pages of just things that make me feel like a two-year-old looking at it.
[00:32:51] Speaker B: Any FIPS encryption standard is, like, the worst read. It's not even like it'll bore you to sleep. It's like it just will not be consumed by my brain. Like, just—
[00:33:02] Speaker D: 7.2 hypertree signature verification.
[00:33:06] Speaker C: What's a hypertree? Take that PDF, take it to Claude and tell it: please explain this PDF to me like I'm five years old.
So I did. I just asked Claude about post-quantum safety and KMS, and I was reading this as we were talking — about the 204 spec, which I also decided not to open. It says AWS Key Management Service supports post-quantum cryptography: as of July 2022 there's post-quantum hybrid key exchange for KMS, which gives you standard TLS encryption with post-quantum keys. They implemented a hybrid post-quantum key exchange that combines traditional and quantum-resistant algorithms, providing protection against both current threats and potential future quantum-computer-based attacks. And then it goes on to talk about the implementation using Kyber, and I do remember us talking about Kyber, because I think we talked about kyber crystals at one point.
[00:33:51] Speaker B: I do remember Kyber.
[00:33:52] Speaker C: Yeah, so we did talk about this at one point. So yes, it was already out there. So Google is coming in behind — maybe even behind Azure, I don't know. I need to look at Azure, if they have anything.
[00:34:02] Speaker B: Yeah, it'll be interesting. I hope this is like rolling out, you know, going from like 2048 to 4096 or something like that versus my fear, which is a completely different and horrifying like migration path.
We'll see.
But scary. I hope not. I hope it's easy.
[00:34:27] Speaker D: Azure has a nice landing page about quantum safe. They have a whole program about it, which is good. And it's in Copilot — explore cryptography with Copilot now.
[00:34:39] Speaker C: Sorry, Claude's a little buried right now by people using all the new Sonnet stuff, apparently, because I keep trying to ask it, hey, does Azure have similar things? And it sounds like Claude Sonnet's a little busy.
[00:34:51] Speaker B: Yeah, try again later like okay.
[00:34:54] Speaker C: Yeah, I see what's happening here.
Well, let's move on to Google and the A4X. We talked just recently about the A4, which they announced as generally available — or at least in preview. Now they're announcing the A4X VM in preview. And if you're asking what's the difference: well, I can tell you the A4X VM is powered by the Nvidia GB200 NVL72 — just rolls off the tongue — a system consisting of 72 Nvidia Blackwell GPUs and 36 Arm-based Nvidia Grace CPUs connected by fifth-generation Nvidia NVLink. With this integrated system, A4X VMs directly address the significant compute and memory demands of reasoning models that use chain of thought, unlocking new levels of AI performance and accuracy. Google Cloud is the first and only provider today to offer both the A4 VM, powered by Nvidia B200s, and the A4X VM, powered by the Nvidia GB200 NVL72 — again, rolls right off the tongue. If you are curious about the workloads you might want to use these for, Google is recommending the A4X VMs for purpose-built training and serving of the most demanding, extra-large-scale AI workloads, particularly those involving reasoning models, large language models with long context windows, and scenarios that require massive concurrency — enabled by the unified memory across a large GPU domain. They recommend the A4 VMs because they provide excellent performance for diverse AI model architectures and workloads, including training, fine-tuning and serving, and the A4 offers easy portability from prior generations of cloud GPUs and optimized performance for varying-scale training jobs. So basically, if you need big expensive hardware, use the A4X. And if you just want to do some inference and basic things and you have a model you're already happy with, the A4 VM is probably the right choice for you.
[00:36:33] Speaker B: Yeah, I wonder how many people are still like just custom training models instead of like using public ones. Because I know like 90% of the AI usage I'm certain could be covered by publicly available models.
And I imagine there's still a ton of people that are building their own.
[00:36:52] Speaker D: I feel like most people are just like tweaking, like using RAG on top of it.
[00:36:57] Speaker C: Yeah, I think most of them are grounding or doing fine-tuning. I don't think they're building LLMs. If you're building an LLM, you either are a big company that has a lot of data that makes that make sense, or you're fooling yourself into thinking you are and you should probably reevaluate your life choices.
[00:37:12] Speaker B: Yeah, I'm worried about the latter.
[00:37:15] Speaker D: Is your company willing to let you have some fun and burn some cash?
[00:37:19] Speaker C: Well, they are until the CFO gets the bill and then they probably aren't happy with you.
Okay, so Claude did come back to me on quantum, and it says Azure has had post-quantum cryptography in their services since 2022 or 2023, and it is integrated into their KMS as well. So there you go — you get post-quantum in Azure.
[00:37:41] Speaker B: All right.
[00:37:42] Speaker C: Okay.
[00:37:42] Speaker B: So yeah, so you're right. Google is the last of that party.
[00:37:45] Speaker C: Yep, sad indeed. Google has launched a new AI to be your co-scientist: a new AI system built on Gemini 2.0, designed to aid scientists in creating novel hypotheses and research plans. Researchers can specify a research goal — for example, to better understand the spread of disease-causing microbes — using natural language, and the AI co-scientist will produce testable hypotheses, along with a summary of relevant published literature and a possible experimental approach. Cool. Nice. All right. That's wild, right?
[00:38:15] Speaker B: Like I don't know, like it's such a specific use case. Like you know, like this is one of those things where you're you know, if you're struggling, how am I going to use AI or what am I? You know, and then they come up with, you know, an example like this where it's like, I wouldn't have, you know, I'm not a science researcher, but that's kind of wild. And I hope that this is really handy and feels not like a dedicated project but, you know, kind of weird.
[00:38:40] Speaker D: I feel like a group of parents had their seventh graders trying to figure out what their science fair project was going to be and then they did this on their side, on the side and created this.
[00:38:52] Speaker B: I bet this research plan is really in depth.
[00:38:56] Speaker D: Like that's where my mind went, was like me doing my seventh grade, like science fair project.
[00:39:03] Speaker B: Get some poster boards.
[00:39:05] Speaker C: I mean, it could be cool.
I'm not going to knock it completely.
[00:39:09] Speaker B: I think it's really neat, and I imagine this is really hard, right, to do. You know, I don't have any firsthand experience.
[00:39:16] Speaker C: Yeah, I was thinking about this from the idea of FMEA — failure mode and effects analysis — where you do this process where you basically try to think about all the ways your software fails. And my biggest complaint about the FMEA process is it's only as good as the imagination of the people who are in the FMEA process. So if you invite me to it, I have a really good imagination of all the terrible ways that your software is going to blow up in your face, but a typical engineer does not. And so that's the problem. But I was thinking, like, can I use something like this to create an FMEA AI thing? And you basically tell it: here's what my application does, tell me all the ways it could possibly fail. That would be kind of fun.
[00:39:51] Speaker B: That would be really fun. Yeah. That's a great project actually. Yeah.
[00:39:56] Speaker C: Yeah. I'm actually probably going to play around with Claude and see if I can put together something close to that, because I could see that being valuable.
[00:40:04] Speaker B: Yeah, totally.
[00:40:06] Speaker C: All right. Announcing Claude 3.7 Sonnet, Anthropic's first hybrid reasoning model is available on Vertex AI.
[00:40:13] Speaker D: Ta da. No way.
[00:40:16] Speaker C: Yeah. Shocker. I know. The only one who didn't announce it, at least I didn't see a blog post for it was Azure. I think because Azure's busy talking about what we're about to talk about, they're a little distracted by something cooler. Well, maybe cooler. Depends on how you feel about it.
[00:40:31] Speaker D: I saw something — GitHub Copilot, they had it in.
[00:40:35] Speaker C: Yeah, I'm sure. I, I assumed it came out after I stopped adding show note items. I just have to go look and see. But yeah, I assumed it was coming very quickly.
[00:40:44] Speaker D: No, GitHub Copilot runs Claude on AWS, I think, as I dove into this the other day.
[00:40:51] Speaker C: Really interesting.
[00:40:52] Speaker D: Yeah, it was.
[00:40:53] Speaker C: I know they, I know they offer it.
[00:40:54] Speaker D: It was very confusing. Yeah, yeah, they definitely. So if you go into GitHub Copilot, you can run Claude and one other model you can have it leverage.
[00:41:05] Speaker C: And in there there's the microphone I believe.
[00:41:09] Speaker D: Yeah. And then I dug into it. One of them runs on Azure, and Claude, I think, runs on AWS. And I was like, that's strange. I totally just lost $5 on this. Like, why would they run it on their competitor's cloud?
[00:41:21] Speaker C: So yeah, I'll look into that. I thought they had the Sonnet model because again they have a model garden just like every other cloud provider does. But maybe they haven't signed a partnership with Anthropic yet to give them lots of money.
All right, I'm going to try to get through this. This is outside of my realm of truly understanding, but Microsoft has had a quantum breakthrough. They're promising to usher in the next era of computing in years, not decades. And they've done that by creating a new type of matter. Growing up, in some science class, it was on a quiz: you know, what are the three main states of matter? And that was solids, liquids and gases. But now Microsoft apparently has turned this on its head. They have created an entirely new state of matter, unlocked by a new class of materials called topoconductors, that enables a fundamental leap in computing. All of this powers the Majorana — is it Majorana 1? — the first quantum processing unit built on a topological core. Satya Nadella believes this breakthrough will allow them to create a truly meaningful quantum computer not in decades, but in just a few years. The qubits created with topoconductors are faster, more reliable and smaller — they are 1/100th of a millimeter — meaning we now have a clear path to a million-qubit processor. And a chip that fits in the palm of your hand is capable of solving problems that even all the computers on Earth today could not solve.
[00:42:41] Speaker B: Wow. I mean, that last bullet point is where my head explodes. Like, I know I don't understand quantum computers in any kind of real way. And now, you know, they're introducing new states of matter in order to power some of those things. It just feels like tomorrow's world is going to be completely unrecognizable.
[00:43:06] Speaker C: I mean you think about like disruption that AI is causing and just normal jobs that exist today. Okay, now I'm going to add a computer that can solve all the world's problems in one second. What are we going to do after that? Hopefully it solves world peace and a post currency economy for us too.
[00:43:25] Speaker B: Yeah, right. That's our only hope. Or we're going to turn into batteries to power the AI machines.
[00:43:33] Speaker C: How very Matrix-y of you.
[00:43:35] Speaker B: Yeah, I wonder.
[00:43:37] Speaker D: I'm just like thinking, you know, you always hear the Apollo 11 had like, you know, less power than our phones nowadays. You know, like 20 years, 50 years ago, you know, how much power was there on Earth? You know and I know there's you know, Moore's law where you know, where speed and performance doubles every 18 months. You know, is, are we really like.
[00:44:01] Speaker C: But we've broken Moore's Law now? But yes, it worked for a long time.
[00:44:04] Speaker D: But, like, how many years ago was it that one system had all the power that, like, one of Nvidia's clusters has now, you know? Like, I'm curious to see the scale of growth behind that statement, because I'm like, well, if it was 50 years ago, okay, sure. But, you know, it also is very impressive, and if they're literally making a new form of matter, I feel like they're either going really wrong or really right on this process.
[00:44:38] Speaker C: I definitely was seeing some commentary on the interwebs about, you know, this path not being one that everyone agreed was the right path for quantum. So there were a few companies in this camp who thought the topo — what is it, topoconductor? — was the right path forward. And then there are other people who say it's not.
But there are some physicists out there who've been working on this problem for decades who are asking Microsoft to publish their research so they have a chance to review it and conclude the same things that Microsoft did in their findings. But it doesn't sound like Microsoft's been releasing all of the data yet. So I'm curious, as we find out more about this, do people start disputing whether it's truly a new state of matter, or is it something different? Yeah, again, I imagine there'll be lots more conversation about this in the upcoming years.
They did have another article here where they talked about how they came up with this. In the second article, they took a step back and said, okay, if we were to invent the transistor for the quantum age, what properties would it need to have? And that's apparently how they got to the topoconductor. Being able to fit a million qubits in the palm of the hand unlocks the path to meet the threshold for quantum computers to deliver transformative real-world solutions, such as breaking down microplastics into harmless byproducts or inventing self-healing materials for construction, manufacturing and healthcare. So basically, this million-qubit threshold has been the super important one, which has really been holding back your ability to use them for massive problems.
And so again, this is where all the computers operating together on Earth can't do what a one-million-qubit quantum computer can do. And they said that this new topological core powering the Majorana 1 is reliable by design, incorporating error resistance at the hardware level — we talked about error resistance in a prior show, and I think Amazon talked about it in a keynote a couple years ago, about trying to fix the error handling problem. So that was addressed. And then, crucially, important applications require trillions of operations on a million qubits, which would be prohibitive with current approaches that rely on fine-tuned analog controls of each qubit, which is the path other people have been taking. This new chip allows the qubits to be controlled digitally, redefining and vastly simplifying how quantum computing works at its core. And Microsoft is now one of two companies to be invited to the final phase of DARPA's Underexplored Systems for Utility-Scale Quantum Computing program.
And the other company — I don't remember who it is — wants to use one of the other approaches, so we'll see who wins out in DARPA's competition.
Just crazy.
[00:47:08] Speaker B: I'm still blown away by the scale of computational power. I still laugh when we talk about, you know, new instance types and the CPUs and GPUs and those capacities — that's amazing to me — and then to think about this, you can't even put it in the same concept.
[00:47:30] Speaker C: Well, I mean, I remember when we went from teraflops to talking about exaflops of compute capacity, and I thought that was crazy. And so if you're talking about 1 million qubits being more computing capacity than all of Earth — I mean, what's after exaflops? I don't even know. Right?
[00:47:45] Speaker B: Yeah, no idea.
[00:47:48] Speaker C: Well, for those of us who live in the current world, not this new future dystopia of quantum and AI: Microsoft is releasing a new AI agent that can control software and robots. They unveiled Magma, an integrated AI foundation model that combines visual and language processing to control software interfaces and robotic systems. If the results hold up outside of Microsoft, it could be a meaningful step toward an all-purpose multimodal AI that can operate interactively in both real and digital spaces. Microsoft claims it's the first multimodal AI that can not only process but also actively act on the data, from navigating user interfaces to manipulating physical objects in meatspace.
Yeah, so good. Let's take robots, let's put them with AI and let's give them Quantum and then they'll take over the labor force too. And then kill us all. Terminator is so close.
[00:48:35] Speaker B: Yeah, can. All I ask is that you don't give them red eyes.
[00:48:39] Speaker C: Right?
[00:48:39] Speaker B: You know that's all I want.
[00:48:42] Speaker C: But then how would you know it's a... how would you know it's an evil Terminator, you know, if it doesn't have red eyes?
[00:48:48] Speaker B: They're all evil, because it's shoving me into a pod. But—
[00:48:54] Speaker D: For some reason, when I read this, I thought of the I, Robot movie from years ago, and I was like, oh, okay, yeah, it has to, like, see through its eyes and, like, process it. I'm like, great, we've created I, Robot. Cool.
[00:49:07] Speaker B: Great.
No, I had the T-1000 from Terminator in my head immediately. Like, cool, that's what we got. Giant metal skeleton gonna kill me.
[00:49:17] Speaker C: Yeah. Well, if we come up with matter that you can now change to be any shape, you can get the T1000 for sure. Oh, definitely.
[00:49:25] Speaker B: Yeah.
[00:49:27] Speaker C: Well, on to our other future robot overlords.
Azure is launching Azure AI Foundry Labs, a hub for developers, startups and enterprises to explore groundbreaking innovations from Microsoft Research — until Visual Studio kills it. They're also launching Muse, Microsoft's newest AI breakthrough: a first-of-its-kind World and Human Action Model, or WHAM, available today in Azure AI Foundry. It's the latest example of bringing cutting-edge research innovation to their AI platform for customers to use. Because WHAM definitely takes me right back to the 80s every time.
[00:49:58] Speaker B: Oh yeah.
[00:49:59] Speaker C: Cutting edge and, at the same time, the 80s are back with Azure AI Foundry Labs.
They're excited to build new assets for their latest research-driven projects that empower developers to explore, engage and experiment. Projects across models and agentic frameworks include: Aurora, a large-scale atmospheric model for weather prediction; ExACT, an open-source approach for agents to learn from past interactions and improve search efficiency; Magentic-One, a multi-agent system that solves complex problems by orchestrating multiple agents, built on the AutoGen framework; MatterSim, a deep learning model for atomistic simulations, predicting material properties with high precision; OmniParser V2, a vision-based module for converting UI screenshots into structured elements; and TamGen, a generative AI model for drug design, using a GPT-like chemical language model for target-aware molecule generation and refinement. They point out that the speed of innovation is crucial, and point to the slow adoption of GPS and the decades it took to go from military application to consumer use. But AI innovations are moving much, much faster than that, and the pace of AI advances has accelerated dramatically, requiring tools like Azure AI Foundry Labs to help you experiment faster than ever before.
[00:51:07] Speaker B: Yeah, I mean, I can't agree more that the speed of innovation is blinding.
You know, we started this podcast to keep up with cloud news as the hyperscalers got to a certain scale, when they were announcing enough stuff that we couldn't keep up to date. Now, even with this, I really struggle to understand half of these use cases and how they're applied and the whole thing. It's crazy to me how fast things are moving.
[00:51:33] Speaker C: Yeah, it is really crazy.
And Microsoft decided to release one more AI model; they're on a roll this week with Muse, the first generative AI model that they're applying to gaming, and it's a huge step forward in gameplay ideation. Muse, just from observing human gameplay, has developed a deep understanding of the environment, including its dynamics and how it evolves over time in response to actions. This unlocks the ability to rapidly iterate, remix, and create in video games, so developers can eventually create immersive environments that unleash their full creativity, without massive amounts of coding and effort, and that dynamically adjust to your needs as a user or gamer. Which is cool.
[00:52:11] Speaker B: I mean, I think it's completely fascinating, the idea of having, you know, like a World of Warcraft kind of game with no limit ever, right?
Or just something that maybe isn't the same every time you play it, that kind of thing. There are all kinds of things you could do with gaming on this that I think are really fascinating, and it'll be a fun space to watch.
[00:52:35] Speaker C: I think we'll all be unemployed, watching our AI overlord robots do all the work for us, and we'll be able to play these games. So it's great. I hope so.
[00:52:43] Speaker B: I hope this is what we're going to spend our time doing.
[00:52:45] Speaker D: Yeah, we'll just be addicted to the game, so it'll be fine. What could go wrong?
[00:52:48] Speaker C: Yeah, someone will figure out how to make sure the games give a good endorphin hit, and you'll get your endorphins. That'd be great.
All right, gentlemen, that was another fantastic week here in the cloud, this time from India, but glad to talk to you guys. I'll be back stateside next week for our regular recording.
[00:53:09] Speaker B: All right, well, safe travels, and goodbye, everybody.
[00:53:13] Speaker D: Bye, everyone.
[00:53:17] Speaker A: And that's all for this week in Cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services.
While you're at it, head over to our website at thecloudpod.net, where you can subscribe to our newsletter, join our Slack community, send us your feedback, and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.
[00:53:49] Speaker C: All right, I have an after show for you guys.
You might know this about me, you may not: I'm a big James Bond fan, from the Pierce Brosnan days, and before that, even into some of the, you know, Sean... what's the last name again? I need coffee this morning.
[00:54:06] Speaker B: Sean Connery.
[00:54:07] Speaker C: Connery. Sean Connery. Thank you. You know, all the way back, I've watched all the James Bonds. I love the last ones, but it's been kind of frustrating over the years because the franchise has primarily been controlled by two people, Barbara Broccoli and Michael G. Wilson, who have very strong opinions about James Bond and how the character should be used. But apparently that's ending. Amazon has paid a billion dollars to secure creative control of the James Bond franchise, according to a person familiar with the matter. The deal is a joint venture with Barbara Broccoli and Michael G. Wilson, of course, both of whom have been long-standing stewards of the Bond franchise. Amazon bought MGM Studios in 2022, which gave it the right to distribute Bond films, while Broccoli and Wilson retained creative control. But they have been at odds with Amazon since the tech giant bought MGM, delaying the production of a new James Bond film as well as the casting of a new James Bond, since we know the last one retired or was killed in the last movie, which really makes it hard to make more movies. This new deal allows Amazon to integrate television and movies based on the Bond character, and rumors are they're looking to expand the Bond universe similar to the Marvel Cinematic Universe. Now, I don't really want a Bond universe, but I do want more James Bond films.
So I am slightly excited and also slightly terrified of what Amazon might do to this. Because if you want examples of what Amazon has done with movie franchises, just look at Lord of the Rings and The Rings of Power series, where the first season was atrociously mediocre and the second season actually was much better; I didn't hate that one. But yeah, I don't know that I trust Amazon quite yet on this one, but I am excited that we might get some more Bond. Yeah.
[00:55:43] Speaker B: I did not know that you were a giant Bond person, so that's crazy. But yeah, no, I mean, this has always been a thing that's newsworthy.
[00:55:55] Speaker C: Right.
[00:55:55] Speaker B: The creative control and how tightly they hang onto it, and who's going to be the next Bond. You've seen these stories for years because of how slow things move. So we'll see what Amazon does. I agree, I think they're going to speed this up, and it's not going to go well.
[00:56:10] Speaker D: But I just vote they have a little bit more Q in that.
[00:56:13] Speaker C: I don't know. I've always liked the idea of having other 00 agents, so there are other agents like Bond. You could do something interesting, potentially, by adding some additional secret spies with different code names. You could maybe do something there, which would be interesting to me.
But again, like, do I want 12 television shows following different people that then all get wrapped up into a massive get-together, gangbusters Bond action movie? Absolutely...
[00:56:44] Speaker B: Not hard.
[00:56:45] Speaker D: No, I'd be fine.
[00:56:47] Speaker B: Even with spin offs on like some of the villains and some of the other characters they've had over the years. Right. Like there's a lot.
[00:56:51] Speaker D: I want more Q. Come on. Yeah, more Q in our life.
[00:56:56] Speaker B: Yeah, Q and gadgets.
[00:56:57] Speaker C: Yeah, Yeah.
[00:56:58] Speaker B: Q just shows me cool stuff for.
[00:57:00] Speaker D: Maybe it might be Inspector Gadget.
[00:57:02] Speaker B: Ye.
[00:57:06] Speaker D: Yeah, sorry.
[00:57:08] Speaker C: I mean, do you guys have a preference for who you want to be the new James Bond? I was pretty big on the idea of Idris Elba being a good James Bond.
[00:57:17] Speaker B: I'm in the Idris Elba camp as well. I think it's a fantastic choice.
[00:57:21] Speaker D: That'd be good.
[00:57:23] Speaker C: Only the only problem is he's getting kind of up there in age and if they keep waiting too long, he's not going to be able to do it. But I think he'd be a fantastic Bond.
[00:57:31] Speaker B: Which is funny, because that's what I thought when Daniel Craig's first movie was announced.
[00:57:35] Speaker C: Then.
[00:57:35] Speaker B: Yeah, I was wrong. I was like. They announced him, and I'm like, what?
[00:57:42] Speaker C: He was so good. And Casino Royale? So good. So good. The last one, where they killed him off, was not the best of them.
[00:57:50] Speaker B: I thought it was okay.
[00:57:51] Speaker C: It's fine. It was okay. You know, it just. Yeah. Like, he has a kid and then he dies. I'm like, you have a kid you never knew about, and then, oh, you're dead. That sucks.
[00:58:02] Speaker B: But we didn't see a body, though. We didn't see a body.
[00:58:04] Speaker C: That's true. We did not see a body. But Malek, he was the bad guy. What's his name? Malek. Rami Malek. Anyway, he was fantastic.
[00:58:14] Speaker B: I like the villain.
Oh, yeah. No, he's a fantastic actor.
[00:58:19] Speaker C: Yeah.
[00:58:20] Speaker B: Yeah, he's. He's really good. And there's, you know, they've had that series, that whole, like, latest series. I love the, you know, breaking them up into their own sort of chunks. I think they do a really good job of that. And this, you know, this last one, not all of them are good, right? Like.
[00:58:35] Speaker C: No, I mean, like. I mean, look, my first introduction to Bond when I was a kid was Pierce Brosnan and I.
[00:58:39] Speaker B: Pierce Brosnan. Yeah.
[00:58:41] Speaker C: You know, and, like, at the time, I thought it was amazing. And I loved this character, and, like, this is so cool. And then I saw all the other movies as I got older, and I was like, okay, he's not really the best Bond, but he has a very close place in my heart. And then seeing the early ones get skewered by Austin Powers, you understand, like, oh, I see the parody of this now, because if you didn't know the early Bonds, they're very campy. And so the turn to the more serious Bond was super fun.
[00:59:10] Speaker B: So. Yeah, I think my first Bond was Timothy Dalton, actually.
[00:59:15] Speaker C: Right.
[00:59:15] Speaker B: Now that I think about it, yeah.
[00:59:17] Speaker C: Yeah.
[00:59:18] Speaker B: That's way back. I think he was still... yeah. I mean, that used to be a tradition: every Thanksgiving, for some reason, there was a Bond marathon, and they would just play the movies all the time. And so I just...
[00:59:31] Speaker C: I mean, I give your. I give your family a lot of props. Like, you have some really great Thanksgiving traditions.
You guys all go bowling. You guys.
[00:59:39] Speaker B: Yeah, well, that's Christmas. Yeah.
[00:59:42] Speaker C: Whatever your holiday stuff is. I was like, yeah, I never thought about, for my holidays, going bowling and getting wasted on Christmas Day. That sounds great. Or Bond being your thing for Thanksgiving.
[00:59:54] Speaker B: That's great.
[00:59:55] Speaker C: So I'll talk to you later about what other things you do that maybe I want to steal.
All right, well, let's hope they don't ruin it. I'll keep an eye on it, since apparently all three of us are big Bond fans. I don't know about Jonathan. I sort of feel like Jonathan probably feels like it's an assault on British people, but, yeah, who knows? Maybe he's into it. We'll ask him when he's back.
[01:00:18] Speaker B: Well, maybe he shouldn't be such a super villain. You know, like, he. Yeah, Jonathan is a perfect definition of a super villain. And he's got the accent, so I.
[01:00:26] Speaker C: I think Jonathan's more Q. Like, he is kind of Q-ish. Yeah, he's definitely on the gadget side. He's not gonna go out and beat up the bad guys, I think, but...
[01:00:34] Speaker D: So we should all put Jonathan.
[01:00:37] Speaker B: Some of the ways he thinks about breaking things. I don't know.
[01:00:41] Speaker C: Clearly Jonathan would be the villain. Like, yeah, if he's not on the good side, he's maybe on the villain side, for sure. But anyways. All right, guys, I'm gonna get coffee because I'm gonna die if I don't get coffee soon. The jet lag is brutal. And then I gotta head over to the office, which is only like a mile away but takes 25 minutes to drive to, so it's kind of crazy.
I have my window open here while recording, because the daylight helps me wake up, and I can see the office. It's, like, right through these towers. And I'm like, it's right there. And it'll still take me 25 minutes to get there. Traffic in Bangalore is no joke.
[01:01:15] Speaker B: Yeah, it is crazy.
[01:01:18] Speaker C: All right, gentlemen.
[01:01:19] Speaker B: All right, have a good one.
[01:01:20] Speaker C: Talk to you next week. Yeah, have a good one.
[01:01:22] Speaker D: Stay safe.