[00:00:08] Speaker B: Where the forecast is always cloudy. We talk weekly about all things aws, GCP and Azure.
[00:00:14] Speaker C: We are your hosts, Justin, Jonathan, Ryan and Matthew.
[00:00:18] Speaker A: Episode 289, recorded for the week of January 21, 2025: Dora the Explorer of EU Regulations.
A riveting show ahead. We're talking about EU regulations in the show title, so sorry about that.
[00:00:32] Speaker D: We got Dora the Explorer. Come on.
[00:00:35] Speaker A: Yeah, I mean that's definitely. You're in your wheelhouse, Matt, with your younger children.
[00:00:38] Speaker D: Too young still, but yeah.
[00:00:41] Speaker C: Can you say compliance framework?
[00:00:46] Speaker A: What was. What's swiper? Stop swiping. That's the only thing I remember from Dora the Explorer. Swiper, stop swiping.
[00:00:53] Speaker D: I feel like that was probably when your kids were growing up, Justin.
[00:00:56] Speaker A: It was part of Dora the Explorer.
[00:00:58] Speaker D: Yeah, yeah. I think now the big one's Bluey.
[00:01:01] Speaker A: Yeah, Bluey is the big one.
I'm at least hip enough with young kids, for whatever reason, because I have friends with them, that I know Bluey is the thing.
[00:01:09] Speaker C: Yeah.
[00:01:10] Speaker A: But that's all I know about it. I know it's a dog. I know it's from Australia.
[00:01:13] Speaker D: And then that's a Ms. Rachel.
[00:01:16] Speaker A: Okay, there's a Ms. Rachel.
[00:01:17] Speaker D: I didn't know.
[00:01:19] Speaker A: Well. Hi, Matt. Hi, Ryan.
[00:01:20] Speaker B: Hello.
[00:01:21] Speaker D: Hello.
[00:01:22] Speaker A: We're down one. Jonathan decided to get whatever cold is going around in January, and so he had a scratchy voice and said, I'm not coming. So we said, okay, we're recording without you. So we don't get to hear Jonathan's beautiful British accent with the raspiness, unfortunately. This week, maybe next week; we'll see.
Well, we have a bunch of news. I mean, it's a little weird; Amazon still seems to be kind of slow on their news right now. They seem to be still dealing with their re-org, perhaps.
But yeah, so we have news, but not a lot of Amazon news, which is sort of a weird turn of phrase. We have more Google news instead, which is kind of nice. But let's get into it. First up, OpenAI, right at the deadline for our show cutoff today, announced a joint investment of $500 billion over the next four years to build new AI infrastructure for OpenAI in the United States, with the intent to deploy $100 billion of that $500 billion immediately. This infrastructure will secure American leadership in AI, create hundreds of thousands of American jobs, and generate massive economic benefits for the entire world. The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility for the partnership. Arm, Microsoft, Nvidia, Oracle, and OpenAI are the key initial technology partners. The buildout is currently underway, starting in Texas, and they're evaluating potential sites across the country for more campuses as they finalize definitive agreements. As part of Stargate, Oracle, Nvidia, and OpenAI will closely collaborate to build and operate this computing system. This builds on a deep collaboration between OpenAI and Nvidia going back to 2016, and a newer partnership between OpenAI and Oracle. It also builds on the existing OpenAI partnership with Microsoft: OpenAI will continue to increase its consumption of Azure as it continues its work with Microsoft, with the additional compute used to train leading models and deliver great products and services.
And the quote that kind of made me shudder just a little bit was at the very end of the article, where they said: all of us look forward to continuing to build and develop AI, and in particular AGI, for the benefit of all of humanity.
I'm like, okay, who wants some Kool Aid? Yeah.
So this makes a little more sense with the conversation we had back I think in December about OpenAI trying to figure out their ownership model.
[00:03:40] Speaker D: Sure.
[00:03:41] Speaker A: Was that a factor in any of these investments? Oh, absolutely. But $500 billion for new AI infrastructure to do training and research, particularly in AGI; that's very interesting. I was surprised that Microsoft wasn't more involved, but it sounds like Microsoft looked at this and went, yeah, that's you playing around, and we're not sure we're convinced. But we still want the OpenAI thing we sell; like, that's still us.
[00:04:02] Speaker C: I was surprised by the four year timeline because that's just, you know, usually those, the projects of like the scale they're hinting at are longer than that.
[00:04:10] Speaker A: So, I mean, $500 billion is what, twenty 5090 chips from Nvidia? I mean, I know how...
[00:04:16] Speaker C: ...they're going to spend the money. Don't get me wrong.
[00:04:19] Speaker A: This is really, hey, you should buy some Nvidia stock, because they're about to get $500 billion richer in the next four years.
[00:04:26] Speaker C: Yeah, exactly.
[00:04:27] Speaker A: It's crazy.
[00:04:29] Speaker D: Don't mind that power bill, wherever they build it. Also, buy stock in whatever power company that is.
[00:04:35] Speaker C: Yeah, I was sort of hoping for more, like, manufacturing in this, whatever they had lined up. A lot of it is about bringing the infrastructure back into America for, you know, hundreds of thousands of American jobs, and I get why they're putting that in there; that's just feeling out the political waters. But this is just: they're going to stand up compute somewhere.
[00:05:01] Speaker D: Yeah, I mean, that's all they're doing. They're building a data center.
[00:05:04] Speaker A: It's interesting SoftBank is investing so much money into it, considering, you know, the trade issues with China and SoftBank being so heavily invested in China.
[00:05:12] Speaker C: Oh, I hadn't thought of that.
[00:05:13] Speaker A: Yeah. It's one of the things about SoftBank that's interesting as well. I didn't think their funds had done that well after crypto kind of blew up on them in pretty spectacular ways. Although it's back, apparently.
So, yeah, it's interesting. Post Inauguration Day, which was yesterday, this was one of the first announcements coming out of that. The other thing that's interesting is Texas as the location for the data center.
I mean, latency is not really a concern for a central location, but I would think cooling and power would be. And I'm not sure Texas is the right place to get a lot of massive amounts of cooling and power.
[00:05:47] Speaker C: Well, you know, infamy is a word that they use to describe their power grid.
[00:05:50] Speaker A: Infamy. Yes. It is an infamous power grid for sure.
[00:05:53] Speaker D: Yeah. Definitely didn't cause any problems a couple years ago. But maybe they fixed all of them.
[00:05:58] Speaker A: I mean, they just had a deep freeze; the polar vortex is headed down there right now. And, you know, they haven't had anything freeze up yet. So maybe they fixed it. I don't know.
[00:06:07] Speaker D: We'll find out. Give us a week, we'll talk about it next week.
[00:06:10] Speaker A: Yeah, yeah, we'll see how it goes over the next couple of days as that cold snap continues. All right, let's move on to AWS. AWS is announcing multi-session support, which enables AWS customers to access multiple AWS accounts simultaneously in the AWS console. You can sign in to up to five different sessions in a single browser, and this can be a combination of root users, IAM users, or federated roles, in different accounts or the same account. I have to say, this is probably the biggest improvement they've made to my life in a long time. I mean, we've talked about other quality-of-life things, like, yeah, that's really nice, I really appreciate it. But this one really is nice, because, as all technologists are, we're a bit of tab junkies. So you end up with a lot of tabs, and then all of a sudden you're like, oh, I need to log into an Amazon account. You log in and you forget the other tabs were all already logged into a different account, and now they all get logged out, or they refresh in the other account, and you're like, oh no, I was in the middle of something important. Now you won't have that problem. So I'm super happy about this one.
[00:07:04] Speaker C: Yeah, this is like, it is funny because I was laughing exactly the same way, which was like, this is the biggest thing they've ever announced, you know?
[00:07:13] Speaker A: Yeah, it didn't even get a full blog post. It was just, like, a What's New post; one paragraph. I'm like, that's it? That's all you're going to say about this amazing feature? This is... screw your AI noise. This thing is awesome.
[00:07:24] Speaker C: Yeah.
[00:07:25] Speaker D: I ended up finding this the day it was released, because I was doing something and did exactly what you guys did: oh, went over here; oh wait, got logged out over there. Crap, okay, let me go back. And then I was like, wait, what's this? Because I was setting up a cross-account role. I was like, oh wait, when did this get released? Hold on, Google it. Has this been here for, like, six months? Nope, it was just today. It was amazing. Made my life so much better.
[00:07:50] Speaker A: Yeah, the way I've tried to get around this issue is multiple profiles in Chrome with different accounts. But it never covers everything; like, I'm in the sandbox account or the dev account, and I have the production account and want to compare settings across them. There are all these scenarios where you want to look at something and compare, and it was impossible to do. So yeah, I'm very, very happy.
[00:08:12] Speaker C: I mean, this single-handedly, you know, ostracized me at work, because it binds me to Firefox. I can't deal with Chrome profiles; I do not like them. And so Firefox containers were my solution to this, and now I've got a lot of investment in that, and I don't know if I'm willing to let it go now that I don't have to.
[00:08:32] Speaker A: I mean, we, we give you so much crap about that Firefox addiction you have, so I mean, maybe it's worth it.
[00:08:37] Speaker C: I don't know; to me it works great.
[00:08:40] Speaker D: See, but can the Firefox containers do colors? Because that was always my thing.
[00:08:44] Speaker C: Yes, it does that.
[00:08:46] Speaker A: That is the requirement.
[00:08:47] Speaker C: So it did colors before, too. You could sort of monkey around with the profile aesthetics, and that was a big thing for me too: that the individual tabs were colored.
[00:08:55] Speaker D: Yeah, that one's a big one, because I always make production red, so I know to be, like, super careful.
[00:09:02] Speaker C: Yeah. I wonder why this is limited to five sessions.
[00:09:05] Speaker A: I imagine there's some reason. I just don't know it.
[00:09:09] Speaker C: Well, I mean, it's got to be something like session size and the length of a header or something, right?
[00:09:14] Speaker A: They calculated the max memory that Chrome could handle, and there is no maximum amount.
[00:09:20] Speaker C: Whatever you can feed it.
[00:09:22] Speaker D: I can tell you there's not, because I think on my Mac, I quit Chrome once and, like, 40 gigs of memory came back to me all of a sudden.
[00:09:28] Speaker C: Yeah, I think when you quit Chrome, my lights got brighter.
[00:09:33] Speaker D: That's when I closed my Excel for my billing. That's when your lights got brighter.
[00:09:37] Speaker A: I just opened Firefox for the first time in I can't tell you how long, and it said I had to update. And so I was curious what version I was at: I was at 128, and they just upgraded to 134. So I was six versions behind. Fine.
[00:09:47] Speaker C: Yeah.
[00:09:48] Speaker D: Well, for Chrome, that's like six weeks.
[00:09:50] Speaker A: Yeah. I one time made the mistake of signing up for the beta channel of Chrome updates. That was a mistake; it was every five minutes, like, you need to upgrade Chrome. I'm like, no, this is annoying. I have too many tabs for this.
[00:10:01] Speaker D: And then 1Password doesn't work and everything gets annoying.
[00:10:04] Speaker A: 1Password gets super cranky. Like, 1Password gets cranky when there's an upgrade. It's like, oh, you need to upgrade. I'm like, why? Why do you care about this?
[00:10:11] Speaker C: Like, come on.
[00:10:12] Speaker D: It forces you to; otherwise I have to, like, manually go do it. Yeah, that's pretty much what makes me restart Chrome.
[00:10:18] Speaker A: Yeah, at work they've been, you know, slowly restricting things on your computer over time, and it's a good practice. The latest one they did to us in December was deciding that you have to use a company laptop to access company email and things. And to do that, they had to enroll your laptop into mobile device management or something. I don't know; it's not my area. I don't do corporate IT.
And they sent us the instructions: step one, close all of your Chrome browsers. And I'm like, I'm resigning. I'm not doing this. I'm out. You can't make me do this.
Luckily, I do happen to know that in Chrome you can restore your tabs. And so after I had a pissy party about it, I did it and it was fine. But there, for a moment, I was done. That was the moment.
[00:11:05] Speaker D: Quitting Chrome, yeah; for me it's when 1Password stops working and then random other stuff starts breaking. And I'm like, all right, I can pay attention to this meeting, or I can quit Chrome and try to make sure everything reopens.
We're quitting Chrome.
[00:11:20] Speaker A: Yeah, like, pay attention to the meeting and quit Chrome; you're going to do the right thing. But yeah, we're on a sidetrack now with 1Password. So, I was a LastPass guy.
I loved LastPass. I thought it was amazing. And then, you know, after the third time it got breached, I was like, okay, I can't. I'm going to cut over cold turkey to 1Password. And I know that I am in the Flex app era of 1Password, and that people who were in the old era still hold very hard to the old one that was more embedded into macOS and was a native Mac app. But I never knew that one; I've always been a Flex app user. And I have to say, the things that 1Password doesn't do that LastPass did annoy the crap out of me. Simple things, like, hey, I'm updating my password on a website, and it doesn't pop up and say, hey, I just noticed you used a different password than what was in the vault; would you like to update it? It doesn't do that. And then, for whatever reason, with autofill in web forms, half the time it takes forever to pop up the little thing to insert my username and password. Those two things alone have almost driven me back to LastPass at least, like, four times, just in anger. And then I tried the whole Apple Passwords thing. I tried that for about a week and I said, this is...
[00:12:28] Speaker C: Yeah, that didn't last.
[00:12:29] Speaker D: Nope.
[00:12:30] Speaker A: And I was out of that. But yeah, the 1Password transition has not been easy, and even a year and a half into it now, I'm still very not happy with my 1Password experience.
[00:12:40] Speaker D: So I don't have the first problem. I've been using 1Password since... I looked it up the other day, because I was trying to figure out licensing, because I was on the perpetual license, which was, like, 1Password 4 or something. It's still in my 1Password: my license key. So I was trying to find it to send to them.
And it's gotten better, though the old one was better. I still find the integration with Chrome is sometimes a little clunky. Not for updating, but for filling in forms; it isn't great. I just think it's not their core competency; it's something they added. Like my name, address, whatever: that is always terrible.
[00:13:23] Speaker A: Yeah, it's always bad. And the secure notes are not very good. And how many times I have to unlock this vault on a daily basis is kind of crazy to me too.
[00:13:32] Speaker D: You can adjust that. But also that's a little bit also of probably Chrome having to restart.
[00:13:38] Speaker A: I don't know. I'm not happy with my transition still.
[00:13:41] Speaker C: No, it is really frustrating. The best working solution is the one that's been breached so many times. It really is: I tried to switch off of it, and my wife vetoed the switch. She just couldn't make 1Password work.
[00:13:56] Speaker A: I'm pretty sure my wife has basically just adopted Apple Passwords at this point, because I don't think she fully understands what 1Password is trying to do. LastPass we understood, and I think the 1Password interface is not...
[00:14:08] Speaker C: Yeah. If you're not in tech, it's not friendly.
[00:14:11] Speaker D: Yeah.
[00:14:11] Speaker A: Yeah. But that's fine. She can be fine with Apple Passwords for her use case; it's good. But yeah, I just wanted to complain about that for a minute because we mentioned 1Password.
[00:14:22] Speaker D: What got me with LastPass was, like, the location change: it emails me, and I'm like, did I type it wrong? So I go try again, and then it would send me another email. I finally would remember, oh yeah, it sent me an email, and I'd have like six emails there.
[00:14:35] Speaker A: Yeah, that was annoying at times.
But that didn't happen very often to me.
[00:14:40] Speaker D: It's because I wasn't a normal, like an everyday user.
[00:14:43] Speaker A: Yeah.
[00:14:45] Speaker D: So, like, every time I used it, they got mad.
[00:14:45] Speaker C: I mean, I think I fixed those problems by never leaving my house. It's really easy when you don't wear pants.
[00:14:51] Speaker A: Exactly.
[00:14:53] Speaker D: But you went hiking this weekend, so.
[00:14:54] Speaker C: You know, I did go hiking.
[00:14:55] Speaker A: I did.
[00:14:55] Speaker C: I did actually go out in nature. Had fun.
[00:14:58] Speaker A: Yeah, he'll leave for nature; he won't leave for work. He's seeing all these return-to-office five-days-a-week announcements and he's like, this better not happen to me. I'm like, I was quitting over closing my Chrome browsers. He's going to quit over, how do you...
[00:15:09] Speaker D: ...put on pants?
[00:15:12] Speaker A: Like I said before, I think if we just normalized wearing shorts into the office, it'll help. It'll solve part of the problem.
All right, let's get back on track, because we've rat-holed.
[00:15:24] Speaker C: This is too early to get this far off track.
[00:15:26] Speaker A: Yeah, yeah, sorry. All right, back on track. AWS announced the general availability of two new larger-sized EC2 Flex instances: the 12xlarge and 16xlarge in the C7i-flex and M7i-flex variants. These new sizes expand the EC2 Flex portfolio, providing additional compute options to scale up existing workloads and to run larger applications that need additional memory. These instances are powered by custom 4th Gen Intel Xeon Scalable processors, only available on AWS, and offer up to 15% better performance over comparable x86-based Intel processors. And when I was reading through this article, I remembered that I really still don't understand what Flex is. So if either of you two have done any more research than I have on this... I sort of get that it's, like, burstable access to some type of memory, but how and why and in what scenarios I don't fully get.
[00:16:15] Speaker D: That's where I was. I was hoping that you guys knew, because I kind of understood it the same way. It lets you, like, burst CPU and burst memory. I assume it's like the T series but for memory, but honestly I wasn't sure.
[00:16:29] Speaker C: I was hoping you guys would keep talking for a little longer while I did some research real quick.
[00:16:33] Speaker A: Well, I did just Google it. Basically, Google Gemini in my Google search, which is the way I search all things now; I don't even look at the websites anymore, I just look at what Gemini tells me, which has only hallucinated on me a couple of times. Per Gemini: in AWS, Flex instances refer to a specific type of EC2 instance, like the M7i-flex or C7i-flex, which offer greater flexibility in resource allocation by allowing you to choose from a limited set of pre-configured sizes within a specific instance family, providing more control over your compute needs while potentially optimizing cost based on your workload demands, compared to standard instance types. Which didn't really help me.
[00:17:04] Speaker C: Ah, I think I can decode that, and I think I'm right: the way you had to provision memory and CPU, and the relationship between them, was chosen by Amazon as the instance type. And now, I think, if you select this instance type, you can tune those specifically.
[00:17:21] Speaker A: Ah. So instead of being forced into a, you know, 128-gig box when you only need 25 gigs of memory, you can now customize that a little bit when you deploy. So it's like custom shapes in Google, but a worse implementation of it. Yes. Oh, good. Okay, I got it. That helps. This is why people listen to us: so we can learn these things together.
[00:17:40] Speaker D: Yeah, One person Googles it, the other person complains about it for a little bit and we.
[00:17:45] Speaker A: One person stalls.
[00:17:46] Speaker D: It's great teamwork, guys.
[00:17:48] Speaker C: Yeah. Go team.
[00:17:49] Speaker A: Go team, go. This is why not having Jonathan here hurts, because he probably knew this exactly, right?
[00:17:54] Speaker D: Yeah, he wasn't.
[00:17:54] Speaker C: Yeah, he reads all the things.
[00:17:56] Speaker A: Yeah. He's thorough. He's the research guy. All right: AWS CodeBuild now supports test splitting and parallelism. You can now split your tests and run them across multiple parallel compute environments; based on your sharding strategy, CodeBuild will divide your tests to run across the specified number of parallel environments. Now, I appreciate this, but I would like to run multiple tasks on the same environment, versus spending more money on more parallel environments. Or I'd like you to handle all the automation of spinning up all those parallel environments so I don't have to do that. So if CodeBuild could get on that part of it, I'd be much happier.
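The show doesn't dig into what a "sharding strategy" actually looks like, but the core idea is easy to sketch. To be clear, this is not CodeBuild's implementation, just a minimal illustration of deterministic, hash-based test splitting, where every parallel environment can compute its own slice of the test suite independently (`shard_tests` is a hypothetical helper name):

```python
import hashlib

def shard_tests(test_names, shard_index, shard_count):
    """Assign each test to exactly one shard via a stable hash of its name.

    Using a stable hash (rather than Python's per-process randomized hash())
    means every parallel environment computes the same assignment on its own,
    with no coordination needed between shards.
    """
    def bucket(name):
        digest = hashlib.sha256(name.encode("utf-8")).hexdigest()
        return int(digest, 16) % shard_count

    return [t for t in test_names if bucket(t) == shard_index]

tests = [f"test_{i}" for i in range(10)]
shards = [shard_tests(tests, i, 3) for i in range(3)]
# Every test lands in exactly one shard, and the split is reproducible.
assert sorted(t for s in shards for t in s) == sorted(tests)
```

Hash-based splitting ignores test duration; time-based strategies instead balance shards using prior run times, which is why services that offer sharding usually track per-test timing.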
[00:18:28] Speaker C: Yeah, no, I definitely don't want to pre-plumb environments for it to run in. I've hit this limitation a number of times, because it's something I just think it should do, and it doesn't.
[00:18:39] Speaker A: Right.
[00:18:39] Speaker C: But yeah, even once you have it parallelized, I don't know if it's exactly what I want. But, you know, if you're building from multiple branches, or you have different outputs, like if you're doing a build per processor architecture, it makes sense. You used to have to do some terrible things to make that work.
[00:18:57] Speaker D: Yeah, it's the processor architecture one that I've had to deal with before, for ARM and for Intel, especially with containers. So at least now you're able to build them all at once in parallel and get everything at the end, versus dealing with the disaster as you go and watching your builds just take forever.
[00:19:16] Speaker A: I mean, I guess you can learn that your test failed faster this way.
[00:19:22] Speaker D: Fail fast, fail often.
[00:19:24] Speaker C: Oh, so much of my development is a lot of time waiting for compiling, and then a lot of time really cursing myself that I spent all that time waiting for compiling because I just typed out something really stupid.
[00:19:34] Speaker A: Yeah, that's. That's kind of my experience, my life.
[00:19:38] Speaker C: Yeah.
[00:19:38] Speaker A: Yeah.
[00:19:38] Speaker C: So this is sort of, like, I do think that if you parallelize this, at least my mistakes are less costly.
[00:19:44] Speaker D: No, you get to find out four times that you wrote the same typo, if you're me. Four different builds; it's great.
[00:19:50] Speaker C: Usually find out four different ways, but it takes four hours.
Not like I watch it.
[00:19:57] Speaker D: Well, normally I have it fail the first time. I don't, like, let it pass and keep going, right?
[00:20:02] Speaker C: Yeah. Well.
[00:20:03] Speaker A: Well, the flip side of it is that typically my CodeBuild fails because my CodePipeline broke. And if you've ever tried to debug a CodePipeline, Amazon's helping you out today as well, because they're now offering an enhanced debugging experience in the AWS Management Console, enabling you to identify and resolve pipeline failures more efficiently. The new debugging interface gives you a dedicated debugging page, accessible via the Action debug button. And if you actually go play with this feature, which I did, it gives you the ability, in the side panel, to see a streamlined layout of your pipeline and see exactly where it failed. Which is so much better than before, when you just got a benign error message and had to remember what your pipeline steps were called to figure out where it failed and maybe possibly why. So I appreciate this one. Now I can curse out CodeBuild and CodePipeline at the same time.
[00:20:45] Speaker D: You forgot CodeCommit.
[00:20:47] Speaker A: Well, I don't use CodeCommit.
[00:20:48] Speaker A: That's crazy talk. No one does that.
[00:20:51] Speaker C: You don't want to drink all the Kool-Aid of the developer tools? Yeah, yeah.
[00:20:54] Speaker A: I mean, once GitHub private repos came out, and you can have basically unlimited ones for an org of three people, I don't need CodeCommit. I got off that pretty quick. Plus, aren't they deprecating CodeCommit?
[00:21:06] Speaker D: Yeah, it's deprecated. I was waiting to see how long it would take you to get there.
[00:21:10] Speaker A: Yeah, that's one they killed. Are there other CodePipeline things, or Code-similar ones?
[00:21:15] Speaker D: There's CodeStar. Does anyone use it? It's meant for, like, small businesses that do this one thing; it automatically integrates all the other pieces together. But it doesn't, really. It's actually its own thing, but it should have just been a wrapper around everything.
[00:21:32] Speaker A: But yeah, CodeArtifact. I have used that. That one's still up.
[00:21:36] Speaker D: Oh yeah, I forgot about that one.
[00:21:38] Speaker A: CodeBuild, CodeCommit, CodeDeploy, and CodePipeline.
[00:21:41] Speaker C: There's CodeStar, which is the branding.
[00:21:44] Speaker A: I don't even see CodeStar on here.
[00:21:45] Speaker C: Yeah, it's because they killed that too, but.
[00:21:47] Speaker A: Oh, did they?
[00:21:48] Speaker D: I really do miss CodeDeploy, man. It just worked.
[00:21:54] Speaker A: That's what GitHub Actions did for me, so I guess I don't care anymore about that. But yeah, I hear you. Better than Jenkins, now.
[00:22:02] Speaker C: There's no better than Jenkins. What are you talking about?
[00:22:05] Speaker A: Yeah, as the guy who manages 9,000 Jenkins jobs, you would say that.
[00:22:10] Speaker C: And I hate every single one of them.
[00:22:14] Speaker D: There's Terraform for Jenkins, though, if you really want to hurt yourself.
[00:22:18] Speaker A: Terraform for Jenkins, against that API? Oh my God, that's terrible.
[00:22:23] Speaker C: That's the only API I won't use. I was like, I'd rather click around.
[00:22:27] Speaker A: It can't really be an API, because there's no database; there's no, like, actual... the whole thing is written in Java, I guess. But...
[00:22:33] Speaker C: Oh no, it's all just weird ways that it scrapes its XML and displays it, right?
[00:22:38] Speaker A: Yeah, it's a weird XML parser-pusher; it's terrible garbage. I was talking to somebody about having to update repositories, and I was like, yeah, technically you could update all the XML by running a scan and doing, you know, some sed/grep-type work, but you're not going to like yourself when you do that.
[00:23:00] Speaker C: I mean, it is something that you can do with the Jenkins API, but you are basically just updating an XML file.
[00:23:06] Speaker A: Yeah. Which you could just do with sed...
[00:23:08] Speaker C: ...but by calling a URL first.
[00:23:10] Speaker A: Yeah. And if you need to sed/awk it, you don't have to mess with the API. But even that, I...
[00:23:14] Speaker C: That's if you have access to the base infrastructure.
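For listeners who want to script this anyway: Jenkins does expose each job definition over HTTP at `/job/<name>/config.xml` (GET to read it, POST the modified document back to update it), and parsing the XML is slightly safer than raw sed. A minimal sketch of the edit step; the git-SCM element layout below is illustrative only, so check your own job's `config.xml` for the real structure:

```python
import xml.etree.ElementTree as ET

# In practice you would GET http://jenkins/job/NAME/config.xml, apply an edit
# like the one below, and POST the result back to the same URL. The element
# layout here is a simplified illustration of a git SCM config.
SAMPLE_CONFIG = """<project>
  <scm class="hudson.plugins.git.GitSCM">
    <userRemoteConfigs>
      <hudson.plugins.git.UserRemoteConfig>
        <url>https://old-host/repo.git</url>
      </hudson.plugins.git.UserRemoteConfig>
    </userRemoteConfigs>
  </scm>
</project>"""

def rewrite_repo_url(config_xml, old_prefix, new_prefix):
    """Replace the repo URL prefix in every <url> element of a job config."""
    root = ET.fromstring(config_xml)
    for url in root.iter("url"):
        if url.text and url.text.startswith(old_prefix):
            url.text = new_prefix + url.text[len(old_prefix):]
    return ET.tostring(root, encoding="unicode")

updated = rewrite_repo_url(SAMPLE_CONFIG, "https://old-host/", "https://new-host/")
assert "https://new-host/repo.git" in updated
```

Parsing rather than regex-replacing means you won't accidentally rewrite a URL that happens to appear in a job description or comment.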
By the way, I was completely wrong on Flex instances earlier. It's this, now that we've stalled long enough to look it up.
[00:23:24] Speaker D: We all believed you.
[00:23:25] Speaker C: Yeah, I know. It sounded so plausible, right?
[00:23:27] Speaker A: Plausible.
[00:23:28] Speaker C: Yeah, it did.
[00:23:28] Speaker A: And all the people who are writing us right now are like, yeah, you don't know what Flex instances are.
[00:23:33] Speaker C: It is really more of a T2 and T3 play, where there's a baseline CPU performance that can burst up for up to 24 hours. It is as simple as that. And I don't understand why, anymore.
[00:23:47] Speaker A: So you get a premium version of a T-series instance; you pay a premium price for that. It still has all the pain of...
[00:23:53] Speaker C: Burstable capacity, all the downsides.
I really don't understand why.
[00:23:58] Speaker A: I've never been more mad, though, than when I've been in incidents where I'm like, the server is not doing anything. I mean...
[00:24:05] Speaker C: It does specifically call out Kafka and Elasticsearch as good workloads. So now I know. No, I hate it.
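For reference, the burstable model being described works like a credit bucket: the instance earns credits at its baseline rate and spends them at its actual CPU usage rate, so bursting is only possible while the balance stays above zero. The sketch below is a toy model with made-up numbers, not AWS's actual credit accounting:

```python
def simulate_credits(baseline_frac, usage_frac, hours,
                     start_credits=0.0, max_credits=24 * 60):
    """Toy CPU-credit model, measured in credit-minutes.

    Each hour the instance earns baseline_frac * 60 credits and spends
    usage_frac * 60 credits; the balance is clamped to [0, max_credits].
    Sustained usage above the baseline eventually drains the bucket, which
    is the 'server not doing anything' incident described above.
    """
    credits = start_credits
    for _ in range(hours):
        credits += baseline_frac * 60   # earned this hour
        credits -= usage_frac * 60      # spent this hour
        credits = min(max(credits, 0.0), max_credits)
    return credits

# At a 40% baseline, running flat-out at 100% CPU burns 36 credit-minutes
# net per hour, so a 180-credit balance is gone after five hours.
assert simulate_credits(0.4, 1.0, hours=5, start_credits=180.0) == 0.0
```

Once the bucket is empty, a real burstable instance is throttled to its baseline, which is exactly why steady high-CPU workloads are a poor fit.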
[00:24:10] Speaker A: Okay, yeah, that's not good. Okay, let's move on to better topics. Amazon continues to push the high-throughput boundaries of SNS FIFO topics, or first-in-first-out topics, now with default throughput matching SNS standard topics across all regions. When you enable high-throughput mode, SNS FIFO topics will maintain order within message groups while reducing the deduplication scope to the message group level. With this change, you can leverage up to 30,000 messages per second per account by default in the US East region, and 9,000 messages per second per account in US West and European regions. And you can request quota increases to make them equal, because that difference will mess you up if you're doing distributed infrastructure, so you probably want to make sure you get that quota increase in now. But this is available to you if you can use high-throughput mode for your system. So, I think...
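The "deduplication scope at the message group level" point is easier to see in code. This is a toy simulation of the semantics, not the SNS API itself (real publishes go through `sns.publish` with `MessageGroupId` and `MessageDeduplicationId` attributes):

```python
def deliver_fifo(messages):
    """Simulate SNS FIFO semantics in high-throughput mode: ordering and
    deduplication are scoped per message group, not per whole topic.

    `messages` is a list of (group_id, dedup_id, body) tuples in publish
    order; returns {group_id: [bodies delivered, in publish order]}.
    """
    seen = {}        # group_id -> dedup ids already delivered
    delivered = {}   # group_id -> ordered bodies
    for group_id, dedup_id, body in messages:
        if dedup_id in seen.setdefault(group_id, set()):
            continue  # duplicate within this group: silently dropped
        seen[group_id].add(dedup_id)
        delivered.setdefault(group_id, []).append(body)
    return delivered

out = deliver_fifo([
    ("orders", "m1", "a"),
    ("orders", "m1", "a-dup"),  # dropped: same dedup id, same group
    ("billing", "m1", "b"),     # kept: same dedup id, but different group
    ("orders", "m2", "c"),
])
assert out["orders"] == ["a", "c"] and out["billing"] == ["b"]
```

The notable consequence is the third message: with topic-level deduplication it would have been dropped, but group-scoped dedup keeps it, which is part of what makes the higher throughput possible.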
[00:24:55] Speaker C: Or if you can afford high-throughput...
[00:24:57] Speaker A: ...mode. Or if you can afford it.
[00:24:58] Speaker D: Yeah, 30,000 messages a second will add up quickly.
[00:25:02] Speaker A: It's still cheaper than Kafka or Kinesis.
[00:25:07] Speaker D: I'm more curious, and obviously we won't know, but why do we get more in East? Do we really trust us-east-1 to do more things? This just feels like a bad life choice for everyone else.
[00:25:19] Speaker A: "Trust" is just the wrong word.
[00:25:20] Speaker C: Yeah, yeah.
[00:25:23] Speaker A: I don't really get why they didn't explain it in the article either, why it was bigger in US East. Maybe that's where you have all your... I mean, because basically your account lives in US East 1 regardless of where you are. So maybe there's an SNS quota you're using that you don't know about, that you would run out of if you didn't have more.
[00:25:43] Speaker D: Definitely run into unmarked quotas in a past life. Those are always fun to hit.
[00:25:47] Speaker A: Yeah, or, why am I running out of this quota? Like, well, you're using it and we're using it.
[00:25:51] Speaker C: Like oh yeah, a little problem those days.
[00:25:55] Speaker D: There was the time I found out the AWS account ID that runs the AWS console by hitting random APIs until things broke and opening support cases, and they were like, hey, this account's hitting this API. I was like, I don't know what that is. Can you tell me? No, we can't tell you what it is. It's an AWS account. I was like, what?
Thank you Support. It's a great conversation.
[00:26:21] Speaker A: I've had fun calls with Amazon support in the past, so I understand.
[00:26:26] Speaker D: Yeah.
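Since the segment above walks through what high throughput mode changes, here's a tiny pure-Python toy of the semantics. This is not the real SNS API; the class and names are made up for illustration. The point is that both ordering and deduplication are tracked per message group, so duplicates are only rejected within a group, which is what lets SNS handle groups independently at high throughput.

```python
from collections import defaultdict

class FifoTopicSim:
    """Toy model of an SNS FIFO topic in high throughput mode:
    ordering and deduplication are tracked per message group,
    so different groups can be handled independently (in parallel)."""

    def __init__(self):
        self.groups = defaultdict(list)  # ordered message bodies per group
        self.seen = defaultdict(set)     # deduplication IDs per group

    def publish(self, group_id, dedup_id, body):
        # A duplicate deduplication ID is only rejected within the same group.
        if dedup_id in self.seen[group_id]:
            return False
        self.seen[group_id].add(dedup_id)
        self.groups[group_id].append(body)
        return True

topic = FifoTopicSim()
topic.publish("orders", "m1", "create")    # accepted
topic.publish("orders", "m1", "create")    # duplicate in same group: dropped
topic.publish("payments", "m1", "charge")  # same dedup ID, different group: accepted
```

Roughly speaking, in the default topic-scoped deduplication mode, that third publish is the one that would behave differently, since its deduplication ID has already been seen on the topic.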
[00:26:29] Speaker B: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one- or three-year AWS savings plan with a commitment as short as 30 days. If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace.
[00:27:08] Speaker A: All right, let's move on to GCP. Google Cloud is committed to providing the fastest and most reliable Kubernetes platform with Google Kubernetes Engine, and now they're announcing the improved Horizontal Pod Autoscaler, or HPA, the Kubernetes feature that automatically updates workload resources to match your demand. With the new Performance HPA profile you get 2x faster scaling, improved metrics resolution, and linear scaling for up to 1,000 HPA objects. This matters because customers are regularly asking for it, and they frequently overprovision resources to account for delays in the autoscaling stack, resulting in lower efficiency and higher costs. By nailing this, you'll minimize your waste, improve application responsiveness, and increase operational efficiency. And Spotify, who was probably demanding this feature, has a quote here: "With GKE's Performance HPA profile, we've witnessed a remarkable boost in horizontal autoscaling speed. In our tests with over 1,000 HPA objects, workloads scaled up twice as fast, and we're excited to leverage this performance enhancement in our production environment," says Sophie Kao, senior engineer at Spotify.
[00:28:06] Speaker C: So I am dubious, because every time my day job has scaled up for larger events, sure, you can create the containers, great, but something else is going to fall down within the Kubernetes infrastructure. And so I was flabbergasted when, joining a new team, I found out that they still have a huge process to warm their pods by pre-launching containers, because they found that they would crash the DNS server container, or another sidecar that did proxying, or something else in there. So I'm hoping that this profile fixes a lot of those issues.
[00:28:44] Speaker A: You were shocked to learn this? Like shocked. Like shock shocked or.
[00:28:49] Speaker C: I've been told that there's this panacea of Kubernetes, and that all of my magical problems, all the reasons why I just don't like Kubernetes, have all been fixed, as long as you run your workload on GKE and, you know, give it to the pros who understand this. And I was like, these are all the same problems.
[00:29:07] Speaker A: Oh, you sweet summer child.
[00:29:08] Speaker C: I know, I fell for the marketing.
[00:29:11] Speaker A: Marketing so hard.
I never believed that for a second. Well, it wasn't even...
[00:29:17] Speaker C: It was other people that were touting Kubernetes to me. It wasn't even people from Google.
[00:29:22] Speaker A: The problem is, there's people who have religion for Kubernetes who don't know what they're talking about either.
[00:29:27] Speaker C: Yeah.
[00:29:27] Speaker A: So this is one of those problems where everyone thinks Kubernetes is the panacea for all technology problems. And it's like, yeah, it can do some cool things, but you're better off with ECS 99% of the time.
[00:29:39] Speaker C: Yeah.
[00:29:39] Speaker A: Unless you're on Google, then GKE is your only choice. But then I recommend going autopilot and trying not to think about this too much.
As much as you can, but indeed.
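One thing worth pinning down from the discussion above: as far as I understand it, the Performance HPA profile speeds up the control loop and metrics resolution, while the scaling math itself is the standard documented Kubernetes HPA formula, desired = ceil(currentReplicas x currentMetric / targetMetric), with a default 10% tolerance band. A minimal sketch of that calculation:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric, tolerance=0.1):
    """Core Kubernetes HPA calculation:
    desired = ceil(current * currentMetric / targetMetric).
    If the ratio is within the tolerance band (10% by default),
    the autoscaler leaves the replica count alone."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas
    return math.ceil(current_replicas * ratio)

# 5 replicas with CPU at 90% against a 50% target -> scale to 9
print(desired_replicas(5, 90, 50))  # -> 9
```

The overprovisioning Google calls out is essentially customers paying to mask the lag between a metric moving and this calculation being acted on, which is exactly the lag the new profile is meant to shrink.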
All right.
The EU DORA regulation has arrived, and Google Cloud is apparently ready to help you. And this is not the good DORA we'd like to talk about here on the show; this is not the DevOps Research and Assessment report. This is the Digital Operational Resilience Act, DORA for short, which took effect as of January 17th, and financial entities in the EU must rise to a new level of operational resilience in the face of ever-evolving digital threats. To help you tackle these new threats and regulations, Google is sharing the DORA customer guides on the Register of Information and on Information and Communication Technology (ICT) Risk Management, along with their new Google Cloud Third-Party Risk Management Resource Center. Additionally, financial entities can request their DORA subcontractor listing today. Which all sounds terrible if you ask me.
[00:30:30] Speaker D: Yeah. When I had to do some research on this for my day job, it looked like it mainly maps over to ISO. So if you are ISO 27001, you're mostly covered for this, which does make your life easier. I'm waiting for Azure to come out with their same offering, because it will make my life a little bit easier.
[00:30:51] Speaker A: Yeah, I'm sure they'll get there eventually; they're right now dealing with the AI regulations.
Yeah, there is actually a nice bit in the DORA Register of Information customer guide: they have all the contractual arrangements you need to meet, with Google Cloud's commentary about how they apply to them. So it's actually a pretty good set of resources, these three documents they link to in the article. And the Third-Party Risk Management Resource Center is helpful for all of your third-party risk needs for Google, including all the nasty due diligence questions you get from customers that they never read anyway. So check those out if you are now under the Digital Operational Resilience Act in the EU. And I assume this is coming my way too, because I work in tech, and financial services will definitely be impacted.
[00:31:35] Speaker C: Yep, nothing is more compelling than a good risk management guide. So I look forward to those days.
[00:31:42] Speaker A: Nothing puts me to bed better than a good risk management guide.
C4A, the first Google Axion processor VM, is now GA with Titanium SSD. Google is making the new C4A virtual machines with Titanium SSD generally available. The Titanium SSD, of course, is custom designed for Google Cloud workloads that require real-time data processing with low-latency, high-throughput storage performance. The Titanium SSDs on C4A VMs deliver storage with up to 2.4 million random read IOPS, up to 10.4 GB/s of read throughput, and up to 35% lower access latency than previous-generation SSDs. I found it ironic that they did not mention write throughput, which apparently is probably bad. C4A is a VM instance family based on Google Axion processors that gives you up to 65% better price performance and up to 60% better energy efficiency than comparable current-generation x86-based instances. And you can scale these things up to 72 vCPUs, 576 GB of memory, and 6 TB of local storage, in two shapes: standard with 4 GB of memory per vCPU, or high memory with 8 GB of memory per vCPU. They both support up to 50 Gbps of standard network bandwidth, and up to 100 Gbps with Tier 1 networking for high-traffic applications, and they both support the latest generations of Hyperdisk Balanced and Hyperdisk Extreme storage. So you can burn all the monies as fast as you want with all those great new features.
[00:32:57] Speaker D: Wait, you can specify your networking size on Google?
From 50 gigabits to 100 gigabits. And how do we not make fun of extreme hyper disks when we made fun of Ultra Pre...
[00:33:13] Speaker A: Oh, we did. We did.
[00:33:14] Speaker D: I missed that one.
[00:33:15] Speaker A: Yeah, you missed that one.
[00:33:15] Speaker D: I just want to make sure. Sorry.
[00:33:17] Speaker A: No, we made fun of it for. On your behalf as a thank you.
[00:33:20] Speaker C: I appreciate MSSD because it's sort of like a. What?
[00:33:23] Speaker A: I don't actually remember if you can specify the specific bandwidth. I think it's based on the size of the... I mean, Google's weird, right? Because they have standard shapes, and standard shapes do have memory and disk and bandwidth and all those things you just mentioned. But then you get into custom shapes of these things, and when you get into custom shapes, I don't think they let you specify it; there's a certain allocation you get, and then there's the burstable nature of it as well, just like you have in Amazon. But I don't believe they allow you to specify, I want 50 gigabits with my two-CPU box.
[00:33:55] Speaker C: Yeah, I, I haven't seen it. I mean I don't have any workloads where I would need to.
[00:34:00] Speaker D: Because you've done your job. Wrong then. But we'll bypass that point.
[00:34:04] Speaker C: Yeah, wrong.
[00:34:06] Speaker A: I don't know.
Yeah, there's a lot of.
[00:34:09] Speaker C: Right.
This is a lot of throughput for one place.
[00:34:14] Speaker A: Yeah, we already know he doesn't do databases and we already know he doesn't do front end. He's strictly an API guy.
[00:34:19] Speaker C: So full table scans of key value stores, that's. That's my go to. Right.
[00:34:25] Speaker D: And yeah, this is why we can't have nice things.
[00:34:28] Speaker A: Right.
[00:34:31] Speaker D: Well, so this was the first ARM processor they've released?
[00:34:36] Speaker A: This is not the first Axion processor instance; it's one of the later models they've released with it. This is the first one with Axion and the Titanium SSD.
[00:34:45] Speaker D: Okay, it's the very. And that I missed. Got it.
[00:34:48] Speaker C: Yeah.
[00:34:49] Speaker D: Because I don't think Azure has released theirs yet, and AWS has had Graviton for like five years now, so I'm still amazed at how far behind the eight ball Azure is on this. And it makes you feel a little bit better that Google's only like a couple years ahead of Azure. So they're not in a different ballpark, but, you know.
[00:35:11] Speaker A: Yeah, it just went GA. The first C4A was GA'd in October.
[00:35:21] Speaker C: Oh, okay.
[00:35:22] Speaker A: Yeah, so it's not that long ago, but it's still relatively fresh and shiny.
[00:35:27] Speaker D: Yeah.
[00:35:29] Speaker A: All right. Google has an RSS feed that I typically ignore, but they had an announcement that I saw. So we're calling this smaller releases of note.
There's no blog post, just a mention in the RSS feed, but basically they've announced the general availability of pools of suspended and stopped virtual machine instances for managed instance groups, or MIGs. You can manually suspend and stop VMs in a MIG to save on costs, or use suspended and stopped pools to speed up scale operations of your MIG, which is exactly what Amazon and Azure have been doing for a while. So congratulations Google, you now have feature parity.
[00:36:06] Speaker D: Golf clap.
[00:36:08] Speaker C: Today I learned that this is something other people have, which is that you can launch machines in a suspended state, which is pretty rad, rather than just sort of managing the workflow yourself.
[00:36:17] Speaker A: Because we talked about that when it happened, and we were like, oh, that's actually cool, you can spin them up suspended. I think Amazon got the suspended-state thing last year, but they've had the ability to hibernate or suspend autoscaling group nodes for quite a while now.
[00:36:32] Speaker C: You could, but you had to manage the workflow, correct? I was getting the impression from this that you didn't have to.
[00:36:37] Speaker D: Yeah.
[00:36:37] Speaker A: And when Amazon added the ability to launch them in a hibernated state, they fixed that too. This is cool.
[00:36:44] Speaker C: I mean, you know, if you have Windows workloads, this is completely necessary, because there is no scaling a Windows workload.
[00:36:51] Speaker A: I mean, as long as you can wait five to six minutes for your box to scale up, it's fine.
[00:36:54] Speaker C: Yeah. Enjoying the domain services? It's like a half hour if you want a working box, man.
[00:37:00] Speaker A: Sometimes. That's why I don't use that pesky domain thing. Screw that noise.
[00:37:04] Speaker C: Yeah, I know.
But then your security team makes you, because, you know, how are you going to install your security agent unless you put it in a group policy?
[00:37:13] Speaker A: Right? Hey, that's my trick.
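To illustrate why the suspended and stopped pools discussed a moment ago speed up scale-out, here's a toy sketch. This is not the Compute Engine API; the class and names are hypothetical. The idea is simply that scale-out prefers resuming a pre-suspended VM, which comes up much faster than a cold boot, and only creates fresh instances once the pool runs dry.

```python
import itertools

class MigSim:
    """Toy managed-instance-group model with a suspended pool."""

    def __init__(self, suspended_pool):
        self.suspended = list(suspended_pool)  # pre-provisioned, suspended VMs
        self.running = []
        self._ids = itertools.count(1)

    def scale_out(self, count):
        """Return the (action, vm) pairs taken to add `count` instances."""
        actions = []
        for _ in range(count):
            if self.suspended:
                vm = self.suspended.pop(0)
                actions.append(("resume", vm))   # fast path: resume in seconds
            else:
                vm = f"vm-new-{next(self._ids)}"
                actions.append(("create", vm))   # slow path: full cold boot
            self.running.append(vm)
        return actions

mig = MigSim(["vm-a", "vm-b"])
print(mig.scale_out(3))
# -> [('resume', 'vm-a'), ('resume', 'vm-b'), ('create', 'vm-new-1')]
```

The Windows point above is exactly where this matters: if a box takes five-plus minutes to boot and join a domain, a resume-from-suspend pool is the difference between scaling and not scaling.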
Right. Google has entered into a long-term agreement with Leeward Renewable Energy to support over 700 megawatts of solar projects in Oklahoma. Google says it's strategically located to support their data center operations, with one project being less than one mile from their data center in Pryor, Oklahoma. I don't know if that's quite as lucrative now post-Monday, because this was announced last week, prior to the administration change, but I'm glad to see Google's not completely abandoning their green energy needs yet.
[00:37:45] Speaker D: Pryor, Oklahoma, population 8,659. Do you think we have any listeners there?
[00:37:53] Speaker A: Maybe, who are at Google or will be? Yeah, I actually don't know what region that is. I was thinking about it because I thought Iowa was the region for Google.
[00:38:06] Speaker C: Well, the central one is Iowa.
[00:38:08] Speaker A: I don't know which one this one is, to be honest.
[00:38:10] Speaker D: Apparently they are known for the annual music festival located four miles north of town.
[00:38:16] Speaker A: On their Wikipedia page they actually have a photo of the Google data center. I don't know if this is like a Google corporate data center for Google Search; it might not be a Google Cloud data center. Oh, that's also a possibility.
[00:38:26] Speaker C: Yeah.
[00:38:28] Speaker A: Then, also continuing on the investments in net zero emissions, Google is announcing two long-term purchase agreements to help scale biochar as a carbon removal solution, partnering with Varaha and Charm Industrial to purchase 100,000 tons of biochar carbon removal from each company by 2030. This will enable them to remove 200,000 tons of carbon, helping them achieve their net zero emissions goal, as well as helping catalyze biochar production toward a scale that helps the planet mitigate climate change. I think this is cool. I did about 10 minutes of research because I was curious what biochar was. Apparently, once you capture the carbon into this char, you basically use it as a fertilizer component to help fertilize fields in farming. And when you go to their websites, they're full of farm equipment and farms, and they're trying to help small farmers build sustainable futures by using biochar as a fertilizer element, which will help reduce carbon emissions. Overall, I like the idea of this, and I hope more investment in this happens, because as we're seeing from California this month, climate change is here.
[00:39:35] Speaker C: Yep, everything's changing. And I assume that they're sequestering the carbon into the ground by adding it to the fertilizer. That's cool.
[00:39:46] Speaker A: Yeah, makes sense.
[00:39:48] Speaker C: And maybe it'll seep down and concentrate and turn back into oil.
[00:39:52] Speaker A: I don't know if that's how that works. No, it doesn't. Doesn't the carbon become food for the plants, and then the carbon goes back into plants, and we basically repeat the cycle? It's a circle of life kind of thing, maybe.
[00:40:05] Speaker C: Maybe it'll filter down through the cracks and turn into dinosaurs. I like. I like that play. Okay.
[00:40:10] Speaker A: Sure.
[00:40:10] Speaker D: I like dinosaurs. They're fun.
[00:40:13] Speaker A: Dinosaurs are great. Who doesn't love the dinosaur? I did see another rat hole here for you guys because apparently we're just.
[00:40:19] Speaker C: Yeah.
[00:40:20] Speaker A: Someone. Someone took Jurassic park and then made the raptors have feathers. Like so they're.
[00:40:25] Speaker D: Oh no.
[00:40:26] Speaker A: And you know, it's just as terrifying with feathers. I thought it'd be dumb, but the feathers are definitely concerning.
[00:40:32] Speaker D: I think I'm more terrified of that. I'm kind of afraid to look at.
[00:40:36] Speaker A: I didn't like birds before they put feathers on the dinosaurs. But yeah, now... maybe people said that wouldn't be scary, but I disagree.
[00:40:47] Speaker D: I think.
[00:40:48] Speaker C: I think it's scary. Yeah.
[00:40:50] Speaker A: All right, moving on to Azure and other scary things: more regulation from the EU, which unfortunately might be the only place to get regulations for the next four years. Microsoft, during its recent AI tours, took a chance to meet with EU regulators and politicians to discuss AI and the new European Union AI Act. This is the first comprehensive legal framework for AI; it aims to ensure that AI systems developed and used in the EU are safe, trustworthy, and respect fundamental rights. The act classifies AI systems based on their risk level: unacceptable, high, limited, or minimal to no risk. Microsoft's approach to compliance with the act includes: a proactive approach, preparing by conducting internal reviews, updating internal policies and contracts, and engaging with policymakers directly; focusing on customer support by ensuring that documentation, tools, and guidance all consider the requirements of the act; shared responsibility between AI providers and users, emphasizing the need for compliance; building compliant products, with plans to publish transparency reports and provide documentation for its AI systems to help customers understand their capabilities and limitations; and collaborating with policymakers, regulators, and industry groups to shape the implementation of the act, ensure effective and interoperable practices, and make sure it doesn't get worse for them. Perfect.
[00:41:58] Speaker C: Yeah, I both love and hate this. I feel like we have no idea what we're doing yet and we're trying to regulate it, and that seems like it's going to be a problem, because there's so much in here that's just plans to do a thing: they're going to put the frameworks together, but none of it exists. And we all know how fast compliance can grow and adapt to a changing technology ecosystem, because our day jobs are super fun at times.
[00:42:27] Speaker D: Very quickly, I think, is what you're going with.
[00:42:29] Speaker C: Yes, super quickly. So I like that Microsoft is sort of taking this on on behalf of their Azure customers, and I'm hoping that it's more than just a checkbox or something throwaway in the AI console in Azure. But I'm worried about it, because I don't know if it has any...
[00:42:53] Speaker D: Well, what's nice about it is that it's very broad categories right now. They're still trying to figure out, the way everyone is, how to properly use, validate, and trust the results from all these systems, so they're going with good, bad, eh kinds of categories, or we-have-no-idea.
And that kind of feels like a good starting point versus like getting very granular on answers and requirements and everything else.
So that kind of general stroke, I feel like, isn't a bad first approach, until they can, in a year or two, fine-tune this down into something that makes more sense.
[00:43:37] Speaker A: I mean, I kind of wish they would try to learn what it is before they regulate it. I feel like they don't always know what they're doing quite yet. I have more confidence in the EU to do it at this moment, but it does feel a little like, okay, do you really know what you're regulating and do you understand it? And that part, I don't know.
[00:43:57] Speaker C: And because it's so early in the AI journey, and they are defining very broad strokes, what it's going to do is allow for a lot of interpretation, which is both good and bad when it comes to compliance. On one hand, it gives you the room to operate and say, this is our compliance story, and build to that narrative. On the other hand, it does allow some external auditors and compliance and risk officers to say there's only one implementation, and it's this. So I kind of see both sides of that.
It's rough, but I'm hoping that this is at least good guidance in a positive direction as these things become more concrete.
[00:44:40] Speaker A: Yeah. Well, Microsoft, apparently while they were in the EU, also decided that if you can't beat them, you should join them, and they're joining CISPE, the association of independent European cloud providers (Cloud Infrastructure Services Providers in Europe). They negotiated a settlement with the group last year over alleged anti-competitive software practices. Of course, not all members of the group were happy with the move, namely Amazon Web Services, who opposed Microsoft joining but was outvoted by the board. Google also attempted to join last year, offering lots of money, bribes basically, to get on the board and continue to beat on Microsoft for their licensing violations, but they ended up pivoting and joining the Open Cloud Coalition instead. So apparently the pressure from the EU on Microsoft may be reduced for software licensing, which is kind of a loss for all of us, but not for Microsoft.
[00:45:32] Speaker C: Yeah, it just seems kind of... they had all that settlement, and then they're joining this, and it furthers it. I don't know if it's a good model.
[00:45:43] Speaker A: Right.
[00:45:44] Speaker C: It's like lobbying and bribery just out in the open.
[00:45:49] Speaker D: Yeah, we're going to sue you and tell you that this isn't good. Oh, and by the way, here's an open seat, come join us, tell us what to do.
[00:45:56] Speaker A: They only paid $30 million. I mean, that's a pretty cheap seat compared to what they're probably going to make in licensing.
[00:46:02] Speaker D: I mean, the other interesting thing is, at least on the Azure cloud side, they are kind of getting away from some of the licensing stuff. Like with Hyperscale, you're no longer paying for SQL licensing; they've removed that from the fee associated with it. I mean, they've increased some other stuff in order to compensate, so they still make their money, but it's no longer listed as SQL licensing costs.
So I don't know if this is their first test of it, or if they're going to continue to move down that route with other services, but it'd be nice if they did.
[00:46:37] Speaker C: So as part of the CISPE thing, they launched the European Cloud Observatory, which was supposed to oversee the technical requirements, and the first progress report was due at the end of January.
But it was observed that Microsoft brought the CISPE members to Redmond last month, and on the surface at least, the three-day event seemed to be more about wine tasting than serious technical conversations, per the article on The Register that's in our show notes. So I think this is exactly as deep as it looks on the surface, which is... yeah, not great.
[00:47:10] Speaker A: Well, AWS maybe should go join Google in the other coalition. Maybe that's the retort: screw you, Microsoft and CISPE. I don't know.
[00:47:18] Speaker C: Maybe. I mean it's so weird.
[00:47:22] Speaker A: All right, let's cover our last story, which is Microsoft's response to Stargate, which we talked about at the top of the show. So we're bookending the show with Stargate. Microsoft is thrilled to continue their strategic partnership with OpenAI and to partner on Stargate; the announcement is complementary to what the two companies have been working on since they got together in 2019. Feels a little like they've got to save some face. The key elements of the partnership remain in place, and the contract runs through 2030, with access to OpenAI's IP, revenue sharing arrangements, and exclusivity on OpenAI's APIs all continuing going forward. This means that Microsoft has rights to OpenAI IP, inclusive of the model and infrastructure, for use within their products like Copilot, which means their customers have access to the best model for their needs. The OpenAI API is exclusive to Azure, runs on Azure, and is also available through the Azure OpenAI Service, meaning customers benefit from having access to leading models on Microsoft platforms and direct from OpenAI. Microsoft and OpenAI have revenue sharing agreements that flow both ways, ensuring that both companies benefit from increased use of new and existing models. And Microsoft remains a major investor in OpenAI, providing funding and capacity to support their advancements, and in turn benefiting from their growth and valuation. OpenAI also recently made a new large Azure commitment that will continue to support all OpenAI products as well as training. This new agreement also includes changes to the exclusivity on new capacity, moving to a model where Microsoft has the right of first refusal; to further support OpenAI, Microsoft has approved OpenAI's ability to build additional capacity, primarily for research and training of models, on that other thing.
So yeah, this one feels a little save-face-y. Like, yeah, we didn't really want to invest a lot more money, and now Oracle is throwing a lot of money at them, so we'll let them build another OpenAI research and development center to do their AGI toy thing, while we keep taking all the money from OpenAI. That's what it sort of feels like to me. I don't know, what do you guys think?
[00:49:08] Speaker C: Definitely seems sketch. My thought was that they announced Stargate, and then Microsoft was like, what? And then Microsoft is coming in trying to be a part of it, reinforcing that their relationship with OpenAI is still there and still beneficial for both companies, and that it's not that OpenAI is just going with some other data center provider. But that's my read.
[00:49:32] Speaker D: It's interesting also that Microsoft had to approve OpenAI's ability to even be a part of that, at least per the last sentence I read. So I guess in the original agreement maybe there was some...
[00:49:44] Speaker A: Control that there's probably, there's probably restrictions. They had to do all their training and stuff on top of OpenAI and yeah, but again like the problem of Microsoft can't get all the Nvidia CPUs, Oracle can't get them all, Amazon can't get them all. And so if you're really trying to build AGI and really trying to build the next generation, like the amount of compute capacity you need is far stripping one cloud provider's ability to provide that. So I sort of find it funny that Stargate's supposed to solve that problem by building another data center with more Nvidia CPUs. Like, it's like, how does that solve it? I, I think it's a bigger problem than they're really admitting it.
[00:50:18] Speaker C: I don't know, maybe I'm just overly cynical today, because I'm poking holes in everything and seeing government conspiracies; I'll put my tinfoil hat on. But it really did feel like the OpenAI announcement was very much touting the America side of the launch, and the details were scarce. It wasn't about manufacturing, it was just data centers, which feels like maybe political posturing. I don't know.
[00:50:47] Speaker A: It's very possible.
Would not be surprised.
All right, well that is it for another fantastic week here on the cloud. Gentlemen, we'll see you next week.
[00:50:59] Speaker D: See you then.
[00:51:00] Speaker C: Bye everybody.
[00:51:04] Speaker B: And that's all for this week in Cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services.
While you're at it, head over to our website, where you can subscribe to our newsletter, join our Slack community, send us your feedback and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.