279: The Cloud Pod Glows With Excitement Over Google Nuclear Deal

Episode 279 | October 23, 2024 | 00:54:48

Show Notes

Welcome to episode 279 of The Cloud Pod, where the forecast is always cloudy! This week Justin, Jonathan, and Matthew are your guides through the cloud. We’re talking about everything from BigQuery to Google nuclear power, and everything in between!

Titles we almost went with this week:

A big thanks to this week’s sponsor:

We’re sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You’ve come to the right place! Send us an email or hit us up on our Slack channel for more info.

Follow Up

00:46 OpenAI’s Newest Possible Threat: Ex-CTO Murati

2:00 Jonathan – “I kind of wonder what these other startups will bring that’s different than what OpenAI is doing, or Anthropic, or anybody else. I mean, they’re all going to be taking the same training data sets, because that’s what’s available. It’s not like they’re going to invent some data from somewhere else and have an edge. I mean, I guess they could do different things like be mindful about licensing.”

General News

4:41 Introducing New 48vCPU and 60vCPU Optimized Premium Droplets on DigitalOcean

6:02 Justin – “I’ve been watching The Cloud Pod hosting bill slowly creep up over the years as we get more and more data into S3, and we have logs that we store and things like that for the website. And I have other websites that I host there too. It originally started on DigitalOcean, and it was a very flat rate for the VM that I needed. You start sort of thinking, maybe Amazon is great for this use case.”

AWS

19:31 Cross-zone enabled Network Load Balancer now supports zonal shift and zonal autoshift

19:57 Justin – “I’d like to just do that off my health checks, not off AWS telling them, but I appreciate the effort, because when you do run into these types of AZ-specific issues, they can be a bit of a pain to identify quickly. If Amazon can identify they have a problem and route your traffic for you, that is a great upgrade.”

21:23 Announcing Amazon MemoryDB for Valkey

Announcing Amazon ElastiCache for Valkey

22:54 Matthew – “10 terabytes a month on the free tier is a ton, too. Like, I know a lot of apps that use Redis that honestly probably don’t even hit that in a production workload. So this is great. And I think I’m just more mad that when Redis forked or changed license, they were like, ‘Azure, stay with us.’ And now I’m just mad at everyone with all these improvements.”
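For a sense of scale, here's a back-of-the-envelope sketch of the MemoryDB for Valkey write pricing discussed in this segment (10 TB of writes free per month, then roughly $0.04/GB). These rates come from the episode; actual pricing varies by region and over time, so treat the numbers as illustrative only:

```python
# Back-of-the-envelope MemoryDB for Valkey write-cost sketch.
# Rates are the ones quoted in the episode (10 TB/month free tier,
# ~$0.04/GB beyond it); real pricing varies by region and over time.

FREE_TIER_GB = 10 * 1024   # 10 TB of free writes per month, in GB
RATE_PER_GB = 0.04         # ~4 cents per GB written beyond the free tier

def monthly_write_cost(gb_written: float) -> float:
    """Estimated monthly charge (USD) for data written to MemoryDB."""
    billable = max(0.0, gb_written - FREE_TIER_GB)
    return round(billable * RATE_PER_GB, 2)

# A workload writing 2 TB/month never leaves the free tier:
print(monthly_write_cost(2 * 1024))    # 0.0
# A heavy workload writing 15 TB/month pays only for the last 5 TB:
print(monthly_write_cost(15 * 1024))   # 204.8
```

This illustrates Matthew's point: plenty of production Redis/Valkey workloads never write 10 TB a month, so their write charges round to zero.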

24:16 Access organization-wide views of agreements and spend in AWS Marketplace

24:40 Justin – “…this is actually an interesting challenge, because if you’re buying your cloud solutions, you typically have a reseller or you’re going direct with AWS. And in the event that you’re doing marketplace, it’s just part of your cloud spend. And so you can commit a lot of money through marketplace without going through proper procurement cycles and without proper governance. So by giving this a consistent single dashboard now, you can hopefully start keeping track of where things are being spent.”

26:10 Mountpoint for Amazon S3 CSI driver introduces new access controls for individual Kubernetes pods 

26:51 Jonathan – “I thought pods had the ability to have their own roles that they can assume for a long time, so I was surprised that this wasn’t already inherited from that existing functionality.”

27:19 Amazon OpenSearch Serverless introduces a suite of new features and enhancements 

29:09 Justin – “New features are always a bit delayed. Like, they’ll announce it with a blog post, and the blog post is all you get for like two or three weeks. I mean, if you look back next week, I bet there’s updated documentation. So there’s a disconnect between the announcement and the documentation team and when they publish things.”

29:34 Convert AWS console actions to reusable code with AWS Console-to-Code, now generally available

31:07 Matthew – “Well, the problem with CDK was, especially – granted this was years ago – you tried to do anything too fancy with it and it just kind of tried to do too many things and then CloudFormation would barf…I’m sure it’s exponentially better now, like five years later, or it might be more than that at this point. I don’t really want to do that math.”

GCP

31:58 New nuclear clean energy agreement with Kairos Power

35:04 Matthew – “I’m waiting for these cloud providers to vertically aggregate now and become power companies for their own things and their own like little generators now they have five little nuclear sites on each data center and that’s their power. And they’re essentially off grid except for the internet.”

37:46 Google DeepMind’s Demis Hassabis & John Jumper awarded Nobel Prize in Chemistry

40:02 The new Global Signal Exchange will help fight scams and fraud

41:05 Matthew – “This is great. You know, the amount of people I know that have been scammed from one thing or another… One of my friend’s grandparents got scammed a few weeks ago and messaged me to help, and I’m like, there’s not much you can do. Hopefully we can solve this, and hopefully the world becomes a better place.”

Database Center — your AI-powered, unified fleet management solution 

43:51 BigQuery tables for Apache Iceberg: optimized storage for the open lakehouse

45:17 Justin – “So one of my secret tricks to figuring out AWS predictions is go look at all the Apache projects that have gotten popular in the last six months. So I’m giving away trade secrets here, that is, yeah, there’s a lot of Apache projects. There’s a lot of Open Cloud Foundation projects. There’s a bunch of things, and those are all definitely ripe for opportunities.”

46:58 Gain control of your Google Cloud costs: Introducing the Cost Attribution Solution

49:46 Justin – “Your pipeline has to be using the gcloud beta terraform vet command to do this, basically, to do your policy validation. And so there are some pretty easy ways to bypass that for the Terraform code, so I would like the other option as well, to basically validate post-creation, which they kind of say they have on the reactive side with the alerting. But yeah, it’s still better. And if you are doing a lot of Terraform work on Google, you’re probably looking at this Terraform feature anyways, because it’s pretty powerful. They’re providing basically a Terraform Cloud implementation for Google that you don’t have to pay for, which is a plus.”
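The pipeline flow Justin describes can be sketched roughly like this. `gcloud beta terraform vet` is the real command, but the file names and the policy-library path below are placeholders; check the current Google Cloud docs for exact flags before relying on this:

```shell
# Produce a Terraform plan and export it as JSON for policy validation.
terraform plan -out=tfplan.binary
terraform show -json tfplan.binary > tfplan.json

# Validate the plan against your org's policy library
# (./policy-library is a placeholder path).
gcloud beta terraform vet tfplan.json \
  --policy-library=./policy-library \
  --format=json

# Only apply if validation passed; a non-zero exit code from the
# vet command above indicates a policy violation.
terraform apply tfplan.binary
```

As the quote notes, this is preventive only: anything applied outside this pipeline bypasses the check, which is why the reactive alerting side matters too.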

Azure

51:31 Code referencing now generally available in GitHub Copilot and with Microsoft Azure AI 

49:46 Jonathan – “Well, AI generated content still isn’t copyrightable, so I’d be surprised if anyone actually admits that something was written by AI.”

Closing

And that is the week in the cloud! Visit our website, the home of The Cloud Pod, where you can join our newsletter or Slack team, send feedback, or ask questions at theCloudPod.net, or tweet at us with hashtag #theCloudPod


Episode Transcript

[00:00:07] Speaker A: Welcome to the cloud pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP and Azure. [00:00:14] Speaker B: We are your hosts, Justin, Jonathan, Ryan, and Matthew. Episode 279 recorded for the week of October 15, 2024. The cloud pod glows with excitement over Google's nuclear deal. Good evening, Jonathan and Matt. How you doing? [00:00:31] Speaker A: Good. Good to see you back. [00:00:32] Speaker C: Good, how are you? [00:00:33] Speaker B: Not bad. I was listening to you guys' episode from last week that you got out, so congratulations. [00:00:38] Speaker C: Do we sound like ranting lunatics? [00:00:40] Speaker B: No, you sound good. Like I kind of saw you guys do where you guys kind of rotate through the host thing. It's kind of nice, kind of a little envious of it at times, but it's okay. It's all good. But yeah, no, you guys are together. I haven't finished it, but about halfway through. All right, well, we've got a bunch of news to cover and so let's get to it. OpenAI's newest possible threat is ex-CTO Mira Murati, which we talked about about three weeks ago before I went on vacation, when we talked about the fact that she was leaving, most likely to probably start a new startup or new AI ventures. Rumors have been running wild since her last day, October 4, with several people reporting that there has been a lot of churn happening inside of OpenAI. Murati may join former OpenAI VP Barret Zoph at his new startup. And apparently it may be easy to steal some people, as the research organization at OpenAI is reportedly in upheaval after Liam Fedus's promotion to lead the post-training business. In addition, Ilya Sutskever (I don't know how it's said), an OpenAI co-founder and former chief scientist who resigned earlier, also has a new startup which may be interested in Murati as well, or she can do something completely on her own. So speculation runs wild, just as we said it would.
So we thought, keep you posted on the, as the World turns AI, then. [00:02:04] Speaker A: We'Re going to run out of people working for these companies. It's just going to dilute the talent to the point where none of them are successful. [00:02:08] Speaker B: They're going to rotate through the companies all the time, just jacking their salaries up more and more and more. [00:02:14] Speaker A: Yeah, I kind of wonder what, what will these other startups bring that's different than what OpenAI doing or anthropic or anybody else? I mean, they're all going to be taking the same training data sets because that's what's available. It's not like they're going to invent some data from somewhere else and have an edge. I mean, I guess they could do different things like be mindful about licensing content, but then that just increases their cost versus people who are stealing it. So I don't know, I'm not quite sure what, what the value add's going to be for one company over another in the end. [00:02:47] Speaker B: Well, I assume that exclusivity is going to become a big factor. Like, you know, was it Gemini or somebody who signed an exclusive contract with the New York Times to use their past archives of the newspaper? And so there's going to be a point where like certain specialization of content in certain models may be necessary for your business. So if you maybe not public news like New York Times, but I could see like, oh, we got access to all the chemistry journals posted published for the last 20 years and we're the only model that has access to them because we have an exclusivity agreement. I can see where that might be more interesting from some of that specialization, but it'll be differentiation is going to come from price or it's going to come from where the foundational data came from. [00:03:32] Speaker C: I think from there you're going to start to see in tools, you're going to see different models being used in different locations. 
So you might start with running it through one model and take the output from one model into another model and tweak stuff in that way. Therefore you got to get the best outputs from everything. [00:03:49] Speaker A: Yeah, I saw this week there's actually, I'm not sure if it's open source or it's a commercial tool, which was like the LLM router, which if you put the query in, it figures out which model it should send it to, either for price or performance or expertise in a particular thing, and then it sends you the result back. So it's like a load balancer for LLMs, but with using context to decide where to send it to, which is kind of cool. [00:04:16] Speaker B: Yeah, I was in the solution where you basically send the query, the prompt to two different AI's, and then basically there's a way you can combine the results back together. That way you can help avoid hallucinations. That's one of the techniques you can use for that. And so I can see some of those becoming aggregators, AI aggregators, becoming kind of a big deal to be able to combine multiple foundational models together. [00:04:41] Speaker A: It's almost like businesses where they hire experts in different things to work on different tasks. Yeah. [00:04:51] Speaker B: All right, well, Digitalocean this week announced new 48 vCPU and 60 vCPu optimized premium droplets on digitalocean. Those raindrops are getting pretty heavy at the size, the memory and storage optimized premium droplets and the general purpose and cpu optimized premium droplets are great for use cases like S four HANA leveraging the big data solutions or potentially even using it for some type of training or vector type databases to support your rag augmentation needs. These are all Linux based virtual machines. The premiums optimized for dedicated cpu instances with access to the full hyper thread as well as ten gigs of outbound data. And with the 48 VCP box you get up to 384 gigs of memory. 
And the 60 VcPU box you get up to 160GB with lots of different options. I was poking at the digitalocean site. They've added a lot of VMS types in the last couple years I've been missing out on. But yeah, definitely continuing to be a very interesting fifth cloud option at this point. [00:05:52] Speaker A: Yeah, I should put some effort into using them. I used them years ago when I, I think I was doing some web development work and I used one of their droplets for something small, but I haven't really touched them since then after getting into AWS. But now I'm paying the bill again. Maybe I should switch back to digitalocean. [00:06:09] Speaker B: I've been watching the cloud pod hosting bill slowly creep up over the years as we get more and more data into s three. And we have logs that we store and things like that for the website. And I have other websites that I host there too. You know like it originally started on digitalocean and it was a very flat rate for that vm that I need. And uh starts to start. You're so thinking like hmm, maybe, maybe Amazon, that's great for this use case because uh, you know back when I was an Amazon customer I could get credits, you know from our sales reps regularly. But I, I don't currently have an Amazon rep who I can beg for some uh free credits or I don't even know if they're even giving those away like they used to. These give them my like candy at events and conferences where you could just pick up like four or five of them for a couple hundred dollars and all the stuff spot instances anyways and all back behind cloud flare so even the spot instance goes down. The website still works, but yeah it's being expensive so maybe it's time. Or maybe it's time for a GCP migration since that's my day job cloud. Maybe I can nagle some deals. [00:07:10] Speaker C: I mean, I think you're going to see it for like the small and medium sized businesses. Like it will work. 
A simple website, simple, simple web apps, whatever it is, just will work there without. You said all the extra overhead. [00:07:24] Speaker A: So what about WordPress? Are we going to, since our sites on WordPress, are we safe? [00:07:29] Speaker B: So we are not using WP engine. So I think we're okay. But I definitely, if there's a successful fork of WordPress at this moment, I would seriously consider making a pivot. But the other thing is, there's a bunch of CMS. CMS market's gotten super crowded. There's a ton of headless CMS's out there. There's a ton of platforms for that stuff. There's a lot of static website generators. The challenge for me is that I don't want to be that picky about my RSS feeds for the podcast. So there's just plugins that just do that for us so we don't have to think about. And I had to rewrite all that code. I don't really want to do that. And I know none of you three have any of the time to do it. So unless one of you is like, I'm going to redesign the website from scratch, I'll be like, have at it. But for now, I'm just going to take it with what's easy and we can just focus on making the podcast. How's that? [00:08:22] Speaker C: I don't think you want us to want me to do any UI or building anything like that. I would add poorly for everyone involved. [00:08:28] Speaker A: I built the UI. I built the UI with AI recently. It's part of my Jonathan Ditter thing. After we talked about let's pick a project and use the technology we've never worked with before. I built a react app. Don't ask me to do anything that looks good. Just trying to get the boxes to line up and be the right size. It's just a real chore. But then my rule for myself was, I'm not going to touch any of the code myself. Everything is going to be driven by words into the language model. Like, can you move this left a bit? Can you help me center this? And I got to, I mean, it's pretty complex web app that I built. 
I basically built kind of a front end for LM studio's model API because I wanted to have things like projects like Claude does or canvas now like OpenAI does. I was like, I can build this if I can use AI to build the front end that does all these same things. That'd be fantastic for Johnson to do the thing. And I kind of got to probably about maybe 1500 lines of code in total. And after that it's just like whack them all, fixing bugs, like, can you fix this thing? And it's like, well, let's just redesign this. And then it's like the cascading changes you have to make everywhere, it's just a nightmare. So I think I kind of reached the limit now. I'll admit that it wasn't structured well because I started off with something simple and then expanded on it. So I think if I took the concept I have now and thought better about it, broke it down into components, now I understand a bit more about react, I may have a better outcome. But yeah, it's really interesting exercise and I'm really pleased with how well it's come out. [00:10:06] Speaker B: So you're basically saying your AI project created tech debt because of your inexperience with react, but yet we're supposed to replacing junior developers with AI. Interesting. [00:10:15] Speaker A: Well, I think I agree, but I think a senior developer needs to give junior developers instructions and they need to understand best practices and good patterns. And I didn't start with that. I was like the junior developer trying to figure it out by myself. [00:10:33] Speaker B: Yeah, that's fair. I actually, I know you've been a big advocate at Claude, and so I actually do a talk later this week on finops for AI that I somehow volunteered myself to do foolishly before I went on vacation. And so I was using Claude and chat, GPT and Gemini to just kind of help like build some of the structure for the deck. And like, I kind of conceptually knew like what I wanted to talk about. I knew the right pieces. 
But I was really impressed with Claude because when you say I'm trying to create a slide deck, you know, and you give it the prompts, like I'm doing a presentation for many people and this amount of time, and here's the topic and here's, here's the key things I want to cover in the thing. And it's like, here's a, it basically creates a slide layout. And I'm like, cool, that's awesome. I'm like, can you build this slides for me? And it literally created it in react right there real time and basically created me a react based Java application that I could really use for my slide deck. Now the slides look terrible. I would not use them, but it was a nice way to start out and gave me some inputs I was like, oh, that's a good point. I should definitely talk about that too. In my talk, it was a fun project and I was very impressed with Claude. I would say chat GPT, which through Microsoft Office 365, through copilot is pretty terrible, this kind of thing, because this is where you would think of all the features you could ask from Microsoft. For Office 365 copilot, being able to create PowerPoint slides would be the most common one, and it's the worst at it. Of all three, of all of the ones I've tried out on this particular thing, it does good design recommendations. One thing I do like is you put your content on the slide and then it gives you design recommendations of how to lay the content out. That's pretty good. But you realize right quickly that that's not actually AI, it's just pattern matching. Oh, you have bullets on a text. Like bullets good in these four different formats, and you're like, oh yeah, that one looks good for this content. But yeah, it's like this week on this project, I was just like, this whole concept of the AI is going to take all of our jobs. It might eventually. I don't think that's anytime soon because I think AI is just a convenient excuse for people to lay people off right now. 
And I don't think it's as real as people think it is because what I see, and even doing the react thing with the slide deck, I decided after I saw it, I was like, oh, I want to add some content on this. And then it regenerated those two slides. I'm like, okay, well, now take the previous slides and those slides and regenerate together. And it's like, thanks for getting react errors and troubleshooting. The same thing you do in the troubleshooting all the problems. And I don't know enough about react as well to make that good. So yeah, there's definitely, the more you muck with it, trying to make it to tweak it, the quicker it falls apart. [00:13:17] Speaker A: Yeah, I feel like I'm in a good place with AI because I don't think it will replace what I do. And it does a good job of doing what I ask it to because I know how to ask it. And it does like going up the other way into the project. It's great at building documentation. I've had it write readmes, I've had it write summaries of the code so that I could pass it to somebody else. Literally like this is my code base. Write a summary of everything, all the functions what they do, how they work, why we made these decisions. And give me a document that I can give to my friend Jimmy so that he can get a head start on the project when he takes over from you, basically. And it's built some fantastic documentation and it's something that I would have sucked at or never gone around to, quite honestly. But I think starting with a really well formed idea of the project, which was started with a narrative that we wrote, it's broken the work down into epics. It's written stories for me. It's pointed stories, which in general I agree with. So I think I find it very useful. I don't think it replaced me just yet, so I'm good. [00:14:33] Speaker B: Next week I'll talk about notebook LM, which is another tool I learned through this process. And the cooling about that idea is that you take all this content, you put it together. 
I want these websites as references and I want these PDF's. And I want to do then I want to basically have a chat conversation with that content as being anchoring. Basically it's a rag implementation of it for end user. And I'll talk about that next week because I have some work to do still to get that ready. But definitely there's some really cool stuff you can do for sure. And I think it's just a matter of time until 1015 years from now. Yeah, I think a lot of jobs would be at risk for AI. I'll hopefully be retiring by then and I won't care. It'll be unfortunate for my children. But I think it's just a. It's too early to blame all these layoffs on AI comment unless you work. [00:15:24] Speaker A: In a call center. Cause they need to be screwed. [00:15:27] Speaker B: Yeah, but those. Those jobs are already at risk. Those are the jobs you outsource to India or to other third world countries. Like, you know, now we're just gonna, you know, that's not. No longer cheap enough, now we're gonna outsource those to AI's. And that's kind of a bad experience too. Like, have you ever ran one of those Wendy's AI drive thrus? I'd rather you pipe my audio back to India, to a call center. Someone in India take the order remotely, than use that AI thing ever again. [00:15:54] Speaker A: Yeah, I mean, the irony is that I suspect that AI may cost more than some workers and some outsourced workers. [00:16:03] Speaker B: Yeah, well, it's interesting because in this research I was doing for this presentation, and I know we're digressing here, but I was looking at some stats from Gartner on AI finops and AI cost management. And they were saying, when you think about the cost of the model, actually 70% of the cost is an inference, which is for the lifecycle of the model. However long models we live, if it's trained weekly or nightly or whatever timeframe within the amount you're using it, 70% of your costs actually be on the inference side. 
That means that for these large implementations of AI solutions, majority of the cost is actually going to switch from training, which is where a lot of the costs have been driven from initially to operational side. And that's the reason why I now understand some of the things I'm seeing in the cloud providers about a lot of managed services, a lot of auto scaling, and things that don't make a lot of sense when you're thinking about training, but make a lot of sense when you're thinking about inference, it's starting to make some sense to me in different ways. [00:17:02] Speaker A: Yeah, yeah, I agree. It's using far too complex a system and using far too much energy to do effectively what is very simple tasks in many cases, which could be, but then I think the technology will change. I think somebody will come along with chips that are far better than the Nvidia. [00:17:18] Speaker B: Well, the models will change. I mean, like, if you look at bitcoin, right, and being able to do algorithmic verification of proofs, as your model was incredibly expensive for GPU usage and required a massive amount of power to generate basically fake money, that was then valuable. And then we've seen other blockchain technologies now move to not requiring massive amounts of mathematics to do proofing. [00:17:47] Speaker A: Just need money. Proof of stake. [00:17:49] Speaker B: Yeah, you just need money and get scammed that way. But the reality is that the original way it was implemented used a lot of power and was very hungry, and people did it for a long time. But now everything's moving to this proofless model, and I think the same thing will start to happen at EAI, is that people are getting the lessons of this. They're understanding how it's working. Eventually they'll start figuring out ways they can do this without the heavy GPU's or without using or being able to put more capacity into the GPU's. So you need less of them. 
And I imagine over time it gets way more cost effective as the technology improves and our understanding of how to build these things gets better. And I'm sure someone will come out with a different training model that'll be cheaper at some point, because even the transformer, which is what most of these training models are built on, is, what, ten years old now? Fifteen? It's not young. And so again, you see all the flaws with the transformer model. I assume that there's R and D happening at DeepMind and other places where they're looking at different ways to do training and tuning. That's less expensive. [00:18:48] Speaker A: Yep. [00:18:49] Speaker C: That goes back maybe to the first conversation, where maybe it's not that they're looking for new, you know, ways and, you know, different training data; maybe it's going to be how the actual training is done and redoing that. So it's exponentially cheaper and therefore they can pass those savings on. [00:19:07] Speaker B: Yeah, and that's very possible as a differentiator for AI. That's a good point, again, like, is that something a startup can do? You know, I guess if you have the right scientists who, you know, have the idea how to do it, maybe. But I think it takes bigger R and D teams like Google DeepMind to figure that stuff out, and they're willing to lose millions of dollars trying to make it work because it has a huge upside for them. All right, AWS cross-zone enabled Network Load Balancers now support zonal shift and zonal autoshift. This allows your load balancers enabled across zones to quickly shift traffic away from an impaired availability zone and recover from events such as bad application deployments and gray failures. Zonal autoshift safely and automatically shifts your traffic away from the AZ when AWS identifies potential impact to it. So I just do that off my health checks, not off AWS telling them.
But I appreciate the effort because when you do run into these types of AZ-specific issues, they can be a bit of a pain to identify quickly. And if Amazon can identify they have a problem and route your traffic for you, that is a great upgrade. [00:20:10] Speaker C: So this came out of us-east-1 having enough failures that they wanted to fix themselves, so they had to offer this as a service now. So it just was solved for you. [00:20:19] Speaker B: Yeah, basically. [00:20:21] Speaker A: I really like it actually, because for me I've always had this problem trying to figure out, well, if I've got an app in multiple zones but s three goes down in one place, or pick a different service, it's probably not s three. When do I failover? My app now has to be aware of all my downstream services and I have to decide, do I failover? Do I not failover? But I think doing it at the NLB level and having a system that can say, yes, you've got this dependent service in this zone, we'll shift the traffic for you, just makes the whole thing so much more seamless. It means a person doesn't need to be in the loop to decide whether or not the outage or the impact to service is severe enough to warrant doing it. I like the easy button of turning off an AZ which might be unreliable. [00:21:13] Speaker B: Agreed. Well, Amazon is announcing two Valkey features. First is Amazon memory DB for Valkey, and second is Amazon elasticache for Valkey. Amazon MemoryDB and elasticache will provide up to 30% to 33% lower cost than memory DB and elasticache for redis open source. Ironically, I did save you 50% by combining these articles together into one topic, so I did better than Amazon on this one. In addition, they give you a nice free tier where memory DB now isn't charged for the first ten terabytes of data written per month, and then any data over ten terabytes a month is billed at basically four cents a gig, which is 80% lower than the memory DB for Redis open source.
The elasticache serverless is 33% lower, and node based pricing is 3% lower than for the supported redis engines. And overall, I would call this a win-win for Amazon. They get to recoup a bunch of the margin they were giving to Redis and give you some of those savings as well. And so you think you won because you're getting a savings, and Amazon's laughing all the way to the bank saying, we're making now three to four points, probably, more than they were before, and we're still giving you a 20% or 30% discount. So nice job, Amazon. Win-win. [00:22:23] Speaker A: Yeah, it's great. I think we talked about GCP getting there first. I think. I think last week they announced their Valkey service, which I think is actually not GA yet, but it's Google, so that's kind of normal for them. But massive performance increases, not just cost savings, but huge performance increases as well over redis, which is great, especially speed of failovers and things like that. [00:22:46] Speaker C: Ten terabytes per month on the free tier is a ton too. Yeah, I know a lot of apps that use redis that honestly probably don't even hit that in a production workload. So this is great. And I think I'm just more mad that when Redis forked or changed the license, they were like, Azure, stay with us. And now I'm just mad at everyone with all these improvements. [00:23:11] Speaker B: I thought Google also had a pretty good partnership with Redis as well, so they're also undercutting some of their market too. But it's interesting because I think as we've seen each of these open source projects going closed source against the hypercloud, the hyperscalers basically all took these and forked them. They've all then prioritized things they've been trying to get into the code for a long time, like faster failover, faster startups, all these things that the core, like elasticsearch, didn't want to do. And because it wasn't valuable, it doesn't make revenue for them.
And these top providers are like, well, we're going to fork it. We're going to take advantage of that fork. We're going to do a bunch of things that benefit our scalability, the things that matter to us, and then turn it over to these foundations after those fixes are already done. Smooth move by the cloud providers. [00:23:56] Speaker C: Rub it in. That's all I have to say. [00:23:58] Speaker B: Unless you're on Azure, and then you're just getting screwed all over the place. You can now access organization-wide views of agreements and spend in AWS Marketplace. This is now GA for the new Procurement Insights dashboard, helping you manage your organization's renewals and optimize your AWS Marketplace spend. The new dashboard gives you detailed visibility into your organization's Marketplace agreements and associated spend across your AWS accounts and across your various organizations. Then you can do showback and chargeback models, et cetera. This is actually an interesting challenge, because if you're buying your cloud solutions, you typically have a reseller or you're going direct with AWS. In the event that you're doing Marketplace, it's part of your cloud spend, and you can commit a lot of money through Marketplace without going through proper procurement cycles or proper governance. And so by giving this a consistent single dashboard, you can now hopefully start keeping track of where things are being spent and what's being added through Marketplace, and hopefully avoid some nasty surprises from your teams just going into Marketplace and buying things willy-nilly without you knowing about it. And hopefully this is something the cloud providers are thinking about: in this very cost-conscious world, getting proper approvals for things like million-dollar spends is super important. [00:25:14] Speaker A: Yeah, I'm surprised they don't have workflows built around things like that. Just like you can put spending caps in for compute.
Why don't they have spending caps for marketplace purchases or commitments? [00:25:25] Speaker B: Yeah, there is a plugin for Coupa for AWS Marketplace, but they announced it with a bunch of fanfare, then nothing really happened with it, and they haven't really extended it to other clouds. I don't think anyone actually uses it. The idea was that you're at least putting the Coupa approval flow into your Marketplace spend, but I think that's limited: it's a Coupa partnership versus making something that everybody can use, because unless you're a Coupa customer you don't get that advantage. [00:25:54] Speaker A: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one or three year AWS savings plan with a commitment as short as 30 days. If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace. [00:26:33] Speaker B: Alright, the Mountpoint for S3 Container Storage Interface (CSI) driver will now support configuring distinct AWS Identity and Access Management roles for individual Kubernetes pods. Built on top of Mountpoint for S3, the CSI driver presents an S3 bucket as a volume accessible by containers in Amazon EKS and self-managed Kubernetes clusters. Prior to this, you would basically have to grant access to the entire EKS node and then set up in your pod how to access the S3 bucket, which is not great from a security perspective if you're trying to run multi-tenant Kubernetes pods with different workloads; that could potentially provide opportunities for data exfiltration.
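Per-pod access like this generally rides on IAM Roles for Service Accounts, where the role's trust policy pins the role to one Kubernetes service account. A sketch of what that trust policy looks like (every identifier here is a made-up placeholder, and the exact wiring for the Mountpoint CSI driver may differ):

```python
import json

# Hypothetical values: substitute your cluster's OIDC provider and the
# namespace/service account the pod mounting the S3 volume runs under.
oidc_provider = "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890"
namespace, service_account = "analytics", "s3-reader"

# Trust policy restricting sts:AssumeRoleWithWebIdentity to one
# service account in one namespace, so only that pod's identity
# can assume the S3-scoped role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Federated": f"arn:aws:iam::123456789012:oidc-provider/{oidc_provider}"
        },
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {"StringEquals": {
            f"{oidc_provider}:sub": f"system:serviceaccount:{namespace}:{service_account}",
            f"{oidc_provider}:aud": "sts.amazonaws.com",
        }},
    }],
}
print(json.dumps(trust_policy, indent=2))
```

The point of the condition block is exactly the multi-tenancy concern above: two pods on the same node can now carry different S3 permissions instead of inheriting whatever the node role has.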
And so this is nice; you're now putting it at the pod level, which is a much better spot for it, honestly. [00:27:14] Speaker A: Yeah, I thought pods have had the ability to have their own roles that they consume for a long time. So I was surprised that this wasn't already kind of inherited from that existing functionality. [00:27:29] Speaker B: Minimum viable product, man. [00:27:30] Speaker A: Yep. [00:27:31] Speaker B: Name of the game at Amazon. Yeah, it was already there. They could have done it, they just didn't. [00:27:36] Speaker C: Somebody with deep pockets probably asked for it. [00:27:38] Speaker B: Yep. Amazon OpenSearch Serverless has several new features this week. There's a new flat object type, which allows for more efficient storage and searching of nested data. There's new support for enhanced geospatial features, giving users the ability to uncover valuable insights from location data. Expanded field types, including support for unsigned long and doc count mappers, and a multi-terms aggregation feature enable you to perform complex aggregations and get deeper insights into your data faster. Furthermore, OpenSearch Serverless has seen a significant reduction in indexing latencies and faster ascending and descending search sorts, improving efficiency and performance overall. If you're using full OpenSearch, I definitely think serverless is good for many, many use cases. [00:28:23] Speaker C: Yeah, these seem like all good quality-of-life improvements to hopefully help people migrate to the serverless models, and also. [00:28:30] Speaker B: Things the cloud provider really wanted, probably. Things like enhanced geospatial features might be important to Amazon, who likes to route trucks that deliver packages to somebody.
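The flat object idea mentioned above essentially unrolls nested JSON into dotted top-level keys so the engine can scan them without recursing. An illustrative sketch of that flattening (not OpenSearch's actual implementation):

```python
def flatten(doc: dict, prefix: str = "") -> dict:
    """Collapse nested dicts into a single level of dotted keys."""
    flat = {}
    for key, value in doc.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            # Recurse once at index time so queries never have to.
            flat.update(flatten(value, prefix=f"{path}."))
        else:
            flat[path] = value
    return flat

event = {"host": {"geo": {"lat": 37.77, "lon": -122.42}, "name": "web-1"}}
print(flatten(event))
# {'host.geo.lat': 37.77, 'host.geo.lon': -122.42, 'host.name': 'web-1'}
```

Storing documents this way trades a one-time transformation at ingest for cheaper lookups on deeply nested fields later.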
So I think again, there's definitely some features and things coming into OpenSearch from Amazon's involvement and their ability to influence the roadmap a bit more than they used to be able to. [00:28:52] Speaker A: I don't have much to say about the features they added, but I will make a little complaint: I went to try and research some of the features, like the flat data structure. I mean, they're just taking nested structures and unraveling them so that they're easier to scan; they don't have to recurse through these complex objects and things. But I wanted to read more about it, and their documentation is out of date. They have not updated the actual OpenSearch Serverless guide with any of this new stuff yet, so it's quite disappointing. [00:29:20] Speaker B: I mean, that sounds very common for Amazon. [00:29:22] Speaker A: I don't know, the documentation used to be some of the best around, but. [00:29:26] Speaker B: Yeah, new features are always a bit delayed. They'll announce it with a blog post, and the blog post is all you get for like two or three weeks. If you look back next week, I bet there's updated documentation, so there's a disconnect between the announcement and the documentation team and when they publish things. [00:29:41] Speaker A: Yep. [00:29:42] Speaker B: All right, our good friend Ian McKay is being sherlocked by Amazon this week, apparently, with the general availability of AWS Console-to-Code, which makes it easy to convert AWS console actions to reusable code. You can use AWS Console-to-Code to record your actions and workflows in the console, such as launching an EC2 instance, and review the AWS CLI commands for your console actions. With just a few more clicks, you can take that output, put it into Q, and generate code for your infrastructure-as-code solutions, including CloudFormation (YAML or JSON) and AWS CDK (TypeScript, Python or Java).
Or, I'm pretty sure you can ask Q to also generate your Terraform, but Amazon doesn't own that one, so they won't mention it. This can be used as a starting point for infrastructure automation and further customized for your production workloads, including in pipelines and more. It does not support all Amazon services, to be clear: early access only supported EC2, but GA has added RDS and VPC as well. There's a simplified experience for managing the prototyping, recording and code generation workflows, plus new preview code and advanced code generation capabilities, all released with general availability. And I had to ask you guys: CloudFormation, JSON or YAML? Does anyone still do JSON? As soon as YAML support came out for CloudFormation, I ran to YAML and went that way, and then moved to Terraform. [00:31:00] Speaker C: I think I dumped it before YAML really came out. [00:31:04] Speaker B: Yeah, I mean, I was learning Terraform at the same time as that was coming out, and so I converted all my JSON once to YAML, which was much nicer than trying to debug JSON code all the time. Then, yeah, Terraform replaced it very quickly after that. [00:31:15] Speaker A: Yeah, I'm more a CDK fan personally, so I write it in Python. [00:31:20] Speaker B: Nice. [00:31:20] Speaker A: And I don't care what it's rendered as at that point. [00:31:24] Speaker C: Well, the problem with CDK, and granted, this was years ago, was you tried to do anything too fancy with it and it just kind of tried to do too many things, and then CloudFormation would barf. I know they've made a lot of improvements since it first went GA, but at the time, when I tried to do the same thing in Terraform, CloudFormation and CDK, just launching a VPC with a couple of subnets and loops, CDK was dying trying to create the CloudFormation under the hood.
I'm sure it's exponentially better now, five years later, or it might be more than that at this point, and I don't really want to do the math. [00:32:08] Speaker B: All right, GCP. Google saw Microsoft announce they're going to restart Three Mile Island a couple weeks back, and they decided to raise their nuclear ambitions by building nine new small modular reactors developed by Kairos Power. This is the first corporate agreement to purchase nuclear energy from multiple small modular reactors, or SMRs. The initial phase of work is intended to bring Kairos Power's first SMR online quickly and safely by 2030, followed by additional reactor deployments through 2035. The deal should enable up to 500 megawatts of 24/7 carbon-free power for US electricity grids and help more communities benefit from clean and affordable nuclear power. The Kairos Power technology uses a molten salt cooling system combined with ceramic pebble-type fuel to efficiently transport heat to a steam turbine to generate power. This passively safe system allows the reactor to operate at low pressure, enabling a simpler, more affordable nuclear design. Using an iterative development approach, Kairos Power will complete multiple successive hardware demonstrations ahead of the first commercial plant. This will enable critical learnings and efficiency improvements and accelerate reactor deployments, as well as greater cost certainty for Google and other customers in the future. Kairos has been at it for a while, having received over the summer a construction permit from the US Nuclear Regulatory Commission for its first power-producing reactors, following the Hermes non-power demonstration reactor in Tennessee. So I'm a little nervous with the whole iterative development approach to nuclear; that went, like, red flag for me. If you iterate wrong, things go real bad real quick in nuclear.
But they've talked about SMRs for a long time, and it sounds like AI is finally going to get these things off the drawing boards and into the ground and actually built. And then we'll be in the new nuclear age, perhaps, you know, power, which. [00:33:56] Speaker C: Will be six years. [00:33:58] Speaker B: Well, I mean, 2030 is pretty fast. We'll see. [00:34:02] Speaker A: Isn't the Department of Energy facility in Tennessee? Is that, I assume, why they tested there, because it was somewhat regulated? [00:34:11] Speaker B: Yeah. [00:34:13] Speaker A: Yeah. That's awesome, though. I mean, clearly the public sentiment has been quite anti-nuclear lately, and I'm not quite sure why that would be. [00:34:24] Speaker B: Because of Fukushima. [00:34:29] Speaker A: Yeah. [00:34:29] Speaker B: That put a pretty negative taste into the world about nuclear power in the old reactor style. And remember, that reactor was built back in the sixties or seventies. Most of Germany's were built around the same time. And so I think there was this kind of, hey, we don't have great solutions for disposing of the nuclear waste, combined with the risk of these aging plants. I think that kind of turned the tide after Fukushima against a lot of these things. But SMRs, while they still have some of the same issues, are so dramatically less risky than the larger fission reactors. [00:35:06] Speaker A: Yeah, I know Bill Gates was working on thorium reactors at one point, which is a completely different kind of thing as well, but it's good to see that it's going to make a comeback. And if we improve the technology in small local deployments for clouds, then maybe public sentiment will change a little bit and we can move back to nuclear on a national scale.
[00:35:28] Speaker C: I'm waiting for these cloud providers to vertically integrate now and become power companies. [00:35:37] Speaker A: Oh yeah. [00:35:37] Speaker C: I mean, for their own things, their own little generators. Now they'll have five small nuclear sites on each data center and that's their power, and they're essentially off-grid except for Internet. [00:35:50] Speaker A: Yeah, I was thinking, I'm pretty sure here in California the utilities aren't, what's the word, deregulated. Where I live, I've got to get my power from one company, because that's the company that has the monopoly. Well, it is like a monopoly, yeah. But in other states and other parts of the world, you can choose to buy your power from a different provider at a different rate, and obviously they figure out the transport of that power to your house. You can negotiate rates and get better deals with other companies. But that's not available in California. The rate you get is the rate you get, and you have not much choice about that. But it would be cool if we do start building these nuclear reactors all around the place. All of a sudden they're like, I want to buy my power from there, please. [00:36:40] Speaker B: It didn't work out so well for Texas. [00:36:42] Speaker A: It did not work out well for Texas, no. [00:36:44] Speaker C: Yep. [00:36:45] Speaker B: But you know, initially it did, because everyone was able to basically negotiate rates and compete with each other, until the winter basically froze them to death, all the rates got jacked up pretty high, and they all suffered there for a bit. But yeah, no, I think an open power market would be nice to have. And that's kind of what you're getting with solar, just at the local level. Where, you know, if you have a city like the one where Jonathan and I live, you build an SMR reactor that powers the city, right?
Like, does the city become its own municipality for power, just like it is for water and some of the other things, instead of being beholden to, you know, the big mega-conglomerate that causes fires here in California? [00:37:27] Speaker A: Yeah, it's the shareholders I have an issue with. And like, you know, call me a communist, but I really think things like public utilities should be public utilities, and electricity is such a fundamental part of modern life. I don't think they should be privately owned companies. I think they should be publicly owned, and either cooperatives or run at cost. [00:37:52] Speaker B: Well, Google DeepMind's Demis Hassabis, the co-founder and CEO, and John Jumper were awarded the Nobel Prize in chemistry last week, or the week before. They were co-awarded the 2024 Nobel Prize in Chemistry for their work developing AlphaFold, a groundbreaking AI system that predicts the 3D structures of proteins from their amino acid sequences. And David Baker was also co-awarded for his work on computational protein design. Before AlphaFold, predicting the structure of a protein was a complex and time-consuming process. The AlphaFold predictions are freely available through the AlphaFold Protein Structure Database, which has given more than 2 million scientists and researchers from 190 countries a powerful tool for making new discoveries, which we will hopefully learn more about in the upcoming years as the research and science continues to get better and smarter than ever. And I like to know that AI can be used for things other than making really silly cat memes, so I appreciate that. [00:38:49] Speaker A: Yeah, Demis Hassabis is such a smart guy. I've watched a lot of his lectures and conversations he's had with people. I'd love to meet him and actually talk to him in person. But the funny thing is, for a Nobel, this is a relatively recent piece of technology and use case.
Very rarely is a Nobel awarded so soon after a discovery or an invention. So it's kind of cool. And I'll mention the physics Nobel also went to John Hopfield for his work, in the eighties actually, on some of the first AI training algorithms, which is also cool, much to the disdain of the physics community, because they feel like. [00:39:31] Speaker B: They've been sort of usurped. [00:39:34] Speaker A: Yeah. [00:39:34] Speaker B: What does that algorithm have to do with physics? Yeah, I can see that. I can see their complaints. [00:39:38] Speaker A: I mean, it has a lot to do with things. Hopfield's work is all about entropy. [00:39:42] Speaker B: It's really energy moving through a chip. Yeah, entropy is definitely in play there, but I can see the slight annoyance. [00:39:53] Speaker A: Yeah, there's definitely some of that. But it's good to see he's been recognized, because we really wouldn't be here today if nobody had done the work that he did. [00:40:03] Speaker B: Yeah, that's a good one. Check those out if you're interested. All right. Scams have, of course, had a huge impact on people's lives, with people losing their life savings in some instances due to pesky scammers out there trying to get their money. Keeping people safe from scammers is core to the work of many teams at Google, and they are excited to share information about a new partnership with the Global Anti-Scam Alliance, or GASA, and the DNS Research Federation to launch the Global Signal Exchange. This joins the effort from Cross-Account Protection, which is actively protecting 3.2 billion users on the Google platform, with the data from the Global Anti-Scam Alliance and the DNS Research Federation in this single Global Signal Exchange. The GSE is a new project with the ambition to be a global clearinghouse of bad actor signals for online scams and fraud.
With Google becoming the first founding member in May, they also want to let you know that Cross-Account Protection, a tool which enables ongoing cooperation between platforms, basically reaches 3.2 billion users across sites and apps when they sign in with their Google account. So not only is your Google account protecting you on Google services, it's also protecting you on third-party sites that use Google authentication, via Cross-Account Protection. So pretty nice. [00:41:12] Speaker A: Just nice. [00:41:13] Speaker C: Great. You know, the amount of people I know that have been scammed by one thing or another. One of my friends' grandparents got scammed a few weeks ago, and he messaged me for help, and I'm like, there's not much you can do. If we can solve this in the world, hopefully the world becomes a better place. [00:41:34] Speaker A: Yeah. I regularly get invoices through PayPal for things I never ordered, with a "click this link to pay." [00:41:40] Speaker C: It's been really bad at picking those up recently. [00:41:44] Speaker A: Yeah. Yeah. I'm not so impressed. I have slight concerns about centralizing this data, though. It kind of reminds me of the Black Mirror episode where everyone gets a rating, and if your rating goes down for one reason, then all of a sudden you can't rent a taxi or something else. [00:42:01] Speaker C: I don't know what. [00:42:03] Speaker A: They do. Well, they do within themselves. Like, they could blacklist you within their own company, but I'm slightly concerned about the creep of this information. It wouldn't take much for an actual bad actor to report somebody who's done nothing wrong into a system, especially if there's not really good vetting of the data that's being exchanged between these different organizations, and really screw someone over. [00:42:30] Speaker B: Yeah, it's one of those questions of, is the solution worse than the disease?
I think there's still benefit to it, and I think there's value there. But that attack that happens where someone shares a Google Doc with you, which is an invoice with a link to go pay: they're getting more sophisticated in how they do that, which is why it's slipping past Google's attack detection systems. And when you get those things, you can mark them as fraud or phishing; there's a way to do that, just click on it and mark it that way. And again, that data will then hopefully go into this database, or into any threat protection or threat intelligence solution that's out there that you can subscribe to. Google has many of these; this isn't the only one they have. It's the same thing we used to have with spam filtering. Spam filtering used to be reputation-list based too, and there were a bunch of companies that made blacklists. You could subscribe to those different blacklists, and you can do something similar for IP reputation and stuff. So these things exist in multiple different avenues, and yes, they are all a potential additional attack vector if used in bad ways, but the greater good outweighs the risk, maybe. [00:43:40] Speaker A: Yeah, I agree. I think I'd just like to see it remain a little bit more distributed, and I don't want to see a time when there is one central authority that keeps track of bad actors. It's a bit too Minority Report. [00:43:54] Speaker B: I agree. Well, Google is announcing, in preview, BigQuery tables for Apache Iceberg: a fully managed Apache Iceberg-compatible storage engine from BigQuery with features such as autonomous storage optimizations, clustering, and high-throughput streaming ingestion. BigQuery tables for Apache Iceberg use the Iceberg format to store data in customer-owned cloud storage buckets, while providing a similar customer experience and feature set as BigQuery native tables.
This is nice if you have potentially been using HDFS, the Hadoop file system, for a long time, or you have Parquet for your columnar data, but you need something more for your structured data. This is a nice open source option that now makes that data more portable. If you don't necessarily want to use BigQuery tables, you want to use Snowflake, or you want to use Databricks, they all have access to Apache Iceberg data as well. Now you don't have to translate the data into multiple formats; you can just have standard Iceberg tables, which is good, and leverage them in all your use cases. [00:44:52] Speaker A: Cool. I've never heard of it. I've so far avoided a lot of the big data work we've been doing lately. [00:44:59] Speaker B: I know, that's why we're all like, AI, big data, because we've never really been in that space that much. [00:45:06] Speaker C: But the number of Apache projects always fascinates me. I learn about all these different little projects that they have that are all under the Apache umbrella, and I'm like, oh, these are kind of cool. Didn't know all these existed and how broad it's gotten over the years. [00:45:21] Speaker B: So one of my secret tricks to figuring out AWS predictions is to go look at all the Apache projects that have gotten popular in the last six months. So I'm giving away my trade secrets here. But yeah, there's a lot of Apache projects, there's a lot of open cloud foundation projects, there's a bunch of things. Those are all definitely ripe for opportunities. [00:45:49] Speaker C: Yeah, I didn't realize ZooKeeper was Apache. [00:45:52] Speaker B: Yep. [00:45:52] Speaker C: As I'm looking at the website, I just know ZooKeeper, so I didn't think about it. Apparently there are more, too. [00:45:59] Speaker B: Yeah, there's all kinds of weird ones too, like Apache Aries, for example, which I was just looking at: pluggable Java components. Like, okay, that's pretty niche.
[00:46:08] Speaker A: It's not like Apache made them. I mean, they're a software foundation that kind of adopts these projects eventually. [00:46:15] Speaker B: Cassandra is one of them too. [00:46:16] Speaker C: Yeah, James is an enterprise mail server. [00:46:20] Speaker B: There's Jackrabbit, all kinds of things. And then there's different levels of maturity, so some of them are in incubation, some of them are mature, some of them are not. But yeah, if you filter by number of committers, you can very quickly see which ones are most active, and you can see categories of them. So lots of big data tools, lots of content, lots of database. It's good stuff there. Anyways, a little trick of the trade for your predictions, which are coming up very soon. Only a month and a half away from re:Invent. [00:46:50] Speaker A: Yeah, and let's just nobody mention NiFi. [00:46:54] Speaker B: Yeah, it's fun. All right. As you drive your FinOps adoption in your organization, which I'm hoping all of you are doing at this point, identifying which teams, projects, and services are driving expenses is an essential first step in your FinOps journey. To help ease this, Google is introducing the Google Cloud cost attribution solution. This is a comprehensive set of tools and best practices designed to improve your cost metadata and labeling governance processes, enabling data-driven decisions so you can ultimately optimize your cloud spending. The cost attribution solution leverages a fundamental Google Cloud feature that often goes underutilized: labels. These simple yet powerful key-value pairs act as metadata tags that you can attach to your Google Cloud resources. By applying labels, you can get granular cost breakdowns, data-driven decisions, and customizable reporting. Google of course understands that your environment is unique and that you may have different levels of maturity, which is why they're giving you proactive and reactive governance options for labels.
For those of you who are mature and want to be proactive and enforce labels, start on the right foot by enforcing consistent and accurate labeling from when you provision resources with Terraform policy validation, which integrates into your IaC workflow, helping ensure that every new resource is tagged correctly per the organization's labeling policies. This prevents cost-tracking gaps and improves the accuracy of your data. And this is basically a form of OPA; if you're familiar with the Open Policy Agent capability, these are things you can write, put into Google directly, and then basically use as part of your IaC workflow to validate that tags are set as you expect. Next is the reactive governance. This is basically reporting, alerting, and reconciliation features for existing resources, offering you a dual approach. Reporting: the tool identifies unlabeled resources, providing a clear picture of where you may have gaps in cost visibility, down to individual products and resources. Alerting: receive near-real-time alerts when resources are created or modified without the proper labels, enabling you to quickly rectify any issues and maintain control over your cloud costs. I'd really just like it to message someone in Teams, like, hey, you just created something without a tag and I'm going to kill it in ten minutes if you don't fix it. And then reconciliation: go beyond just reporting by actively enforcing your labeling policies on existing projects. This empowers you to automate the application of correct labels to unlabeled or mislabeled resources for comprehensive cost visibility. So, nice-to-have tools. Amazon's had basically tag enforcement and things like that for a long time; Google's a little late to the party on this one.
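The reactive reporting piece boils down to an audit pass along these lines (a minimal sketch; the required label keys and the inventory shape are assumptions, not Google's actual tooling):

```python
# Assumed labeling policy: every resource must carry these keys.
REQUIRED_LABELS = {"cost-center", "team", "environment"}

def find_unlabeled(resources: list[dict]) -> list[str]:
    """Return names of resources missing any required label key."""
    return [
        r["name"]
        for r in resources
        if not REQUIRED_LABELS <= set(r.get("labels", {}))
    ]

# Toy inventory standing in for an asset export.
inventory = [
    {"name": "vm-web-1",
     "labels": {"cost-center": "cc-42", "team": "web", "environment": "prod"}},
    {"name": "bq-dataset-raw", "labels": {"team": "data"}},
    {"name": "gcs-tmp-bucket"},
]
print(find_unlabeled(inventory))   # ['bq-dataset-raw', 'gcs-tmp-bucket']
```

The same predicate could drive the alerting side: run it on a resource-change event and ping the owner when the check fails, rather than reporting in batch.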
[00:49:11] Speaker C: The integration into the IaC workflow is nice, because that's always the fun part. You could do it with SCPs on AWS, but that's post-launch, so you would have to go at it with, like, Sentinel or something else if you want to do it within your pipeline and everything, so it doesn't just fail. The amount of times I've seen stuff where it's like, hey, why did this resource fail? You have to dig through CloudWatch logs, or sorry, CloudTrail logs, to figure out what happened and why. It's always fun times. [00:49:41] Speaker B: Yeah, the one thing is your pipeline has to be using the gcloud beta Terraform tooling to do this. Basically it's a "gcloud beta terraform vet" command you run to do your policy validation, and so there are some pretty easy ways to bypass that for your Terraform code. So I would like the other option as well, basically post-creation, which they say they have on the reactive side with the alerting. But yeah, it's still better. And if you are doing a lot of Terraform work on Google, you're probably looking at this Terraform feature anyways, because it's pretty powerful. What they're providing is basically a Terraform Cloud implementation for Google that you don't have to pay for, which is a plus. [00:50:24] Speaker A: You should be able to use IAM policies so that when you create a resource you can say, only allow it if these particular tags or these particular labels exist in the metadata. That'd be kind of nice. But then, even if you do that, of course the next step down is, well, how do we validate that the thing you put in there for the value of the label is accurate, not just someone who's mashed their keyboard to put some random nonsense in? So now you need to think about, well, how do we validate that only valid cost centers, or valid something else, is the data in there? So I think a lot of it's going to have to be after-the-fact reviews of what's in there.
But it would be super nice if they made it harder to deploy stuff without labels, or even had kind of like organizational templates, where whenever I create something, don't give me the standard launch wizard; add these extra fields and make sure somebody puts some data in there. I think it would be cool to be able to customize the UI like that. I've thought that for a long time. [00:51:27] Speaker B: All right, and our last story is an Azure story. Code referencing is now generally available in GitHub Copilot and with Microsoft Azure AI. This is basically the general availability of code referencing for Copilot Chat and GitHub Copilot code completions, which allows developers to see information about code suggestions that match existing public code. Some of the key features: the option to block or allow suggestions containing matching code. For allowed suggestions, information is provided about the match itself, with a notification in the editor showing the matching code, the file where the code appears, and the licensing information, if detected, for the relevant repository. It's available in VS Code, with wider availability coming soon to other tools like JetBrains, et cetera. There's a partnership with Microsoft Azure to make the code referencing API available in Azure AI Content Safety. This is different from previous methods, which let you filter and block suggestions matching public code but lacked transparency about the origins of the suggested code. The new code referencing feature provides transparency, allowing developers to make more informed decisions about suggested code, and extends GitHub's indemnification commitment to include the use of code referencing for Copilot Business and Enterprise customers who comply with cited licenses. This new feature aims to address concerns with the use of public code in AI-generated suggestions while maintaining the efficiency and benefits of using GitHub Copilot.
So I appreciate additional protections for code generated by AI in your application. [00:52:42] Speaker A: Yeah, that's cool. And the attribution of sources is excellent, and I'm sure you'll find use cases well outside code generation. [00:52:51] Speaker C: I'm thinking even of all the US government regulations about tracking all your dependencies and building your SBOM; you're now going to start to see AI-generated code in your SBOMs and stuff like that. [00:53:05] Speaker B: Yeah, it'd be interesting to see in an SBOM how that's materialized. [00:53:10] Speaker A: Well, AI-generated content still isn't copyrightable, so I'd be surprised if anyone actually admits that something was written by AI. [00:53:19] Speaker B: Yeah, that's a fair point too. [00:53:20] Speaker A: Well, does it apply to code? I would assume it applies to anything that's been generated, since a person didn't generate it. That works the same if it's a company that's taking credit. [00:53:31] Speaker B: Well, it's sort of weird. Okay, so you went to Claude and you generated React code. Then you want to put it on GitHub, and now it's in GitHub, and now you're in a different project and you're referencing code from your project, but it was generated by Claude. Who owns the copyright on that? [00:53:46] Speaker A: Yeah, as far as I know, anything AI-generated is not copyrightable at this moment in time. [00:53:53] Speaker B: Well, also, then who owns liability? This is where all the legal teams lose their minds. [00:53:58] Speaker A: Yeah, this is why it's a nightmare. [00:54:00] Speaker C: Yeah. [00:54:03] Speaker B: All right, guys, that is it for another fantastic week here in the cloud. I will talk to you all next week. [00:54:10] Speaker A: Yep, see you later. [00:54:11] Speaker C: Cheers. [00:54:14] Speaker A: And that's all for this week in cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services.
While you're at it, head over to our website at thecloudpod.net, where you can subscribe to our newsletter, join our Slack community, send us your feedback, and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.
