262: I Only Aspire Not to Use and Support .NET

Episode 262 | June 06, 2024 | 00:52:59
tcp.fm

Show Notes

Welcome to episode 262 of the Cloud Pod podcast – where the forecast is always cloudy! Justin and Ryan are your hosts this week, and there’s a ton of news to get through! We look at updates to .NET and Kubernetes, the future of email, new instances that promise to cause economic woes, and – hold onto your butts – a new deep sea cable! Let’s get started!

Titles we almost went with this week:

A big thanks to this week’s sponsor:

Big thanks to Sonrai Security for sponsoring today’s podcast! Check out Sonrai Security’s new Cloud Permissions Firewall. Just for our listeners, enjoy a 14-day trial at https://sonrai.co/cloudpod

General News 

00:53 Vagrant Cloud is moving to HCP 

01:53 Justin – “Did I really think Vagrant would be a key pillar of the IBM future strategy for HashiCorp? Nope, I sure did not. I mean, I figured they’d probably just keep it open source and people would keep developing on it, but I didn’t really expect much. So, you know, to at least get this and an improved search experience is kind of nice, because the old Vagrant Cloud website was definitely a little stale. Getting improved search and a new UI is always nice.”

AI Is Going Great (Or How ML Makes All Its Money)

02:43 Snowflake Announces Agreement to Acquire TruEra AI Observability Platform to Bring LLM and ML Observability to the AI Data Cloud  

04:02 Ryan – “Yeah, this is a gap, right? I think we’re in that uncomfortable phase of new technology where it’s sort of rushing; there’s AI, but then there’s the management of AI, and, you know, how to operate it at scale. So there’ll be a couple different tools and solutions, and I feel like this is one. Hopefully. Yeah, ‘observability’ is a little funny, because it’s sort of like, I get it. But maybe another word.”

05:06 AI Gateway is generally available: a unified interface for managing and scaling your generative AI workloads 

06:46 Ryan – “…it’s funny, because I think they’re largely a very similar offering, with, yeah, a little bit of difference in terms of the validity of the responses. But it is going to be fun to watch all these areas sort of fill in, because this is really nice for those companies who are trying to productionize AI and realizing, like, this is ridiculously expensive if you’re routing everything back to your model. So having a cache is gonna be super key, and that’s cool.”
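For the curious, the integration model here is mostly URL rewriting: you keep your provider SDK or HTTP call and point it at the gateway instead of the provider. A minimal sketch in Python, assuming a hypothetical account tag and gateway slug from your Cloudflare dashboard (the URL pattern follows Cloudflare’s documented format, but verify against the current docs):

```python
# Sketch: proxy an OpenAI chat call through a Cloudflare AI Gateway.
# ACCOUNT_TAG and GATEWAY are placeholders from your Cloudflare dashboard.
import os
import requests

ACCOUNT_TAG = "your-account-tag"   # hypothetical
GATEWAY = "my-gateway"             # hypothetical

# Only the base URL changes; the body and auth header are the provider's own.
url = (
    "https://gateway.ai.cloudflare.com/v1/"
    f"{ACCOUNT_TAG}/{GATEWAY}/openai/chat/completions"
)

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "What is the forecast?"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the gateway sits on the request path, the caching, rate limiting, and analytics discussed above apply without touching application code.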

AWS

09:05 Optimized for low-latency workloads, Mistral Small now available in Amazon Bedrock 

09:44 Justin – “So I’ve been playing around with them more and more, because Jonathan got me LM Studio and I just like playing with them. So I download one; I was downloading the Microsoft one for their newer model the other day and I was playing with that one, and the reality is I very quickly realized I can’t see a difference between most of the models. I am not sophisticated enough to understand what the differences are between these things.”
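If you’d rather kick the tires on Mistral Small in Bedrock than locally in LM Studio, here’s a hedged sketch of what the invocation looks like with boto3; the model ID and the [INST] prompt wrapper follow Bedrock’s documented Mistral conventions, but treat both as assumptions and confirm them in the Bedrock console:

```python
# Sketch: invoke Mistral Small on Amazon Bedrock with boto3.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    # Mistral models on Bedrock use the [INST] instruction wrapper.
    "prompt": "<s>[INST] Summarize RAG in two sentences. [/INST]",
    "max_tokens": 256,
    "temperature": 0.5,
}

resp = bedrock.invoke_model(
    modelId="mistral.mistral-small-2402-v1:0",  # assumed current model ID
    body=json.dumps(body),
)

# Mistral responses come back as {"outputs": [{"text": ..., "stop_reason": ...}]}.
print(json.loads(resp["body"].read())["outputs"][0]["text"])
```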

13:11 PostgreSQL 17 Beta 1 is now available in Amazon RDS Database Preview Environment
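Two of the headline features, MERGE … RETURNING and partition splitting, are easy to try in the preview environment. A minimal sketch via psycopg with hypothetical table names; note the split/merge partition syntax is as of Beta 1 and could change before GA:

```python
# Sketch: PostgreSQL 17 Beta 1 features against a preview instance (psycopg 3).
# "inventory", "incoming", and "events" are hypothetical tables.
import psycopg

with psycopg.connect("host=preview-host dbname=demo") as conn:  # placeholder DSN
    with conn.cursor() as cur:
        # MERGE now supports RETURNING, including the new merge_action().
        cur.execute("""
            MERGE INTO inventory AS t
            USING incoming AS s ON t.sku = s.sku
            WHEN MATCHED THEN UPDATE SET qty = t.qty + s.qty
            WHEN NOT MATCHED THEN INSERT (sku, qty) VALUES (s.sku, s.qty)
            RETURNING merge_action(), t.sku, t.qty
        """)
        for action, sku, qty in cur.fetchall():
            print(action, sku, qty)

        # Beta 1 partition management: split one range partition into two.
        cur.execute("""
            ALTER TABLE events SPLIT PARTITION events_2024 INTO
              (PARTITION events_2024_h1 FOR VALUES FROM ('2024-01-01') TO ('2024-07-01'),
               PARTITION events_2024_h2 FOR VALUES FROM ('2024-07-01') TO ('2025-01-01'))
        """)
```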

14:42 Amazon EKS and Amazon EKS Distro now support Kubernetes version 1.30

15:16 Ryan – “Yeah, that’s not going to be fun for some Kubernetes operators, but probably not for a lot of the Kubernetes users… Yeah, all their automation is now not going to work.”
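For the operators in question, the control-plane side of the upgrade is a one-call affair; it’s the node groups and the automation around AL2023 that bite. A sketch with boto3 (cluster and node group names are placeholders):

```python
# Sketch: drive the 1.30 upgrade with boto3. Names are placeholders, and a
# real rollout also means checking API deprecations and add-on versions first.
import boto3

eks = boto3.client("eks")

# Upgrade the control plane first...
update = eks.update_cluster_version(name="my-cluster", version="1.30")
print("cluster update:", update["update"]["id"], update["update"]["status"])

# ...then roll each managed node group. New 1.30 node groups default to
# Amazon Linux 2023, which is where the automation breakage tends to hide.
eks.update_nodegroup_version(clusterName="my-cluster", nodegroupName="workers")
```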

16:07 Mail Manager – Amazon SES introduces new email routing and archiving features

19:58 Justin – “Yeah, I’m just thinking of the compliance benefit of being able to directly write these emails to S3, to then be able to have security scan them for compliance or DLP use cases. There are so many use cases this allows you to do. That’s really kind of cool.”
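To make that compliance idea concrete, here’s a hedged sketch of sweeping a Mail Manager archive bucket for a DLP-style pattern; the bucket name, prefix, and the assumption that archived messages land as readable objects are all placeholders:

```python
# Sketch: scan a (hypothetical) Mail Manager archive bucket for a toy DLP rule.
import re
import boto3

s3 = boto3.client("s3")
BUCKET, PREFIX = "mail-archive-bucket", "archive/"    # placeholders
SSN = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")           # toy SSN pattern

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        if SSN.search(body):
            print("possible SSN in", obj["Key"])   # hand off to security tooling
```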

20:23 Amazon Security Lake now supports logs from AWS WAF
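Security Lake normalizes logs into OCSF tables you query through Athena, so the denied-request analysis described in the episode looks roughly like the sketch below; the database and table names follow Security Lake’s naming pattern but are assumptions, as are the OCSF column names, so check your Glue catalog for the real ones:

```python
# Sketch: top sources of denied requests from Security Lake's WAF table.
import boto3

athena = boto3.client("athena")

# Database/table names follow Security Lake's pattern; verify in Glue.
QUERY = """
SELECT src_endpoint.ip AS source_ip, COUNT(*) AS denied
FROM amazon_security_lake_table_us_east_1_waf   -- assumed table name
WHERE action = 'Denied'                         -- assumed OCSF field/value
GROUP BY src_endpoint.ip
ORDER BY denied DESC
LIMIT 20
"""

resp = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "amazon_security_lake_glue_db_us_east_1"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder
)
print("execution id:", resp["QueryExecutionId"])
```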

22:23 Amazon EC2 high-memory U7i Instances for large in-memory databases

**If you’re a company that uses these instances – and you’re hiring – we have a couple of guys who would LOVE to chat with you. Hit us up!**

23:59 AWS Weekly Roundup – LlamaIndex support for Amazon Neptune, force AWS CloudFormation stack deletion, and more (May 27, 2024) 

25:28 Justin – “…this last one, there’s a lot of words that I don’t understand put together, but hopefully we can work through it, Ryan. The LlamaIndex support for Amazon Neptune is now available. You can now build a graph retrieval augmented generation, or graph RAG. I didn’t know this was a thing. I knew what RAG was, I knew what a graph database was, but apparently you put them together and it’s a graph RAG application, by combining knowledge graphs stored in Amazon Neptune and LlamaIndex, which is apparently a popular open source framework for building applications with large language models, such as those available to you in Bedrock, of course. Apparently that can make magic happen. So, if you’ve been waiting for this, you can now do it.”
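If you want to say graph RAG and mean it, the wiring looks roughly like this; package and class names reflect the llama-index Neptune graph store integration as we understand it, so treat them (and the endpoint) as assumptions to verify against current docs, and note an LLM must be configured for the extraction step:

```python
# Sketch: graph RAG with LlamaIndex backed by Amazon Neptune.
# Class/package names are assumptions from the llama-index Neptune
# integration; an LLM (e.g., via Bedrock) must be configured in Settings.
from llama_index.core import KnowledgeGraphIndex, SimpleDirectoryReader, StorageContext
from llama_index.graph_stores.neptune import NeptuneDatabaseGraphStore

graph_store = NeptuneDatabaseGraphStore(
    host="my-cluster.cluster-xxxx.us-east-1.neptune.amazonaws.com",  # placeholder
    port=8182,
)
storage = StorageContext.from_defaults(graph_store=graph_store)

# Extract a knowledge graph from local docs, persisting triples to Neptune.
docs = SimpleDirectoryReader("./notes").load_data()
index = KnowledgeGraphIndex.from_documents(docs, storage_context=storage)

# Queries now retrieve graph context before the LLM answers.
print(index.as_query_engine().query("How do these services relate?"))
```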

GCP

26:31 More FedRAMP High authorized services are now available in Assured Workloads

Google Cloud Achieves FedRAMP High Authorization on 100+ Additional Services 

29:47 Justin – “Yeah, I mean, I would much rather do it this way and then deal with the small extra things on the configuration or additional audit logging capabilities you need to do. And the reality is that a lot of these SaaS companies are selling to megabanks and other very heavily scrutinized organizations that care a lot about the security of their customers’ data, customers like Apple, et cetera. So these vendors are under a lot of scrutiny for lots of reasons.”

31:13 Sharing details on a recent incident impacting one of our customers

35:07 Justin – “Well, as all errors tend to be, they’re all human error. So I’m glad Google stood up a blog post really taking ownership of this and said, hey, this was on us, we’re taking responsibility, and it won’t happen to you. And here’s why it won’t happen to you, and here’s what we’re doing to prevent this from happening in the future, which makes me feel more confident. I think they needed to get something out maybe a little sooner. Like, hey, this is true, this happened, we’re helping the customer, we’ll get back to you.”
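None of this is Google’s actual tooling, but the failure mode generalizes: a blank parameter that silently resolves to a destructive default. A toy sketch of the anti-pattern and the obvious fix:

```python
# Toy illustration of the anti-pattern: an omitted value silently becomes
# a destructive default, and nobody notices until the timer fires.
from datetime import datetime, timedelta

def provision(term_days: int = 365) -> dict:
    # Anti-pattern: blank input quietly means "auto-delete in a year".
    return {"expires_at": datetime.now() + timedelta(days=term_days)}

def provision_safe(term_days: int | None) -> dict:
    # Safer: refuse to guess; the caller must state intent explicitly.
    if term_days is None:
        raise ValueError("term_days is required; pass 0 for no expiry")
    expires = None if term_days == 0 else datetime.now() + timedelta(days=term_days)
    return {"expires_at": expires}
```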

36:51 Improving connectivity and accelerating economic growth across Africa with new investments

37:46 Justin – “Yeah, pretty heavily invested in by China, actually, because of how untapped it is by the rest of the market. But, you know, I think having more competition there, and being able to get access to data and to network services, and anything to make it better. Going to Australia with multiple paths is also a win because, yeah, for a long time there were not a lot of options.”

38:50 Cloud SQL: Rapid prototyping of AI-powered apps with Vertex AI  
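The pitch is that the model call happens inside the query itself, via the google_ml_integration extension. A hedged sketch from Python; the embedding() function and model name follow Google’s docs for the extension, but verify both (and the required instance flags) for your Cloud SQL version:

```python
# Sketch: a Vertex AI embedding generated inside a Cloud SQL for PostgreSQL
# query (psycopg 3). Host, table, and column names are placeholders.
import psycopg

with psycopg.connect("host=10.0.0.5 dbname=app user=app") as conn:  # placeholder DSN
    with conn.cursor() as cur:
        # Requires the instance to have Vertex AI integration enabled.
        cur.execute("CREATE EXTENSION IF NOT EXISTS google_ml_integration")
        # embedding(model, text) calls Vertex AI without leaving SQL.
        cur.execute(
            "SELECT embedding('textembedding-gecko@003', description) "
            "FROM products LIMIT 1"
        )
        print(cur.fetchone()[0])
```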

Azure

42:03 General Availability of .NET Aspire: Simplifying .NET Cloud-Native Development

44:11 Ryan – “This is interesting, because when I first read the title, I thought it was more of, like, you know, features in the .NET framework, but this is more like CDK, or programmatic resources for .NET, which is kind of cool, actually. As much as I wanted to make fun of it before, this is a gap.”

46:17 Microsoft Copilot in Azure extends capabilities to Azure SQL Database (Public Preview)

47:14 Ryan – “…soon we’ll be saying, you know, instead of what we always say, ‘it has a SQL interface, that’s how you know it’s real,’ it’ll be, ‘does it have natural language processing of a SQL interface?’ Because, you know, I can’t form a query to save my life.”

49:43 AKS at Build: Enhancing security, reliability, and ease of use for developers and platform teams

50:45 Ryan – “Finally! I’ve been waiting for these management features of Kubernetes for years now, because it’s so difficult to operate Kubernetes at scale. And you’re seeing this now with GKE for Enterprise, I think it’s called now, what was Anthos, and now AKS Automatic, and I love the name.”

Closing

And that is the week in the cloud! Go check out our sponsor, Sonrai, and get your 14-day free trial. Also visit our website, the home of the Cloud Pod, where you can join our newsletter or Slack team, send feedback, or ask questions at theCloudPod.net, or tweet at us with hashtag #theCloudPod.


Episode Transcript

[00:00:07] Speaker A: Welcome to The Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP, and Azure. [00:00:14] Speaker B: We are your hosts, Justin, Jonathan, Ryan, and Matthew. Episode 262, recorded for the week of May 29, 2024: I Only Aspire Not to Use and Support .NET. Good evening, Ryan. How's it going? [00:00:29] Speaker C: Pretty good, pretty good. [00:00:31] Speaker B: It's just the two of us tonight. It's been a little while since just the two of us, but Matt was out traveling for business and Jonathan had some other things going on again this week. So I said, let's just do the two of us, otherwise we won't record till Friday, and by Friday we're dead to the world. It's just been a long week, and especially since this is a short week with Memorial Day yesterday; a week's worth of work crammed into four days versus the normal amount of time. So, yeah, always good just to get the recording over with in some ways. So I've got, well, not a lot to do for you today, but some meaty topics I think we can get into here. So first up, this one sort of feels like, if you care about it, you get it moved to the HashiCorp Cloud Platform as quick as possible before the IBM acquisition is done. So Vagrant Cloud is being migrated to the HashiCorp Cloud Platform under the new name HCP Vagrant Registry. All existing users of Vagrant Cloud are now able to migrate their Vagrant boxes to HCP. Vagrant isn't changing in any way, just that HCP will provide a fully managed platform to make use of Vagrant easier, which will include an improved box search experience, a refreshed Vagrant Cloud UI, and no fee for your private boxes, which is pretty nice until, of course, the acquisition closes and then IBM figures out how to charge us. Users who migrate will be able to register for free with the same email address as their existing Vagrant Cloud account. So it's just kind of a nice little simplification and unification of the cloud products from HashiCorp. [00:01:54] Speaker C: Yeah, makes sense. This was probably in the works before the acquisition, exactly for simplification reasons. But, yeah, it does sort of feel like once there's an acquisition announced, you're like, oh, what's the strategy? Why did they do it? [00:02:06] Speaker B: Well, I'm like, did I really think Vagrant would be a key pillar of the IBM future strategy for HashiCorp? Nope, I sure did not. I mean, I figured they'd probably just keep it open source and people would keep developing on it, but I didn't really expect much, so to at least get this and an improved search experience is kind of nice, because the old Vagrant Cloud website was definitely a little stale. So getting a bit of improved search and a new UI is always nice. And I've used this feature many times. I actually even have a Vagrant account that I need to now migrate to HCP. They didn't say, though, what happens if you already have an HCP account and you have a Vagrant account, because I have both, but hopefully they'll work that out. [00:02:44] Speaker C: Yeah. All right. Same email address, maybe? [00:02:49] Speaker B: Yeah, maybe. It'd be nice if it just automatically happened, but we'll see. Well, in our "AI Is Going Great" section, Snowflake announces an agreement to acquire TruEra, an AI observability platform, to bring LLM and ML observability to the AI Data Cloud.
This complementary investment will allow them to provide even deeper functionality that will help organizations drive AI quality and trustworthiness by evaluating, monitoring, and debugging models and apps across the full life cycle, both development and production. TruEra's technology helps evaluate the quality of inputs, outputs, and intermediate results of LLM apps, and this expedites experiment evaluation for a wide variety of use cases, including question answering, summarization, retrieval augmented generation-based applications, and agent-based apps. TruEra claims that they can identify LLM and AI risks such as hallucinations, bias, or toxicity, so issues can be addressed quickly and so that organizations can demonstrate compliance with AI regulations. So that's not really what I would consider to be observability. [00:03:48] Speaker C: Yeah. [00:03:49] Speaker B: And this particular one, I get why they went there, I get the naming, it made sense. But yeah, this is really nice if your company is trying to build on top of foundational models or more custom models, and you don't have the resources and team to build out a full trust organization just to solve whether your model has bias or toxicity, et cetera. So this is nice to have as a capability directly in the product that you can leverage. I appreciate this one. [00:04:18] Speaker C: Yeah. This is a gap, right. And I think we're in that uncomfortable phase of new technology where it's sort of rushing. Like there's AI, but there's the management of AI and how to sort of operate it at scale. And so there'll be a couple of different tools and solutions, and I feel like this is one, hopefully. Yeah, observability is a little funny because it's sort of like, I get it. Yeah. I mean, but maybe another word, like, you know, I don't know. [00:04:46] Speaker B: Yeah. I mean, I would call it integrity or something, like AI integrity monitoring or AI trust monitoring; something more along that line, because that's what this is. But observability to me is like, no, I want to know how many LLM transactions I made and what's the average context and how many are getting errors. I mean, I guess some of the other things it does provide I would maybe want to know in my observability stack as well. But still, it's really trust: do I trust my LLM is doing the right thing? That's the bigger question. Well, the AI Gateway from Cloudflare is now also generally available. Since the beta launch in September, they've already proxied over 500 million requests and are now prepared for you to use it in production. AI Gateway is an AI ops platform that offers a unified interface for managing and scaling your generative AI workloads. At its core, it acts as a proxy between your service and your inference providers. Regardless of where your model runs, with a single line of code you can unlock a set of powerful features focused on performance, security, reliability, and observability, and Cloudflare says it's the control plane for your AI ops, and just the beginning of a robust roadmap of exciting features planned for the future. Today in the AI Gateway, you're getting the following benefits. Number one, analytics: aggregated metrics across multiple providers, allowing you to see traffic patterns and usage, including the number of requests, tokens, and costs over time.
Real-time logs of requests and errors as you build. Caching capability, to enable custom caching rules and use Cloudflare's cache for repeat requests instead of hitting the original model provider API, helping you save on costs and latency. Rate limiting, which controls how your application scales by limiting the number of requests your app receives, to control costs or abuse. And the universal endpoint, which in case of errors improves resilience by defining request fallbacks to another model or inference provider. And this currently supports Cloudflare's own Workers AI and ten of the most popular AI services through the universal endpoint: Bedrock, Anthropic, Azure OpenAI, Cohere, Google Vertex AI, Groq, Hugging Face, OpenAI, Perplexity, and Replicate. [00:06:40] Speaker C: See, that's a much better description of basically the same thing, except for this one. [00:06:45] Speaker B: They didn't talk about trust and hallucinations and toxicity and those things. So, like, maybe TruEra should have gotten sold to Cloudflare instead. [00:06:56] Speaker C: Yeah, well, I mean, it's funny because I think they're largely a very similar offering with, yeah, a little bit of difference in terms of the validity of the responses. But I do, you know, like, it is going to be fun to watch all these areas sort of fill in, because this is really nice for the companies who are trying to productionize AI and realizing, like, this is ridiculously expensive if you're routing everything back to your model. And so, like, having a cache is going to be super key. [00:07:26] Speaker B: Yeah, well, and even being able to, like, right now this is where you're at, but if you think about using Workers, then you can basically say, well, this use case is not very complicated, I can pass this to a lesser model that'll answer the question satisfactorily, versus, this is a more complicated question and needs the power of GPT-4o or whatever; you get kind of different capabilities. So you can start using it even as a routing layer, to be able to really optimize, based on the tokens, based on whatever you're passing to the model, which one you want to go to. Or maybe this is just a text summarization request versus a generated image; those are typically different models. It can all be done in one model, but it's expensive if you do it that way. This is nice. This is much more observability, as I think about it, for LLMs. But yeah, both products have merit. I think this is just more what I would have thought was observability, plus the capability of load balancing, basically, your AI models, which is also cool for, hey, we're building a new model and we want to try out whether the inference is good on the new model versus the old model, and we want to send some amount of traffic to the new model to evaluate. There's a lot of benefits to this capability. So I'm not shocked they've already done 500 million requests. I think they're probably the only company I've heard about doing this right now. And so it makes total sense to me. [00:08:49] Speaker C: Yeah, you took the words out of my mouth with the ability to do bucket testing, canary testing, because that's one of the hard parts of AI, especially if you're using your own custom model. It's like, how do you roll it out? Is it a big bang thing? That sucks. So this is a nice tool to get that, which is awesome.
[00:09:12] Speaker B: Moving on to AWS, they have the new Mistral Small model available for you in Bedrock. This is a foundational model, a fast follow-up to the recent announcements of Mistral 7B and Mixtral 8x7B in March and the Mistral Large model in April. You now have access to four high-performing models from Mistral AI in Amazon Bedrock. And like we just talked about, if you wanted to load balance between those models, you could use the Cloudflare thing we just talked about. The key features of Mistral Small you need to know about: it has a retrieval augmented generation specialization, it helps with coding, in particular coding proficiency, and it is multilingual capable. [00:09:48] Speaker C: This is where we need Jonathan, who's been playing around with these things nonstop. [00:09:53] Speaker B: So I've been playing around with them more and more because he got me LM Studio and I just like playing with them. So I download one; like, I was downloading the Microsoft one for their newer model the other day, and I was playing with that one. And, you know, the reality is I very quickly realized I can't see a difference between most of the models. I'm not sophisticated enough to understand what the differences are between these things. But yeah, I was looking at the Phi-3 Mini model, and it's nice because the LM Studio that he recommended plugs right into Hugging Face, from their catalog. So you just download the models right from Hugging Face, then you can run them locally, and then if you want to run them on the web, you can actually go get a Hugging Face subscription too, which is actually a pretty cool service if you haven't played with it. Highly recommend the Hugging Face stuff if you're interested in the open source side of LLMs. It's kind of fun. Definitely getting excited to play with it more and more, for sure. [00:10:49] Speaker C: Yeah, I play with it a little bit, and then I realized that if I want to do anything real, it's going to take a lot of time that I don't have, man. [00:10:56] Speaker B: Yeah. Again, that's the reason I get to, like, okay, I can't break this. [00:11:00] Speaker C: Yeah. [00:11:01] Speaker B: I mean, like, they're fundamentally the same answer with maybe some verbiage changes, but yeah. The other cool thing that came out this week: my ChatGPT subscription now comes with a Mac desktop client. So I can do an Option-Spacebar and it pops up a ChatGPT box right on my computer. I don't have to do anything fancy. [00:11:19] Speaker C: That's pretty handy. [00:11:20] Speaker B: Dangerous. [00:11:20] Speaker C: Yeah, yeah. [00:11:21] Speaker B: And the cool thing is it has the memory feature built into it, so I can use all the memory capabilities; I'm now starting to customize my prompts so it knows me more. So yeah, it's kind of fun. [00:11:33] Speaker C: Yeah, it's kind of neat. And I think that getting it directly to the consumer is going to be kind of a differentiator, which we didn't really see with search, which is interesting. [00:11:44] Speaker B: Yeah, well, the problem was bridging desktop search with online search was sort of difficult. But I do think the nice part about this opportunity is that computers are now coming out with it. We talked about the Copilot Plus PC, which is the dumbest name ever from Microsoft, last week. Then this week I saw there's a new version of Chromebooks coming that have Gemini built into them.
I'm sure Apple's going to announce something at WWDC here in a few weeks about how they're going to be integrating a partnership with OpenAI, so I'm sure they'll have something as well. There's a lot of opportunities to get it more and more in the consumer's face, which is cool, but also scary for enterprises who are worried about data leaking. So I'm waiting for the model that I can point at all my Dropbox and my Google Drive and all my SaaS services, that can then search and do AI things across my world, because that'll be really cool. Yeah, that's what I'm waiting for. [00:12:49] Speaker C: Yeah, it's going to be really neat. I also think it's a neat way to spread out the compute needs, where you can do sort of local RAGs and maybe a little bit of mini model processing before sending it off to a larger model. It's fascinating, the capabilities of putting it directly on consumer hardware. [00:13:12] Speaker B: Well, if you are super excited about doing bleeding edge things with Postgres, which you should not do in production, PostgreSQL 17 Beta 1 is now available to you in the Amazon RDS Database Preview Environment. This allows you to evaluate all the prerelease features, including features that reduce memory usage, improve time to finish vacuuming, and show progress of your vacuuming of indexes. You no longer need to drop logical replication slots when performing a major version upgrade, and they continue to build on the SQL/JSON standard support with JSON_TABLE features that can convert JSON to standard Postgres tables. The MERGE command now supports the RETURNING clause, letting you further work with modified rows, and there are general improvements to query performance. It also adds more flexibility to partition management, with the ability to split and merge partitions. And overall, Amazon's on fire: this thing released on May 23 and Amazon had this blog post on May 24, with it in the RDS Database Preview Environment on top of it. [00:14:05] Speaker C: No kidding. [00:14:07] Speaker B: Yeah, I'm excited for 17. [00:14:08] Speaker C: Yeah. Being able to split and merge partitions, that's awesome. [00:14:12] Speaker B: Yeah, there's some really nice features in 17. Even reducing the memory usage of the vacuum process and improving the ability to see the progress of vacuuming your indexes, which is one of my pet peeves. Wow, that's really nice. So there's definitely some cool features coming in 17. Yeah, I've been in that exact instance on those outage calls. Why is performance terrible? Well, we're running a pretty large vacuum operation. And when's it going to be done? I don't know; last time we ran it, it took these many days. Yeah, that's what's fun. Well, an area where Amazon is not quite as fast, at least compared to Google and Azure, is Kubernetes. But they're pleased to say that EKS and EKS Distro now support Kubernetes version 1.30. Amazon points out that 1.30 includes stable support for pod scheduling readiness and minimum domain parameters for pod topology spread, which makes no sense to me. EKS 1.30 managed node groups will automatically default to Amazon Linux 2023 as the node operating system, so you too can be mad at systemd. [00:15:13] Speaker C: Yeah, that's, uh, that's not going to be fun for some Kubernetes operators, but probably not for a lot of the Kubernetes users. [00:15:22] Speaker B: Yeah, most Kubernetes users will never know, but the platform team who manages this is.
[00:15:25] Speaker C: Yeah, all their automation is now not going to work. Yeah, sweet. [00:15:30] Speaker B: Hey, I mean, all three cloud providers now have 1.30 support, so that's, yeah, doing pretty well. I don't know when Kubernetes 1.30 dropped. It's been a few months now, I think. [00:15:38] Speaker C: I think, yeah, I count it in number of episodes, because the only thing I know is this arbitrary race we made up. [00:15:46] Speaker B: Yeah, April 17 was when it was released, and they're already working on the next version. So yeah, it is a fun race that we track that no one else cares about but us. Yeah, and the platform team is probably like, I like the slow one. Yeah, exactly. [00:16:03] Speaker C: Yeah. Less updates, please. [00:16:06] Speaker B: Well, this is actually an interesting feature for Amazon. So Amazon SES, or Simple Email Service, is exactly what it sounds like: a simple email service allowing you to send and receive emails without having to provision email servers yourself, as long as you manage your bounce rate. However, managing multiple email workloads at scale can be a daunting task for an organization; from handling high volumes of emails to routing them efficiently and ensuring uniform compliance with regulations, the challenges can be overwhelming for those who don't understand SES and how finicky it is. Managing different types of outbound emails, whether one-to-one user email or transactional or marketing emails generated from applications, also becomes challenging due to increased concerns with security and compliance requirements, as well as all those pesky DMARC, SPF, and other control points. To make these pain points easier for your organization, they are pleased to announce SES Mail Manager, because it was so simple it needed a manager. SES Mail Manager is a comprehensive solution with a powerful set of email gateway features that strengthens your organizational email infrastructure, simplifies email workflow management, and streamlines compliance control while integrating seamlessly with your existing systems. Mail Manager consolidates all incoming and outgoing email through a single control point, and this allows you to apply unified tools, rules, and delivery behaviors across your entire email workflow. Key capabilities include connecting different business applications, automating inbound email processing, managing outgoing emails, enhancing compliance through archival, and efficiently controlling overall email traffic. The Mail Manager features include: ingress endpoints, which are customizable SMTP endpoints for receiving emails; these allow you to utilize filtering policies and rules that you can configure to determine which emails should be allowed into your organization and which ones should be rejected, and you can use an open ingress endpoint or an authenticated ingress endpoint, and I assume eventually you'll have plugins from this to security tools that will do email things. Traffic policies and policy statements with rule sets, so you can set up how all that traffic gets routed. An SMTP relay, which allows you to integrate your inbound email processing workflow with an external email infrastructure, such as an on-premise Exchange or a third-party email gateway. As well as email archiving, to store all those pesky emails in S3 so you can search for them later. It will also support add-ons for specialized security tools.
They can enhance the security posture and tailor inbound email workflows to your specific needs, now and into the future. So overall, a pretty nice enhancement. I was just dreaming about having this ingress controller for all these different Amazon accounts: I could route them all to one place, I could do all my compliance and controls there, and I could store them for historical archival purposes. Yeah, this thing is sexy, right? [00:18:30] Speaker C: But I'm still stunned that there's a new feature for SES. I don't remember that happening since I've been working with cloud. [00:18:43] Speaker B: So there was another product called Pinpoint which Amazon came out with, which was really for handling email marketing campaigns, but it also handled SMS and stuff like that. So a lot of those pesky, hey, get on the website and sign up for the email, then, oh, if you sign up for the SMS too, we'll give you an extra 5% off, those scams; those can all be done through Pinpoint. And so they do a bunch of campaign orchestration stuff, more for marketing professionals. But that's the most recent thing prior to this that I remember coming out for anything related to SES. [00:19:13] Speaker C: Yeah, the amount of hacking to make email work, and the automation triggers, that you'd no longer have to do because of Mail Manager; it makes me want to use it even though I don't really have a use case for it right now. [00:19:33] Speaker B: That's pretty rad. [00:19:34] Speaker C: Like, we've wanted so many of these things: the ability to route different things, the ability to filter rather than having a single SNS topic subscribed to your mail, which you then had to write all kinds of crazy logic around. Now you can just do it natively in the product. So that's pretty rad. [00:19:54] Speaker B: Yeah. I just think of the compliance benefit of being able to directly write these emails to S3, to then be able to have security scan them for compliance or DLP. There's so many use cases this allows you to do. That's really kind of cool. [00:20:09] Speaker C: Yeah, it really makes it, you know, just a really powerful solution for handling email for your company. [00:20:17] Speaker B: Yeah. Well, AWS is announcing an expansion of the log coverage for Amazon Security Lake. And you'd think, oh, it must be adding in something like this mail thing? No, no, it's just adding the ability to query AWS Web Application Firewall logs, something I would have thought it did originally, being that it's a security lake. Apparently that was not the case. And this is where the fact that I'm not using Amazon on a daily basis burns me every time. But you can now easily analyze your log data to determine if a suspicious IP address is interacting with your environments, monitor trends in denied requests to identify new exploitation campaigns, or conduct analytics to determine anomalous successful access by previously blocked hosts through your WAF, via the firewall logs in your security lake. Unique thing. [00:21:00] Speaker C: Yeah. There's something funky about the way that Amazon WAF logs; like, it is a. [00:21:05] Speaker B: Weird format, I will tell you. [00:21:07] Speaker C: And you can't, like, you can only do sample rates and you can't get consistent logs. I mean, it's been a while since I've used it, but for a long time it was like security teams would just poo-poo all over it because it didn't provide the logging functionality that the third-party-offered.
[00:21:27] Speaker B: WAFs did, yeah. I mean, a few times I've had to troubleshoot the WAF blocking my traffic. It is always a nightmare, especially because you basically get these custom rules that you can subscribe to from different providers who give you, hey, we're blocking the OWASP stuff. [00:21:43] Speaker C: Cool. [00:21:43] Speaker B: That's awesome. I want that. And then you apply it and it starts blocking something, and you're like, son of a, I don't know what's blocking it. And now you're trying to get at it. Yeah, well, you can figure it out; it's part of the managed rule. But then, like, you can't just turn off that rule for that one function, you have to turn off the whole thing. Yeah, there's a lot of things I don't care for in Amazon's WAF, to be honest. And Cloudflare is not sponsoring, but hey, if you're listening, we'd love. [00:22:05] Speaker C: For you to sponsor. [00:22:07] Speaker B: If you really want a powerful WAF solution, check out Cloudflare. [00:22:11] Speaker C: It is my current favorite as well. [00:22:13] Speaker B: Yes. All right. For those of you who like to set fire to your wallets, the new Amazon EC2 high-memory U7i instances for large in-memory databases are now generally available. These were previously announced at re:Invent; we probably made fun of them then for how expensive they are, but I forgot. But now it's GA, so I can make fun of it once again. So here we are. These instances provide up to 32 terabytes of DDR5 memory and 896 vCPUs, and they leverage the fourth-generation Intel Xeon Scalable processors, the Sapphire Rapids. These high-memory instances are designed to support large in-memory databases, including SAP HANA, Oracle, and Microsoft SQL Server. There are three sizes available to you: the U7i 12-terabyte, the 24-terabyte, and the 32-terabyte. And the 12-terabyte will only set you back $113,000 a month on on-demand pricing. And if you want to go to that maximum 32-terabyte box, that one's going to cost you $303,000 a month. That's a headcount a month. [00:23:13] Speaker C: That it is. So, I mean, if you're a company who's deploying these and you're hiring, let us know, because apparently you'll just give away money. [00:23:22] Speaker B: Yeah, I mean, if you're an SAP HANA shop whose whole business runs on HANA, $300,000 to save billions of dollars in processing costs is probably not a big deal. Probably not. You know, it's just not our scale. [00:23:40] Speaker C: It really isn't. And, you know, almost anything I'd build or interact with, I'd try to scale horizontally. And this is not; right, this is the one box that does everything. That's why you spend this much money. Nice. [00:23:55] Speaker B: Well, there's the AWS Weekly Roundup blog post they do every week, and typically we don't talk about it because we've already addressed most of the stories in other ways. But there are three this week that were not mentioned in other blog posts that I could have pilfered into stories, so we're talking about them briefly. First up is the Amazon OpenSearch Service zero-ETL integration with Amazon S3. This allows OpenSearch Service to offer a new, efficient way to query operational logs in Amazon S3 data lakes, eliminating the need to switch between tools to analyze the data.
You can get started by installing out-of-the-box dashboards for AWS log types such as Amazon VPC Flow Logs, WAF logs, and Elastic Load Balancing, which, having used Athena, this is much better. [00:24:38] Speaker C: That's what I was just thinking. Yeah, like, zero ETL versus the old process of looking through your CloudTrail logs? Hilarious. So this is going to be very nice for a lot of teams who definitely don't want to programmatically figure out how to create a new partition for every day. [00:25:03] Speaker A: Do you know what's more old school than blowing on a Nintendo cartridge to make it work? Manually creating individual policies to achieve least privilege in your cloud. Leave old habits in the past, and with a single click, lock down access to sensitive permissions and services without disrupting DevOps. With the new Cloud Permissions Firewall from Sonrai Security, you can easily restrict excessive permissions from human and machine identities, quarantine unused identities, and restrict specific regions and unused services with a click of a button. Start a 14-day free trial of Sonrai's Cloud Permissions Firewall at sonrai.co/cloudpod. That's S-O-N-R-A-I dot co slash cloudpod. [00:25:49] Speaker B: Well, if your data has always had a dream of going to see the pyramids of Giza, there's a new CloudFront edge location for that, in Cairo itself. The highly distributed and scalable content delivery network delivers static and dynamic content, APIs, and live and on-demand video, now delivered directly to your customers in Egypt with up to a 30% improvement in latency on average. So thanks for that. I appreciate that. And then this last one, there's a lot of words that I don't understand put together, but hopefully we can work through it, Ryan. The LlamaIndex support for Amazon Neptune is now available. You can now build a graph retrieval augmented generation, or graph RAG. I didn't know this was a thing. I knew what RAG is, I knew what a graph database was, but apparently you put them together and it's a graph RAG application, by combining knowledge graphs stored in Amazon Neptune and LlamaIndex, which is apparently a popular open source framework for building applications with large language models, such as those available to you in Bedrock, of course. Apparently that can make magic happen. So if you've been waiting for this, you can now do it. [00:26:50] Speaker C: Or if you just want to say graph RAG a lot, which I do. And so I'll make up an excuse to use this graph RAG. [00:26:57] Speaker B: I mean, it's going to show up at the day job at least once this week, like, hey, we should get a graph RAG for that, and they're going to look at me like I'm crazy. It'd be great. That's it for Amazon this week, so let's move on to GCP. We don't typically talk about this very often, but there have been some changes I thought we should cover on FedRAMP. So Google is announcing that they now have 100-plus additional services with FedRAMP High authorization. FedRAMP High is what's needed for the four-letter agencies and other agencies that are out there. They've marked a significant milestone in their commitment to federal agencies, with 100 new FedRAMP High authorized services, including services such as Vertex AI, Cloud Build, and Cloud Run. Google Cloud provides the most extensive data center footprint for FedRAMP High workloads of any cloud service provider, with nine US regions to choose from. They've also received Top Secret and Secret authorizations as well.
And so one of the reasons why I wanted to talk about this is because a change has occurred, for those of you who are familiar with FedRAMP. So there's the Office of Management and Budget's guidance; they're the ones who originally came up with the whole FedRAMP thing. And basically, when it first came out, it was interpreted into what became GovCloud on Azure and AWS, where you have to have this highly secure, separate environment, which basically means you have to go through a lot of regulations around certifying your app and vulnerability management and all these things that add a lot of cost and complexity, and the barrier to entry into FedRAMP came to be quite high because of all this extra work you have to do, depending on the level you want. Not all FedRAMP services have to be in GovCloud; even today on Amazon you can do up to Moderate on their commercial cloud. But it always sort of felt like if you wanted to be serious in this space, you had to go to GovCloud, because to get to High you had to do that. But the OMB has published new guidance, which basically is to embrace commercial cloud solutions versus using dedicated cloud environments like GovCloud. And the OMB guidance basically points out that the requirement for dedicated GovCloud regions has decreased the value to the federal government that FedRAMP was supposed to provide, adding high barriers to entry and eliminating choice. So this is a boon for most SaaS companies who have to do this, because now, potentially, with some modifications, and there's probably some complexity still because it's still draft, ideally you'd be able to deploy your SaaS application into a commercial region as long as you can meet the FedRAMP requirements. And that'll be highly supported and encouraged by the OMB and the FedRAMP guidelines going forward once this memorandum gets approved and accepted; I think it's still in public commentary at this moment, with expectations to get finalized very soon. But Google's point is you can race and get there sooner by using their already highly compliant environment, and this is all done through Assured Workloads. So this is how they solved the big problem. [00:29:43] Speaker C: Yeah, I mean, I've had problems like this, where you're trying to figure out how to offer something that's FedRAMP certified and you had to operate in GovCloud, and then you realize that you would just have to build a new version of your app, but it would be completely separate due to the changes and limitations in GovCloud. So it's never really made sense to me unless you're only doing one or the other. Google's model has always been appealing for that. I never want to do FedRAMP, but if I have to, I'd rather use Assured Workloads with the same tooling and services that are certified that I'm already using today. [00:30:27] Speaker B: Yeah, I mean, I would much rather do it this way and then deal with the small extra things on the configuration or additional audit logging capabilities you need. And the reality is that a lot of these SaaS companies are selling to megabanks and other very heavily scrutinized organizations that care a lot about the security of their customers' data, customers like Apple, et cetera. So these vendors are under a lot of scrutiny for lots of reasons. Plus, that's why you end up with SLAs and penalties and things like that.
Now, government secrets are a little unique because they can start world wars, but again, you can add those layers of protection and control on top of a system versus having to deploy it somewhere special. I'm very glad to see this, and I'm sure at some point I'll have to do another FedRAMP implementation; I've done them in the past and they were not fun. So this hopefully will make it more fun next time. So, did we talk about UniSuper at all on the show? I don't remember; it's been a few weeks with this topic. [00:31:28] Speaker C: We might have mentioned it in an after show or something, but I don't remember discussing it on the show directly. [00:31:35] Speaker B: Yeah, so again, if we did talk about it on the show before, I don't remember. [00:31:41] Speaker C: I think there weren't enough details, actually, so we skipped it. [00:31:44] Speaker B: I think we skipped it for that reason too, especially when it was all being confirmed by UniSuper and not being confirmed by Google; it was sort of weird. But there was basically an event that occurred that people on X and other social locations have been talking about for the last three or four weeks: Google deleting their customer UniSuper's data in Australia. And like I said, we think we talked about it, but this is the first formal communication that Google has written, with a formal after-action report being posted to their blog. And they said that was because their first priority was getting the customer back up and running and fully operational, and now that they've had a chance to do a full internal review, they're willing to share more information publicly. Now, I talked to them privately, which I wasn't going to talk about on the show, but basically everything they told me privately is now in this blog post. So win, nice. But basically, UniSuper, for those of you who don't know, is a very large retirement fund, or pension-type fund, in Australia that basically lost all their data; customers and pension holders weren't able to view their balances, take out withdrawals, et cetera, and they were unable to process transactions for a week due to their system going offline. So Google would like you to know that this was one customer and one cloud region. It only impacted one Google service, which happens to be a very important service because it's the Google Cloud VMware Engine, or GCVE, and it only affected one of UniSuper's multiple GCVE private clouds; it just happens to be the production one. It did happen in two zones. They want to be very clear that it did not impact any other Google service, any other customer using GCVE, or any other Google Cloud service, and the customer's other GCVE private clouds, Google account, organization, folders, or projects were not impacted, nor were the customer's data backups stored in GCS in the same region. They basically want you to feel very confident this cannot happen to you, and I'll explain why here. Basically, during the initial deployment of Google Cloud VMware Engine for the customer using an internal tool, Google operators inadvertently misconfigured the GCVE service by leaving a parameter blank. The default for that parameter then had the unintended and unknown consequence of setting the customer's GCVE private cloud to a fixed term, with automatic deletion at the end of that period, which happened to be a year out. This incident trigger and the downstream system behavior have been corrected to ensure this cannot happen again.
The customer and Google's team worked 24/7 over several days to recover the customer's GCVE private cloud, restore the network and security configurations, restore its applications, and recover data to restore full operations to UniSuper and their customers. This was assisted by the customer's robust and resilient architectural approach to managing the risk of outage or failure; data backups stored in GCS in the same region were not impacted by the deletion, and third-party backup software was instrumental in aiding the rapid restoration of the service. Google has deprecated the internal tool that triggered this sequence of events; this is now fully automated and controlled by the customer via the user interface, even when specific capacity management is required. And Google scrubbed the system database and manually reviewed every GCVE private cloud implementation so that no other GCVE deployment is at risk, and they have corrected the system behavior that set GCVE private clouds up for deletion in such deployment workflows. So even if you did set it to delete, it would be a soft delete now, in the future. [00:34:57] Speaker C: This is one of those things where they got to the market fast, where their dev service to launch VMware in Google Cloud turned into production, and then they replaced it and, uh... oh, yeah, yeah, that's rough. They. [00:35:16] Speaker B: Were a very early adopter of GCVE in Australia, is what it sounded like from other things I've heard. And yeah, when you roll out a service and you're doing things in a hurry, that's what happens. And, you know, I don't know what drove them to move to Google Cloud in a hurry, or why this was important, but they definitely had an early adopter problem, if you will. [00:35:35] Speaker C: The null parameter causing the delayed action a year later is rough, like, for operators. [00:35:43] Speaker B: Well, as all errors tend to be, they're all human error. So, you know, I'm glad Google stood up a blog post really taking ownership of this and said, you know, hey, this was on us, we're taking responsibility, and it won't happen to you, and here's why it won't happen to you, and here's what we're doing to prevent this from happening in the future, which makes me feel more confident. I think they needed to get something out maybe a little sooner. Like, hey, this is true, this happened, we're helping the customer, we'll get back to you with more details and a full RCA after we get the customer up and stable. I think that would have been an acceptable blog post. But they allowed UniSuper to basically post that to their website, with a quote from Thomas Kurian, which was interesting. But again, it was really one of these weird things of, well, why isn't Google saying something specifically? Which made it seem much worse than it probably was. [00:36:31] Speaker C: Yeah, no, it is true; you don't want to announce details before you know them, right? But, you know, announcing a date for a date sort of thing, for communication, makes a lot more sense in that context, because it did really just seem like, what, something shady is going on and I want to know what it is. Yeah. And nothing was shady. [00:36:53] Speaker B: Well, and people were saying, oh, this is the perfect reason why you shouldn't choose cloud.
And like, cloud is terrible and Google's awful, and there was so much noise about it that didn't really need to happen that I think Google should take some lessons out of this on how to do crisis management a little better. [00:37:10] Speaker C: Yeah, I mean, the funny thing is that I do feel more confident after reading this, just because that is such a crazy string of events that led to this. That's a lot of stars aligning. [00:37:25] Speaker B: Well, today Google is announcing new investment in digital infrastructure and security initiatives designed to increase digital activity, accelerate economic growth, and deepen resilience across Africa. Yes, that's right: it's a new deep sea cable, the new undersea cable named Umoja. Umoja? I don't know how to say that. Umoja, which means unity in Swahili, is in fact the first fiber optic route to connect Africa directly with Australia. It starts in Kenya; the route will pass through Uganda, Rwanda, the Democratic Republic of Congo, Zambia, Zimbabwe, and South Africa, including the Google Cloud region, before crossing the Indian Ocean to Australia. The path is built in collaboration with Liquid Intelligent Technologies to form a highly scalable route through Africa, including access points allowing other countries to take advantage of the network. So getting Africa more connectivity is a big win. [00:38:12] Speaker C: Mm hmm. No, it's a giant untapped market for a lot of companies. And so, like, it's pretty crazy. [00:38:21] Speaker B: Pretty heavily invested in by China, actually, because of how untapped it is by the rest of the market. But, you know, I think having more competition there, and being able to get access to data and to network services, and anything to make it better; going to Australia with multiple paths is also a win because, yeah, for a long time there were not a lot of options. [00:38:39] Speaker C: Yeah. Ships have anchors and cables are fragile. [00:38:43] Speaker B: Yeah, there was the... was it Sweden or Norway who claimed the Russians basically destroyed their undersea cable? You remember that from a couple years ago? Yeah. So there was a whole thing at the beginning of the Ukraine war where Russia was basically accused of damaging an undersea cable and cutting off connectivity between parts of Ukraine and them, and they denied it. I just saw on Twitter over the weekend, someone had, like, a map of one of their boats with a transponder just going back and forth across where the cut happened. Like, yeah, sure, it wasn't... we didn't do it on purpose. [00:39:20] Speaker C: Yeah, yeah. Nice. [00:39:23] Speaker B: All right, next up is Cloud SQL: rapid prototyping of AI-powered apps with Vertex AI. Developers seeking to leverage the power of ML on their PostgreSQL data often find themselves grappling with complex integrations and steep learning curves, like all things ML. Cloud SQL for PostgreSQL now bridges the gap, allowing you to tap into cutting-edge ML models and vector generation techniques by offering Vertex AI directly within your SQL query. That's handy. [00:39:48] Speaker C: This is one of those ones where I wouldn't have understood this post even a few months ago, but I understand it now after participating in a hackathon at the day job. And so it's sort of interesting, you know, being able to use vector search on things and then being able to integrate that with your model and training. And it is just really complicated still.
It's just, dang, it's one of those things where you have an idea, you have the technologies, it seems very straightforward and should be straightforward, and then once you actually start to go through the implementation details, it turns into this giant Rube Goldberg thing that you have to do. This is actually one of those enablements where I'm pretty sure they felt the customer pain and fixed it. I'm pretty happy to see this. I can't wait to try it out and see if it's better. [00:40:46] Speaker B: I'm interested to see what Microsoft SQL Server does for vector, because I assume they're going to have to; they have it already in Azure SQL, you can do vector-type things if you're on Azure cloud, but being able to do vectoring inside of SQL Server on-prem or in other clouds can be a big deal. I suspect that might be a big feature of the next version of SQL Server, which will require everyone to upgrade, of course. My favorite. But it's great that it's coming to these things. A lot of companies still run SQL Server; a lot of them run Oracle. Oracle just came out with their AI; we were talking about that a couple weeks ago. It would be good to see all the major clouds or SQL vendors, the SQL OLTP vendors, come out with their own Vertex-style connectivity options through these types of technologies with vector, because it's going to really help make things better. It's also probably going to make me. [00:41:35] Speaker C: Grumpy, because... oh, it's definitely going to make you grumpy. [00:41:38] Speaker B: Yeah, because it's probably going to just thrash the crap out of my SQL capacity, so I'll need more. [00:41:44] Speaker C: And then it really gives developers a lot of rope to hang themselves with too. So, you know, graph is one of those things that I'm learning, because I'm giving up on how to spell SQL, and so it is sort of like, oh, this is different. And I can see how this would go horribly wrong. [00:42:05] Speaker B: Yeah. It's like when Mongo first came out and you go to a session like, oh, I can see how this is really cool and also really bad. [00:42:11] Speaker C: Yeah. Wait, what did you put in there? [00:42:15] Speaker B: How many shards do I need to manage? That's a lot of replication. That's going to be bad for my network. [00:42:21] Speaker C: Yeah. [00:42:22] Speaker B: Remember that, my first meeting with that? But yeah, NoSQL. Does graph have a SQL interface? Because if it's not got a SQL interface, it's not a real database. That's the question. Let's move on to Azure. At Build, Microsoft has announced the latest and greatest .NET capability: .NET Aspire. .NET Aspire, which streamlines the development of .NET cloud-native services, is now generally available. .NET Aspire brings stellar tools, templates, and NuGet packages that help you develop distributed applications in .NET more easily. Whether you're building a new application, adding cloud-native capabilities to an existing one, or getting already-deployed .NET apps to production in the cloud, .NET Aspire can help you get there faster. Why .NET Aspire, you may ask? Well, the answer to that question: it's been an ongoing aspirational goal to make .NET one of the most productive platforms for building cloud-native applications. In pursuit of this goal, Microsoft worked alongside some of the most demanding services at Microsoft, with scaling needs unheard of for most app services, supporting hundreds of millions of monthly active users.
[00:44:42] Speaker C: This is interesting, because when I first read the title I thought it was more like features in the .NET framework, but this is more like CDK, or programmatic resources for .NET Core/.NET, which is kind of cool, actually. As much as I wanted to make fun of it before, this is a gap. I mean, I'm not really super jazzed about this model, but that's because I come from an infrastructure background, and everyone who comes from a developer background wants to use their language of choice. And it's sort of interesting. Like, I'm very conflicted about this blog post, because I've been burned by .NET and .NET developer patterns.

[00:45:25] Speaker B: On one side you're like, I don't want to get burned again. But on the other side you're like, hmm, this could be good.

[00:45:30] Speaker C: Yeah.

[00:45:32] Speaker B: Yeah. So health checks are the bane of my existence right now for a bunch of things, because .NET developers aren't as good at them, as I've learned. So I'm very pleased to see they have a whole page just dedicated to .NET health checks. They basically give you some sensible defaults and explain them for the non-developer. They don't quite do everything I would want a health check to do, but they do have component-level health checks, which is what I really want. And this will do: a database connection could be established, a database query could be executed successfully. It can all be configured inside the health check natively, which is exactly what I would like. I'm very happy about that.

[00:46:06] Speaker C: I've learned over and over and over again with .NET development patterns: if it's not part of the ecosystem, it's not going to happen. At least this way you know those are built in, they'll be defined automatically, and they can leverage a standard in existing apps.
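The component-level checks described here are a .NET feature, but the idea is language-agnostic; below is a minimal sketch in Python of the "can I connect, can I run a query" shape of such a check. The function name and the sqlite stand-in are purely illustrative, not anything from Aspire.

```python
# Sketch of a component-level health check (the idea the hosts are
# after), in Python for illustration only; .NET Aspire ships its own
# health-check APIs, this just shows the shape of such a check.
import sqlite3  # stand-in for a real database driver

def check_database(path: str = "app.db") -> dict:
    """Report whether a connection can be opened and a query run."""
    status = {"component": "database", "healthy": False}
    try:
        conn = sqlite3.connect(path, timeout=2)
        conn.execute("SELECT 1;")  # cheap liveness query
        conn.close()
        status["healthy"] = True
    except sqlite3.Error as exc:
        status["error"] = str(exc)  # surface why the component failed
    return status

if __name__ == "__main__":
    print(check_database())
```

A health endpoint would aggregate one such result per component (database, cache, downstream API) rather than returning a single opaque "OK", which is the component-level behavior being praised above.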
[00:46:22] Speaker B: Yep. So if you're on .NET and you're doing cloud, I would check out .NET Aspire. Or maybe Java, you choose, whatever you want.

[00:46:33] Speaker C: I don't know if that's where I'd send people.

[00:46:37] Speaker B: Sorry. Anything but .NET. Maybe not Java.

[00:46:42] Speaker C: Yeah, Java is just next up on the list, but it's still on it.

[00:46:47] Speaker B: Yeah, exactly. Well, AI has come for SQL Server. Microsoft's Copilot skills for Azure SQL Database are here now. The skills can be invoked in the Azure portal query editor, allowing you to use natural language to query SQL, or in the Azure Copilot integration in your IDE. And thank goodness, because being able to use natural language in SQL Server is kind of handy. Sometimes I just don't want to remember the syntax of all these things, and if I can just ask in natural language, I'm going to be super happy. I had a recent experience with something similar to this in Jira, because they added AI to Jira, and oh my God, it's amazing. I can just say what I want and it produces JQL that's close enough that I can then tweak it in 3 seconds, and it saves me at least five minutes of remembering all the JQL syntax in Jira. And this is the same thing for SQL Server. I am here for this. Bring this to the next version of SQL Server too, so we can also leverage it on-prem. Great, thank you.

[00:47:45] Speaker C: Yeah, soon we'll be saying, like we always say, it's not a real database unless it has a SQL interface; now it'll be, does it have natural language processing on top of the SQL interface? Because, you know, I can't form a query to save my life, and these types of capabilities allow that. I know how to do single lookups across multiple tables, but a query that combines the results of both of those things and does something with them? That's never going to happen. I'm not going to take the time to attack that.

[00:48:19] Speaker B: I know where my limitations are, and I don't normally need to do those things. My issues are normally like, hey, is the data in this row? I can do that, I can query that quickly, I can do it fast. But when you get to, okay, now we need to do a bunch of lookups with views, I'm like, hard pass, let me go find the syntax. Actually, the other day I needed to connect to the MySQL database for The Cloud Pod, to make some changes in one of the tables, and it's very rare that I get into the command line in MySQL to do that kind of stuff. I didn't want to set up my jump host to do it, so I figured I'd just use the command line. And I was like, oh yeah, what's the syntax for that again? Normally I would go find a MySQL connection string that I could basically copy-paste into my bash file and do it that way. This time I just went to ChatGPT and said, hey, I need to connect to MySQL, please write me the command. Here's the host name, here's the username, and I'll enter my password. And it literally spit out exactly what I needed.

[00:49:11] Speaker C: This is so awesome.

[00:49:12] Speaker B: It's so nice. It just saves me so much time sometimes, which is what it's supposed to do, right? Makes me more productive.

[00:49:17] Speaker C: This is exactly right. Yeah. And, you know, I think as technology has evolved, everything has become too complex to have a detailed, solid understanding of everything you need. So this is kind of a great way to manage all that.

[00:49:33] Speaker B: Well, even getting into some more complex stuff: I needed to drop a bunch of tables that started with a certain prefix, and it's like, oh yeah, here's how I think I would write that. So I asked Copilot, hey, write me this SQL, and it gave me a much more elegant way. I was like, well, that's interesting, I hadn't thought about that. It wrapped it in a transaction in a way I hadn't thought about. Again, for The Cloud Pod database it's not a big deal if we take it down, but if I was doing it for production, I'd appreciate the advantages. I stripped out all that safety, of course, and just ran it. We do it live in prod. And it worked just fine. But I appreciated the fact that it tried to make it safe for me to run.

[00:50:09] Speaker C: Yeah.
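For flavor, here is roughly what that "drop everything with a prefix" task looks like when scripted rather than hand-typed: a minimal sketch assuming MySQL and the mysql-connector-python driver, with hypothetical credentials and prefix. One caveat worth knowing, which slightly tempers the transaction trick above: MySQL DDL statements trigger an implicit commit, so a wrapping transaction won't actually roll back DROPs there; reviewing the generated list before running it is the real safety net.

```python
# Sketch of "drop every table with a prefix", assuming a MySQL
# database reachable via mysql-connector-python. Credentials, schema
# name, and prefix are hypothetical placeholders. Note: MySQL DDL
# causes an implicit commit, so a transaction cannot roll back the
# DROPs; review the printed list before letting the loop run.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="db.example.com",
    user="admin",
    password="REDACTED",
    database="cloudpod",
)
cur = conn.cursor()

# Find candidate tables from the catalog instead of hand-typing them.
cur.execute(
    "SELECT table_name FROM information_schema.tables "
    "WHERE table_schema = %s AND table_name LIKE %s",
    ("cloudpod", "old\\_backup\\_%"),   # hypothetical prefix
)
tables = [row[0] for row in cur.fetchall()]

print("Would drop:", tables)            # the real safety check
for name in tables:
    cur.execute(f"DROP TABLE `{name}`")
conn.commit()
```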
[00:50:12] Speaker B: All right, and our last story for the week: AKS. At Build they announced AKS Automatic, which provides the easiest managed Kubernetes experience for developers, DevOps, and platform engineers. AKS Automatic is ideal for modern and AI applications, automating AKS cluster setup and management and embedding best-practice configurations, ensuring that users of any skill level get security, performance, and dependability for their applications. With AKS Automatic, Azure manages the cluster configuration, including nodes, scaling, security updates, and other preconfigured settings. Automatic clusters are optimized to run most production workloads, provisioning compute resources based on Kubernetes manifests. With more teams running Kubernetes at scale, managing thousands of clusters efficiently becomes a priority, and Azure Kubernetes Fleet Manager is now helping platform teams schedule their workloads for greater efficiency. Several new skills are available for AKS in Copilot for Azure to assist platform operators and developers, including intelligent workload scheduling, auto-instrumentation for Azure Monitor Application Insights, and the Azure portal now supporting KEDA scaling.

[00:51:12] Speaker C: Finally! I've been waiting for these management features of Kubernetes for years now, because it's so difficult to operate at Kubernetes scale. You're seeing this now with GKE Enterprise, I think it's called now, which was Anthos, and now AKS Automatic. Which, I love the name.

[00:51:36] Speaker B: Yeah. Well, and then the fleet manager, and then on EKS you've got all the EKS automatic stuff as well. So there's lots of stuff coming out for this. But AKS Automatic may be a little too close to AK-47. Yeah, maybe that's just me on that one.

[00:51:49] Speaker C: Maybe. I mean, I like it more because of that. But, you know, AKS Automatic, like, yeah.

[00:51:58] Speaker B: Well, that's good. I'm glad to see that one coming to all of them. Supporting, of course, Kubernetes 1.30, which they'll all now support. Well, Ryan, we did it. Just the two of us still managed 51 minutes.

[00:52:10] Speaker C: I jinxed us by saying it was going to be a short show.

[00:52:13] Speaker B: You did. You did. You messed it up. But thanks for joining me today. We covered a bunch, and I appreciate you once again being here for the episode.

[00:52:22] Speaker C: Bye, everybody.

[00:52:23] Speaker B: Bye. See you next week. And that's the week in cloud. We'd like to thank this week's sponsor, Sonrai Security. Check out their 14-day free trial at sonrai.co/cloudpod. Check out our website, the home of The Cloud Pod, where you can find our newsletter and Slack team.
Send feedback or ask questions at thecloudpod.net, or tweet at us with the hashtag #thecloudpod.
