[00:00:07] Speaker A: Welcome to the Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP, and Azure.
[00:00:14] Speaker B: We are your hosts, Justin, Jonathan, Ryan, and Matthew. Episode 263 recorded for the week of June 4, 2024. Ticketmaster gets a snow job. MFA matters, folks.
Good evening, Matt and Jonathan. How you doing?
[00:00:30] Speaker A: Good. Loving the heat wave.
[00:00:33] Speaker B: Yeah. You're going to change your recording location, because your un-air-conditioned garage may get very hot very quickly in this global warming situation that we're in.
[00:00:42] Speaker A: It's the noise in the house while he's in the garage. I guess he loses a couple of pounds this way.
[00:00:49] Speaker C: We're going to start seeing Jonathan show up in, like, a sweatsuit for the episode. Yeah, it might time box us because we have to call 911, but, you know.
[00:00:58] Speaker B: Yeah, could happen.
Well, we have a bunch of news, so I guess we should probably just jump right into it. First up, HashiDays happened in London, where you could attend all the Hashi-ness. They had several announcements that we'll talk about here today. First up, the AWS Cloud Control provider, or AWSCC, built around the AWS Cloud Control API and designed to bring new services to Terraform faster, is now generally available. It's only been in beta since 2021, so I appreciate that it's finally reached 1.0.
I didn't think it was ever going to come because I thought they forgot about it. Honestly.
It's interesting looking at the syntax examples that you can mix basically classic HCL Terraform resources and AWSCC by specifying the different resource types. All of the new AWSCC resources start with awscc, of course, to make it easy to tell the difference. I'm pretty sure when you want to switch from the AWSCC version of a resource to the classic AWS provider version, it's going to destroy everything you did and blow it away. But maybe they'll now think about a way to migrate things between resource types in the future, which is something I've wished for for a long time. So be careful adopting some of these if you don't understand exactly what you're doing.
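For context on what the provider wraps, here's a minimal sketch of calling the Cloud Control API directly with boto3, the same create-and-poll lifecycle the awscc resources use under the hood. The resource type and property values are just illustrative.

```python
import json
import boto3

# Cloud Control exposes one uniform CRUD surface across AWS resource types.
client = boto3.client("cloudcontrol", region_name="us-east-1")

# DesiredState is the resource's CloudFormation-style schema as a JSON string.
response = client.create_resource(
    TypeName="AWS::Logs::LogGroup",
    DesiredState=json.dumps({"LogGroupName": "demo-group", "RetentionInDays": 14}),
)

# Operations are asynchronous; poll the request token for completion.
token = response["ProgressEvent"]["RequestToken"]
status = client.get_resource_request_status(RequestToken=token)
print(status["ProgressEvent"]["OperationStatus"])  # PENDING / IN_PROGRESS / SUCCESS
```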
[00:02:08] Speaker A: Yeah, I've kind of forgotten about this. I remember it was kind of cool at the time because it was the solution to not having to give the deployment role permissions to actually use resources only to deploy them, which seemed like a good, least privileged thing, but then it kind of vanished.
[00:02:22] Speaker B: So, yeah, we hadn't heard anything about it forever. I mean, it was the middle of COVID, so I'm not entirely upset that people forgot. So.
[00:02:29] Speaker A: Yeah, that's fair. I mean, I'm not sure whether bringing new services to Terraform faster is really the main selling point, but I guess.
[00:02:37] Speaker B: I mean, that was one of the selling points. It just wasn't the most important one to anybody else.
[00:02:41] Speaker C: Yeah, I thought it was also for just AWS's APIs in general. This was their general API that everyone had to conform to; I thought that was the original purpose of Cloud Control. But yeah.
[00:02:53] Speaker B: So they basically said on launch they would have every service in this control plane, but that was also about fixing the IAM permission problems behind a bunch of them. So.
[00:03:02] Speaker C: Yes.
[00:03:03] Speaker B: Yeah, because you were doing a lot of weird things to make that happen.
[00:03:06] Speaker C: I mean, three years for Hashicorp to GA something isn't that bad.
[00:03:11] Speaker B: I mean, true, but still longer than I would have expected.
The other thing at HashiDays was the Security Lifecycle Management products: Vault and Boundary got some updates. Vault will be getting workload identity federation, coming soon to Vault Enterprise, which enables secretless configuration for Vault plugins that integrate with external systems supporting the workload identity federation protocol, including AWS, Azure, and Google Cloud. By enabling secretless configuration, organizations reduce the security concerns that come with using long-lived and highly privileged security credentials. With WIF, Vault no longer needs access to highly sensitive root credentials for cloud providers, giving operators a solution to the secret zero problem.
Secrets Sync, which we talked about on a previous show, is now available more broadly, and the Vault Secrets Operator, which provides native Kubernetes integration with Vault, now supports OpenShift OLM and secret templating, with instant updates coming in June. Instant updates are exactly what they sound like: a zero-day attack happens, you need to rotate your passwords immediately, and it lets you do that. The secret templating is for those apps that don't like the Vault format of passwords and need a different format. You templatize that format, and as Vault hands the password to the application, it provides it in the format you specified in the template, which will save you some transformation code in your application, perhaps.
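To make the templating point concrete: today, an app (or glue code) reads the raw Vault secret and reshapes it itself. A sketch with hvac, the community Python client; the mount path and field names are made up for illustration.

```python
import hvac

# Connect to a dev-mode Vault; real deployments would use TLS and a real auth method.
client = hvac.Client(url="http://127.0.0.1:8200", token="dev-only-token")

# Read a KV v2 secret; the response nests the payload under data.data.
secret = client.secrets.kv.v2.read_secret_version(path="myapp/db")
data = secret["data"]["data"]

# The transformation code that server-side templating would make unnecessary:
dsn = f"postgresql://{data['username']}:{data['password']}@db.internal:5432/app"
print(dsn)
```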
[00:04:33] Speaker A: Is this just a fancy way of saying that vault can now basically assume a role to get credentials instead of having to have hard coded credentials? Yes, okay, fair enough.
[00:04:48] Speaker B: I was waiting for it. I was like, he's gonna have some point that's gonna be amazing. Nope. Fair enough.
[00:04:54] Speaker C: That's why I paused and I was like, what's he gonna say?
[00:04:59] Speaker A: It's a lot of words to say something that's actually quite simple to explain a different way I guess.
[00:05:03] Speaker B: Well, yeah, but they're using workload identity federation, so you've got to add an extra hundred words. It uses OAuth 2 and stuff in the background, I think, so it's slightly better than assuming roles, but complicated. Well, next up: HCP Waypoint actions will be entering public beta soon. These actions enable platform teams to seamlessly expose day-two operations and workflows to developers. HCP Waypoint is designed to empower platform teams to define golden patterns and workflows for developers to enable management of applications at scale, and the addition of actions helps organizations define and execute golden workflows, such as building an application, performing a rollback, or executing operations in private environments. They're additionally enhancing Waypoint templates and add-ons as well, trying to make this a solid competitor to Backstage. Cool.
[00:05:48] Speaker C: Stupid question.
Day zero, we haven't started or gone live yet.
What's the difference between day one and day two?
[00:05:57] Speaker B: Day one is when you deployed it to production, and day two is when you need to actually change it without doing deployments. Yes.
[00:06:07] Speaker C: Day zero is set up. Day one is go live. Day two is everything after that. Got it correct?
[00:06:12] Speaker A: Yeah, that's cool. It's a nice addition to the suite, really.
[00:06:16] Speaker B: Yeah. And I was looking at it a little bit earlier because I was curious, because it was like, well, it integrates with GitHub Actions. And I was like, what is Waypoint exactly? Because I had kind of forgotten that it was Backstage-like, and so I was pinging Ryan on the side. I know he's not here tonight, but I was like, this sounds kind of cool and kind of solves one of the problems. He's like, well, basically it's this, which is Backstage, which means you just said you like service catalogs. I was like, damn it.
He's like, so my real question is, what happened to Justin?
And who are you?
[00:06:45] Speaker C: Ah, that's how I know you're an imposter.
[00:06:46] Speaker B: Yeah, exactly. So there you go. So that was it from HashiDays. I don't know how often they do these HashiDays; I didn't even know it was a thing until they had a bunch of blog posts about it. But we'll keep an eye out in the future, until IBM kills it whenever that acquisition closes.
[00:07:03] Speaker A: I did have a new show title which didn't make the cut just now: Monday, Tuesday, Hashi Days.
[00:07:10] Speaker B: That's very good.
That would probably have won. Yeah, I don't know if I could have done it in song, though. I don't think I have the rhythm for that.
Well then, the last one: Cloudflare is acquiring BastionZero to extend zero trust access to IT infrastructure.
Apparently they already had a ZTNA flow that lets you get access to things like applications, and this now gives them an additional layer to access infrastructure. Cloudflare's goal for years has been to replace your VPN, and BastionZero helps further that vision beyond apps and networks to provide the same level of simplicity for infrastructure resources. It provides native integrations to major protocols and targets like SSH, RDP, Kubernetes, database servers, and more, to ensure the target resources are configured to accept connections for that specific user instead of relying on network-level controls. I can't wait till these get hacked. Someday one of these ZTNAs is going to get popped, someone's going to get owned, and it's going to be horrible.
[00:08:04] Speaker A: I thought Cloudflare already had a similar solution to this, but maybe it wasn't quite the same.
[00:08:09] Speaker B: They have a ZTNA that does, like they said, app layer and the networking layer, but didn't do the servers.
[00:08:16] Speaker A: Basically, what's the difference?
[00:08:18] Speaker B: SSH key management and rotation. Okay. Unfortunately, I had to go into the city yesterday, to San Francisco, to have dinner because I was meeting up with some people who were at the Snowflake conference. So I had to be by Moscone, which is never fun, because Moscone is always terrible.
So I did see the Snowflake signage, all the logos and stuff, and I can say that it's filled with a lot of data scientists and data nerds who were very happy to be there, but it was definitely not nearly the size of re:Invent or any other conference I've been to. A lot of people were excited about things that got announced, and we'll recap some of those real quickly. But first: the worst thing that could happen three days before your conference is two of your customers announcing they were hacked and their data from Snowflake was exposed.
So basically, hackers are targeting customers of the data intelligence platform Snowflake due to a lack of multifactor authentication. Snowflake on Friday admitted that several of their customers, including Ticketmaster and Santander Bank, were impacted by this, and the Australian Cyber Security Centre published an alert on Saturday warning about the threat. They announced on Monday that their analysis determined this was people reusing passwords that were hacked in other breaches against the Snowflake platform, which you could protect yourself from if you just enabled MFA. So I'd appreciate it if vendors, even my own vendor that I work for, would just enable MFA by default. You must set it up, you must do it, and not doing it is just super sloppy at this point. Another thing they called out was that one of the credentials hacked was a former Snowflake employee's: they were able to access his demo instance that was still provisioned on Snowflake. So my other recommendation: please disable your former employees' demo accounts.
[00:10:05] Speaker C: Definitely haven't ever dealt with that one before.
[00:10:09] Speaker B: I've dealt with it before.
I think we all have. That's not an uncommon pattern, because no one wants to integrate demo accounts with single sign-on like they should, and those things burn people all the time.
[00:10:20] Speaker A: It's almost getting to the point where... I mean, I know the EU at least, or parts of Europe, have legislated against things like insecure default passwords on devices. So routers and things get unique passwords instead of, you know, admin123 as the default.
[00:10:40] Speaker B: Is that why that's happened? Darn it to you?
[00:10:43] Speaker A: Well, I mean, maybe. Maybe they should go a step further and mandate MFA, because people aren't doing it when they clearly should.
[00:10:51] Speaker B: I mean, every time I set up a Google Workspace account, the very first box I check is enforce MFA. And then basically, if you get a new Google Workspace account from any of the companies I help do that with, you have to set up MFA within seven days or your account gets disabled. And it's a good practice. I mean, you have so much data in your email that you need to protect; even on my personal emails I have MFA enabled too. It's just good practice. And now, with all the breaches of things, you definitely don't want to reuse a password on multiple sites. So if you're not using something like 1Password or LastPass, although they've been breached in their own ways, or another type of password manager, you should probably be doing that for your personal stuff. And then moving to passkeys is an even better potential security opportunity, as long as you don't lose your phone, which makes passkeys a bit of a problem. But for your sensitive systems, this is an area where you should be enabling MFA; you should be trying to do everything you can to protect yourself. There have been several breaches I've gotten notified of where, within a few weeks of the breach happening, I got notifications from companies saying, hey, you've had so many unauthenticated logins. In one case they were able to log in, but they couldn't get past my two-factor, and that was a password that I just didn't realize got compromised. So I immediately jumped on and changed it, and I was protected. That two-factor saved me in that particular case. So it can save you in certain conditions.
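For anyone curious what's behind those six-digit prompts, here's a minimal sketch of the TOTP flow most MFA apps implement, using the pyotp library:

```python
import pyotp

# The shared secret is provisioned once, usually via a QR code at enrollment.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()           # what the authenticator app displays (rotates every 30s)
print(totp.verify(code))    # True: the server-side check at login time

# A reused or breached password alone fails here: the attacker also
# needs the rotating code derived from the shared secret.
```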
[00:12:11] Speaker C: Yeah, I almost think every system should just have MFA enabled at this point. I don't think I really use anything that doesn't, well, that I know of. But in a work environment, looking at the vendors that we leverage, I think it's just a hard requirement that they have MFA or SSO at this point.
[00:12:32] Speaker A: Yeah, when the option is available, I tend to not even reuse the username across different products. I mean, obviously if you use single sign-on through Google or Apple or something else, then that's different. But given the choice: random username and...
[00:12:47] Speaker B: Random password, yeah, that's not a bad call. Most of the ones that I want my email address, which is not a great security practice either, but that's what they do. But yeah, if you have the option that's maybe also doing that too. That's a good call.
All right, well, let's get to their user conference, because I teased that part first, then talked about their hack. So Monday at 05:00 they had their keynote, where they basically said: this isn't our fault, it's just MFA.
But basically they have several things they want to talk about, of course, related to AI.
So first of all, they announced Cortex Search. All of these are in preview, by the way; nothing was generally available at launch. Perfect keynote conference announcements. Cortex Search will make it easier to talk to documents and other text-based data sets such as wikis and FAQs, as well as run SQL functions in your database. Cortex Analyst will allow app developers to create applications on top of analytical data stored in Snowflake, so business users can get the data insights they need by simply asking their questions in natural language, because everything can be done with natural language. Snowflake AI and ML Studio gives you no-code AI development; the studio is accessible within Snowsight, with interactive interfaces for teams to quickly combine multiple models with data, compare results, and accelerate deployment of applications to production. Snowflake Notebooks are available to empower data teams to work in SQL, Python, or both: run interactive analytics, train models, or evaluate LLMs in an integrated cell-based environment. This interactive development experience eliminates the process limits of local development, as well as the security and operational risk of moving data to separate tools. Document AI is available soon and provides a new framework to easily extract content like invoice amounts or contract terms from documents, using Arctic-TILT, a state-of-the-art built-in multimodal LLM from Snowflake. Cortex Guard is the one generally available feature: it allows users to filter harmful content associated with violence, hate, self-harm, and criminal activities, and safety controls can be effortlessly applied to any LLM in Cortex AI by using the guardrails setting that is part of the COMPLETE function. ML Lineage is in preview, helping teams trace end-to-end lineage of features, datasets, and models from data to insight for seamless reproducibility, alongside a Feature Store for integrated and centralized features and a Model Registry to govern all your ML models, whether trained in Snowflake or other ML systems. Not that there's any other option than Snowflake. So those are the big announcements on the LLM and AI front.
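A hedged sketch of what calling one of these looks like from Python via the Snowflake connector; the model name, connection details, and the guardrails option here are assumptions based on the announcement, not verified syntax.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="me", password="...", warehouse="compute_wh"
)
cur = conn.cursor()

# COMPLETE is the Cortex LLM entry point; Cortex Guard is toggled via options.
cur.execute(
    """
    SELECT SNOWFLAKE.CORTEX.COMPLETE(
        'mistral-large',
        [{'role': 'user', 'content': 'Summarize these invoice totals.'}],
        {'guardrails': TRUE}
    )
    """
)
print(cur.fetchone()[0])
```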
[00:15:09] Speaker A: AI front, I'm surprised that we've gone into document analysis when you're already hosted on cloud, so you already provide those as services. So it's a weird market to go after.
[00:15:23] Speaker B: I assume the idea is that they don't want you to have to take your data to other APIs. They want you to live completely in the snowflake environment.
It improves security by making you not have to go call other APIs outside of your data lake or data systems.
[00:15:38] Speaker A: Yeah, the data is already in S3 or a storage bucket or something, but.
[00:15:43] Speaker B: Now it's a storage bucket in s three managed by Snowflake.
So now you can blame them for the security of your system versus saying, oh, it was Amazon's fault.
[00:15:51] Speaker A: Well, sold.
[00:15:52] Speaker B: Sold. Well, if that was... oh, sorry, go ahead, Matt.
[00:15:55] Speaker C: I was gonna say, when did all these keynotes become preview or private beta items? Because I feel like a while ago everything was... I remember even an AWS one where they were like, we are actually releasing everything, unlike our competitors, and now they've fallen back into the same cycle where nothing is actually released at their keynotes.
[00:16:14] Speaker B: Well, it all started when OpenAI beat them all to market with a superior product that made them all look like idiots. And so they all realized they had to compete with it, and they're all rushing to develop this stuff as fast as humanly possible to copy OpenAI. That's why all this is vaporware, because it's not done yet.
[00:16:33] Speaker C: I was even thinking, for years now it's been like that.
[00:16:35] Speaker A: Yeah, I think it's reasonable where more.
[00:16:37] Speaker C: And more things are private preview and public preview and betas and whatnot.
[00:16:42] Speaker B: I mean on the Amazon side I'd say it's, it's definitely increased a bit, but the things that they made more preview are because they're more, you know, very complicated or very niche. And so I think they're really trying to find product market fit and some of the things on Amazon side. But I agree with you, I think the amount of vaporware on keynotes is only increasing in modern days.
[00:17:01] Speaker A: I think it makes sense, though. I don't enjoy it, but I think it makes sense. I mean, which business wants to invest millions or tens of millions of dollars in developing a product just to sit on it for three months or six months until the big keynote announcement? They need to start monetizing that straight away to get the return on their investment. So I'm okay with a forward-looking keynote with roadmap items and things like that, just to drive excitement around the future. I suppose it helps retain customers if you think that the next great thing is coming and it won't be too long. I mean, in the case of AWS, if it's literally three years, then maybe that's a little too long. It'd be nice if there was some kind of commitment to having it done in the next twelve months or something.
Whether it's in preview or on the roadmap, some kind of commitment to when it's going to come.
[00:17:51] Speaker B: Yeah. At least Amazon, when they go generally available on something, re-announces it to the world: here's what we announced it with, and here's what we added since. One thing that drives me crazy about Microsoft in particular: everything comes out in preview with a preview blog post where they announce what it is, then when it goes GA, they only tell you the things they added since the preview.
They're missing so much opportunity to relaunch these things. I think that's one of the reasons why it's hard to discover services on Azure, because you can't rationalize what's in private preview or not.
The same thing. Google doesn't do it quite as bad, but they also have the same problem.
[00:18:34] Speaker A: Yeah, they definitely do.
[00:18:35] Speaker B: I'm really just complaining for myself, because I'd like to not have to go review four articles to get a simple show note in about what this GA thing is from Microsoft.
I'm just really complaining for myself.
Well, if AI models and training them were not enough and you want more development tools, Snowflake also had you covered a little bit. First up, they mentioned Snowflake Notebooks earlier, which is a Jupyter-style notebook implementation on top of Snowflake that allows you to use Python, but they also give you a new CLI and a Python API, making it easier than ever to do upgrades, automate CI/CD, and work with objects directly via Python. Snowflake Tasks have been improved to provide better pipeline orchestration and job scheduling, and you can leverage serverless tasks for Python, serverless tasks flex, or event-driven triggered tasks, as well as new dynamic tables that can be used at every stage of the processing pipeline to generate fresh data and simplify the delivery lifecycle. Database Change Management makes it easy to declaratively manage changes across Snowflake objects at scale, directly from your git repo. And finally, they have Snowflake Trail, a rich set of curated observability capabilities that provide enhanced visibility into data quality, pipelines, and applications, empowering developers to monitor, troubleshoot, and optimize their workflows. Which they will not do; they will just call Ops.
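The dynamic tables piece is the most code-shaped of these, so here's a sketch: a table that keeps itself refreshed from upstream data, replacing hand-rolled scheduling. Table, warehouse, and column names are illustrative.

```python
import snowflake.connector

conn = snowflake.connector.connect(account="my_account", user="me", password="...")

# TARGET_LAG tells Snowflake how stale the table may get before it refreshes itself.
conn.cursor().execute(
    """
    CREATE OR REPLACE DYNAMIC TABLE daily_totals
      TARGET_LAG = '15 minutes'
      WAREHOUSE = compute_wh
    AS SELECT order_date, SUM(amount) AS total
       FROM raw_orders
       GROUP BY order_date
    """
)
```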
[00:19:48] Speaker A: It all sounds great, I just wish it was a fraction of the price. It's just way too expensive.
[00:19:53] Speaker B: Yep, Snowflake's expensive.
And then Snowflake takes a turn in the keynote where they go down a path I did not really expect, because they have a pretty strong partnership with AWS. They've been on there for a long time, and they have expanded to GCP and to Azure as well over the years. But they are giving you continuous ways to expand your abilities to build rich applications using your data. They released the new Snowpark Container Services, soon to be generally available on AWS and in preview on Azure, which empowers app providers to efficiently build and operate sophisticated generative AI apps in containers running on Snowflake. There's no need to move governed data outside of Snowflake in order for it to be used as part of AI/ML models and your apps. And for those who are looking for an alternative to Elasticsearch, Snowflake full-text search gives you a new token-based search function used for log analytics and other high-volume data search applications directly inside the Snowflake data lake. As well, they're finally announcing the general availability of the Snowflake Native App Framework for GCP, making it now available on AWS, Azure, and GCP: providers can build their app once and publish it to customers across all three major clouds and multiple regions with a single listing, removing the operational burden of keeping your app updated in various clouds. I look at this and I'm like, really? You're getting into running containers and you're getting into doing all this other stuff. Are you just going to build your own cloud at some point? Why even leverage AWS if you can build this at scale now?
[00:21:18] Speaker C: They not only want your data, which they're going to control, because that's what I hear too: between the pipelines, they're going to control your data from end to end, but now they're actually going to run your platform. So I feel like they're also getting into platform as a service, just sitting on top of all these cloud providers.
[00:21:34] Speaker A: I mean, why not? They're certainly not passing it on at no cost.
They'll get their percentage on top of the actual cloud cost. So it's a great tax on compute.
[00:21:44] Speaker B: The Native App Framework is actually kind of cool. I was doing a little bit of research on it after I read the article, because I hadn't heard of it. Basically, if you think about applications sold on the marketplace where you want to build infrastructure in a customer's own accounts, this allows you to build those applications and have them managed; there's basically a control plane for them. So you can deploy Snowflake components into other cloud accounts and projects that aren't owned by Snowflake but are managed by Snowflake remotely, and they've extended that capability to basically any application that wants to use their control plane for this. It's actually a really cool feature: if you're trying to get big on the marketplace and you're trying to grow customer spend with your solution, this is a pretty good option for that.
[00:22:30] Speaker A: Yeah. Even if you didn't want the data services, a tool like that would be useful for anybody who's trying to deploy apps into their customers' accounts.
[00:22:39] Speaker B: Yeah, exactly.
[00:22:40] Speaker A: Yeah.
[00:22:42] Speaker B: Well, AWS had an interesting X blog post, which is a new feature of X where you can write long-form content, and I hope they never do it again because it was terrible. But the first one they did was ten things you need to know about Matt Garman, the incoming CEO of AWS. And so there's ten things they thought we might want to know about him, and I thought I'd share them with you and see whether you guys care or not care about these things. So number one: Andy Jassy originally sold Garman on AWS when he was just an intern out of business school.
He already had some experience with startups, but one thing he wanted to know was: how does a bigger company actually invent inside that larger organization? How does it start new businesses and how does it drive innovation? And so Andy Jassy pitched him the AWS deck. Do you guys care about that?
[00:23:26] Speaker A: No.
[00:23:27] Speaker C: Nice. Fun fact.
[00:23:28] Speaker B: Fun fact, right? Yeah. Okay.
[00:23:30] Speaker C: Yeah.
[00:23:31] Speaker B: Garman was hired in 2006 as a full-time product manager at AWS, and at that time they had three people in sales.
He said at the time he did everything: I was the product manager and I wrote product detail pages, came up with pricing plans, ran product naming meetings, whatever was needed. We all saw there was something there, but we really didn't know how big AWS was going to be. We were really excited to go and build things.
[00:23:53] Speaker A: So it's his fault.
[00:23:54] Speaker C: Yeah, so it's his fault.
[00:23:57] Speaker B: Networking, some of those names that we blame Andy Jassy for, maybe we should be blaming Matt Garman for. That's what I heard here.
[00:24:03] Speaker C: That's why networking is still expensive. Nobody's looked at it since him.
[00:24:07] Speaker B: Exactly. The third thing: Amazon taught him the importance of knowing things in depth and in detail, and he uses EBS as an example of this. One of the things that really struck him was the depth of knowledge everyone expected him to have on EBS. I needed to know operational metrics, latency, where exactly we were seeing performance gains, what were the things customers wanted us to do better, and precisely how we planned to fix them.
And it's that level of detail that has focused him on customers, how much we listen to their feedback and then change our roadmap because of our customers.
Good. I mean, this one I'm happy about. Like, listen to us: we don't care about AI. Like, yeah, you can still talk about it, but can you please give us other things that are cool? Because AI is cool, but it's not the only thing that's cool.
[00:24:49] Speaker C: I feel like what the bullet point says versus what comes out in the actual paragraph are two different things. The bullet point is knowing things in depth and in detail.
And the text, I feel like, is more about what customers want. Maybe I misread it.
[00:25:05] Speaker B: He talks about, he goes, I needed to know operational metrics, latency, and where we were seeing performance gains, as well as what the customers wanted us to do and how we planned to fix the things customers want. So it's that level of detail and how focused we were on customers, how much we listened to their feedback and then changed our roadmap. And first of all, we had to deliver great products and value for customers. I mean, yeah, a little bit, but, yeah.
[00:25:24] Speaker C: I mean, I actually think as a CEO, you don't want to be that entrenched in the details because I think otherwise you'll get stuck in details on specific things.
[00:25:32] Speaker B: I don't know that Amazon's ever believed in that. Amazon is very detailed all the way from the top to the bottom.
He sees his job as the one to remove blockers. Yeah. Thank you. Every other manager has said the same thing, so that one's a throwaway. Number five: Garman apparently loves a good debate.
He says it's very hard for a single person to deliver on any sort of scale, so having a talented team is incredibly important. And if your team gets into a situation in which people are more worried about who owns what or who's going to get the credit, that's when you start getting sidetracked. I'm most excited and I'm at my best when we're focused on delivering for customers and iterating for the business. That doesn't mean there's no room for discussion, but I love being on a team that can debate the merits of something, make a decision, and then go after it. Yeah. I mean, have backbone, disagree and commit: those are leadership principles. Great. Well, you've only worked at Amazon since being a business school intern, so that makes sense. Now, doubling down on the one that Matt complained about earlier, which was that Amazon taught him the importance of knowing things in depth: with number six, diving deep is one of his skills, and Garman thanks his family for that.
I can't even explain that one. Next.
[00:26:37] Speaker A: That's kind of weird.
[00:26:39] Speaker C: Thank you, family.
[00:26:41] Speaker B: But, like, there's no mention of the family. I read through the paragraph and there's nothing about, like, his family, like, oh, you know, our family owned a store or whatever. He says at the very end: maybe it was because when I was growing up, my family was full of big talkers who liked to deliver a point that got heard, that I developed a skill for cutting through and getting to the ground truth. Like, what? So you argued at Thanksgiving dinner. Great. Thanks. So did everyone else, about politics.
[00:27:02] Speaker A: I would have argued at Thanksgiving, except I was busy dealing with a Lambda outage.
Thanks, Matt.
[00:27:08] Speaker B: Thanks. Number seven: security will always be AWS's and Garman's number one priority. Good. Good job.
[00:27:15] Speaker A: Should have been number one on the list.
[00:27:16] Speaker C: Yeah.
[00:27:17] Speaker B: Number one on the list.
[00:27:18] Speaker A: Bad placement.
[00:27:20] Speaker B: Number eight: Garman wants to make sure AWS customers can take advantage of generative AI. Yeah, we know. It's all you talk about. AI.
[00:27:25] Speaker A: AI.
[00:27:26] Speaker B: Even reading the exit interview that GeekWire did with Adam Selipsky, which we're not talking about today because it was even more boring than this article.
It was all about AI. I was just like, you're leaving the company, you failed on AI, you're done, Adam. Just exit the building.
Number nine, he enjoys entering new situations to understand what makes things tick.
[00:27:46] Speaker A: Is that not the same as diving deep on something?
[00:27:48] Speaker B: Yeah.
[00:27:51] Speaker A: Number ten, he likes reiterating things.
[00:27:55] Speaker B: Number ten, he actually likes to pay it forward. And he talks about his philanthropic work, donating through One Acre Fund and a couple other things, which is good. So, nice, he's got some philanthropic feelings. And then he digs deep. So that's what we know about Matt Garman, and we'll see how it goes. Maybe he'll dig deep into getting us AI solutions that I actually care about versus, you know, copying OpenAI and Gemini.
[00:28:21] Speaker A: Yeah. I wonder what leadership development advice he'll offer to kids, who have very different job prospects when they're older, given his AI-is-the-world view.
[00:28:32] Speaker B: Yeah. You think his kids will have the opportunity to go be an intern and get pitched AWS as a product? I don't think so.
[00:28:40] Speaker A: Biological kids? No.
[00:28:44] Speaker B: I'm doing a real-time kill on this story because I don't want to talk about it, which is that contact center Amazon Connect has an analytics data lake that's generally available. I don't care enough after the last story. I just can't muster it. Sorry.
AWS analytics services streamline user access to data, permission settings, and auditing. This is kind of cool: it basically allows you to use BI tools like Tableau to propagate the end-user identity down to Amazon Redshift. It simplifies the sign-on experience, allows data owners to define access based on the real end-user identity, and allows auditors to verify the data accessed by those users. Trusted identity propagation relies on standard mechanisms like OAuth 2 and JWT tokens, and it lets you map your IAM permissions to your users and actually let them get the data they're supposed to, without a lot of fun troubleshooting of access issues, which I've done before. Do not recommend.
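The identity piece rides on ordinary JWTs: the identity provider mints a signed token for the end user, and downstream services authorize the person in its claims rather than a shared service role. A self-contained sketch with the PyJWT library, using a simulated token:

```python
import jwt  # PyJWT

# Simulate the token an identity provider would mint for the end user.
token = jwt.encode(
    {"sub": "user-123", "email": "ana@example.com"}, "secret", algorithm="HS256"
)

# A downstream service verifies the signature and reads the propagated identity.
claims = jwt.decode(token, "secret", algorithms=["HS256"])
print(claims["sub"], claims["email"])  # authorize this person, not a shared role
```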
[00:29:30] Speaker A: That's pretty cool. I thought that's how SSO was supposed to work in general, though. If you SSO into Tableau and Okta was the provider of that, then when you were redirected to Redshift, for example, you would automatically be logged in. Isn't that the point of checking the box that says stop pestering me to log in? I don't know.
[00:29:50] Speaker B: I think what happened before, and it's been a while since I've done this, so if I'm wrong someone can correct us: I believe you logged into Tableau with your single sign-on account, and then when you went to hit that data layer that required authentication, you logged in again with your single sign-on account to Redshift. And so now it does that passing for you automatically.
[00:30:07] Speaker A: Okay, well, that's good, because there's a pet peeve of mine here: having to single sign on like 15 times a day.
[00:30:14] Speaker B: I mean, I don't. The one that bothers me is that I have to put my PIN in every day.
[00:30:19] Speaker A: That one drives you crazy? The VPN?
[00:30:21] Speaker B: No, the PIN when you're going into Outlook or you're going into Teams on your phone.
[00:30:24] Speaker C: Microsoft authenticator.
[00:30:26] Speaker B: Yeah, you have to re-authenticate Microsoft every 24 hours and that pops up.
[00:30:29] Speaker C: Yeah, yeah.
[00:30:30] Speaker B: Or the fact that Microsoft then changed the two-factor authentication: instead of entering a code, now you have to go look at the number and match it on the phone. I don't know if it does this on Android, but on iPhone you have to unlock with your face twice.
Once to unlock, then the same thing again to check the prompt. And I'm like, why? Like, yeah, okay.
[00:30:48] Speaker C: Yeah, one to unlock your phone and then one into the app. I spent way too much time one day trying to figure out why it keeps doing it.
[00:30:56] Speaker B: Yeah, that's an annoying scenario. I'd prefer not to do that one either.
[00:31:00] Speaker A: It's the same on Android. Get notification, unlock phone.
Open the notification. Type in the number, press okay, then it asks you to re-authenticate. I literally just did it. Yeah. Fingerprint.
[00:31:11] Speaker C: And my problem is I normally get distracted halfway through it. I come back five minutes later, and I have to start the whole process over again.
[00:31:17] Speaker B: And it gets mad at me all the time. It popped up on the thing, I did the first step, I put my phone down because something distracted me, a shiny thing. And I was looking at that, and then I'm like, oh yeah, I was supposed to be looking at that report, and, oh, do I have to start all over? Yes, I have done that one as well.
All right. AWS is releasing a new sustainability scanner built to fit easily into a developer workflow. It provides a sustainability score and a report with sustainability improvements that can be readily implemented in code; there are things like move from Intel to Graviton. This can be run on your local machine or as part of a CI/CD pipeline, where you can then ignore it, so I prefer it that way. The Sustainability Scanner, or SusScanner as it's called once you install it, that's S-U-S scanner, and the young kids these days are saying everything is sus, so this is a great name. It can be run locally against your CloudFormation template and provides a report with recommendations right in the console. It only supports CloudFormation today, but I do hope to see it eventually get to Terraform and CDK.
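To show the shape of the checks, here's a toy, entirely hypothetical version of the Intel-to-Graviton rule; it is not the real tool's code, and real CloudFormation templates with intrinsic tags would need a smarter loader than plain PyYAML.

```python
import yaml  # PyYAML

GRAVITON_HINT = {"m5.": "m6g.", "c5.": "c6g.", "r5.": "r6g."}

def scan(template_path: str) -> list[str]:
    """Flag x86 instance types in a CloudFormation template; suggest Graviton."""
    with open(template_path) as f:
        template = yaml.safe_load(f)
    findings = []
    for name, res in template.get("Resources", {}).items():
        itype = res.get("Properties", {}).get("InstanceType", "")
        for prefix, arm in GRAVITON_HINT.items():
            if itype.startswith(prefix):
                findings.append(f"{name}: consider {itype.replace(prefix, arm, 1)}")
    return findings

print(scan("template.yaml"))
```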
[00:32:14] Speaker C: I think this is nice, that people are hopefully starting to think about sustainability on day one, like security, and shifting all this left. And I say this as I vomit in my mouth a little bit. But also, at some point, when you throw every single tool in front of a developer, all they're going to do is just get mad at the tools. So you should make sure that if you are implementing this, it is at the right time in your actual software development lifecycle. Otherwise it's another thing to ignore.
[00:32:43] Speaker B: I bet Ryan would have comments on the other example they had in the document, which was ensure all buckets have lifecycle configuration turned on.
And as someone who did turn on lifecycle management and then racked up a very large bill on S3, I'm sure he'd love this.
[00:33:01] Speaker A: Yeah, pro tip. Turn it on before putting objects in the bucket.
[00:33:06] Speaker B: Exactly. That's not so bad. But when you do it after the fact, it's very expensive.
[00:33:10] Speaker A: I guess to be fair, it would cost the same.
[00:33:12] Speaker B: It would cost the same, but the CFO notices the million-dollar transition charge; spread across other months it would have been a rounding error.
[00:33:27] Speaker C: Those are really the only two examples? I was trying to look through for other examples.
[00:33:31] Speaker B: They're the two that are in the little video they put in there, and then the one I showed you. But yeah, I wish they would have provided some additional examples; that would have been helpful, but they didn't. So what else did you get? Oh, actually, there's one in the GitHub repo: REST API compression.
This is a medium-severity finding: consider configuring payload compression with a minimum compression size. Compressing the payload will, in general, reduce network traffic. I don't know how that makes it more sustainable. I mean, it definitely makes it less expensive, but when you say sustainability, it's more CPU cycles.
[00:34:02] Speaker C: So wouldn't that be.
[00:34:04] Speaker B: Oh yeah, see, that's a good point. This is all based on the Well-Architected pillar for sustainability, by the way. So if you don't believe in those, then you're not going to believe in this tool either.
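For reference, the knob this finding points at is API Gateway's minimumCompressionSize; a sketch of setting it with boto3, with a placeholder API ID:

```python
import boto3

apigw = boto3.client("apigateway")

# Compress responses larger than 1 KB; tiny payloads skip the CPU cost of compressing.
apigw.update_rest_api(
    restApiId="abc123",
    patchOperations=[
        {"op": "replace", "path": "/minimumCompressionSize", "value": "1024"}
    ],
)
```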
Moving on to GCP. Claude 3 Opus is now generally available on Vertex AI, and with Vertex AI you can enable subscription-based pricing with guaranteed performance and cost predictability, or the existing pay-as-you-go option remains. As always, Jonathan, I assume you have used Opus and can say nice things about it, so I will now turn it over to you.
[00:34:31] Speaker A: I have used Opus. I subscribe through Anthropic's website to Opus, $20 a month, and I wanted it for a very specific use case. I had some large documents, which were medical reports actually, and I also had some legal documents and some California education board guidelines and things. Anyway, I ingested all those things into Claude and asked it to write me some very interesting emails and kind of legal arguments, and it was fantastic. I obviously read through what it said and verified that everything it said was good. And I was incredibly impressed by the size of the context window and the amount of context it can keep in mind, questionable phrase to use, when answering questions. It was super impressive.
[00:35:29] Speaker B: That's very cool. So are you saying I should consider canceling my Gemini subscription and my ChatGPT subscription, which are starting to feel like my streaming service subscriptions? I've got too many of them at $20 a month.
[00:35:41] Speaker A: The thing is, they're all changing so fast they kind of overtake each other. Opus comes out and it's fantastic, and then GPT-4o comes out, and I haven't used that yet at all because I'm kind of more interested in the features that they haven't released yet. Llama 3? I'm still waiting for my GoFundMe to reach the max for the $15,000 graphics card I need.
I'm kind of excited to have subscriptions to multiple things right now just to see what's changing and keep up with things, but I am really impressed with Claude Opus. I'd be curious to know what the pricing is like through Vertex directly instead of going through Anthropic, although I don't mind supporting them for the work they do.
[00:36:23] Speaker B: I think you're paying on per token basis for vertex. Depending on how much you're using it, you could save money or you could spend more than $20 a month, quite possibly, yeah.
[00:36:32] Speaker A: I don't think I use it enough. I don't think it's an unlimited plan; I think you still get throttled if you abuse it. But I've done an enormous amount of work through it for the $20 and I've been happy.
[00:36:43] Speaker B: So ChatGPT this week, or maybe last week, released a Mac app, and that has made me use ChatGPT a lot more than I was using it before, because it's a quick little keyboard shortcut and it pops right up and you can type in your ChatGPT command. And it supports the remembrances feature, so you can start setting preferences, like remember that I'm a technology executive, or remember that I host a podcast. You can start giving it the remembrances. So I'm waiting for it to become sentient at some point in the future, like Westworld. That's all I can think about when I think about the remembrances. Just.
[00:37:18] Speaker A: Remember to say please and thank you.
[00:37:19] Speaker B: Yes. Well, so Google is announcing external source support for Pub/Sub, with the first one they're going to support being Amazon Kinesis Data Streams. One of the use cases Google cited is taking your business data with variable volume residing in a Kinesis data stream and using this capability to ingest the data into BigQuery, making it easier and faster than ever to analyze the data that impacts your business, without ETL or other transformation methods to get the data from Kinesis to BigQuery. Me personally, I can't wait to see them support Kafka and Event Hubs and everything else, and then create a really nice Rube Goldberg machine where I take data from Kinesis, put it into Pub/Sub, then send it over to Event Hubs and then back to Kinesis, and I can just rack those bills up as I send those events around this token ring of doom.
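Once the Kinesis records land in the import topic, the downstream side is ordinary Pub/Sub. A minimal pull-subscriber sketch; the project and subscription names are placeholders.

```python
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path("my-project", "kinesis-mirror-sub")

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    print("record from the source stream:", message.data)
    message.ack()

# Stream messages until the demo timeout; production code would run indefinitely.
future = subscriber.subscribe(sub_path, callback=callback)
try:
    future.result(timeout=30)
except TimeoutError:
    future.cancel()
```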
[00:38:03] Speaker A: This kind of screams, we've got a new product coming and we want to take data from other clouds. Yeah, probably a security thing, probably.
[00:38:11] Speaker B: But I think it's a nice idea. I'm excited about it because, in my mind, I prefer Kinesis over Kafka just because I don't want to manage all the fiddly bits of Kafka, but everyone loves Kafka. If I can basically have cross event streaming between Kafka and Kinesis, I can choose the best one that I want, which would be kind of nice.
Pub sub is a nice protocol as well, also much easier than Kafka.
And so these are nice opportunities to be able to make them more interchangeable. I think that's a cool story.
[00:38:41] Speaker A: I think architects like Kafka. I don't know anyone who actually has to use it on a daily basis that still agrees.
[00:38:47] Speaker C: Operations people do not like Kafka.
[00:38:50] Speaker A: No.
[00:38:50] Speaker B: Well, anyway, for operations people it's very simple: does it rely on etcd or ZooKeeper? If the answer to that question is yes, then operations does not like it.
[00:39:00] Speaker C: Gets the hell out.
[00:39:02] Speaker B: Which is also why Kubernetes was sort of not a fan favorite either, until they got rid of some of the ZooKeeper-style stuff.
[00:39:08] Speaker A: Yeah, it's like the poster child for things not to use in cloud, because I don't know an application developer who would build an application that uses Kafka to ship data places and would actually code in sensible management of sharding or replicas or any of those things; that's always left as an operational task. And, you know, I'd stay clear of it simply because of that, because one of my three commandments is we don't build or use products that add operational burden, and Kafka is terrible for that.
[00:39:48] Speaker C: I will say I did help set up Kafka once, but it was for an on-prem install for a massive company that had edge computing in multiple countries, and we were using it mainly for log aggregation, streaming back to their multiple colos.
And that was the edge case, I think, the only time I've ever seen it used in a way where I'm like, okay, this kind of makes sense.
[00:40:14] Speaker A: I think if you were going to design a product and you know what the data is going to look like and what the volume of data is going to look like, or you can build it in pods and scale it that way, that's one thing. But when you run Kafka as a platform service and then a random new product comes along and says, oh, we want to use Kafka too, except now we've got 2 million messages a minute, it becomes very difficult to scale. I think of all the open source products that people use, that's the one you should use a managed service for. So, Kinesis.
[00:40:51] Speaker B: Yeah, I just saw an article that was sort of fascinating to me, and it was talking about how the way most people are using Kafka right now is a pretty major anti-pattern.
It's basically recommending we need to rethink how we're using message eventing. I'll find that and post it in a future episode, but it was a fascinating take. Well, NetApp Volumes are getting some new features this week. They were first announced in August of 2023.
The new features include an SLA: you now get 99.9% availability for the zonal Flex storage service, or 99.99% if you do the regional option with the NetApp Flex volumes. NetApp Volumes have been certified as a datastore for Google Cloud VMware Engine, making it easier to get off of your on-prem VMware environment to the cloud and to scale storage and compute for all kinds of VM-based applications. They're also enabling auto-tiering for NetApp Volumes, now in preview, which allows you to save a bunch of money by getting that pesky storage off of those NetApp drives onto object storage. Appreciate all of the enhancements to NetApp Volumes Flex.
[00:41:59] Speaker C: It still always amazes me how many companies use NetApps in the cloud.
I understand why conceptually, but it amazes me how many do still.
[00:42:10] Speaker B: Well, if Google and Microsoft would build an SMB or CIFS service that was cloud native on their platforms, like Amazon did.
[00:42:21] Speaker C: We wouldn't need to use the NetApps. There's SMB on Azure.
[00:42:27] Speaker B: Do they really?
[00:42:28] Speaker C: Azure file share.
[00:42:30] Speaker B: Okay, so, sorry, this is a Google rant; I unfairly put Azure into that. If Google would come up with an SMB or CIFS solution that was good and existed, we wouldn't need to use NetApp so much. That's not how it works today on there.
[00:42:48] Speaker A: Yeah, I think it's just an anti-pattern in general, mounting everything as a file system locally rather than using object store. But I've been bitten many times, like.
[00:42:57] Speaker C: I mean, just that. What was the tool years ago, s3fs or something, where somebody tried to mount S3 as a file share and couldn't understand why their list commands were costing, like, millions of dollars a month? Maybe millions is excessive. But there was some article I read where someone was like, yeah, we used this tool and it doubled our bill, because they were trying to run a SQL database on S3 or something. And I was like, what is happening here, guys? This is not good.
[00:43:30] Speaker B: That's not going to end well.
All right, moving on to Azure. Speaking of the devils. They've created this week the Azure Virtual Network Manager virtual network verifier, which reminds me of those TikTok memes where people go to the airport, walk to the gate to make sure the gate exists, then go back to get their food, because of course the gate would not exist until you see it with your own eyes. And that's basically what this Azure service is. The Azure Virtual Network Manager virtual network verifier enables you to check whether your network policies allow or disallow traffic between your Azure network resources, because, I mean, I already set them that way, so I think I already know. It helps you answer simple diagnostic questions, triage why reachability isn't working as expected, and prove the compliance of your Azure setup to your organization's pesky auditors and compliance requirements.
[00:44:17] Speaker A: That's cool. It's a good tool, especially for large, complex networks. Even better if it extends beyond just the Azure stuff, if you've got on-prem and can import those configurations into it. I doubt it, but that's a great start.
[00:44:33] Speaker C: Yeah, I've used the corresponding feature, which now I don't remember what it's called, on AWS a few times, with cross-network VPC peering and security groups and NACLs on top of it, to kind of quickly pinpoint where the networking problem is.
[00:44:49] Speaker A: It's always the network. It's always DNS.
[00:44:53] Speaker C: AWS Reachability something... yeah, Reachability Analyzer, yeah.
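Since it came up, a sketch of the AWS version with boto3: define a source-to-destination path, kick off an analysis, and the finished result names the hop (security group, NACL, route) that blocks traffic. The ENI IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

path = ec2.create_network_insights_path(
    Source="eni-0123456789abcdef0",       # e.g. the client instance's ENI
    Destination="eni-0fedcba9876543210",  # e.g. the database's ENI
    Protocol="tcp",
    DestinationPort=443,
)

analysis = ec2.start_network_insights_analysis(
    NetworkInsightsPathId=path["NetworkInsightsPath"]["NetworkInsightsPathId"]
)
print(analysis["NetworkInsightsAnalysis"]["NetworkInsightsAnalysisId"])
```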
[00:45:01] Speaker B: VM hibernation, a feature that has existed on AWS forever, is now generally available in Azure. I think it's been in preview for a while as well. It supports both Windows and Linux, which is nice, because I think when it first came out it only supported Windows. It allows you to hibernate and save on your compute costs by shutting off those pesky running VMs.
[00:45:20] Speaker A: Cool. Even better for spinning up machines quickly once they're domain joined, because Windows.
[00:45:27] Speaker C: Boots up so fast.
[00:45:28] Speaker B: It just got GA, Jonathan. It has not necessarily been integrated into auto scaling scale sets yet, so I'll let you know when scale sets support VM hibernation. I don't know if that's the case yet.
Maybe it wasn't included in their general availability announcement, and I didn't go back to the preview one for this, so I can't help you. All right. Azure Bastion Premium is a new SKU for customers that handle highly sensitive virtual machine workloads. Its mission is to offer enhanced security features that ensure virtual machines are connected securely and to monitor virtual machines for anomalies. The first set of features includes ensuring private connectivity and graphical recordings of virtual machines connected through Azure Bastion. The advantage here is enhanced security: the previous SKU provides a public IP address at the point of entry for the target virtual machines, while this one does not. Azure Bastion Premium takes security to the next level by eliminating that pesky public IP; instead of relying on the public IP address, customers connect to a private endpoint on Azure Bastion, eliminating the need to secure that IP. Graphically recording virtual machine sessions aligns with your internal policies and compliance needs. Additionally, keeping a recording of virtual machine sessions allows customers to identify anomalies or unexpected behavior, and expect to see an AI service sometime in the future around this, I'm sure.
[00:46:39] Speaker A: Oh, it sounds like Recall will plug into this beautifully.
[00:46:42] Speaker B: Oh yes, I'm sure Recall will do quite well. Now, I went to look for pricing on this but was not able to find it, because it had not updated in my Azure Front Door service. But apparently it had updated in Matthew's Front Door service. So Matthew, would you please tell us the premium price of Azure Bastion Premium, since I was unable to reach it due to caching of the Internet?
[00:47:00] Speaker C: Well, that's because you didn't have enableCDN set to false.
[00:47:05] Speaker B: I did, but it still didn't work for me.
[00:47:08] Speaker C: So the normal one for Azure Bastion is twenty-nine cents per hour per instance, and I believe you have to have two of them, so really it's $0.58 an hour. This one is $0.45 an hour, times two, so $0.90. So it's not a massive increase; I think it's a couple hundred dollars a month, but I think it's actually a really nice increase to get the recording sessions. To drop the public IP address you still have to have ExpressRoute or whatever the corresponding way into the network is. So if you're a smaller business and you want the recording, really that's the feature you're going to be paying for.
[00:47:47] Speaker B: So, I mean, my quick math here is that at 720 hours a month you're looking at basically $324, versus the standard at $0.29 times 720, which is about $208. So about $115 more for that.
[00:48:04] Speaker C: Yeah, it times it by two though, because you have two nodes.
[00:48:06] Speaker B: Times it by two, because you need two nodes. But yeah, again, not bad. The reality is that people who need this session recording capability are typically buying things like BeyondTrust, which is much more expensive than this.
So if you can get a basic recording of your sessions and put some AI analytics around that to identify anomalous behavior, without having to pay the heavy price tag to BeyondTrust or other legacy vendors in that space, you can do that. Or you could buy StrongDM, which is a much more modern version of BeyondTrust and a much better product. But it's definitely an option and capability that exists now in the space.
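Spelling out the back-of-the-envelope math from above (720 hours as the round month both hosts used; rates are per instance, and the setup runs two):

```python
HOURS = 720
standard = 0.29 * HOURS * 2   # ~$417.60/month for two standard instances
premium = 0.45 * HOURS * 2    # ~$648.00/month for two premium instances
print(premium - standard)     # ~$230 extra for recording + no public IP
```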
[00:48:43] Speaker C: Yeah, I'm curious to see if they're going to add more to it over time, like in their premium tier.
[00:48:49] Speaker B: It's sort of annoying to me that just to get rid of a public IP address, I have to pay for graphical recording. So I would like an Azure Bastion Premium Lite that just gives me the IP thing.
[00:49:00] Speaker C: My biggest pet peeve of Azure is that anytime you want security, you have to go to the premium model. It's like the opposite of the AWS blog post where security is the highest priority; here it feels like security is an add-on.
[00:49:15] Speaker B: A core value not turned on by default. That is the Amazon preference.
You should encrypt everything. But we encrypt nothing by default. Well, thank you.
[00:49:23] Speaker C: Yeah. Versus here, it's like, pay us 10x for the same service just to have the security feature that you need. Looking at you, Front Door.
[00:49:30] Speaker B: I like the idea of having to explain why I can't use the free dev tier, because as a developer I'd probably get it for free. Of all the people I don't want logging into servers for free with a bastion, it's the dev team.
That part I was like, can I maybe flip this around a little bit?
[00:49:50] Speaker A: Yeah. Sometimes it kind of seems like the model is: let's think of the things that people really shouldn't be doing, and then find a product that we can sell them. Should you be RDPing into servers and making random changes in the cloud? No, absolutely not.
Are we going to charge you for a product that lets you watch people do things they shouldn't be doing? Absolutely.
[00:50:07] Speaker B: Absolutely. We will.
Thank you. Thank you. Take your money.
[00:50:11] Speaker A: That's a great business model. I like it.
[00:50:14] Speaker B: Speaking of taking people's money, Broadcom is here.
Microsoft and Broadcom, yeah, Microsoft and Broadcom are going to support license portability for your VMware Cloud Foundation on Azure.
It's an expansion of their partnership, with plans to support VMware Cloud Foundation subscriptions on Azure VMware Solution. Customers that own or purchase licenses can use them on Azure VMware Solution and in their data centers, giving them the flexibility to meet changing business needs. It's an additional purchase option for Azure VMware Solution, which previously was only sold by Microsoft and only available on Azure VMware Solution; you can now move those licenses back and forth all day long if you wish to do so. And I remember when they first announced this in 2019, in the early days of the Cloud Pod, and we said, huh, this is weird. And now, five years later, we're just sad: VMware is screwing over all their customers, which is not how we predicted it was going to go. We thought the hyperscale cloud providers were going to put VMware out of business, which maybe they kind of did, which is why Broadcom bought them.
Well, for those of you out there who have ridiculous I/O storage needs, I have a solution for you. You just have to sell your soul to Oracle.
Oracle is pleased to announce that they have shattered the million-IOPS barrier in the cloud. With OCI Block Storage, they've been able to achieve an aggregate of 1.3 million I/O operations per second and up to twelve gigabytes per second of throughput per OCI compute instance.
Because you can't do the math and run this in your own data center, apparently. If you need this, this is not a cloud workload, but the OCI Block Volume service has you covered: you can now attach up to 32 ultra-high-performance volumes to a single compute instance. And they say this is great for high-performance I/O workloads such as AI/ML training, 3D modeling and simulation, as well as demanding blockchain processing. I have no idea whose demanding blockchain processing is using this much I/O, but apparently if that's the need you have, it's covered. They put it right there in the article: 1.3 million is a 63% increase over the prior industry-leading 800,000 IOPS, which was also held by Oracle. So it makes me wonder; this probably isn't that important to everyone else, for reasons, because they're not running Oracle.
But Oracle definitely needs the IOPS and the throughput, for sure. And they have all that expensive Sun hardware they've got to buy and sell.
[00:52:24] Speaker C: It's just that they can now actually maybe run Oracle databases without them crashing and having problems.
[00:52:31] Speaker B: An Oracle database crashing is actually pretty rare. There are things about Oracle the company I hate, but Oracle the database? Like, SQL Server will scale up and then fall over; Oracle just keeps on going and degrades performance.
Yes, you still need more capacity, but you're not not working. You're just not working performantly in Oracle, which is preferable to the crashing option of SQL Server.
[00:52:57] Speaker A: I think the real thing they've done here is just allowed you to attach more high performance disks to the same instance.
[00:53:05] Speaker B: Correct. I did go look this up because I knew you were going to ask this question. The previous 800,000 IOPS was achieved with 24 of the ultra-high-performance disks. So they added more; that's how they got here.
[00:53:15] Speaker A: Yeah, it's about 40,000 IOPS per disk, and they can attach 32 disks. And I don't think you get 12 GB a second with 1.3 million IOPS; you'd have to have a much larger packet size than you'd use to hit 1.3 million IOPS.
[00:53:32] Speaker B: That's the key thing. You have to understand that when they give you 1.3 million IOPS, that doesn't mean you get twelve gigs per second at the same time. You get one or the other.
[00:53:39] Speaker A: Exactly. Yeah. You can have 1.3 million IOPS or 12 GB a second. Yeah.
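Working that disk math out loud: the aggregate spread over 32 attached volumes, and why the IOPS and throughput ceilings are either/or:

```python
volumes = 32
total_iops = 1_300_000
print(total_iops / volumes)   # ~40,625 IOPS per ultra-high-performance volume

# To also sustain 12 GB/s at 1.3M IOPS you'd need ~9.2 KB per operation;
# small-block workloads hit the IOPS ceiling first, large blocks hit bandwidth.
print(12e9 / total_iops)      # bytes per op implied by both limits at once
```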
[00:53:48] Speaker B: Yep. So that's. That's oracle for this week. And that is everything for the week in cloud, guys.
[00:53:53] Speaker A: Awesome. What a week.
[00:53:55] Speaker C: What a week.
[00:53:56] Speaker B: Snowflake got hacked. Nope, they didn't, it was just MFA. Although, yeah, that's the same defense that 23andMe used, and that just got worse and worse and worse for them. So I'm looking forward to future news later this week about how, oh, actually, no, it wasn't just MFA, it was some other terrible thing. Maybe they go with the Okta approach and blame their third-party vendor next, or they can go with Microsoft's 'we're just incompetent at security.' Whatever model they want to go with for the blame game of this, I look forward to seeing it.
[00:54:26] Speaker A: Yeah, it's not a good place for snowflake to be in.
[00:54:29] Speaker B: Definitely.
[00:54:30] Speaker A: It's quite a trust issue.
[00:54:33] Speaker B: Yeah. I mean, definitely. The company that's trying to take all your data: yes, we had a breach.
It's pretty terrible. Anyways, well, I'll talk to you guys next week, and we'll see what happens next week on The Cloud Pod.
[00:54:44] Speaker A: See you later.
[00:54:48] Speaker B: And that is the week in cloud. Check out our website, the home of the Cloud Pod, where you can join our newsletter and Slack team. Send feedback or ask
[email protected], or tweet us with the hashtag #thecloudpod.