Episode Transcript
[00:00:07] Speaker A: Welcome to The Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP, and Azure.
[00:00:14] Speaker B: We are your hosts, Justin, Jonathan, Ryan and Matthew.
[00:00:18] Speaker C: Episode 264, recorded for the week of June 12, 2024: AWS Audit Manager, because even AI needs a babysitter. Good evening, Jonathan and Ryan. And technically Matt is off babysitting right now. He's trying to get his daughter to go to bed, and she's not cooperating. So he may pop in at any time here in the recording, or he may be fighting a baby for the next 5 hours. Babies, they're fun.
[00:00:42] Speaker D: Yeah.
[00:00:43] Speaker C: Have children, people. It's rewarding and satisfying.
[00:00:46] Speaker A: Allegedly you can't even call it babysitting if it's your own.
[00:00:50] Speaker C: I mean, I guess not. He's parenting.
[00:00:53] Speaker A: Parenting.
[00:00:53] Speaker C: Yep. Well, it's been a little bit of a busy week. Today was the re:Inforce keynote, so if they announce anything cool tomorrow or the next day, we'll cover it next week.
[00:01:05] Speaker A: I'm gonna be very surprised.
[00:01:06] Speaker C: I'd be very shocked. Yeah. But they did have some good stuff this week even before re:Inforce. So first up, this one apparently didn't make the main stage at re:Inforce, but Audit Manager is introducing a common control library that provides common controls with predefined and pre-mapped AWS data sources. This makes it easy for your governance, risk, and compliance teams to use the common control library to save time when mapping enterprise controls into Audit Manager for evidence collection, reducing their dependence on those pesky IT teams named Ryan. You can view the compliance requirements for multiple frameworks, such as PCI or HIPAA, associated with the same common control in one place, making it easier to understand your audit readiness across multiple frameworks simultaneously.
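Conceptually, a common control library is just a mapping from one shared control to the framework-specific requirements it satisfies. A toy sketch of that idea (the control names and requirement IDs below are made up for illustration, not Audit Manager's actual identifiers):

```python
# Toy model of a "common control" pre-mapped to multiple frameworks.
# All control names and requirement IDs here are hypothetical.
COMMON_CONTROLS = {
    "encrypt-data-at-rest": {
        "PCI-DSS": ["3.4"],
        "HIPAA": ["164.312(a)(2)(iv)"],
        "ISO-27001": ["A.8.24"],
    },
    "enforce-mfa": {
        "PCI-DSS": ["8.4"],
        "ISO-27001": ["A.5.17"],
    },
}

def frameworks_covered(control: str) -> list[str]:
    """Which frameworks does collecting evidence for this one control satisfy?"""
    return sorted(COMMON_CONTROLS.get(control, {}))

print(frameworks_covered("encrypt-data-at-rest"))  # ['HIPAA', 'ISO-27001', 'PCI-DSS']
```

The point of the feature is exactly this: evidence collected once for the common control counts toward every framework it is pre-mapped to, instead of a GRC team rebuilding the mapping in a spreadsheet per audit.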
[00:01:46] Speaker B: It's the dream: automated evidence generation. And now with the context of, like.
[00:01:51] Speaker C: Known frameworks, what control it's for. Yeah.
[00:01:55] Speaker B: 'Cause that's always the challenge, you know, the last step of that translation: like, this is the control. Hey, we need all these controls to reach this level of compliance.
[00:02:04] Speaker C: Like, how would you like to implement that control? Oh, in the worst way possible. Let me try again. Let me give you something that's reasonable that we can actually work with.
Always the joy.
But yeah, no, it's amazing to me how much time compliance teams spend mapping controls to each other, like ISO to PCI to CSA. I think half their job is just mapping controls in PowerPoints and spreadsheets. I don't mean to belittle them, because I'm sure they're doing a lot of other valuable things, but I've started a lot of control mapping spreadsheets in my day.
[00:02:37] Speaker B: Well, they have to, right? Like think about, like, they have to evidence the controls for each one, right? And so it's like, it's the only way to mapreduce that task.
[00:02:46] Speaker C: You know, it's like that XKCD comic where there are eleven control standards out there, and you're trying to fix it, so you create another one, and now there are twelve control standards out there. That's sort of how I feel about a lot of these, too, because why do we need ISO and CIS and CSA and PCI and HIPAA? It would be nice if we could consolidate some of these, but then we'll just get another control family. But it's a dream that I have that someday some governing board of smart people is gonna be like, this is just too complicated.
[00:03:16] Speaker A: Whatever happened to the Cloud Security Alliance?
[00:03:19] Speaker C: There's a lot there.
[00:03:19] Speaker A: The Cloud Controls Matrix, I haven't heard about that for a long time. I know CSA exists, but it's not a hot topic anymore.
[00:03:27] Speaker C: Well, I have noticed the CIS is pretty rapidly dropping new frameworks for things. They already have an AI framework out, they already have Snowflake frameworks. It seems like CIS is giving a lot of guidance now, and I don't know if CSA has just sort of fallen behind.
I'm not part of that group, so I can't say what's going on inside it, but it definitely exists out there. I do see it in a lot of the products you can buy to do cloud security; they talk about mapping to the CSA standard. But yeah, I don't know what's really happened with it. Let us know.
[00:04:03] Speaker A: This is a nice feature, though. The number of times I've had to go in and take screenshots to prove that the backups are being backed up, prove that things have been done the same way, prove that it has scaling. This is great.
[00:04:19] Speaker C: So I think I figured out the average length of time for PFRs to get rolled out, because in the last two months, a bunch of the things that I know Ryan or Jonathan or myself or our other colleague Andrew had submitted to AWS as PFRs have all gotten released, which is sort of interesting. It's right about five or six years.
And so this week we got another one. It's now easier for AWS Organizations customers to manage the root email address of member accounts across the organization using the CLI, SDK, and Organizations console. They previously made it possible to update primary and alternate contact information and enable opt-in regions for your accounts using the Organizations CLI. However, you still needed to log in as the root account, go through your MFA process if you're properly following best practices, and then change those things, which was really a big problem. But not anymore, as the SDK, CLI, and Organizations console have all been updated to allow this to be done at the organizational level. The API requires customers to verify the new root email address using a one-time password, to ensure that you're using an accurate email address for the member accounts; the root email will not change until the new email is verified. I would maybe also add an old-address verification, just in case a hacker got into my organization and is trying to change all the root accounts. But I appreciate this regardless; it's a good step in the right direction.
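The verify-before-change flow described here can be sketched as a tiny state machine. This is a local illustration of the one-time-password handshake only, not the actual AWS Account Management API:

```python
import secrets

class RootEmailUpdate:
    """Toy model of the two-step root email change: the address only
    flips once the one-time password sent to the NEW address is confirmed."""

    def __init__(self, current_email: str):
        self.email = current_email
        self._pending = None  # (new_email, otp) awaiting verification

    def start_update(self, new_email: str) -> str:
        otp = secrets.token_hex(3)  # in reality, mailed to new_email
        self._pending = (new_email, otp)
        return otp

    def accept_update(self, new_email: str, otp: str) -> bool:
        if self._pending == (new_email, otp):
            self.email = new_email  # change only lands after verification
            self._pending = None
            return True
        return False  # wrong OTP: root email stays unchanged

acct = RootEmailUpdate("ops@example.com")
otp = acct.start_update("cloud-root@example.com")
assert acct.accept_update("cloud-root@example.com", "wrong") is False
assert acct.email == "ops@example.com"  # unchanged until verified
assert acct.accept_update("cloud-root@example.com", otp) is True
assert acct.email == "cloud-root@example.com"
```

The "old-address verification" wished for above would just be a second pending OTP that also has to clear before the swap commits.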
[00:05:37] Speaker A: Why do we even need an email address associated with if you have an organization set up, why even have an email address?
[00:05:43] Speaker C: Those are questions that date back to when Amazon started 15 years ago, when they thought it was important and they didn't have any concept of an organization or an account structure, or that people would want multiples of them.
Mistakes were made.
[00:05:56] Speaker A: Like, at least they separated the AWS logins from the Amazon.com logins, so you couldn't have somebody ordering pairs of shoes on your AWS bill.
[00:06:08] Speaker C: That reminded me of something that happened today. So I went to the office today.
Sorry, I know. And I happened to have a box on my desk. And I was like, what is this box? And I picked it up, and it's very light. I'm like, well, whatever it is. Who knows what this is? So I open up this box, and inside is a shoebox from Rubrik, Inc., which is a security vendor. And inside the shoebox, I did not find shoes. I found a little postcard that said, please scan this QR code and set up a demo with us, and we'll buy you a pair of shoes. And I'm like, you guys shipped this to me from wherever you came from.
This is a terrible marketing plan. So not only did you hurt the environment shipping a box that, you said it was two pounds, but there's no way that box was two pounds, that's just what the label said. You paid a crap ton of money to ship it to me, and now I'm not gonna do business with you, because I think you're terrible at marketing. So bravo.
[00:07:07] Speaker B: I thought the empty box was a joke. Like this is your cloud security that's empty, right?
[00:07:12] Speaker C: I mean, like there's a lot of metaphors you can use cloud security of empty boxes, but if you'd like to check this out, it's out on my Twitter feed. I tweeted about it earlier today because I was so annoyed and I decided this is a marketing fail. But yeah, if you like to see the box and the empty box. Yeah, it's right there, up there on my Twitter feed.
[00:07:32] Speaker A: They'd be better off sending you the shoes, right? And then you only get the laces once you've done the demo. At least give me something to look forward to.
[00:07:40] Speaker C: Someone tweeted they should have stolen my shoes to force me to do the demo.
That would have probably been more effective.
[00:07:46] Speaker B: Yeah, yeah, yeah.
[00:07:47] Speaker C: But anyways, empty boxes shipped to me at whatever expense.
[00:07:52] Speaker B: It's good.
[00:07:53] Speaker C: I don't know who their VCs are, but maybe they should look into that.
[00:07:57] Speaker A: It's probably surplus shoeboxes from a...
[00:07:59] Speaker C: I mean, it's Rubrik branded; it was their colors, their logos plastered all over. They paid a lot of money for it. I used to work at a box company; I know how much they paid for that box. That box is not cheap. And then you shipped it. And actually, it was a shoebox inside of a box, so it's two boxes you're burning on this that have to be recycled.
[00:08:16] Speaker A: Now, if only this had been a story in the show, we could have written a show title about it. Sounds like Dr. Seuss: shoebox, two box.
[00:08:27] Speaker B: The empty box inside of the other box is a pretty good metaphor for cloud security too.
[00:08:31] Speaker C: Yeah, that's a good one too. There's lots of metaphors.
All right, well, getting back to quality-of-life improvements requested from Amazon many years ago: Amazon is now announcing an EC2 instance type finder directly in the console, enabling you to select the ideal Amazon EC2 instance type for your workload, as long as your ideal one is one where you know the specs. It uses machine learning, which is an interesting choice, to help customers make quick and cost-effective instance type selections before provisioning workloads. This is done by using the management console, specifying requirements, and getting recommendations directly from the ML. The finder is integrated into Q, so you can state requirements in natural language, like, hey, I need 12 vCPUs and 25 gigs of memory, and it'll give you instance family suggestions. And I mean, this doesn't seem complicated at all for something that would be a relatively easy filter-grid-view thing, which is what I would have asked for five years ago.
But apparently we're going to take simple problems and throw AI at them, because logic's hard.
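The "relatively easy filter" version of this feature really is about ten lines: filter a catalog of instance types by minimum vCPU and memory. The specs below are a small hand-picked illustrative subset, not a live catalog:

```python
# Hand-picked sample of instance specs (vCPUs, memory in GiB); illustrative only.
INSTANCE_TYPES = {
    "m5.2xlarge": (8, 32),
    "m5.4xlarge": (16, 64),
    "r6g.xlarge": (4, 32),
    "c6i.4xlarge": (16, 32),
}

def find_instances(min_vcpu: int, min_mem_gib: float) -> list[str]:
    """The plain filter-grid approach: no ML required."""
    return sorted(
        name for name, (vcpu, mem) in INSTANCE_TYPES.items()
        if vcpu >= min_vcpu and mem >= min_mem_gib
    )

# "I need 12 vCPUs and 25 GiB of memory"
print(find_instances(12, 25))  # ['c6i.4xlarge', 'm5.4xlarge']
```

The console could populate the same table from an instance-type listing API; the point is that the matching step is a deterministic filter, not a generative one.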
[00:09:29] Speaker B: Me and Jonathan were scheming, you know, in our cynical ways before the show, like trying to figure out like, you know, what, what's the angle here, you know, and I'm like, of course Amazon's going to steer you to the largest, most expensive, you know, instance. And Jonathan's like, no, no, it's much more insidious.
Yeah, no, like most of these enablement tools, I'm very skeptical. I think they're a good thing to have, I guess. But without playing around with this, and given my avoidance of Amazon Q, I'm never going to play around with this.
[00:10:06] Speaker C: And it's hard to avoid Amazon Q. It's kind of front and center all the time.
[00:10:11] Speaker A: Yeah, I'd like to see it replace Google searches, because everyone Googles "AWS EC2 instance blah blah blah" or "how do I do this, AWS." And it's always Google that gets the hit, and Google gets the answer for you, and then Amazon is paying if you click on one of their sponsored links or something. I'd like to see Q just replace Google search. If I've got an Amazon question, I want to go to Q and ask it the question. I want it to be as good as Google search has ever been.
[00:10:39] Speaker B: I don't know if they're going to be context-specific. As a user, I don't want to go to a specific AI bot. I want to ask a question into the ether, essentially, and have it generate that. So I think that's what the search engine role is going to be: it's just going to go to OpenAI, and the same transactions can occur.
[00:10:59] Speaker C: So even worse, I hadn't looked at this earlier, so while you guys were babbling on about this, I decided to go try it.
So this is kind of hilarious to me. So basically it's in the launch an instance wizard, which is always my favorite place to start out and build.
So when you get down to the instance type section of this form, there's a dropdown of all the instance types, and it tells you the on-demand pricing for Windows, Red Hat, Linux, et cetera. And there's a little button that says "get advice." If you hit the little get advice button, it opens another window which has four questions. What's your workload type? Which has a long list of things like web app, server cache, data warehousing, deep learning, ETL, Hadoop, Kubernetes, et cetera. So I chose NoSQL, because why not. For use case, you then had to tell it: web hosting, application hosting, backup, disaster recovery, et cetera. They don't say database, of course, because why would you do such a thing, so I'm going to go application hosting with my NoSQL database workload. Then I can choose a priority: price performance, high performance, low cost, or no preference. I'm going to go no preference, because why would I put a preference in? Then I can tell it the CPU: AWS, Intel, AMD, or an Apple Mac.
[00:12:15] Speaker A: That's cost effective to run MongoDB.
[00:12:17] Speaker C: Exactly. I'm going to go no preference there again. Then, under advanced parameters, I can add a generation, a number of vCPUs, an amount of memory, instance storage required, or network performance. That's optional; you don't have to do it. I'm not going to do that this time, just because I don't feel like clicking more buttons. I just hit "get instance type advice." And I can tell that it's literally sending this out to Q and returning the response from Q.
[00:12:42] Speaker A: That's what it's going to say.
[00:12:44] Speaker C: Yes, go ahead. What would you like to guess?
[00:12:45] Speaker A: I'd like to guess an r5.xlarge.
[00:12:48] Speaker C: You would be wrong.
[00:12:49] Speaker B: T3.
[00:12:51] Speaker C: You'd be wrong as well.
The additional information says the recommended Im4gn, Is4gen, and R7g instance families are based on AWS Graviton processors, suitable for hosting NoSQL workloads for application hosting use cases with a preference for price performance. The Im4gn and Is4gen are storage-optimized instances offering the best price performance for storage-intensive workloads, with up to 40% lower cost per terabyte of storage. The R7g instances, powered by the latest Graviton3 processors, provide the best price performance for memory-intensive workloads like NoSQL databases, offering up to 25% better performance over the previous Graviton2. Which, why do I care? And then the Im4gn and Is4gen are designed for high network bandwidth and have local NVMe SSDs, for workloads requiring higher network bandwidth and local storage. But none of that actually helped me make a decision.
[00:13:42] Speaker A: All right, Graviton. Find me some questions and answers to those questions that do not result in Graviton being the answer.
[00:13:48] Speaker C: Yeah, I can go back and hit try again, which is nice; I don't have to go re-answer all the questions. And I can change this preference to Intel. Would you prefer Intel or AMD?
[00:13:58] Speaker B: Well, what we're trying to do is see how much AWS is steering our choice.
Intel is too big of a hammer.
[00:14:06] Speaker A: Because there's no preference for that. But change the workload type.
[00:14:10] Speaker C: Workload type. What would you prefer? Would you like Spark? Would you like streaming video encoding? SAP HANA?
[00:14:18] Speaker A: Let's do video encoding.
[00:14:19] Speaker C: Video encoding. All right. Do you have a priority preference? Would you like price performance, high performance, or low cost?
[00:14:27] Speaker A: Middle of the road.
[00:14:28] Speaker B: We did no preference before, right?
[00:14:30] Speaker C: Yeah, I did no preference before. Yeah. You want to change the use case from application hosting to something else.
I can do application development, testing, gaming, content delivery networks or no preference on.
[00:14:40] Speaker A: Use case content delivery.
[00:14:43] Speaker C: Let's do it.
For this one, it's recommending to me the C7g and C7gn instance families, powered by AWS Graviton3 chips.
[00:14:54] Speaker A: I think Ryan is right about steering people in a certain direction here.
[00:14:59] Speaker C: Yeah, I don't know that Graviton is the best choice for video transcoding.
Maybe, I don't know.
[00:15:06] Speaker A: Price, performance, that's all it counts.
Apparently they've got a surplus of Graviton2s.
[00:15:11] Speaker C: If I go high performance, I get C7gs. If I go low cost, I get C6gs. So it just downgrades the Graviton3 instances to the Graviton2s. Thanks anyways. So, this feature. I appreciate that it's there.
I'm going to question some of it.
[00:15:32] Speaker A: Graviton appreciates that it's there too.
[00:15:34] Speaker C: Yes it does. The Graviton team, which makes more margin, is very happy about this new feature.
All right, next up, Amazon ECS on AWS Fargate now allows you to encrypt ephemeral storage with your own KMS key.
[00:15:49] Speaker B: Fantastic. Except for I don't want to manage my own keys unless my customers absolutely make me.
[00:15:56] Speaker C: Well, you're going back to that pesky security team that wants you to use your own keys so Amazon can't access your data. Because that makes sense.
They would like you to think otherwise. Yeah, that's it for Amazon before we head into re:Inforce territory.
And could you guys perhaps guess what the very first announcement on the main stage at re:Inforce was about?
[00:16:17] Speaker A: Was it AI?
[00:16:18] Speaker C: It was AI. Of course.
First up: AWS Audit Manager now includes an AI best practices framework. This framework simplifies evidence collection and enables you to continually audit and monitor the compliance posture of your generative AI workloads through 100 standard controls, which are pre-configured to implement best practice requirements. Some examples of those requirements: gaining visibility into potential PII data that may not have been anonymized before being used in training models. How would you know that?
Validating that MFA is enforced to gain access to datasets, and periodically testing backup versions of customized models to ensure that they are reliable before a system outage occurs, among other things.
[00:16:59] Speaker B: I mean, I think this is the only way you're going to be able to secure something like AI workloads. So it's cool, but yeah, what a mess. Yeah.
[00:17:08] Speaker C: I would like to know how you've determined this is potential PII data. Because you had PII data somewhere else that you used to train your model to recognize PII data? Like, hmm, who's buying all that dark web information?
[00:17:18] Speaker A: Yeah. Oh, John Baker. I've seen him before.
[00:17:21] Speaker B: Yeah.
[00:17:22] Speaker C: His password was exposed in the Home Depot hack.
[00:17:27] Speaker A: Yes, I did.
[00:17:27] Speaker B: Yeah, so did I.
[00:17:29] Speaker C: And so did Ryan.
[00:17:30] Speaker B: Yep.
[00:17:31] Speaker C: So I picked it.
[00:17:33] Speaker B: Yep.
[00:17:35] Speaker A: That's interesting.
[00:17:37] Speaker C: It was the first time I ever got credit monitoring and I was like, this is a dumb remedy for me. Okay.
[00:17:43] Speaker B: And now I just sign up for the new one. Whatever one does you want.
[00:17:46] Speaker C: Yeah, whatever. Whatever one I get this week. I decided for that one, and I hope, I hope they don't all expire before I get the next one.
[00:17:53] Speaker A: All right, if the first one was AI, is the second one bi?
[00:17:56] Speaker C: No, it's also AI.
[00:17:58] Speaker A: Oh, curses.
[00:17:59] Speaker B: Nice.
[00:18:00] Speaker C: You can now use... good dad joke, though.
[00:18:01] Speaker B: Yeah, I'll give you a fair dad joke.
[00:18:03] Speaker A: Oh, wait, wait till the follow up.
[00:18:08] Speaker C: You can now use generative AI-powered natural language query generation in AWS CloudTrail Lake, which is a managed data lake for capturing, storing, accessing, and analyzing AWS CloudTrail activity logs to meet compliance, security, and operational needs. You can query things like, tell me how many database instances were deleted without a snapshot, or how many errors were logged during the past month for each service, and what was the cause of each of those errors. Like it would know. Come on.
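That example question, "how many errors were logged during the past month for each service," is essentially a group-by-and-count. A local sketch of the aggregation the generated query would perform; the sample records below are invented and only loosely shaped like CloudTrail fields:

```python
from collections import Counter

# Invented sample events, loosely shaped like CloudTrail log entries.
EVENTS = [
    {"eventSource": "s3.amazonaws.com", "errorCode": "AccessDenied"},
    {"eventSource": "s3.amazonaws.com", "errorCode": None},
    {"eventSource": "ec2.amazonaws.com", "errorCode": "Client.UnauthorizedOperation"},
    {"eventSource": "s3.amazonaws.com", "errorCode": "AccessDenied"},
]

def errors_per_service(events) -> dict[str, int]:
    """Roughly: SELECT eventSource, COUNT(*) WHERE errorCode IS NOT NULL
    GROUP BY eventSource, done in plain Python."""
    return dict(Counter(e["eventSource"] for e in events if e["errorCode"]))

print(errors_per_service(EVENTS))  # {'s3.amazonaws.com': 2, 'ec2.amazonaws.com': 1}
```

The natural-language layer's job is translating the English question into this kind of aggregation over the log schema, which is exactly the syntax-wrangling step people struggle with in Athena.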
[00:18:34] Speaker B: I mean, that said, having spent countless hours generating Athena queries and indexing, I love this feature because this is really where I think generative AI is, is helpful as that sort of last translation layer.
[00:18:50] Speaker C: I mean, I think of all the terrible regex formats and HCL-type things you had to learn over the years, all those crazy search parameters. Yeah, this is my favorite use of AI in all cases. And I was just doing Athena the other day, working on some LB logs, because of course that's all in S3. And I was struggling with syntax on something, and I was like, man, I wish you guys had put AI here, because this would be an area where I would actually use it. I tried Q, but it failed me; it didn't know enough about my data set to help me. So I was like, I'm trying to figure this out, and here's the example from the docs.
[00:19:22] Speaker A: I wonder if they cache those results before they pass them on? Obviously, between the AI RAG call and CloudTrail, they'd better have some kind of caching, because it could get very expensive very quickly if you're scanning gigabytes or more of logs every time somebody asks a dumb question. Sorry.
[00:19:41] Speaker C: I mean, you would hope so.
[00:19:44] Speaker B: I mean, I think it's, you know...
[00:19:46] Speaker A: Why is my bill so high?
[00:19:48] Speaker B: As expensive as the billions of Athena SQL queries I also generated while trying to struggle through syntax? I'm not sure.
[00:19:56] Speaker C: Yeah, I mean, I fought with that Athena problem for a good hour, writing all kinds of weird queries that ran for I don't know how many seconds, and it cost me $7.
Didn't really bother me so much.
Yeah, I was looking at the pricing examples. There's no special pricing for the AI usage of this, but it uses Athena under the hood, too. So you're paying for Athena.
[00:20:21] Speaker B: I mean, I think that's going to be the real kicker, though. Is it $7 plus this AI surcharge?
[00:20:25] Speaker C: Right?
[00:20:25] Speaker B: Or is it just $7?
[00:20:27] Speaker C: Right. Because there's CloudTrail Insights, too. I'd just like to use that. Just give me the knowledge that I need to have; I don't want to think about this so much.
[00:20:36] Speaker B: Well, insights is different, right? Insights is. I tell you what's important, which is fine.
[00:20:41] Speaker C: I'm okay with that.
I don't know a lot about security in some of those areas, so you probably know more than I do. And then when you tell me something that I don't understand, I can ask you dumb questions. That's the ideal scenario.
Well, I'm pretty sure I had this as a prediction for re:Invent or some other show many moons ago, when they were building a bunch of pre- and post-processors for S3 object uploads. And then I was just disappointed, and I swore it off; they're never going to do it. But now here they are, doing the thing I needed many moons ago. They're announcing the general availability of Amazon GuardDuty Malware Protection for S3. This is an expansion of Amazon GuardDuty malware protection to detect malicious files uploaded to selected S3 buckets. Previously, they only did malware scanning on EBS volumes attached to EC2 instances or your container workloads. The GuardDuty malware scanning uses multiple AWS-developed and industry-leading third-party malware scanning engines to provide malware detection without degrading the scale, latency, and resiliency profile of Amazon S3.
Unlike many existing tools, this managed solution from GuardDuty does not require you to manage your own isolated data pipelines or compute infrastructure in each AWS account and region where you want malware analysis. You can configure post-scan actions in GuardDuty, such as object tagging, to inform downstream processing, or consume the scan status information provided through Amazon EventBridge to implement isolation of malicious uploaded objects. Scanned objects will get a predefined tag as well, such as no threats found, threats found, unsupported, access denied, or failed. And you can find the results of the scans in the GuardDuty console as well, telling you what failed. The pricing is based on the gigabyte volume of objects scanned and the number of objects evaluated per month. It comes with a limited free tier, which includes 1,000 requests and one gig each month, for the first twelve months after account creation, or until June 11, 2025, for existing AWS accounts. After that, it's 60 cents per gigabyte scanned and 21.5 cents per 1,000 objects evaluated.
[00:22:32] Speaker A: It's not terrible, not bad.
The kind of kicker about this, though, is that the types of organizations that would want to pay for something like this are the types of organizations that would want client-side encryption or something else, which would completely prevent GuardDuty from scanning any of the objects that got uploaded.
[00:22:49] Speaker C: That's just one more reason not to use client side encryption for this.
[00:22:54] Speaker A: I think it's a case of the security controls modernizing slightly and finally getting around to accepting that these cloud providers can be trusted.
Unless they're Google or Oracle.
[00:23:13] Speaker C: I mean, I did send this to my account rep at Google, and I said, hey, can you just copy this? Because this use case is a big pain in the butt for anybody who has it. So I'd appreciate all the cloud vendors copying this one. Just do this.
[00:23:27] Speaker A: That's neat. It beats trying to build some free antivirus into a Lambda function, which I remember doing a few years ago.
[00:23:32] Speaker C: You don't want to use Avast? I don't understand. So weird. ClamAV was the one I used.
[00:23:38] Speaker B: Yeah, same, only I was running it in a container. But it was the exact same workload: event, trigger, scan. Yeah, cool.
[00:23:48] Speaker C: Well, Amazon is extending IAM Access Analyzer to make it more powerful, by extending custom policy checks and adding easy access to guidance that will help you fine-tune your IAM policies. Both of the new features build on the custom policy checks and the unused access analysis launched in 2023. The new custom policy checks use the power of automated reasoning, and help you detect policies that grant access to specific critical AWS resources, or any type of public access. Both of the checks are designed to be used ahead of deployment, possibly as part of your CI/CD pipeline, and will help you proactively detect updates that do not conform to your organization's security practices and policies.
Second, guided revocation: IAM Access Analyzer now gives you guidance that you can share with your developers so they can revoke permissions that grant access that is not actually needed. This includes unused roles, roles with unused permissions, unused access keys for IAM users, and unused passwords for IAM users. And I'm really just disappointed that they didn't announce AI for IAM, because if there's any place I would want IAM with AI, it would be... or AI with IAM, if I could get the letters right.
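The CI/CD use case boils down to: run the policy check against the candidate policy, then fail the build on any finding. A sketch of that gate; the result shape here is an assumption for illustration, not Access Analyzer's documented response format:

```python
import sys

def gate_on_findings(check_result: dict) -> int:
    """Return a CI exit code: nonzero if the policy check found a violation.
    The {'result': ..., 'reasons': [...]} shape is hypothetical."""
    if check_result.get("result") == "FAIL":
        for reason in check_result.get("reasons", []):
            print(f"policy check failed: {reason}", file=sys.stderr)
        return 1
    return 0

# Simulated output from a custom policy check (shape is hypothetical).
result = {
    "result": "FAIL",
    "reasons": ["statement allows public access to s3:GetObject"],
}
exit_code = gate_on_findings(result)
print(exit_code)  # 1
```

In a real pipeline, the dict would come from invoking the Access Analyzer check against the rendered policy document, and the nonzero exit code is what blocks the deploy.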
[00:24:51] Speaker B: It is interesting how they stop just short of that. It uses automated reasoning.
[00:24:56] Speaker C: Mm hmm. Yeah, go on.
[00:24:59] Speaker A: Yeah, that's provable. That's the difference.
[00:25:02] Speaker C: It's provable by math.
[00:25:03] Speaker A: That's Colm's work for you right there. That's provable. AI is not.
[00:25:09] Speaker C: I wonder what Colm's opinion of AI is.
That'd be a fun drinking conversation at home.
[00:25:14] Speaker B: Yeah, I mean, this is definitely something I've always sort of wanted to build in, like the CI CD example, use cases, exactly the type of thing you want because the feedback is in the right place.
The implementation I'm a little unsure of; I don't know if you get that feedback in the right place unless you're in the console, which is cheating.
[00:25:36] Speaker C: Well, this is the, I mean, this is also the thing that you and I got really excited about when we went to reinforce the first time. They showed us all the math proofs. And the fact that you can actually create custom policies and do your own math now is like, yeah, it's probably took, you know, however many reinforces have happened since we went to the first one for that to get to this point, but I appreciate this is here.
[00:25:54] Speaker B: To be fair, I still don't understand the math.
[00:25:56] Speaker C: I don't either, but I can now apparently get close to writing it. Well, Amazon is adding passkey multi-factor authentication for root and IAM users. Passkeys will enhance the security and usability of your AWS usage.
Basically, for root, they'll be forcing you, the next time you log into your root account, to enable MFA, and one of those MFA choices can of course be passkeys.
This is a big change, as there have been a lot of breaches recently, like Snowflake's, where MFA usage was not enforced, and that has proven to be a problem. For those of you who are not aware what passkeys are: it's a general term for credentials created for FIDO2 authentication. A passkey is a pair of crypto keys generated on your client device when you register for a service or website, and the key pair is bound to the web service domain and unique for each one. The whole idea is that you can get away from those pesky passwords, so if your phone gets stolen, you're completely owned.
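The domain-binding property described here, one credential per site so a phishing look-alike domain never sees the real site's credential, can be illustrated with a small sketch. Real passkeys generate a per-origin asymmetric key pair (FIDO2/WebAuthn); the symmetric HMAC derivation below is not how passkeys actually work and only demonstrates the per-origin uniqueness:

```python
import hashlib
import hmac
import secrets

DEVICE_SECRET = secrets.token_bytes(32)  # stays on the client device

def credential_for(origin: str) -> bytes:
    """Derive a per-origin credential. Real passkeys generate an asymmetric
    key pair per origin instead; HMAC is used here purely for illustration."""
    return hmac.new(DEVICE_SECRET, origin.encode(), hashlib.sha256).digest()

real = credential_for("https://signin.aws.amazon.com")
phish = credential_for("https://signin.aws-amazon.example")  # look-alike domain

print(real != phish)  # True: the phishing site gets a different credential
print(credential_for("https://signin.aws.amazon.com") == real)  # True: stable per origin
```

Because the credential is a pure function of (device secret, origin), a look-alike domain mathematically cannot ask for the real site's credential, which is the anti-phishing argument for passkeys over passwords.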
[00:26:56] Speaker B: Yeah, I was trying to think of the implementation, because I hadn't connected passkeys to FIDO2.
Now I've connected those dots, and now I'm trying to think through management of these things, because they're like, it's all stored on your client device, no problem.
[00:27:08] Speaker A: I'm like, yeah, but you have another key, which is your fingerprint or your face, which decrypts the passkey, which is the one that they use to authenticate the other thing with.
[00:27:16] Speaker C: I've only had experience with 1Password's implementation of it and with Apple's. So with Apple, when you enable a passkey, it pops up a QR code; you pick up your phone, you have to scan it with the camera, and then it does Face ID authentication and unlocks it through that. That one I feel pretty good about. The 1Password one just pops up a button that says, do you want to use your passkey? And you hit enter. That's it.
So that one I'm a little less excited about. But it's interesting.
[00:27:48] Speaker A: It's interesting. It's really old technology, which is really funny. I mean it's like GPG things where website provides you with something which is encoded or encrypted with your public key and you have to decrypt it and send it back again. And that's been around for decades. It's just funny that it's only just getting to be adopted by mainstream. Like the dark web websites have been using this kind of technology for logins forever.
[00:28:13] Speaker C: It's sort of like when Ryan told me that DNSSEC has been around forever and I felt like an idiot.
[00:28:20] Speaker D: Well, simplified enough for the average consumer to use it now. Also.
[00:28:24] Speaker B: Hi.
[00:28:24] Speaker C: Hi.
[00:28:25] Speaker B: Hey man.
Yeah, branding is everything, so now we can call it passkey and you know, we're referring.
[00:28:34] Speaker C: Yeah, and I'm not fully sold on the whole passkey thing quite yet. So again, you can't have it as your only MFA option. You have to have another MFA option as well, which is interesting. Apparently Amazon doesn't trust it quite yet either.
[00:28:50] Speaker A: I can't like, I mean MFA is great, but I want, I want multiple, multiple factors of that. So if I do lose my phone, just the password won't be enough. Just face id won't be enough. Just a fingerprint won't be enough. I want to have like a physical. If my phone restarts, I want to plug a USB, like a yubikey or.
[00:29:10] Speaker C: Something in. I have a Titan Key, which is Google's version of the YubiKey, and it has USB-C, and I can plug it into my phone and plug it into my computer, and it's pretty nice. But I haven't quite fully adopted that use case in the world, because I'd always have to have my keys with me then, and that is not a use case that I want.
[00:29:28] Speaker D: I have the YubiKey, the small micro one that just stays attached to your computer.
[00:29:33] Speaker C: Sure, it's very light. You mean the one that you're supposed to leave in your computer all the time, because that's not insecure at all, 100%.
[00:29:39] Speaker D: But at least it's the same type of thing: when I need it, I attach it to my phone and multi-factor in, because USB-C is on the phone, so nice and easy.
[00:29:47] Speaker C: All right, next up, AWS is announcing service insertion, a new feature of AWS Cloud WAN that simplifies the integration of security and inspection services into cloud-based global networks. Using this feature, you can easily steer your global network traffic between Amazon VPCs, AWS regions, on-premises locations, and the Internet via security appliances or inspection services, using a central Cloud WAN policy or the AWS Management Console. Of course, you deploy the inspection services or security appliances, such as firewalls, IDS, IPS, and secure web gateways, to inspect and protect your global Cloud WAN traffic, and customers can easily steer that traffic, as mentioned. This saves you from having to create multiple complex routing configurations or use third-party automation tools. You can define this in a central policy document and apply it to all of your Cloud WAN, or break all of your Cloud WAN at one time.
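The central policy document mentioned here looks roughly like the sketch below: a `send-via` segment action steering traffic through a network function group where the inspection appliances sit. The field names follow the Cloud WAN policy format as best as can be recalled; treat the exact shape as illustrative, not authoritative.

```python
import json

# Rough sketch of a Cloud WAN central policy with service insertion:
# traffic from "production" to "development" is steered "via" a network
# function group holding the firewall appliances. Field names are
# illustrative, based on the published policy format.

policy = {
    "version": "2021.12",
    "segments": [{"name": "production"}, {"name": "development"}],
    "network-function-groups": [
        {"name": "InspectionGroup", "require-attachment-acceptance": True}
    ],
    "segment-actions": [
        {
            "action": "send-via",  # steer through inspection, don't route direct
            "segment": "production",
            "mode": "single-hop",
            "when-sent-to": {"segments": ["development"]},
            "via": {"network-function-groups": ["InspectionGroup"]},
        }
    ],
}

print(json.dumps(policy, indent=2))
```

The appeal, and the risk the hosts joke about, is that this one document governs the whole global network: one change inspects everything, or breaks everything, at once.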
[00:30:36] Speaker A: Cool. So apart from Cisco and Palo Alto, who else cares about this?
[00:30:42] Speaker C: I don't know. People using Amazon Cloud WaN service versus Cisco.
[00:30:47] Speaker A: I wonder if it solves the bandwidth limitation per instance. I wonder if they somehow do some magic so that they can route enough traffic to enough instances. Probably not without having that like ten.
[00:30:58] Speaker B: Gig constraint for when.
[00:31:00] Speaker D: Yeah, yeah, I assume they do something like an ingress through.
Oh, what's the third type that you never use? It's not the application load balancer, it's not the network load balancer. The gateway load balancer.
[00:31:13] Speaker C: Oh yeah, that and the transit gateway.
[00:31:15] Speaker D: Yeah, so they probably use something like a gateway load balancer and then from there out. Because the whole point of the gateway load balancer is really for like isvs to leverage to solve that problem that you're talking about.
[00:31:27] Speaker C: I don't know. And then the last thing they decided to announce during reinforce: AWS announces the general availability of Amazon CloudWatch Application Signals, an OpenTelemetry-compatible application performance monitoring feature in CloudWatch that makes it easy to automatically instrument and track application performance against your most important business objectives or SLOs for applications on AWS, with no manual effort. Bullshit. With no custom code and no custom dashboards, Application Signals provides service operators with a pre-built, standardized dashboard showing the most important metrics for application performance: volume, availability, latency, faults, and errors for each of their applications on AWS.
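As a rough illustration of the signals such a dashboard surfaces, here is the arithmetic behind volume, availability, faults, and latency percentiles over a set of made-up request records. Application Signals computes these for you; this just shows the math.

```python
from statistics import quantiles

# Toy illustration of what an APM dashboard derives from raw requests:
# volume, availability, fault count, and a high latency percentile.
requests = [
    # (latency_ms, status_code)
    (120, 200), (95, 200), (210, 200), (80, 200), (1500, 500),
    (110, 200), (90, 200), (300, 503), (85, 200), (105, 200),
]

volume = len(requests)
faults = sum(1 for _, status in requests if status >= 500)  # 5xx responses
availability = 100.0 * (volume - faults) / volume
p99 = quantiles([ms for ms, _ in requests], n=100)[98]  # 99th percentile

print(f"volume={volume} availability={availability:.1f}% "
      f"faults={faults} p99={p99:.0f}ms")
```

An SLO check is then just a comparison of `availability` (or `p99`) against the target; the value of a managed feature is doing this continuously, per service, without the custom dashboard work.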
[00:32:01] Speaker B: Was this specifically a reinforce announcement? Or just. Because that's an odd one.
[00:32:07] Speaker C: It was during the middle of the reinforce thing, so it got lumped in. I don't know.
[00:32:09] Speaker B: Okay. I was just curious because it's just.
[00:32:12] Speaker C: Sort of like, I don't know. I don't know if it was in the keynote because I watched 20 minutes of the keynote and I fell asleep, so.
[00:32:19] Speaker B: Nice. Yeah, I thought it was tomorrow, so I'm no better.
[00:32:24] Speaker C: I mean, the recording is available to you at any time you'd like, tomorrow included, so.
[00:32:28] Speaker B: Uh.
[00:32:28] Speaker C: But yeah, reinforce is continuing, so we'll see if anything else comes out the rest of the week. Uh, but, uh, I doubt it.
[00:32:35] Speaker B: I mean, this is going to give you a certain amount of cardinality right out of the box, just like most APM tools do.
[00:32:40] Speaker C: Yeah, I mean, it's better than nothing.
[00:32:42] Speaker D: Yep.
[00:32:43] Speaker B: Better than logging every, every request. So Cloudwatch will still get their money.
[00:32:52] Speaker C: Moving on to GCP. Google Kubernetes Engine is announcing a game-changing feature for GKE Enterprise customers: built-in, fully managed GKE compliance with GKE posture management. Now, achieving and maintaining compliance for your Kubernetes cluster is easier than ever before. With GKE compliance, you can easily access. Sorry, assess your GKE clusters and workloads against industry standards, benchmarks, and control frameworks, including the CIS Benchmark for GKE and the Pod Security Standard.
It also gives you a very handy centralized dashboard to make your reporting easy, updated every 30 minutes.
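As a sketch of the kind of check such posture management runs, here is a toy assessment of a pod spec against two representative rules from the Pod Security Standard "restricted" profile. The real checks are much broader; the pod shape follows the Kubernetes API, but the rule selection here is illustrative.

```python
# Minimal sketch of a Pod Security Standard style check: scan each
# container's securityContext for two representative "restricted" rules.
def check_pod(pod: dict) -> list[str]:
    """Return a list of violations for a pod spec dict."""
    findings = []
    for c in pod.get("spec", {}).get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged", False):
            findings.append(f"{c['name']}: privileged containers are disallowed")
        if not sc.get("runAsNonRoot", False):
            findings.append(f"{c['name']}: must set runAsNonRoot: true")
    return findings

pod = {
    "spec": {
        "containers": [
            {"name": "app", "securityContext": {"runAsNonRoot": True}},
            {"name": "sidecar", "securityContext": {"privileged": True}},
        ]
    }
}

for finding in check_pod(pod):
    print("FAIL:", finding)
# The sidecar fails both rules; the app container passes.
```

The managed feature's job is to run checks like this against every workload and roll the results up into that 30-minute dashboard.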
[00:33:26] Speaker B: If it wasn't for that dashboard, you would have completely lost me. But now, again, I can just point compliance people at a pre built thing and be like, look, green, green good.
[00:33:34] Speaker C: Yeah, I tried to, I tried to send this to the team earlier to see if they could use it, and they could not because there's some new permission that you need to enable in the cloud center of excellence. So if you could, you could get on that. Great.
Now, now it comes full.
[00:33:47] Speaker B: Now it's full context. I understand what's going on now.
[00:33:50] Speaker C: Yeah, it does look cool though. But, you know, it's not that fancy of a dashboard. It doesn't tickle my executive brain properly.
[00:33:58] Speaker B: Had to fail.
[00:33:59] Speaker C: I mean, I think it makes my person happy though. Cause it's got green and orange, and I like line graphs more. So just.
[00:34:09] Speaker D: I see, yeah, it's probably more targeted towards the auditor and everyone else. You'd be like, look, we're good. See, even Google, see, all green.
[00:34:15] Speaker C: Green is good, go away. Green good, go away.
[00:34:19] Speaker B: That's what we want. Yeah.
Signals.
[00:34:23] Speaker C: Well, Google is boosting developer productivity with new pipeline validation capabilities in Dataflow. Apparently, data engineers building batch and streaming jobs in Dataflow sometimes face a few challenges, including things like user errors in their Apache Beam code that go undetected until the job fails while it's already running, wasting engineering time and cloud resources. Then, fixing that initial set of errors highlighted after job failure is no guarantee of future success, as subsequent submissions of the same job may fail and highlight new errors that require additional fixes. This is just software engineering. This is every day of my life writing code. Like, it didn't work. Let me go fix it. And then I tried again. Oh, that worked. But now this is broken.
To solve this, Google is announcing pipeline validation capabilities in Dataflow. Now, when you submit a batch or streaming job, Dataflow pipeline validation performs dozens of checks to ensure that the job is error-free and can run successfully. Once the validations are completed, you're presented with a list of identified errors along with the recommended fixes in a single pane of glass, saving you time you would have previously spent on iteratively fixing errors in your Apache Beam code.
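The validation pattern described, many independent checks run up front with every error and suggested fix reported in one pass, can be sketched like this. The check names and job fields are invented for illustration.

```python
# Sketch of pre-submission validation: run every check, collect ALL
# errors with suggested fixes, instead of failing one error at a time.
def check_output_path(job):
    if not job.get("output", "").startswith("gs://"):
        return "output must be a gs:// path -- fix: set a Cloud Storage URI"

def check_worker_count(job):
    if job.get("max_workers", 0) < 1:
        return "max_workers must be >= 1 -- fix: set max_workers"

def check_region(job):
    if "region" not in job:
        return "no region set -- fix: add a region to the job config"

CHECKS = [check_output_path, check_worker_count, check_region]

def validate(job: dict) -> list[str]:
    """Run every check and collect all errors at once."""
    return [err for check in CHECKS if (err := check(job)) is not None]

job = {"output": "/tmp/results", "max_workers": 0}
for error in validate(job):
    print("ERROR:", error)
# All three problems surface in one run, not across three failed submissions.
```

The contrast with the submit-fail-fix loop the hosts joke about is the point: each check is independent, so one validation pass replaces a whole series of failed job submissions.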
[00:35:25] Speaker B: I think you could absolutely measure the maturity of a data engineer by their ability to write a new feature and then test against a subset of a data and how refined that is, because I know the first thing I do is learn by doing. And then you point that at a petabyte data set, you're like, oh, no, it's like, undo, undo, stop, stop.
That's not how that works. So it's, this is cool. Like, it's sort of building that in where it's like, I'm sure it's doing exactly what I'm saying, which is testing against a smaller data set that it's carved off somewhere.
[00:36:03] Speaker D: I'm just imagining every Jenkins pipeline or every CI/CD pipeline I've done, where it's like, okay, I built a pipeline. How many commits does it take for me to get the pipeline to run?
[00:36:17] Speaker C: How many until it works?
This is my job. This is software engineering 101. Like, you write the code, you debug the code, then you put it in the GitHub repo, and then the Jenkins job fails. And then I fix that, and then the Jenkins job fails.
This is. Why would you make this easier for me? Like, come on now, take all the.
[00:36:37] Speaker D: Fun swearing out of my job.
[00:36:38] Speaker C: I know this is where my most inventive swears and rants come from, is this type of fighting.
[00:36:43] Speaker B: And they get captured for history and they get.
[00:36:45] Speaker C: They do.
[00:36:46] Speaker D: Well, depends if you force push over your git history.
[00:36:51] Speaker C: Oh, no, no, I don't do that. I leave the git shame intact for everyone to see. That way they can see the pain that I went through to get this code working.
[00:36:59] Speaker D: There was an old website that was like, git commits from last night.
[00:37:04] Speaker B: Yeah.
[00:37:05] Speaker C: So good.
[00:37:06] Speaker D: I don't know, does it filter GitHub or public repos for swear words today?
And that is it.
[00:37:14] Speaker C: It still works. Late night commits.com.
[00:37:19] Speaker B: I'm sure it serves the same purpose.
[00:37:21] Speaker C: I love that the third one I got was Django. Shit. Yeah, I could see that. That one resonates with me.
[00:37:30] Speaker D: Yeah.
[00:37:31] Speaker C: Damn, this is hard. I feel your pain. Zach is beautiful. I don't know who you are, but I do agree it can be hard sometimes.
All right, well, Google is announcing that the Google-built PAM they announced at Google Cloud Next is now available for you to play with in preview. PAM helps you achieve the principle of least privilege by ensuring your principals or other high-privileged users have an easy way to obtain precisely the access they need, only when required, and for no longer than required. PAM helps mitigate risk by allowing you to shift always-on standing privileges to on-demand privileged access, with just-in-time, time-bound, and approval-based access elevations.
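A toy model of the just-in-time, time-bound access described here: no standing privilege, a grant only exists after approval, and it expires on its own. The role and principal names are made up.

```python
import time

# Toy just-in-time access model: grants are created only after an
# (elided) approval flow and expire automatically.
class Grant:
    def __init__(self, principal: str, role: str, duration_s: float):
        self.principal = principal
        self.role = role
        self.expires_at = time.monotonic() + duration_s

    def is_active(self) -> bool:
        return time.monotonic() < self.expires_at

def has_access(grants: list[Grant], principal: str, role: str) -> bool:
    """Access exists only while an unexpired grant is present."""
    return any(g.principal == principal and g.role == role and g.is_active()
               for g in grants)

grants: list[Grant] = []
assert not has_access(grants, "ryan", "roles/compute.admin")  # no standing access

# Approval flow (elided) succeeds -> issue a short-lived grant.
grants.append(Grant("ryan", "roles/compute.admin", duration_s=0.05))
assert has_access(grants, "ryan", "roles/compute.admin")

time.sleep(0.1)  # the window passes
assert not has_access(grants, "ryan", "roles/compute.admin")  # auto-expired
```

The auto-expiry is the property the hosts call out: even a phished credential is only useful inside an approved, time-boxed window.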
[00:38:07] Speaker B: I'm super excited by this. I think that this is something that will change the way we structure permissions.
It's a great compromise from the old Windows style, where you had your two accounts, where everyone shared the same password between the two accounts, but you had two, so it was separate. Cool.
[00:38:27] Speaker D: Don't tell your compliance person that.
[00:38:29] Speaker B: I know, but this model of being able to sort of make a request flow or a pipeline, where you're like, well, normally you don't have write access to your production whatever, but, you know, you can have an approval flow and have that reach out to a source of truth. So there's an approved change window, or it can be approved by someone, or it could just log it somewhere. And it's just fantastic, because then it prevents that.
[00:39:01] Speaker D: Yeah.
[00:39:02] Speaker B: Use of having too many permissions, and then also getting your credentials, like, phished or something. Yeah, I love it.
[00:39:11] Speaker C: And they can still get phished. They just phish it and use it. Right.
[00:39:14] Speaker D: Then it kind of reminds me a little bit of, this is how you know I've been in Microsoft too long, like their P2 licensing, which has a just-in-time permission where essentially, if you want to elevate your permissions, it sends an email that says so-and-so has elevated their permissions, somebody approve it. So it kind of feels like the same type of thing, but this feels a little bit more native to the cloud. That's still, like, email approval, because you want email.
[00:39:41] Speaker C: How dare you call a Microsoft thing not native to the cloud. How dare you, sir?
[00:39:46] Speaker D: I can read the RCAs I get from them.
[00:39:49] Speaker B: Yeah, yeah. The, the individual, like sort of, uh, entitlements, um, are configurable, right. So if you wanted to be the same email thing, you could absolutely do that. If you wanted to have it generate an event and, and wait for an approval, you can do that too. So it's kind of, it's kind of great that you can do that. And then the fact that you can set it to auto expires, I love that so much. Like, that's my favorite.
[00:40:17] Speaker C: I definitely wish I was at a company where I could do it day one now with all these toolings because when we first implemented this at a prior job, we had to invent a lot of this and then it's gotten easier over the years, but now it's like, okay, now it's really easy to do this. Yeah. And then you couple this with zero trust networking and some of those things and you get some really nice security workflows that I think are great.
[00:40:37] Speaker B: Yeah. Well, the funny thing about, you know, so if you think about our previous implementation, it actually wasn't compatible with like the newer SSO config per cloud either. So it's like you, we're sort of forced to choose your favorite, whether you wanted to do a much better identity provider SSO flow or do just in time access. So this is sort of because this is in a different part of the flow directly at the cloud provider. You could have both.
Exactly.
[00:41:06] Speaker D: At one point we should have a, maybe we do as an aftershow of like if we build from day one brand new company starting off, what would we do and how would we do it might be a fun conversation.
[00:41:18] Speaker B: Oh, I think I would cry.
[00:41:21] Speaker C: I just wouldn't.
Just do serverless.
Yeah, nothing but serverless. Would I like to do something with Mongo? No, serverless. Yeah, that's all serverless. Or managed services. Screw you, I don't care. It has startup problems. Don't care, make it work. Yeah, we're not managing servers, not doing Kubernetes. We're not doing any of that. Just serverless.
[00:41:44] Speaker D: That is what I told one company that I worked with. At one point they were starting net new. I got in, like, the first day and built them a CI/CD pipeline that just went to Fargate.
[00:41:56] Speaker B: Nice.
[00:41:56] Speaker D: I think that's all they still have today, Fargate and OpenVPN or something like that.
[00:42:02] Speaker C: Even that's too much serverless.
I appreciate Fargate as being about as serverless as you can get while still having servers, but.
[00:42:10] Speaker D: I don't have to manage them or automatically.
[00:42:13] Speaker C: That's true.
So this afternoon a groundbreaking multicloud partnership was announced. And I only know this because my sales rep from Google texted it to me and went, oh my God, did you see the news? And I was like, no. He sent me the link, and the first thing I saw was Oracle and Google. And in the first two, three words, I was like, oh my God, if they sold Google Cloud to Oracle, I'm going to murder someone. But that was not what it was.
Oracle and Google today announced a partnership that gives customers the choice to combine OCI and Google Cloud technologies to help accelerate their application migrations and modernizations. Leveraging Google Cloud Cross-Cloud Interconnect, customers will be able to onboard in eleven global regions, allowing customers to deploy general-purpose workloads with no cross-cloud data transfer charges. And later this year, a new offering, Oracle Database at Google Cloud, will be available with the highest level of Oracle database and network performance, along with feature and pricing parity with OCI. So clearly my Google rep, and sorry, I know you're probably listening, doesn't follow the Azure news, because this is what Oracle and Azure did about six months ago. So I appreciate that. Groundbreaking it is not, but it is appreciated. If you are a Google customer who uses Oracle databases and has been frustrated with the lack of offerings, you can now take advantage of that directly from the OCI cloud and get probably some discounts, because Oracle is still trying to get you to use OCI. And eventually you'll have an Oracle database offering directly from Google Cloud, probably managed by Oracle, just like Azure has.
Both companies will jointly go to market with Oracle Database at Google Cloud, benefiting enterprises globally and across multiple industries, including financial services, healthcare, retail and manufacturing, and more. Larry Ellison has a quote here: customers want the flexibility to use multiple clouds, and to meet this growing demand, Google Cloud and Oracle are seamlessly connecting Google Cloud services with the very latest Oracle database technology. By putting Oracle Cloud Infrastructure hardware in Google Cloud data centers, customers can benefit from the best possible database and network performance.
Sundar Pichai says Oracle and Google Cloud have many joint enterprise customers and this new partnership will help these customers use Oracle database and applications in concert with Google Cloud's innovative platform and AI capabilities.
As a customer of Oracle and Google Cloud, you get lots of flexibility here: the ability to bring your own Oracle licenses to the process, new discount programs such as Oracle Support Rewards, a unified customer experience for all your Oracle needs, as well as access to all the Oracle application services and capabilities. So congratulations to you Oracle customers who've been mad that you chose Google instead of Amazon or Azure. You now have the same capabilities.
[00:44:46] Speaker B: You think Amazon will ever adopt this? Like ever?
[00:44:50] Speaker C: You know, I mean, so I was talking about this, because my second thought was, isn't this, like, they're the third people to support Oracle? And I'm like, well, technically they're the second, because I don't count Amazon's deal as the same thing, because Amazon was so early that Oracle still thought cloud was a fad and stupid, and so they gave them a sweetheart deal to allow them to host Oracle on RDS, and that's their original contractual negotiation. And so I suspect that, because the relationship at that time was very good between the two companies, although Amazon always hated Oracle, it got very bad afterwards. And Andy Jassy, the CEO of the company, has said very negative things about Larry Ellison many times on stage at reinvent. I'm going to say no, but, you know, I could be surprised. You know, business is business, money is money, and if Oracle and Amazon wanted to get together and do something. But the whole idea of Amazon not charging a per-gigabyte fee for data transfer anywhere in the world would be shocking to me. So I just doubt it.
[00:45:50] Speaker B: Yeah, no, I was leaning the same way, because it is one of those things. There would have to be a huge influx of customers that were choosing not to go on Amazon, or moving from Amazon, for this specific workload, which I'm sure there are plenty. And these types of enablements, when you need them, are fantastic, because the alternative is terrible.
[00:46:16] Speaker C: Managing oracle myself is terrible. You are correct.
[00:46:19] Speaker B: Yeah, well, I mean, yeah, I have been blessed in my career to never have to do that at the software level, only the hardware level.
[00:46:27] Speaker C: I mean, it's not a bad product, like I said all the time, it's a great database. I just don't pay any license fees.
[00:46:31] Speaker D: Even managing Oracle on RDS is painful.
[00:46:34] Speaker C: It is. It's not great. I mean, honestly, it would be in the best interest of Amazon's customers. If they're as customer focused and obsessed as they say, they would offer a database service from Oracle that they manage and care for. It would be a better experience.
[00:46:51] Speaker D: My only question about this is what is Oracle support rewards? And why does it sound like I'm checking out a grocery store?
[00:47:00] Speaker C: It's not that, but I hear you. It sounds like a loyalty program.
[00:47:03] Speaker B: If you have two RAC clusters, you're at, yes, silver status, and you get a third one for free.
[00:47:08] Speaker C: I don't, yeah, I think it's just, I think there's a certain amount of money you spend on Oracle licenses and they give you a support reward, basically support they offer for free. It's basically like, here's the free support we gave you. They just call it a reward now. That's my rough understanding of it, and I will tell you, I did not read the Ts and Cs on this, so I don't know how Oracle is screwing you on that deal, but that's what they've been calling it for the last couple years.
[00:47:30] Speaker D: Right. It'd be like them checking me out, being like, ooh, you spend another $500,000, you get another reward that gives you something.
[00:47:40] Speaker C: I mean, you can get cloud pod rewards and we'll just give you stickers.
[00:47:43] Speaker B: That's how it works.
Stickers, stickers.
[00:47:46] Speaker D: The two year old will appreciate that.
[00:47:47] Speaker B: Yeah.
[00:47:49] Speaker C: Moving on to Azure. I had one announcement that isn't AI relevant, which I appreciated, but it's still kind of boring, so I apologize. Azure is building on the successful open sourcing of the Retina cloud native container networking observability platform, which I had forgotten about, with a new offering called Advanced Container Networking Services, a suite of services built on top of existing network solutions for AKS to address complex challenges around observability, security, and compliance. Advanced network observability is now available in public preview. Advanced Container Networking Services is a suite of services built to significantly enhance the operational capabilities of AKS clusters. The suite is comprehensive and designed to address the multifaceted, intricate needs. Okay, I get it. Cut out all the buzzwords. What does it actually do? Well, I still don't know, because the service brings the power of Hubble's control plane for both Cilium and non-Cilium Linux data planes to unlock Hubble metrics, the Hubble CLI, and the Hubble UI on your AKS clusters, providing deep insights into your workloads.
It uses eBPF, guys. It's a CNI, a container network interface. It visualizes it for you. That's what it does. And they could have just said that, but they didn't.
[00:48:56] Speaker B: I mean, this speaks to the root of why I don't like Kubernetes in general, which is, like, I like workloads where you're delegating responsibility boundaries and isolating things. And this type of networking suite exists because you're hosting multiple workloads and multiple different business entities and all kinds of things on your Kubernetes clusters, and so you need this visibility.
You've got your different service mesh things running and trying to debug all these things where it's, well, you could simplify.
[00:49:28] Speaker C: But I did learn about Hubble from this. And Hubble has cool graphics. Yes, it has a really cool service map, and it has some really cool charts that are very, very up and down, very charty.
[00:49:40] Speaker B: You are becoming a caricature of yourself.
[00:49:43] Speaker C: I know, but I was intrigued by Hubble, but not enough that I would make anybody implement it. Because the other side of it is like, you can use Hubble metrics with Grafana, and I'm like, why wouldn't I just use Grafana?
Why all this complexity? Just use Grafana for all of this problem.
[00:49:59] Speaker B: Well, that's meeting you where you are, right? If you already have Grafana, you want to do your thing. But yeah, no, it is, it's just, it's so complex. And it is cool, I have definitely seen some cool stuff. And when you need to have the visibility, because, you know, you've already made your bed, then these are nice tools to have.
[00:50:17] Speaker C: Hopefully our one azure person on the podcast will use this and tell us it's cool.
[00:50:23] Speaker D: I've still avoided Kubernetes for the most part here.
[00:50:26] Speaker C: And you're better for it. You're already bald enough without the Kubernetes work. If you add Kubernetes to it, it turns gray. That's what I learned. Yeah, that's what happened to Ryan. His goatee started turning gray when he started looking at Kubernetes.
[00:50:40] Speaker B: You were not wrong. It definitely did indeed.
[00:50:44] Speaker C: Well, Oracle, I do have a couple other announcements other than their groundbreaking partnership with Google.
So FinOps X is next week, Thursday and Friday. This episode will drop right before that. I will be there. If you happen to listen to this on your flight to FinOps X, I will be there. I'll have stickers. Tweet me or just find me. I'll be around the show floor. I'll be all over the place.
I will have stickers everywhere but at the midway. I probably will not take stickers to the midway party, but if you see me on the show floor, I will have a few, and you can ask for a cloud pod sticker or a lambda sticker or whatever I decide to bring. I might even have some pins. We'll see. But basically, Oracle is announcing ahead of that that they are fully in support of FOCUS, with the 1.0 spec now added to their OCI cost reports. And they are excited that you can now generate all your supplemental cost reports in the FOCUS schema, allowing you to get standardized transactional data that you can use to determine your spend across multiple clouds in the same way. Yay.
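A sketch of why a common schema helps: two providers' billing rows mapped onto a few FOCUS-style columns so multi-cloud spend can be summed uniformly. The FOCUS column names used here (`ProviderName`, `ServiceName`, `BilledCost`) are real spec columns, but the source field names and the mapping itself are loosely illustrative.

```python
# Normalize two providers' billing rows to a handful of FOCUS-style
# columns so spend can be aggregated the same way across clouds.
# Source field names are loosely based on each provider's exports.

def from_aws(row: dict) -> dict:
    return {"ProviderName": "AWS",
            "ServiceName": row["product_servicename"],
            "BilledCost": float(row["line_item_unblended_cost"])}

def from_oci(row: dict) -> dict:
    return {"ProviderName": "OCI",
            "ServiceName": row["product/service"],
            "BilledCost": float(row["cost/myCost"])}

rows = [
    from_aws({"product_servicename": "Amazon EC2",
              "line_item_unblended_cost": "12.50"}),
    from_oci({"product/service": "DATABASE",
              "cost/myCost": "40.00"}),
]

total = sum(r["BilledCost"] for r in rows)
print(f"total spend across clouds: ${total:.2f}")  # $52.50
```

Before a common spec, every one of those `from_provider` mappers was bespoke per tool; FOCUS moves the mapping work to the providers' own exports.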
[00:51:46] Speaker B: This is so hard to do, even with the unification.
[00:51:49] Speaker C: Yeah, it's a step in the right direction.
[00:51:52] Speaker B: Sure.
[00:51:53] Speaker D: Yeah. Now they all have it. So in theory, if you are running multi cloud or using the brand new groundbreaking partnership that nobody else has done between Google and Oracle, you can at least see your bills kind of roughly the same way.
[00:52:07] Speaker C: So Oracle's heard our complaints about their semi truck data centers, and that's how they open regions so fast, and they decided to write a full blog post about how they build the touchless cloud region.
They basically walk you through everything from the foundational section, which they call the first mile activities. Again, not helping with the truck assumptions. But overall, this is a cool article. It's a pretty geeky description of how they build out their cloud regions. They talk about selecting the site, building the data center facility, the cooling, the generators, how they go about building the cloud region software remotely, they don't even go to do it, how it gets orchestrated, et cetera. If you are a tech nerd who loves infrastructure, this is the article for you to check out and read, if you're curious how one of the cloud providers does their cloud provisioning of new regions.
[00:52:55] Speaker B: Oh, man.
[00:52:55] Speaker C: Which makes me kind of sad because I kind of just want to go to all the cool cloud regions to build them out. I'm like, I don't want to be touchless. I want to go. But maybe that's why they can go faster than others.
[00:53:04] Speaker B: Yeah, I mean, how touchless is it? Because it is sort of that thing, like, all right, I mean, rack and.
[00:53:10] Speaker C: Stack of server, rack, that's not so touchless.
[00:53:12] Speaker B: Not so much. Yeah.
[00:53:14] Speaker C: But I assume that they're saying, we can have people we pay $5 an hour in third world countries rack and stack this stuff, as long as all the specifications are met. And then once we have network access to it, we can just hit this button and magic happens. Which is good.
[00:53:27] Speaker B: Yeah.
I lived this life for a long time, building out data centers in new regions and doing variations of this forever. So I'm super stoked by these things.
It is.
[00:53:39] Speaker C: It was a lot of fun.
[00:53:40] Speaker B: I miss it a lot. I don't really want to travel that much, and I'm older now, so I don't think I could really spend as.
[00:53:46] Speaker C: Much time doing it.
[00:53:47] Speaker B: The same amount of, like, 16-hour days on the data center floor, in the cold and the noise.
[00:53:51] Speaker C: But, yeah, I definitely remember many, many data centers, you know, breaking down cardboard, breaking down styrofoam, huffing it out to the thing, and then coming back and racking and stacking, getting cut by those little rack nuts, you know, all the fun things.
[00:54:06] Speaker B: I still hate those things.
[00:54:09] Speaker D: Oh, bad. Never ends without a bleeding finger.
[00:54:13] Speaker C: If I didn't get at least, like, three fingernails just totally scratched to crap by rack nuts on every sort of build out, I knew I'd failed.
[00:54:19] Speaker D: So there has to be a better alternative.
[00:54:22] Speaker C: There is. There's a company now, I'll send it to you offline, but yeah, there's a company now that makes a better rack nut, so you don't have to deal with that terribleness anymore. But it didn't exist when I did rack and stack.
[00:54:33] Speaker B: So, slightly fun fact, the first introduction that Ryan had to torrent was actually a version of this. Because at Facebook, they would spin up a rack and they would plug it in, and then all of the applications in the stack that were targeted for that rack would actually, via the torrent protocol, be fetched from multiple sources and configure that rack.
[00:54:53] Speaker C: Super cool.
[00:54:54] Speaker B: And then I was like, oh, and then people use this for like, what, pirating stuff. Cool.
[00:54:59] Speaker C: I was able to find it faster. They're called rack studs.
I didn't think my Google kung fu could be that fast, but Ryan talked for a minute on something.
[00:55:08] Speaker B: You're welcome.
[00:55:09] Speaker C: I don't know what you said, but.
[00:55:11] Speaker B: Can always be counted on that.
[00:55:13] Speaker C: Yes, but yeah. So rack studs are the new alternative to rack nuts, which is very male, very male oriented, you know.
[00:55:20] Speaker B: Yeah, I was going to let that one go.
[00:55:23] Speaker C: Rack nuts and then studs. Yeah, not very gender inclusive. Sorry about that. I didn't name it. All right, well, I do have a cloud journey for you guys, only because Google has written up a couple blog posts I thought we should maybe chat about. So they started out a couple weeks ago with five myths about platform engineering, what it is and what it isn't. And Ryan happened to be gone that day. So I said, well, we'll save this for the next time. And between that time and now, this time with Ryan, they had five more myths about platform engineering, how it's built, what it does and what it doesn't. And so I thought we'd talk about these. So the first myth they had, which I thought was sort of interesting, because if this is all you think platform engineering is, then I'm sorry for you. Myth number one, a developer portal and an internal developer platform are the same thing.
And they say on this one, basically, that a developer portal is an interface that helps software developers find and use the various services and tools that the IDP provides. A developer portal typically provides execution of self-service templates, a service catalog (yay), visual representation of the application service status, and documentation on the platform, application APIs, or even the code itself. An internal developer platform, on the other hand, enables developer self-service through the use of golden paths. The platform abstracts away technical complexities with approaches such as codified practices, setting company-wide configurations, and applying security controls. Each new service deployed from an IDP, for example Google Kubernetes Engine, is made immediately available on demand, without waiting for any tickets, manual approvals or meetings. So that's the first myth. What do you guys think?
[00:56:51] Speaker B: I mean, I made a career out of building the second. So, you know, I like that.
Yeah, I do like the distinctions a little bit. I've never heard a developer portal defined before. And so, like, I sort of get the distinction they're drawing there. It's what I just sort of lump together as service catalog.
[00:57:14] Speaker C: It's sort of like that moment when you and I were at that cloud center of excellence talk at AWS, and they were talking about all the things we'd been doing for years and didn't have names for, and we're like, oh my God, it's a framework I can use.
Yeah. Sort of like, yeah, platform versus. Okay. Yeah, it makes sense. You know, turn on developer platform versus developer portal. Yeah, it's a good distinction.
[00:57:36] Speaker B: Yeah.
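The golden-path idea they describe, where self-service templates get company-wide configuration and security controls applied automatically, could be sketched roughly like this. This is a purely hypothetical illustration; the function and setting names are made up, not any real IDP's API.

```python
# Hypothetical sketch of an IDP "golden path": a developer picks a template,
# and the platform layers in codified company-wide defaults and security
# controls before anything is provisioned. All names here are illustrative.

COMPANY_DEFAULTS = {"region": "us-east1", "logging": True}
SECURITY_CONTROLS = {"public_ingress": False, "encrypt_at_rest": True}

def provision_from_golden_path(template, service_name, overrides=None):
    """Merge a self-service template request with platform-enforced settings."""
    spec = {"template": template, "name": service_name}
    spec.update(COMPANY_DEFAULTS)
    spec.update(overrides or {})
    # Security controls are applied last, so tenants cannot override them.
    spec.update(SECURITY_CONTROLS)
    return spec

spec = provision_from_golden_path("gke-service", "checkout-api",
                                  {"region": "eu-west1"})
```

The point of the sketch is the ordering: developers can tune sensible defaults, but the guardrails win, which is what makes the path "golden" rather than just a template.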
[00:57:37] Speaker C: Myth number two, we don't need an internal developer platform. And they say it depends. You really might need one. In fact, you probably already have one. Whatever your current method to get your code to production is, that's your platform, and it might be a mess of different processes, systems and people, and most likely Jenkins. For example, if you have a set of tickets to file to different siloed teams, they say that you have paperwork as a platform, or if you have to speak to a specific individual to launch something, you have people as a platform, which they lovingly mention with Brent from the Phoenix Project.
[00:58:05] Speaker B: Nice.
[00:58:05] Speaker C: Platform engineering starts with modeling the processes or an improved version of it that you have today, and building software to do it for you instead of requiring every team to become experts in DevOps practices.
[00:58:15] Speaker B: So I'm totally stealing paperwork as a platform.
[00:58:18] Speaker C: That is awesome.
[00:58:19] Speaker B: Whoever wrote that at Google, just send me a bill, I'm stealing it. Yeah, that's fantastic.
[00:58:25] Speaker D: I like the journey, though, of like they talk about, which is something I've never thought about, but essentially what I do, which is like, you know, it's people slash paperwork as a platform and moves on to, you know, then, you know, my life goal is to automate myself out of a job because I don't like to do the same thing many times over and over again.
You know, it becomes insanity to me. So, like, I kind of like that journey and I, I like kind of where they go with it, which is you do have a platform, you just don't realize it yet, which I, which I do like.
[00:58:58] Speaker C: Yeah, you're doing this thing, but you're just, you haven't automated it, you haven't made it self service. You haven't made it easy. You've made it, you know, not scalable, which, you know, is the big problem with those type of things.
Myth number three, platform engineering is just advanced DevOps.
[00:59:15] Speaker B: No, no, it's, it's the thing that the poor DevOps team created in self defense.
[00:59:21] Speaker C: For people who think this in their article here, they go on to say that DevOps. We can all agree that DevOps is an organizational and cultural movement that aims to increase software delivery velocity, improve service reliability, and build shared ownership, where DevOps practices include version control, continuous integration, trunk based development teams testing yada, yada, yada. But the DevOps model can face scalability challenges due to an overwhelming management of infrastructure, and this may lead to cognitive overload, developer burnout, inconsistencies between teams, or cultural resistance. Platform engineering is happening today as a natural evolution to DevOps at scale. DevOps practices used with platform engineering include taking a developer centric approach, automating an instructor's code, security and compliance, observability and continuous improvements. Platform engineering takes select DevOps practices and codifies them to software. So no, platform engineering isn't simply advanced DevOps. Think of it more as shifting DevOps down into the platform.
[01:00:13] Speaker B: I couldn't agree more. That's exactly, you know, I feel it is a natural progression based off of how complex things can be, how many different technologies you got to be an expert in.
And I think for a while, DevOps being a philosophy that turned into a role was always doomed for failure because of how complex it was. And this is exactly how you combat that. Just like any software is automating tasks, this is the same thing.
[01:00:46] Speaker C: Myth number four, platform engineering is just automation.
Automation lets teams reduce the need for human intervention when managing their systems. This can be as simple as a shell script, or it can be much more complex, nuanced and scalable. Generally, when people deride something as just automation, they're referring to large numbers of trivial shell scripts that often break. And it's true, automation can in fact introduce new, more difficult to solve problems. Due to this, avoiding unchecked automation is understandable, where systems begin to fail not due to the quality of automation, but due to sociotechnical pressures like misinterpretation of signals, misunderstood designs, poor assumptions, or misaligned incentives. But whereas the bag-of-scripts approach to automation is piecemeal, platform engineering takes a much more holistic approach by considering the full system lifecycle, from service creation, configuration and deployment to monitoring, scaling, and eventual teardown. And so yes, platform engineering is indeed automation, but the platform is delivered as a product and designed for the full service lifecycle, not just bolted on reactively.
[01:01:40] Speaker B: Yeah, yeah, your janky work, you know, combination of Jenkins workflows, that's just automation.
That's exactly how you start, you know.
[01:01:52] Speaker D: But that's how you start the platform. And then you actually take a look at it and go, okay, none of these things are scaling. Let's now revamp and actually let's build the platform now that other teams are going to leverage because everyone's copy and pasted these, all these janky scripts 17 times made their small tweaks. So let's build a real platform out of it. So I think it starts out of automation and moves into a platform as the business needs it.
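The "full system lifecycle" framing above, as opposed to a pile of disconnected scripts, can be sketched as one object that owns every phase of a service from creation through teardown. This is an invented toy model, not any real platform tooling.

```python
# Hypothetical contrast to a "bag of scripts": the platform models the whole
# service lifecycle (create, configure, deploy, monitor, teardown) as one
# product, so phases are ordered and observable rather than piecemeal.

from dataclasses import dataclass, field

@dataclass
class ServiceLifecycle:
    name: str
    history: list = field(default_factory=list)  # record of completed phases

    def _step(self, phase):
        self.history.append(phase)

    def create(self):    self._step("create")
    def configure(self): self._step("configure")
    def deploy(self):    self._step("deploy")
    def monitor(self):   self._step("monitor")
    def teardown(self):  self._step("teardown")

svc = ServiceLifecycle("billing")
for phase in (svc.create, svc.configure, svc.deploy, svc.monitor, svc.teardown):
    phase()
```

A bag of scripts would implement each of those phases in isolation with no shared state; here the lifecycle record is the thing the platform can reason about, report on, and clean up after.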
[01:02:20] Speaker C: Number five in the first article: platform engineering is just the latest fad. In recent years, the complexity of designing and managing modern infrastructure and software stacks has led to a significant increase in the cognitive load placed on software development teams. So much so that it's become essential to abstract the deployment and management of these services. Today's platform engineering addresses some real world problems, namely DevOps overload, aka cognitive overload, when teams become bottlenecks as they try to manage the growing complexity of modern software like Kubernetes, and developer friction, a team's quantifiable slowdown from interacting with infrastructure, tooling, policy and processes. Platform engineering caters to the specific needs, processes, workflows, security and infrastructure of an organization. Its goal is to reduce friction and cognitive overload, thereby improving developer experience.
Relatedly, platform as a service predates platform engineering, but it was not always the right fit for some teams. And so you might ask, how is platform engineering different from platform as a service? While platform as a service provides the technology to efficiently build applications and configure the necessary underlying infrastructure, it needs the surrounding ecosystem of people and processes to be adopted successfully. And platform engineering provides a method for a team to build abstractions or extensions of products, including a PaaS, implementing the constraints or freedoms that are appropriate for the organization. In other words, platform engineering is the response to the technology-centric model of platform as a service, which doesn't work for organizations that might need further customization.
[01:03:36] Speaker B: Yeah, the latest fad, to me, signifies something that's going to go away as well, that's popular but only in the short term. And I feel like, because this is such a reaction to things that aren't going away, like complexity and difficult environments, I can't see this being just a fad in that sense, but a definition of terms for stuff we're already doing, a way to label ideas and bucket them together so that you don't have to re-explain everything. Yeah, sure.
[01:04:12] Speaker C: Myth number six, platform engineering eliminates the need for infrastructure teams.
Even if you have the best developer platform on the planet, it still runs on top of complex infrastructure, which always requires ongoing maintenance by specialists who understand it. After all, someone needs to architect, manage, scale, troubleshoot and optimize that infrastructure. And try as you might, that infrastructure will continue to fail, just as it did before you introduced platform engineering. A common mistake is to eliminate the infrastructure team and to expect a totally new team to make up for that loss. Infrastructure teams already have the expertise to handle these responsibilities, and as such are good candidates to become platform engineers. By using the team with institutional knowledge of the underlying infrastructure, you're more likely to adapt your current system into a viable engineering platform. However, what platform engineering does change is how infrastructure specialists prepare for and respond to failures, as the platform engineering role is more focused on platform development and less on manual repetitive tasks. So while platform engineering changes the nature of infrastructure work, it doesn't eliminate it altogether. And you need to build a self-service catalog of golden paths that developers can select to deploy their applications. That catalog needs to be documented, refined, advocated for within the organization, and introduced to new engineers. And improvements to the platform also need to be rolled out to existing tenants.
[01:05:19] Speaker B: It's weird because I kind of agree and disagree with a lot of those statements, like right next to each other, because I've said for a long time that I believe infrastructure teams are going to become infrastructure service teams. And another way to say that would be platform engineering, because they're taking the business value that they provide with their expertise of that infrastructure layer and providing it back as a service, typically through some sort of tooling. And so I agree with them on that part.
Does it eliminate the need? No, which they're not saying, but yeah, it's sort of strange. I don't know, it's six of one, half a dozen of the other, I guess.
[01:06:07] Speaker C: Yeah, well, they kind of, there's a part I didn't read. But finally, even the most mature platforms have components that fall outside the scope of automation, and infrastructure experts will still be responsible for them. Yeah, okay. Like, I get it. But yeah, this one, I think it really depends on how you define an infrastructure team.
The guys who run VMware, probably. Sysadmins.
[01:06:27] Speaker B: Yeah, okay.
[01:06:28] Speaker C: Yeah, not those people. But yeah, your hardcore network engineers who understand OSPF and BGP and routing, and how those things get automated. They have a tremendous amount of value in a platform type setup, I think.
[01:06:40] Speaker B: Yeah, and similarly, if someone has deep knowledge of, like, an OS and the transactions at that level, there's still going to be that specialty and that need. And I think that those people will be on platform engineering teams instead of in the basement.
[01:06:57] Speaker D: They'll still be in the basement, but.
[01:06:59] Speaker C: You know, yeah, we're always in the basement. That's where the cloud is.
[01:07:02] Speaker B: That's true.
[01:07:04] Speaker C: Number seven, introducing platform engineering will dramatically impact staffing costs. And they say part of building platform engineering is taking the people with the most DevOps skills, aka very expensive people, and then evolving the organization into the new structure. This allows the organization to better apply DevOps principles with fewer people, using self-service automation and golden paths. A common concern is that the platform engineering team will require a lot of additional personnel. A platform engineering team indeed needs to be staffed, but that staff can come from existing operations and software engineering teams. Over time, the resulting platform should more than pay for itself by leveraging gains from shared services. In other words, the platform engineering team is an investment that you can fund from existing in-house teams. And one model to consider is Google's SRE history of sublinear scaling: the teams responsible for ensuring availability set objectives to grow their headcount at a lower rate than the systems they run. One interesting platform engineering anti-pattern would be to expect a reduction of operations staff or developers out of the gate, and retaining existing teams works well because they're already familiar with business needs and have lots of experience with the underlying infrastructure, whether it's exposed directly or via platforms. In fact, they observed that teams that adopt platform engineering can end up finding that more work can be done by the same individuals, because there's a platform they can leverage. And when implemented correctly, platform engineering increases efficiency and reduces operational overhead, including automating deployment pipelines and infrastructure configuration. The self-service model reduces bottlenecks, and workflows become streamlined, allowing teams to do more with the same or potentially even fewer resources.
[01:08:28] Speaker B: What's funny is that I impact in what way? And then I was thinking about this, and I think one thing that is discounted or I haven't seen mentioned anywhere else, is that when you're forming a platform engineering team and you're charting out your roadmap and all the things you're using the same terms, that if you're working in software development or any kind of SaaS company, you're using the same staffing and money and budgetary planning guides as you're product application teams. So you're now all of a sudden, in my opinion, or at least in my experience, taking a very understaffed sort of operational side of the business and then converting that to using the same metrics and the same milestones in terms of delivery and trying to then try to solve that problem with the mystical man month, etcetera. And then it does lead to a bit of an expansion, but it's also just because it's been so horribly underfunded for so long. So it's sort of interesting.
[01:09:32] Speaker D: Yeah, I also like the, you know, the Google model, which I've never actually heard the word for, but supplemental scaling, where in theory, as you build out this platform for other people, you need fewer people. But I've kind of seen the opposite, where once you've proven yourself within an organization, you end up gaining more responsibility. So while your platform team starts out running A, B and C, right, now, oh, there's this other business need we need to solve. And you've optimized A and B, so now you can move on to C and D, and you end up just acquiring more things as you've automated the maintenance and everything else that needs to be done for a lot of the first things within the team. You might end up with subliminal, sub, sub linear scaling, but not in the way that it actually is; you're really just gaining more responsibilities.
[01:10:29] Speaker C: Can you say sub linear scaling five times fast?
[01:10:31] Speaker D: I don't think I can say it once, as we just proved fast.
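The sublinear-scaling objective they're tripping over just means headcount grows more slowly than the system it supports. Here's some rough arithmetic with an entirely made-up growth model (each doubling of managed services adds one engineer); the numbers are illustrative, not anything Google has published.

```python
# Illustrative sublinear-scaling model: headcount grows logarithmically while
# the number of managed services grows linearly. The base team size of 5 and
# the "one engineer per doubling" rule are invented for the example.

import math

def headcount(services, base_team=5):
    """Engineers needed under the hypothetical log-growth staffing model."""
    return base_team + int(math.log2(max(services, 1)))

# Services grow 16x (16 -> 256), but headcount grows far less.
small = headcount(16)    # 5 + log2(16)  = 5 + 4 = 9
large = headcount(256)   # 5 + log2(256) = 5 + 8 = 13
```

So a 16x increase in services costs roughly 44% more staff under this model, which is the kind of curve the "platform pays for itself" argument is appealing to.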
[01:10:36] Speaker C: Number eight, adopting platform engineering today will quickly solve all my biggest problems. Of course it will, right? And any complex.
[01:10:43] Speaker B: It'll also cause all your biggest problems.
[01:10:46] Speaker C: In any complex environment, hoping for a quick fix is almost always wishful thinking. Changes take time, and the timing of that change needs to account for identifying your organization's constraints and how quickly it can curate relevant solutions. Nor is there a one size fits all approach to platform engineering; it needs to be tailored to meet the specific needs of your organization. However, you can achieve fast results by building out a minimal viable platform, starting from a subset of your user base and creating a fast feedback loop. Starting with some pre-made MVPs can help bootstrap a team, but it's important not to make the mistake of thinking that you can buy your way out of this by adopting a fully built platform and, presto, improving immediately. Investment, research and introspection are the right path forward here. By starting with an MVP and adding capabilities based on early adopters' feedback, you can iteratively build a platform that starts delivering value quickly. Don't try to design the perfect platform with a five year plan. In short, platform engineering is a journey that requires a change in mindset across development and operations, a cultural shift to embrace the platform, golden paths, engineering and tooling to address friction in the development process. All of which takes time to get right. And you will make mistakes, I will say right now. Oh yeah. Learn from them and make fewer mistakes than they did.
[01:11:51] Speaker B: Make new, different mistakes. Yeah, yeah, I agree. You know, I think this is one of those things where you can't let perfect become the enemy of good. You really have to focus on building a foundation that is modular and can be expanded upon, because there are new technologies all the time, there are new patterns, and yeah, it's going to have to change. And because of where it sits in the business, its agility is key for the business to function, because it can block all innovation if you let it.
[01:12:23] Speaker C: Number nine, you should apply platform engineering practices to every application. Platform engineers actively analyze and identify tasks and processes which create a high cognitive load on development and operations teams, taking targeted actions to alleviate the burden. That does not describe all tasks and processes in a software delivery organization. And as such, consider applying platform engineering to applications where developers are overwhelmed by infrastructure complexities or the operations team faces constant friction. In these situations, a golden path approach can streamline development and management, and this typically involves selecting suitable cloud services, automating deployments, and establishing standardized configurations.
First, focus on abstracting the things that have the highest usage and toil, that is, services that both carry a high cognitive load and are frequently used. Make sure your abstractions provide value and sensible defaults, along with guidance and explanations for why you made certain choices. Having break-glass methods for stepping outside the platform, if needed, is highly encouraged, at least initially. Think in terms of building a platform for depth rather than breadth. Satisfy and automate common use cases as completely as you can before moving on to new ones. Similarly, don't start with the biggest, most important service in your organization. An anti-pattern is to adopt the biggest bang application first to maximize gains over time. This will likely fail, as teams haven't had the time to develop confidence in your nascent platform, and the platform doesn't yet have the requisite capabilities. Instead, start with smaller, less demanding services. And a team doesn't need to deploy every service when adopting the platform; you can aim to adopt some large percentage of them. There will always be strays.
[01:13:43] Speaker D: Oh yeah, I like the sentence you said about depth rather than breadth, because you need to choose something that you're going to provide to the business and really make it work for them, you know, and say, we're not going to solve every use case out there. Like, you know, all the cloud providers don't say they solve everything. Their databases only support these four; they're not running CouchDB on RDS or any of the others. They chose the three or four that they're doing and they're really making them work all the way through. Yeah, there's always edge cases in there of something that might not work. Microsoft can't run their own SSRS on their managed SQL for some reason, but we're going to bypass that. But they've proven that it works at the scale where they can. And that's what you need to do internally too: prove that you can do it, then move on and add on features. As Ryan said, it's a very modular approach.
[01:14:44] Speaker C: And then four guys on a podcast will complain about your MVP that doesn't do everything we want.
[01:14:48] Speaker B: Oh, yeah, 100%.
[01:14:50] Speaker D: That's what we're here for.
[01:14:54] Speaker C: Yeah, no, I agree. I think I like the idea of depth. And you know, the one thing I kind of disagree on is the biggest bang application first, because of one of the things that you run into in cloud modernization, and I think it's a little different with platforms, I'll clarify this in a second. But in cloud modernization, everyone says, oh, well, yeah, you can move those greenfield applications, but you can't move the monolith to the cloud. It's too complicated. And so you have this natural anti-pattern of, I'm not going to do it because, even though it's the big moneymaker, moving it doesn't add any value and it's too hard.
I agree that in the platform case here, I think you do want to focus on smaller applications first. But I do think you shouldn't wait to do the biggest bang application till last. I think that's the mistake for failure because then you built this thing that supports 20% of your workload and you don't have anything solving the big problem in your organization and you're not getting the return on investment you need.
[01:15:45] Speaker B: Yeah, I think a common mistake is that they, they do exactly that. Right. They either start off with these small little use cases and just exclude your giant monolith and thinking that they'll get there eventually. Right. Or thinking that, you know, the new microservice strategy is going to strangle that monolith and it won't become a problem in the future.
I don't think starting with monolith is a good idea, but I think building your strategy to that monolith and having that path forward is the way to go.
[01:16:14] Speaker C: Because you always have it on the horizon of where you're going.
[01:16:19] Speaker B: Yeah. And then staying as flexible as possible along that journey, so that you can adapt to the changes that are going to occur. Because it's going to take many years, in most cases, right, to develop these things.
[01:16:32] Speaker C: But I think the other thing too is, as you build these new patterns and you have these new opportunities, that product or that monolith or that other big bang application isn't standing still either. How many applications are going through monolith-to-microservices transitions, or adopting serverless, or doing these things? So if you're providing some of these new patterns in the platform, they might be able to start adopting them sooner than you actually realize. But again, it's all about making sure you design with the angle that you're eventually going to get there. So you're thinking, okay, when we get to that, we need to have this capability, and it needs to be in your roadmap, in your architecture, et cetera.
[01:17:06] Speaker B: And I'm the king of de scoping projects. But like that's, that one is one I won't sacrifice. Like, there's a problem I can solve for that monolith and build into my platform, even if it's just partial, I've fixed a tiny little portion of it. I absolutely include that because that is the key to a lot of businesses. That is the money maker and that is the thing. And if you can provide incremental value there, it's a great way to get exposed to that workload and to build confidence in your platform.
[01:17:36] Speaker C: And our last one: all cloud services map to platform engineering. When people begin their platform engineering journey, they often ask us, does this cloud service map to platform engineering? Don't mistake adopting a cloud service for practicing platform engineering. This misunderstanding hinders effective implementation and suggests that there's an unclear understanding of what platform engineering actually is. While you can use any cloud service with platform engineering, what matters is how you integrate that cloud service into your developer experience through the platform.
They have a list here of DevOps practices used for platform engineering, with example processes, that you can use to decide for yourself whether a cloud service or product is fit for your purpose: a developer-centric approach, with processes like measuring developer experience, golden paths, and self-service capabilities; automation and infrastructure as code, with automate everything and infrastructure-as-code tooling; and security and compliance, with security by design, guardrails, and compliance as code. So as you can see, these aren't all cloud things, but they have cloud capabilities that you might be able to leverage to adopt them. Like in observability, which is one of the concepts, centralized monitoring, alerting and troubleshooting tools: your cloud provider probably has something, CloudWatch, Google Cloud Monitoring, Azure, whatever Azure uses, if they monitor anything. I don't think they do. Or, like, continuous improvement, which again is more of a PM type thing. So that's a good piece of advice as well. So yeah, overall I think this is a really good set of ten things to get you thinking about platform engineering if your company isn't doing it, and so I thought I'd share it with you guys.
[01:19:04] Speaker B: Yeah, no, this is fantastic. I'm happy to see things being written.
You know, it's like you said, it's these things that I've been talking about without really thinking I had to communicate the idea like it was a new idea. To see them sort of picked up and out in the world, where other people smarter than me have had the same idea, it's validating, and it's a great way to see that you're on the right path. I'm really happy to see it.
[01:19:32] Speaker D: It also put a lot of words to thoughts I've had, and I haven't really ever had time to kind of articulate it in such a way.
[01:19:39] Speaker C: Yeah, I mean, it's still a pretty new concept. I mean, the whole idea of platform engineering, like, the big guys do it, FAANG does it, Yahoo does it.
But I think it's becoming more and more common now that the tooling is getting cheaper and easier to do. You have things like Backstage available, or at least Compass, and things like that can be starting places. Again, they're not full solutions; you can't just buy that tool and have a platform. But I think that's becoming more and more of how you get DevOps to scale, because the cognitive load is a big burden for a lot of engineering teams. And if you think about it, you're trying to have them shift left. Do you really need them to understand networking? Do you really need them to understand storage? Or do you want them to be able to adopt something that's opinionated and a golden path? All right, guys. Well, it was great. I will see you both next week here at the Cloud Pod.
[01:20:27] Speaker B: Bye, everybody.
[01:20:27] Speaker D: Bye, everyone.
[01:20:32] Speaker C: And that is the week in cloud. Check out our website, the home of the Cloud Pod, where you can join our newsletter and Slack team, send feedback, or ask questions at thecloudpod.net, or tweet us with the hashtag #thecloudpod.