[00:00:07] Speaker A: Welcome to the cloud pod, where the.
[00:00:08] Speaker B: Forecast is always cloudy. We talk weekly about all things AWS, GCP, and Azure. We are your hosts, Justin, Jonathan, Ryan, and Matthew.
[00:00:19] Speaker A: Episode 265, recorded for the week of June 19, 2024: Swing and a Whiff. Good evening, Matt. How's it going?
[00:00:28] Speaker B: Good. How are you today?
[00:00:29] Speaker A: Well, doing good. Happy June 19 to you. Juneteenth, as they call it now, which I keep forgetting is a holiday until it stumbles across me, and then I remember it's a holiday, and I'm glad we have it, honoring everyone around Juneteenth. But it was a bit of a whiff this week on co-hosts; it's just me and you.
[00:00:49] Speaker B: Yeah, too many people thinking PTO.
[00:00:52] Speaker A: I guess they're taking advantage of that Juneteenth day off they might have from their employer, apparently, and coupling it with a day off at the beach or in the woods with their children, which doesn't sound like a vacation, but yeah, we'll see.
[00:01:05] Speaker B: I had a coworker come back from a week off last week and he was like, I am exhausted and ready to get back to work. A week off with two kids is definitely not a real vacation.
[00:01:16] Speaker A: Yes, indeed.
Well, I'm about to leave for FinOps X. I fly out tomorrow, during June 19, so I'll be heading down to San Diego to join all my FinOps practitioner friends to see all the cool stuff the vendors are going to have, plus the keynotes. I'm looking forward to it.
Just going to San Diego is nice in addition to that. I'm going to be flying on my travel day, and there is a reception at 5:00; apparently the FinOps Foundation did not get the memo about Juneteenth either. But it looks like it'll be a good overall session, so I'm looking forward to it. I'll recap it next week here on the show, but let's get into some new news for the week. First up, HashiCorp had a couple of new releases this week. The first one we'll talk about: Consul 1.19 is now generally available, improving the user experience, providing flexibility, and enhancing integration points.
Consul 1.19 introduces a new Registration custom resource definition that simplifies the process of registering external services into the Consul mesh. Consul service mesh already supports routing to services outside the mesh through terminating gateways; however, there are advantages to using the new Registration CRD, including that it automatically sets up your terminating gateway and all the routing rules for it. Consul snapshots can now be stored in multiple destinations. Previously you could only snapshot to a local path or to a remote object store destination, but not both. Now you can snapshot to NFS mounts, SAN-attached storage, or object stores on any of the hyperscalers. And Consul API Gateway can now be deployed on Nomad, combined with transparent proxy and enterprise features like admin partitions. I don't know what the UI used to look like. They did say it was a new UI; I saw a screenshot of it and said, looks cool. No idea what it used to look like.
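Conceptually, the multi-destination snapshot feature is just fanning the same snapshot blob out to several storage targets and making sure the copies match. Here's a hedged, minimal Python sketch of that idea; the function name and destination layout are invented for illustration, and this is not Consul's actual implementation or API:

```python
import hashlib
from pathlib import Path

def save_snapshot(snapshot: bytes, destinations: list[Path]) -> list[str]:
    """Write one snapshot blob to several destinations (e.g. a local path,
    an NFS mount, an object-store staging directory) and return a checksum
    per copy so the copies can be verified against each other."""
    checksums = []
    for dest in destinations:
        dest.parent.mkdir(parents=True, exist_ok=True)  # create the target folder if missing
        dest.write_bytes(snapshot)                      # store this copy
        checksums.append(hashlib.sha256(dest.read_bytes()).hexdigest())
    return checksums
```

The point of multiple destinations is simple resilience: losing any one target still leaves byte-identical copies elsewhere.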
[00:02:56] Speaker B: Yeah, I mean, this did release a whole bunch of nice new features. Some of the snapshotting I think is useful, especially going to your object store of choice versus only being able to do local or remote storage. It gives you a few more options. What I was surprised about, which I did not know, was that Consul API Gateway can now be deployed on Nomad. Was it not able to be deployed there before?
[00:03:19] Speaker A: Apparently not.
[00:03:19] Speaker B: Just feels weird.
[00:03:20] Speaker A: That seems a little weird.
[00:03:22] Speaker B: Yeah, Consul should be able to be deployed on Nomad given that it's all the same company, but sometimes Team A doesn't always talk to Team B.
[00:03:31] Speaker A: Well, speaking of Team B, Vault 1.17 is now generally available with new secure workflows, better performance and improved secrets management scalability.
The most important feature is workload identity federation, or the "whiff" of our show title, allowing you to eliminate concerns around providing security credentials to Vault plugins. Using the new support for WIF, a trust relationship can be established between an external system and Vault's identity token provider to access the external system. This enables secretless configuration for plugins that integrate with external systems such as AWS, Azure, and GCP.
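To make the trust relationship concrete: the external system is configured to trust Vault's token issuer, and the plugin then presents a short-lived identity token instead of a stored credential. This toy Python sketch uses a shared HMAC key as a stand-in for the asymmetric OIDC/JWT verification that real WIF uses; every name here is invented for illustration and none of this is Vault's actual API:

```python
import base64
import hashlib
import hmac
import json

def issue_identity_token(signing_key: bytes, claims: dict) -> str:
    """Vault side: sign a claims payload (a stand-in for a real OIDC JWT)."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(signing_key, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_identity_token(trusted_key: bytes, token: str):
    """External-system side: accept the token only if it verifies against the
    issuer key this system was configured to trust; no static credential needed."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(trusted_key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # untrusted issuer: reject
    return json.loads(base64.urlsafe_b64decode(payload))
```

The "secretless" win is that the plugin never stores a long-lived cloud credential; it mints a fresh token per request and the cloud side checks it against the trusted issuer.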
There are also two new features around PKI certificates: support for EST (Enrollment over Secure Transport), for IoT or edge devices that need to establish a certificate on boot-up, as well as custom certificate metadata. And one feature I had never really thought about, but I'm glad to see now: Vault Enterprise has got seal high availability. Vault previously relied on a single key management system to securely store the Vault seal key. This could create a challenge if your KMS provider had an issue, such as being deleted, disaster-recovered, or compromised by a hacker; in such a case, the vault couldn't be unsealed. Now, with the new HA feature, you can configure independent seals secured with multiple KMS providers for that DR need. There are also extended namespace and mount limits, and the Vault Secrets Operator now supports instant updates for those zero-day password issues. If your password gets found on the dark web, the Vault Secrets Operator can instantly update to prevent that from being a problem. So nice updates to Vault as well.
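The seal HA idea is essentially wrapping the same seal key under several independent KMS keys, so any one surviving provider can still unseal. Here's a deliberately simplified Python sketch of that shape; XOR stands in for real KMS encryption, the names are invented, and real Vault uses proper envelope encryption rather than anything like this:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    # Toy stand-in for a KMS encrypt/decrypt round trip -- NOT real cryptography.
    return bytes(x ^ y for x, y in zip(a, b))

def wrap_seal_key(seal_key: bytes, kms_keys: dict[str, bytes]) -> dict[str, bytes]:
    """Wrap one seal key under several independent 'KMS' provider keys."""
    return {name: xor(seal_key, key) for name, key in kms_keys.items()}

def unseal(wrapped: dict[str, bytes], available_kms: dict[str, bytes]):
    """Any single surviving provider is enough to recover the seal key."""
    for name, key in available_kms.items():
        if name in wrapped:
            return xor(wrapped[name], key)
    return None  # every provider deleted or compromised: the vault stays sealed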
[00:04:55] Speaker B: Yeah, I think overall there are a lot of nice features in it. The Enterprise seal high availability is something I never thought about, and it now terrifies me when I look back at a few of the implementations of Vault that I've done in the past. Just one of those things I never fully thought through.
[00:05:11] Speaker A: Yeah. As I was reading through, I was like, oh yeah, if someone gets access to your account and can delete your KMS keys, they could seal your vault and then you're totally hosed. It was definitely something I had not really considered at all. Even the Consul feature where they talked about the ability to back up to multiple systems. I guess you could have always done remote object store and then set up a custom S3 event to replicate that to another bucket, but I'm surprised it took so long for that to come into place too, because having multiple paths is a good 3-2-1 backup strategy in general, and just good practice. So that was a surprise too. A couple of "definitely hadn't thought about that problem" features here.
[00:05:53] Speaker B: Yeah, and obviously the workload identity federation is pretty nice, the integrations, because you were always shoving your own toil in between to make these things work, a little Lambda spackle or whatever you need to keep these pieces in synchronization. It was always kind of a pain. I feel like we talked about this when it was in beta a few weeks ago, or when it was announced at HashiConf that this was coming out, but it's nice to see it GA'd, and hopefully it streamlines some of the pain of keeping everything synchronized.
[00:06:24] Speaker A: Yeah, I'm still waiting for the sync feature; that's the one I was hoping for when this was announced. I don't know which version of Vault that's going to be rolled into, but the ability to sync secrets to your hyperscaler's secrets store is going to be a really great feature. I'm looking forward to it.
[00:06:39] Speaker B: Yeah, I mean, just thinking about it: if you're using ECS or AKS, they'll natively read from the local secrets stores. So versus having to figure out how to get in there, or adding more toil, you can natively just say, oh, here's the ARN, or here's the Key Vault ID and the name of it, and you're done. It's just so much simpler. The cloud providers have already figured out that workflow, so versus building your own, having this built into something as prevalent as Vault... it still feels odd that it wasn't there before, but you can't get everything day one.
[00:07:17] Speaker A: Well, in honor of FinOps X starting tomorrow, of course the press releases have started, and there was one ridiculous one that we're not going to talk about, which was some vendor basically patenting the ability to update tags from inside their FinOps tool. I was like, that's not patentable.
I appreciate that you're trying, but not really. Good luck trying to defend that one in court. But VMware Tanzu CloudHealth is upgrading its entire user experience, and apparently will be showing it off at X at its booth. It's available today as a tech preview; interested customers can reach out to their account team to request more information. FinOps and cloud operations teams will find collaborating easier with the new UI, with all users accessing the same data and using a shared platform. Now, I've used CloudHealth, and I don't remember there being different data sets, but apparently that was an issue in more recent versions. The Tanzu CloudHealth UI is powered under the hood by a unique graph datastore; the significance of the graph data store lies in its ability to capture many-to-many relationships, typical in multi-cloud environments. And the new UI includes a vastly enhanced feature set, including: Tanzu Intelligent Assistant, an LLM-enabled chatbot that allows users to gain insights into their clouds and services, including resources, metadata, and configuration status, using natural language without following a specific query format; the Cloud Smart Summary, a concise summary of the vast data in your cloud bills, including what drives your cloud spending, why costs change over time, and instructions for how you can optimize your costs further; the Optimization Dashboard, a single customizable pane that combines all available committed-discount recommendations, right-sizing opportunities, and anomalous spending across your clouds and services; and Realized Savings, detailed reporting and analysis alongside key performance indicators that quantify savings realized over a desired time frame. Now, I'm mostly impressed with this press release because they said all of that without actually using the words AI or artificial intelligence anywhere.
Yes, it did have the Intelligent Assistant, and it is LLM-enabled, but someone in marketing should be fired for not specifically having the AI keyword that any investor or board would of course want to see in this press release.
[00:09:13] Speaker B: Yeah, it's amazing they were able to get through without that. Or I think this was a bet from somebody: can you write a press release without AI and see if it will get approved for the public? See what happens?
[00:09:25] Speaker A: Apparently it can. Yeah, when I first scanned it, as soon as I saw this, I was like, oh, there's gonna be AI in this for sure. And I scanned through it and missed the Tanzu Intelligent Assistant until I was writing up the show notes, and I was like, oh no, there is AI in here; it's an LLM-enabled chatbot. But again, at some point LLM-enabled chatbots won't be considered AI either, and we'll go back to ML being the true AI. But having used some of these query languages, I have to say that's probably my favorite feature of all LLMs right now: in Athena, being able to use AI through Q to write queries, or in Confluence and Jira, being able to write Jira queries where you can actually say, hey, I want to see all the tickets from this person from these dates, and it just writes JQL. And I don't have to think about it anymore, because I will say that I've hacked my way through JQL many a time, but I am not a pro at it by any stretch.
[00:10:16] Speaker B: I think I have a twelve-line-long JQL query that merges stuff together, that I've stolen and expanded on. It just keeps growing and growing, and I'm like, okay, somebody needs to optimize this query, because somebody at Atlassian is eventually going to ping me and be like, how come when you run this query, our entire system goes down?
[00:10:36] Speaker A: Well, I don't think they're multi tenant on the database, so I don't ever notice. You'll just take yourself down.
[00:10:42] Speaker B: Yeah. Details.
[00:10:45] Speaker A: All right, moving on to "AI is how ML makes money" for this week. First up, Databricks LakeFlow, a unified, intelligent solution for data engineers. LakeFlow is a new solution that contains everything you need to build and operate production data pipelines. It includes new native, highly scalable connectors for databases including MySQL, Postgres, SQL Server, and Oracle, and enterprise apps like Salesforce, Dynamics 365, NetSuite, Workday, ServiceNow, and Google Analytics. Users can transform data in batch and streaming using standard SQL and Python, and they're also announcing Real Time Mode for Apache Spark, allowing stream processing at orders-of-magnitude faster latencies than micro-batch. Finally, you can orchestrate and monitor workflows and deploy to production using your CI/CD pipeline.
[00:11:26] Speaker B: So about five years ago, you walked around any of these tech conferences and all you saw was CloudHealth, cloud-spend, cloud-whatever, something-cloud. And I feel like the new thing is lake-whatever: LakeFlow, lake-this, lake-that. I'm like, how am I ever going to find this in the future when I go, oh, I want to look this up. Oh, it's that one with "lake" in its name.
[00:11:46] Speaker A: Yeah, well, you know, I was thinking about this earlier, and for a half second there I thought we were going a different direction, because they introduced the idea of the data mart, you remember. But I assume they went to data mart because they ran out of lakes, or sorry, bodies of water to use: data ocean, data puddle, data stream, data lake.
And it's funny to me that they chose LakeFlow, because one of the things that lakes don't really do is flow. Typically, by definition, unless it's a river that we dammed to become a lake to power hydroelectric and things like that, lakes are static.
[00:12:22] Speaker B: Yeah. So this is for all your static data that you put in and you can never get out without getting charged a lot of money.
[00:12:29] Speaker A: No, no, you're not supposed to think about this that hard. That's the flaw: you're thinking about it. It's not a lake like that. Anyways, yeah, it's a little weird, but I hope at some point we'll run out of water things to compare to.
[00:12:43] Speaker B: Yeah, I mean, it is kind of nice. They are doing the streaming, so you can actually orchestrate this nicely. It does seem like it has some nice features besides just a fun-filled name for us to make fun of. I don't use Databricks right now, so I don't have a lot to say about it, but with connectors into pretty much every major SQL database out there and all the big enterprise apps, it should be a good, easy way to get everything into one place and process it and do what you need. They've hit a lot of the big names on the initial release.
[00:13:22] Speaker A: Well, they also announced that they're open-sourcing Unity Catalog, the industry's first open source catalog for data and AI governance across clouds, data formats, and data platforms. There are four, sorry, five really important pillars to the Unity Catalog vision: an open source API and implementation, open data connectors, multi-format support, multi-engine support, multimodal support, and a vibrant ecosystem of partners.
Looking through this, it reminds me a little of when Tesla released all the patents for their electrical infrastructure as a way of making themselves sound good and an open part of the community. All you really see here is that this is just their implementation of Iceberg and REST APIs, and basically plug-in connectors that are probably already available on all your other cloud providers; but this is the open source version, so you're supposed to feel better about it. And I appreciate the effort, but it's not really scratching the itch for me. There are a couple of quotes here in the article. AWS welcomed Databricks' move to an open source Unity Catalog, saying it is committed to working with industry and open source solutions that enable choice and interoperability for customers. Yeah, sure you do, Amazon. And Google had this to say: Google is committed to open, flexible solutions that empower customers to maximize the value of their data, and Databricks' strategy to open up the Unity Catalog standard for data and AI aligns very well with its strategy. So apparently they're getting some love there from the cloud providers. They do have plans to continue rolling out new features, including support for APIs that are critical to your data, including format-agnostic table write APIs, views, Delta Sharing, models with MLflow, remote functions, access control APIs, and much, much more. But you can get started with this today. If you're a Databricks customer you already have access, and if you're not, good luck figuring out how to integrate it.
[00:15:04] Speaker B: I mean, I do like it. Like you said, it makes me feel good: we're releasing open standards that hopefully one day more and more companies will embrace. But will other companies actually embrace it? This is one of those "remind me in a year and let's see where this is actually at" things, to see if it's actually useful or not.
[00:15:23] Speaker A: Yeah, I'd love to see some implementations of Unity Catalog that are not Databricks-specific. And if that happens, then sure, I'm all for it.
[00:15:31] Speaker B: I mean, I know that Amazon SageMaker has had multimodal endpoints for a couple of years now. I'm trying to piece together the timeline in my head, from when I found out they were working on it to when it was released with a customer. I still don't really know what that is, and I still need to figure that out at some point. But I feel like there are standards; whether they're open source standards or not is another story with some of these.
[00:15:58] Speaker A: I don't know. Let's move on to AWS.
It's a little bit of a slow news week for AWS, which is kind of shocking. But first up, they decided to poke China in the eye by announcing that Taiwan will have a new region in early 2025, assuming, of course, nothing bad happens geopolitically. The new AWS Asia Pacific (Taipei) region will consist of three availability zones at launch. There was a quote from Marcus Yao, senior executive vice president of Cathay Financial Holdings, basically saying Cathay Financial Holdings will continue to accelerate digital transformation in the industry and also improve the stability, security, and timeliness of its financial services, and that the forthcoming new AWS region in Taiwan is expected to help CFH provide customers even more diverse and convenient financial services in the country.
[00:16:42] Speaker B: More regions are always better.
[00:16:45] Speaker A: More regions are always better.
That one's a bold choice, location-wise.
[00:16:52] Speaker B: I'm sure there's a lot more happening behind the scenes; they had reasons to do it there, or they were getting pressured to. Because I feel like Amazon is such a big behemoth that they're probably one of the few companies that can actually pull this off.
The other hyperscalers probably can too, but my company opening a business there probably wouldn't go quite as well. So it'll be interesting to see how it all goes.
[00:17:20] Speaker A: And then, in one of those confusing multiple-options-to-do-the-same-thing moments, Amazon is now adding support for Maven, Python, and NuGet package formats directly in Amazon CodeCatalyst package repositories. CodeCatalyst customers can now securely store, publish, and share Maven, Python, and NuGet packages using popular package managers like mvn, pip, NuGet, and more through CodeCatalyst package repositories. You can now also access open source packages from six additional public package registries. And I'm just like, why do you have CodeArtifact if this is what you have instead? Very confusing.
[00:17:55] Speaker B: Wait, this isn't CodeArtifact? Oh, I mixed that up in my head. I thought this was CodeArtifact.
[00:18:01] Speaker A: Nope, this is CodeCatalyst.
[00:18:04] Speaker B: Oh. Now I have to figure out what the difference between the two is.
[00:18:08] Speaker A: Well, I had to answer this question earlier because I too was confused.
[00:18:11] Speaker B: Okay, so I don't feel too bad.
[00:18:12] Speaker A: CodeArtifact is a build and release automation service that provides a centralized artifact repository, access management, and CI/CD integration. CodeArtifact can automatically fetch software packages from public package repositories on demand, allowing teams to access the latest versions of application dependencies. CodeCatalyst, meanwhile, is a unified service that helps development teams build, deliver, and scale applications on AWS. It provides a managed experience that can reduce friction in the software development lifecycle, allowing teams to focus on building software instead of custom toolchains. CodeCatalyst integrates with GitHub, GitHub Actions, and dev environments. So it's more of an Elastic Beanstalk for code development, versus CodeArtifact, which is just one building block.
And I would guess that CodeCatalyst probably uses CodeArtifact under the hood; you just don't see it.
[00:19:01] Speaker B: Yeah, that's what I was rereading as you were explaining it. I was like, okay, wait, one kind of contains the other.
[00:19:08] Speaker A: So yeah, I think that's how it works. Knowing Amazon, though, they're probably completely isolated products, and they don't actually leverage the same thing. So it's not very green.
[00:19:22] Speaker B: It's all on S3, let's be honest.
You know, everyone uses S3 in Amazon.
[00:19:28] Speaker A: Moving on to GCP: several new features for Cloud SQL for MySQL this week. First up is support for vector search, to build generative AI applications integrated with MySQL. Embedding data as vectors allows AI systems to interact with it more meaningfully. Leveraging LangChain, the Cloud SQL team built a LangChain vector package to help with processing data, generating vector embeddings, and connecting to MySQL. With vector search embedded into MySQL, you can create embedding tables, leveraging AI to let you determine things about the data in your table, like the distance between two addresses. This is the first time I've actually seen a video of someone using vectors, and I was sort of intrigued and horrified at the same time, because I conceptually understand what you're doing: you're creating this embedding table, and you're using this data source to add data that's relevant from a vector perspective, or in this particular case, adding the distance between two addresses.
I'm like, how do you keep that up to date? That was my big question.
I get it the first time. And I guess as you build your app, you have to keep updating the vector embeddings. That's some overhead you're going to be adding to your database over time.
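For intuition on what "distance" between embeddings means, here's a minimal, hypothetical Python sketch of the underlying math: brute-force cosine distance over a tiny in-memory "embedding table." This is not the Cloud SQL API, and real embeddings come from a model rather than hand-written two-dimensional vectors:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity: 0.0 for identical directions, up to 2.0 for opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def nearest(query: list[float], rows: dict[str, list[float]]) -> str:
    """Brute-force nearest-neighbor lookup over an 'embedding table'."""
    return min(rows, key=lambda name: cosine_distance(query, rows[name]))
```

The keep-it-up-to-date concern above is exactly this: every time a row's source data changes, its vector has to be regenerated, or the distances become stale.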
[00:20:35] Speaker B: Yeah, I feel like your main MySQL database isn't probably the best place to keep this, just because I feel like you're throwing too many things in one place.
[00:20:47] Speaker A: Well, I think they're saying the data is already living in this database, and you want to add this bit of embedded data from AI. So there's a benefit to not having to store it in a different place and wire up microservices.
And so I assume that's why they think it's a good idea. Again, I have a little bit of confusion around use cases in general, but I get this one; I understand what they're trying to do, and they're showing routing between two restaurants, from your location to this restaurant. But yeah, it does seem like a lot to put into your SQL database.
[00:21:25] Speaker B: Yeah, I mean, I always worry when you embed more and more things in SQL databases: you slowly build more and more of a single point of failure within your application, because your SQL database becomes compute- and resource-constrained. And with SQL, scaling horizontally is a lot harder than scaling vertically, so you normally just end up scaling vertically, and then you become less and less cloud native.
[00:21:54] Speaker A: So, I mean, I'd say with MySQL I'm doing more horizontal scaling than I would with Microsoft SQL, but yeah, it's still harder to do in MySQL. The advantage you want is using AlloyDB or Aurora, where you can decouple the compute from the storage; then things get easier. And maybe that's the use case to think about too. But if your queries do slow down, the nice thing is they've now added Gemini to optimize, manage, and debug your MySQL database: leveraging the index advisor to identify queries that contribute to database inefficiency and recommend new indexes to improve them within the query insights dashboard, then debugging and preventing performance issues with active queries, and monitoring and improving database health with the MySQL recommender.
[00:22:36] Speaker B: These are always nice, but you should definitely pay attention to what they're doing before you blindly follow them. We've looked at a few of them; Microsoft has this embedded at the day job.
Microsoft will automatically analyze your queries and do the same thing, and we definitely found a few recommendations that would actually do nothing. So keep a bit of an eye on them before you just blindly follow.
[00:22:59] Speaker A: Again, I think that's the rule of thumb for pretty much all AI: it's a good place to start, but don't just put it into production without reviewing what it's actually doing, or trying to do, because it's not always perfect.
[00:23:11] Speaker B: I feel like you should probably remind developers of that on a daily basis.
[00:23:15] Speaker A: Could be. Well, Google is hosting a one-day virtual gathering on June 26 for AI and cybersecurity. The gathering will begin with a keynote session by Brian Roddy, VP of Google Cloud Security Engineering. Roddy will discuss the latest product updates from Google Security Operations, Google Threat Intelligence, Security Command Center, and Google Workspace security, and share his vision for how AI and security can interact. After the keynote, there will be several sessions, including: Securing the Future, the Intersection of AI and Cybersecurity; Work Smarter, Not Harder with Gemini and Security Ops; Actual Threat Intelligence at Google Scale; Breakthroughs in Building a Risk-Centric Strategy for Cloud Security; and Secure Enterprise Browser, Your Endpoint's Best Defense. I signed up for this because there are several sessions I'm interested in, so either I will miss it and they'll send me video links on YouTube, or I will try to attend on the 26th.
[00:24:05] Speaker B: I'm doing the same thing as we speak, signing up, and looking forward to listening to some of these, because I think it's going to be a very interesting talk. I definitely like the "work smarter, not harder" one; whether it's just a fun-filled title or not, I think it'll be interesting to hear how they're embedding Gemini into their SOC operations.
[00:24:27] Speaker A: Yeah, I mean, I think SOC operations is a great place for AI to add a lot of value for people. I'm also really curious about the secure enterprise browser. There's been a lot of chatter about different vendors creating secure enterprise browsers all of a sudden, and I'd like to learn more about that too. It's just intellectual curiosity at this point.
[00:24:44] Speaker B: It's like securing your computer, just unplug it and you'll be better off.
[00:24:49] Speaker A: Sure, yeah. Don't use the Internet; all your problems are solved. And then Google's last announcement for this week: Cloud Storage with hierarchical namespaces is now available.
Data-intensive and file-oriented applications are some of the fastest growing workloads on Google Cloud Storage. However, these workloads often expect certain folder semantics that are not optimized in the flat structure of existing buckets. To solve this, Google announced hierarchical namespaces for Cloud Storage, a new bucket-creation option that optimizes folder structure, resources, and operations. Now in preview, HNS can provide better performance, consistency, and manageability for Cloud Storage buckets. I think we've talked about this before, but it was not yet available in preview. Existing Cloud Storage buckets consist of a flat namespace where objects are stored in one logical layer. Folders are simulated in the UI and CLI through slash prefixes, but are not backed by Cloud Storage resources and cannot be explicitly accessed via the API. This can lead to performance and consistency issues with applications that expect file-oriented semantics, such as Hadoop, Spark, analytics, and AI/ML workloads. It's not a big deal until, say, you need to move a folder by renaming its path. In a traditional file system, the operation is fast and atomic, meaning either the rename succeeds and all folder contents have their paths renamed, or the operation fails and nothing changes. In a Cloud Storage bucket, though, when you do this rename operation, each object underneath the simulated folder needs to be individually copied and deleted. If your folder contains hundreds of thousands of objects, this is slow and inefficient, and also non-atomic: if the process fails midway, your bucket is left in an incomplete state. A bucket with the new hierarchical namespace has storage folder resources backed by an API, and the new rename-folder operation recursively renames a folder and its contents as a metadata-only operation. This has many benefits, including improved performance, file-oriented enhancements, and platform support.
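A rough way to see the difference: in a flat bucket, "renaming a folder" means rewriting every object whose key starts with that prefix, while a hierarchical namespace renames once at the metadata layer. Here's a hedged Python sketch of that contrast; the class names and the operation counting are invented for illustration and don't reflect Google's actual implementation:

```python
class FlatBucket:
    """Flat namespace: 'folders' are just key prefixes, so a rename is a
    per-object rewrite -- O(n) operations, and non-atomic if it fails midway."""
    def __init__(self, objects: dict[str, bytes]):
        self.objects = objects
        self.ops = 0

    def rename_folder(self, old: str, new: str) -> None:
        for key in [k for k in self.objects if k.startswith(old + "/")]:
            self.objects[new + key[len(old):]] = self.objects.pop(key)
            self.ops += 1  # each object copied and deleted individually

class HierarchicalBucket(FlatBucket):
    """Hierarchical namespace: the folder is a real resource, so a rename is
    a single metadata operation regardless of how many objects it contains."""
    def rename_folder(self, old: str, new: str) -> None:
        self.objects = {
            (new + k[len(old):]) if k.startswith(old + "/") else k: v
            for k, v in self.objects.items()
        }
        self.ops += 1  # one atomic metadata update
```

Both end up with the same keys; the difference is that the flat version pays one operation per object and can be interrupted halfway, which is exactly the consistency problem HNS is meant to remove.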
[00:26:28] Speaker B: Yeah, I mean, it's a great feature, just getting rid of that day-to-day stuff. It's one of those things where it feels like they're really just announcing a rename feature, but they've set it up so you can only use this API call if you create the bucket in a very specific way. So I'm kind of concerned that, okay, they've optimized it in one direction; can it cause performance issues in other ways they don't talk about? I'll be curious to see how this actually works and whether there are other issues. They've traded one pro and one con and swapped them, and now what was a pro of blob storage before becomes a con, or whatever it is. So, curious to see how this actually is in production.
[00:27:15] Speaker A: Yeah, I'm curious too. I mean, again, it's just metadata manipulation.
It's sort of the same thing that they used to do; it's just that now they're using a different layer to calculate the folder pathing, and apparently that's faster than what it was doing before. So I am curious to see how it works in the real world. But I like the idea.
[00:27:30] Speaker B: Yeah, I'm curious if S3 has the same issues, because I know I've definitely renamed stuff and never had that type of problem.
So I'm wondering if they've solved it in a different way.
[00:27:39] Speaker A: It was definitely one of those things I was curious about.
I mean, even in S3, it's a flat object structure, so I would assume they have similar challenges. Maybe they've just always used a metadata layer to store pathing data.
[00:27:55] Speaker B: I don't know. Yeah, I mean, that kind of makes more sense. I feel like S3 has moved on to optimizing storage cost and speed, like sub-millisecond latency with One Zone and stuff like that, where I feel like this is the daily-operations stuff Google is still working on. But Microsoft still works on a lot of those too. So it's all good.
[00:28:17] Speaker A: Well, Azure had one announcement and I can't say a lot about it, so let's get through it. Microsoft is announcing generative chemistry and accelerated DFT, which will expand how researchers can harness the full power of the platform. Generative AI, together with quantum-classical hybrid computing, will augment every stage of the scientific method. And that's all I got out of this. I don't know what else it does. I don't know how it works. We need Jonathan here, but he's not here today. So do your own research on this one.
[00:28:44] Speaker B: I had nothing else for you. I tried to kill it because I didn't understand the story. But we felt bad for Azure.
[00:28:49] Speaker A: We had no stories for Azure and yeah, yeah, yeah, I know it has something to do with quantum and I know it has something to do with science and apparently generative chemistry and that's as far as I got before. I was like, I don't understand what you're trying to do.
[00:29:05] Speaker B: I feel like I need like an hour to read the article, read all the links in it because as it is a Microsoft article, there's links to 50,000 other articles in the middle of it. So you figure you like actually process it because it sounds pretty cool what they've done and they have really good graphics in it. I just don't understand it.
[00:29:23] Speaker A: Yeah, there's a quote here from Alberto Prado, global head of R&D and digital partnerships at Unilever: digital tools are unlocking an unprecedented age of scientific discovery. Using advanced computing power and AI, we're able to compress decades of lab work and access a new level of insight we could not previously have imagined. This technological leap, coupled with our vast repository of data and essential expertise in personal and household care, means our scientists are able to lead the industry in developing the next generation of consumer goods. Yes. So I have no idea how that worked out. So anyways, let's move on to the next story, shall we?
[00:29:55] Speaker B: I don't think he said anything, but yes, let's move on.
[00:29:58] Speaker A: I don't know that he said anything either, but it's going to help them develop new products. So good for Unilever. I guess my next bottle of shampoo for my non existent hair will be quantum entangled again. I have no idea.
[00:30:11] Speaker B: It'll be AI-powered shampoo.
[00:30:14] Speaker A: Exactly.
Oracle has a couple of things for us this week. First up, their fourth quarter and full year financial results. We don't do the horns. I see you, Matt.
[00:30:24] Speaker B: I'm just being careful. I don't trust you.
[00:30:26] Speaker A: You shouldn't trust me. We don't do the horns for Oracle; they're not worthy, because they wait so long to announce their earnings. Their fourth quarter results fell short of Wall Street expectations, with adjusted earnings per share of $1.63 and $14.29 billion in revenue. Cloud services and license support revenue was up 9% to $10.23 billion, and cloud infrastructure came up to $2 billion, up 42%, but slower growth than the prior quarter's 49%, et cetera. Wall Street was not happy about this, but they overshadowed it by announcing their partnership with Google, which has driven their stock to new highs after the announcement, even though they missed earnings by a few hundred million dollars.
[00:31:05] Speaker B: I mean, when you're talking billions, what's a few hundred million?
[00:31:09] Speaker A: Yeah, exactly. It's a rounding error. Yeah, it's fine.
The next one from Oracle: they're committed to helping organizations with continuous improvement and innovation, so they're releasing the following new features to help you: a next-gen access dashboard with details on who has access to what and when, support for expanded identity orchestration with Oracle's PeopleSoft HRMS, and a configurable Oracle Cloud Infrastructure Email Delivery service for customized notifications.
When you look at it, it looks very similar to an IAM dashboard, but as you click into it, you get all kinds of visibility into what groups and roles you have, as well as nice, pretty charts about policy statements and details. Overall, just a lot of nice, pretty graphs, which I appreciate.
[00:31:52] Speaker B: Does the executive in, you like those graphs?
[00:31:54] Speaker A: I do, yeah. I don't know what they mean, but I like them.
[00:31:57] Speaker B: Don't worry, no one knows what they mean either. The lawyer that's charging you extra for them knows what they mean.
[00:32:03] Speaker A: Yeah, it's sort of weird. Oracle Access Governance utilizes the internal email delivery service for notifications. Like, what else would it have leveraged? I would hope you're using an internal email delivery service to deliver email to me.
[00:32:16] Speaker B: I thought that was interesting of like here we built a new dashboard and all these things. Oh, and by the way, there's emails. Yeah, oh, okay, cool, thanks.
[00:32:25] Speaker A: I do like the idea, though, that they're integrating with their HRMS, because one of the things often lacking in these tools is the context of the people and what their structure is. By coupling it with the HRMS capability, you can start doing more interesting dynamic access things, like: if this person is part of this group, then he needs this permission, and if he gets moved in the organization, it automatically gets updated based on the metadata from PeopleSoft.
It's authoritative, it's a managed system. It's just overall a nice way to integrate your things together, and to pay Oracle more money if you don't have it.
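The dynamic-access idea Justin describes can be sketched in a few lines. Everything here is a hypothetical illustration, not Oracle's actual API: roles are derived from the employee's current HR record, so an HR move automatically changes what the person is granted.

```python
# Illustrative sketch: permissions derived from the authoritative HR
# record. Field names and the role map are made-up assumptions.

ROLE_MAP = {  # team -> cloud roles granted to its members
    "platform": {"roles/viewer", "roles/compute.admin"},
    "finance": {"roles/billing.viewer"},
}

def roles_for(hr_record: dict) -> set:
    """Look up roles from the employee's *current* team in the HR system,
    so there is nothing stale to clean up when they move."""
    return ROLE_MAP.get(hr_record["team"], set())

employee = {"name": "Justin", "team": "platform"}
print(sorted(roles_for(employee)))   # roles from the current team

employee["team"] = "finance"         # HR move: permissions follow the record
print(sorted(roles_for(employee)))
```

The design point is that access is computed from the HR system of record on each evaluation rather than copied into group memberships that then drift, which is exactly the audit problem discussed next.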
[00:32:57] Speaker B: Yeah, I'm thinking about that a lot, based on my day job and a few other things we've talked about: how do we grant permissions, how do we remove them when people move, and how you handle it from a full onboarding-to-offboarding perspective. I've thought about it at some level in the cloud, like getting added to groups and having group-based authentication, but a lot of that's still a bit manual between Azure and our HRMS tool. So having that fully integrated, with permissions pretty much automatically updated when you move from team A to team B... how many times have you been in an organization where you moved and still retained those old permissions, which then comes up in an audit, and then somebody has to remove them and ask if you're still using them, and you have that whole song and dance? So it's actually a really interesting idea: if it can fully integrate, your HRMS system can really become your identity management for your cloud provider directly, without using an interim IdP or anything else like that. That's an interesting use case if you set everything up properly.
[00:34:07] Speaker A: Yeah, I wouldn't mind seeing this from other people, like integrating with Workday, SuccessFactors, and other HRIS systems. I think it could be a really interesting approach, because typically those systems are authoritative, and today that authority usually goes through something like an AD sync that replicates into Microsoft AD, and from there you have to figure it out. But that's not always enough context, because you don't always put all the reporting structure in AD. You don't put in the multiple levels of hierarchy you might need to know that, hey, if Justin's leading the cloud organization, anybody in Justin's organization should have access to the console. But if there are four levels of management between you and that person, it's hard to pick up that relationship without having to do a lot of API lookups.
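The "lots of API lookups" problem Justin mentions can be sketched like this. The directory data and function are hypothetical, but they show why a flat manager attribute in AD forces you to walk the chain one hop (one lookup) at a time:

```python
# Hypothetical sketch: deciding whether someone sits anywhere under a
# given leader when the directory only stores direct managers.

MANAGER = {  # employee -> direct manager (simplified directory data)
    "alice": "bob", "bob": "carol", "carol": "justin",
    "dave": "erin",
}

def reports_to(employee: str, leader: str) -> bool:
    """Walk up the management chain; each hop is one directory lookup,
    so deep hierarchies mean many round trips per access decision."""
    seen = set()  # guard against cyclic data
    while employee in MANAGER and employee not in seen:
        seen.add(employee)
        employee = MANAGER[employee]
        if employee == leader:
            return True
    return False

print(reports_to("alice", "justin"))  # several levels down, still found
print(reports_to("dave", "justin"))   # different chain entirely
```

An HRMS that exposes the full reporting path as metadata would let this become a single lookup instead of a traversal.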
Well, I have a cloud journey for you and this one was brought to us by Google this week.
Basically, creating and implementing reliable systems of code and infrastructure forms the discipline of systems engineering, which is of course used by Google SRE. They've compiled a list of great resources to help you learn more about systems engineering and its best practices, and I thought they had some really good ones here. First up is a USENIX paper called "The Systems Engineering Side of Site Reliability Engineering," which talks about what a systems engineer is and how that might differ from an SRE, a software engineer, or a sysadmin, giving you a very clear definition. They give you the non-abstract large system design chapter from the SRE Workbook: designing systems reliably and scalably requires a focused, practical approach, which they call non-abstract large system design, and it gives you the ability to iterate and do a bunch of things.
They have a distributed image server workshop in the SRE classroom, which is a self-guided workshop to help you code and deploy a large-scale system using some of those principles you just learned in non-abstract large system design.
They have a Google production environment YouTube talk, where they talk for about 15 minutes about all things production at Google, which is just cool to geek out on. You learn about their physical networking, the Borg cluster manager, persistent file systems, the massive monorepo, and much, much more. Borg, the cluster manager, is the predecessor to Kubernetes, and they still run it internally.
They have a "Reliable Data Processing with Minimal Toil" research paper, which talks about ensuring reliability in data processing and how to do it, because it can be tricky, especially with batch jobs and automation.
Then they have a "How to Design a Distributed System in 3 Hours" YouTube talk, which is awesome. Please don't put that in production, but I appreciate that you're trying, in 3 hours. There's Implementing Service Level Objectives, an O'Reilly book which I have on my nightstand because I've been trying that one out; "Making Push on Green a Reality," a research paper all about how to release early and release often; and the Canary Analysis Service research paper, all about how to do safe canary deployments, which is great. These are all really good resources, all available to you in the SRE classroom, and I definitely think you could learn quite a bit about systems engineering from this set.
[00:37:09] Speaker B: No, this is an absolutely fantastic list of resources, a few of which I'll go read tonight, versus going to sleep at a reasonable hour, and kind of see what they have to say. I think the three-hour YouTube video, even at 2x, might be beyond my capacity to sit down and watch, but most of the other ones really are interesting. I kind of want to read the SLO book; that's the one I'll actually probably go pick up.
[00:37:39] Speaker A: That's pretty good. I've been reading it for about four months now because it's not that important to me at the moment. But I bought it, and I've read through only about half of it, and I've learned lots about thinking about SLOs, which is really helpful.
[00:37:53] Speaker B: Yeah. With SLOs specifically, like the error budgets, how you charge back, how you handle all that, how...
[00:37:59] Speaker A: How you accrue, how you diminish. Yeah, it's all really helpful.
[00:38:03] Speaker B: Yeah. It's always an interesting concept. I've never been anywhere that's actually implemented it in that type of way. Always. Either it's the product is like, well, we're still releasing it, so fix it and keep going. Or the other side is, oh, we release once a year and that's its own level of problems.
[00:38:20] Speaker A: So yeah, you have to have teeth in your SLO model. If you're going to do SLOs, you really have to drive the right outcomes. And if your engineering and product leadership team says, well, it doesn't matter, we're at zero error budget, we're still shipping, that does not help you. So you have to have a culture that supports SLOs and SLIs.
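The error-budget mechanics being discussed can be sketched with some simple arithmetic. The SLO target and window below are illustrative numbers, not anything from the show:

```python
# Illustrative error-budget math: an SLO target implies a budget of
# allowed "bad" time, and incidents burn it down over the window.

def error_budget_minutes(slo: float, window_minutes: int) -> float:
    """Total allowed bad minutes in the window for a given SLO target."""
    return (1.0 - slo) * window_minutes

def budget_remaining(slo: float, window_minutes: int, bad_minutes: float) -> float:
    """Fraction of the budget left; at or below zero is where a team
    with 'teeth' in its SLO model would freeze feature releases."""
    budget = error_budget_minutes(slo, window_minutes)
    return (budget - bad_minutes) / budget

MONTH = 30 * 24 * 60  # a 30-day rolling window, in minutes

print(round(error_budget_minutes(0.999, MONTH), 1))   # 43.2 minutes at 99.9%
print(round(budget_remaining(0.999, MONTH, 30.0), 2)) # after a 30-minute outage
```

This is the "teeth" part in numbers: at 99.9% over 30 days you get about 43 minutes of budget, so one 30-minute incident leaves roughly a third of it, and leadership shipping at zero budget is ignoring exactly this calculation.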
[00:38:39] Speaker B: Yeah.
[00:38:40] Speaker A: So the "How to Design a Distributed System in 3 Hours" talk is actually only a 1-hour video. So, you know, okay, so at 2x...
[00:38:46] Speaker B: I can watch as I walk the dog. So we're good.
[00:38:49] Speaker A: Yeah, there you go. I'd definitely check that one out as well. But it's a very good set of resources, and I'll check out some of the ones I haven't already read. The "Making Push on Green a Reality" research paper I'm sort of intrigued by, just because that's something I'm working on at the day job right now. So it's good. Well, that's all I have for you, Matt. I know it was a little short show, but FinOps X is happening. Like I said, I'm sure I'll have lots of stuff to report next week back on the show. So check that out, and we'll see you next time here in the cloud.
[00:39:18] Speaker B: Bye, everyone.
[00:39:23] Speaker A: And that is the week in cloud. Check out our website, the home of the cloud pod, where you can join our newsletter slack team. Send feedback or ask
[email protected], or tweet us with the hashtag #thecloudpod.