[00:00:07] Speaker A: Welcome to the Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP, and Azure.
[00:00:14] Speaker B: We are your hosts, Justin, Jonathan, Ryan, and Matthew.
Episode number 271, recorded for the week of August 6, 2024: "AWS Deprioritizes Seven Services; Cloud Pod Hosts Prioritize Therapy." Good evening, Jonathan and Matt. How are you doing?
[00:00:31] Speaker A: Great. How about you?
[00:00:33] Speaker B: Therapy has been helping so well.
[00:00:39] Speaker A: Looking forward to time off?
[00:00:40] Speaker B: Yes, I am.
One more week and then I'm gone and you guys are on your own again for another week.
[00:00:46] Speaker A: All right, well, we'll manage.
[00:00:48] Speaker B: Hopefully AI will help you out once again.
All right, well, Amazon did officially deprioritize seven cloud services, as long as your official notification comes from Jeff Barr directly on Twitter. He tweeted right after we recorded the show last week, about an hour and a half after we finished, actually, that they had made a tough decision to deprioritize seven cloud services.
Amazon is closing new access to a small number of services, per the tweet, but will continue to run them in a secure environment for the foreseeable future. He confirmed the list of services to be S3 Select, CloudSearch, Cloud9, SimpleDB, Forecast, Data Pipeline, and CodeCommit. And it wasn't exactly communicated to businesses either. The changes were, he claimed, communicated through multiple channels within and outside the company. And I say shenanigans: they were not communicated outside the company in any way that I was aware of. And we're a podcast that checks the news of the cloud providers, so it's something I'd normally be aware of.
[00:01:44] Speaker A: Yeah, they kind of took a leaf out of the Hitchhiker's Guide to the Galaxy book and put the planning documents in the filing cabinet downstairs with the broken light.
[00:01:54] Speaker C: All I just heard there was, like, you know, Colonel Mustard in the library with the...
Like, with the knife. You know, I don't know what you said, but that was where my mind went: Clue.
Yeah, I mean, some of these services haven't really been updated in years, like CloudSearch. I looked it up right after the announcement; the last real update, even on their blog, was 2015. Clearly that's been deprioritized for a while.
[00:02:24] Speaker B: And simple DB, I think unless you were an existing client for a long time, you couldn't get access to that either.
Just like you couldn't get access to EC2 Classic unless you had an account created in the era that it existed. The only one of these that surprises me in any real shape or form is probably Forecast. Data Pipeline makes sense to me, that they're killing that, because I think EventBridge is superior to it anyways. But Forecast, I guess the thinking on that one is that AI is probably a better way to do the type of forecasting it was trying to do. It basically took sets of numbers and would provide you forecasting data to project future needs.
It was an Amazon service; I assumed Amazon was using it for some of their stuff internally. But maybe it didn't work as well as they thought, they stopped using it, and then customers realized the same thing and stopped using it too.
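As an aside, the "sets of numbers in, projected numbers out" idea described above can be sketched with a toy stand-in. This is plain simple exponential smoothing in Python, purely illustrative; Amazon Forecast used far more sophisticated models under the hood, and nothing here reflects its actual API.

```python
def exponential_smoothing(series, alpha=0.5, horizon=3):
    """Project `horizon` future points from a series of historical numbers.

    A toy stand-in for the workflow described above: numbers in,
    forecasted numbers out. This is textbook simple exponential
    smoothing, nothing AWS-specific.
    """
    if not series:
        raise ValueError("need at least one observation")
    level = series[0]
    for value in series[1:]:
        # Blend each new observation into the running level.
        level = alpha * value + (1 - alpha) * level
    # Simple exponential smoothing projects the final level flat forward.
    return [level] * horizon

weekly_demand = [100, 102, 101, 105, 107, 110]
print(exponential_smoothing(weekly_demand, alpha=0.5, horizon=2))  # [107.5, 107.5]
```

A real forecasting service would fit seasonality and trend as well; the point is only the input/output shape the hosts describe.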
[00:03:12] Speaker C: Yeah, I was a little surprised by Cloud9. I feel like I've definitely seen people use it in very niche cases, like small companies, and it works nicely in that little ecosystem. Or if you just want to play with something inside of Amazon and do a quick proof of concept, it can connect to everything. It's a nice, secure location, and as their cloud IDE, I thought it was kind of nice.
[00:03:38] Speaker B: I mean, I think once they built the shell into the console, the need for it started to diminish pretty quickly. And then the VS Code plugins for Amazon are so good that I don't know that you need a Cloud9 at that point. And because you had to pay for servers to make Cloud9 actually work, people weren't always a big fan of it, since it wasn't serverless. So I think between VS Code and getting the Amazon shell into the console, it just lost both of the needs that it had.
[00:04:05] Speaker A: Yeah, I only ever looked at Cloud9 once after the acquisition and I thought, well, this looks okay. Never used it, though.
We already have our existing ecosystem of PyCharm and things like that, which can do remote shells and remote Docker and all those other things. So it really didn't add any new value. I mean, I guess it was an easy button to get people into the ecosystem, but that's the way I viewed it.
[00:04:29] Speaker C: As the quick, easy button: go run this quick POC, follow these instructions. But I think Justin's right. Also, with the CloudShell integration, I think that took over half of its real purpose.
[00:04:43] Speaker B: Yep.
[00:04:45] Speaker A: Yeah.
[00:04:45] Speaker B: Well, rest in peace, S3 Select and CloudSearch, and all the rest of you. We won't miss you much, apparently, because we really didn't use you, which is also why Amazon killed them. We'll see what happens with other future services, but hopefully they do a better job with deprecation going forward and come out with better notification periods, because this seemed a little sneaky. And again, it deteriorates trust with customers who were potentially building things on these products.
[00:05:15] Speaker A: Yeah, it's really unusual. I don't think they've ever been quite so amateurish around product announcements like this. It's strange.
[00:05:23] Speaker B: Yeah.
[00:05:23] Speaker C: I mean, what other products do you really know that they've completely deprecated, though? Or services?
[00:05:27] Speaker B: EC2 Classic.
[00:05:31] Speaker C: Yeah, yeah, that's true. I mean, that's the only real one I can think of. So this is them trying to figure out how to actually do this without backlash.
[00:05:41] Speaker B: I mean, they also basically announced the deprecation of the metadata service v1, haven't they? It's still a long ways out, like another year, I think, to go. But they did announce, when they came out with metadata service v2, that you had a three-year window before they were going to retire it.
[00:05:59] Speaker A: Yeah, I guess it is a breaking change in that case. It's not like Aurora where the new one works just like the old one, but better.
[00:06:10] Speaker B: Yeah, yeah. It sounds like even internal to Amazon there was surprise. Some people posted Slack messages from internal Amazon Slack saying they had been pitching to a customer just the week before using something like Data Pipeline, or S3 Select, or one of these other tools, or CodeCommit. I don't know which one it was exactly. But they were just pitching it a week ago to a prospect. So even internally they were caught by surprise.
[00:06:34] Speaker A: A lot of training material is going to refer to all these things, so it's a lot of work to go back and fix it. Yes, yes. Especially certifications. Out of all of them, I'm most disappointed about CodeCommit. I don't really care about the others. SimpleDB's got, you know, a dozen replacements that you could plug in now: DocumentDB or DynamoDB.
Cloud9 probably wasn't going anywhere anyway. CloudSearch has been replaced by newer technology. S3 Select was good for what it did, but that was its thing.
[00:07:03] Speaker C: That was better. Yeah, more expensive, but better.
[00:07:07] Speaker A: Having a code repository in AWS was kind of nice, especially with IAM protected branches and things like that.
[00:07:15] Speaker B: Well, there's still other places where you sort of have a code repository. Amplify kind of has a lightweight code repository capability. And there's a couple other places that I thought were always built on top of CodeCommit, to be honest, but apparently were not.
[00:07:27] Speaker A: Yeah.
[00:07:28] Speaker C: Or they are the next wave of services.
[00:07:31] Speaker B: Yeah, we'll see. All right, well, moving on to general news and earnings season. Once again, earnings were a bit of a bloodbath.
Break it out for you right now. Alphabet revenue was up 14% year over year, driven by search and cloud, with GCP surpassing $10 billion in quarterly revenue and a billion dollars in operating profit for the first time ever. GCP cloud revenue was $10.35 billion versus an expected $10.2 billion. However, shares were down because YouTube advertising screwed them again, missing by $300 million.
So there you go. Cloud revenue overall was up 29%, which is about on par with where Google has been on cloud revenue growth the last few quarters.
[00:08:17] Speaker C: Amazing how much YouTube was down and how it negatively affected everything altogether. I'm also always fascinated by how much revenue they make from YouTube advertising.
[00:08:28] Speaker B: Oh, it's so much money, so much money. I mean, it's basically eyeballs for children at this point. But the problem is they're competing against TikTok and Instagram Reels and Facebook shorts and all those things, which are taking a lot of eyeballs away, so advertisers are spreading out their dollars, especially with the macro climate that we're in. So yeah, Google taken down by TikTok. That's great.
Microsoft shares dipped on Wednesday as investors looked past the better-than-expected earnings and revenue and focused instead on disappointing cloud results. Da da da. But executives provided a dose of optimism when they predicted a cloud growth speed-up will occur in the first half of 2025. I don't know what crystal ball they have on that, but I'd love to know more. Revenue was $64.73 billion versus an expected $64.39 billion. And Azure revenue only grew 29% for the quarter, but Wall Street had expected 31% growth, which is why they were penalized after earnings on Tuesday.
[00:09:22] Speaker A: That's interesting. 29% for Azure when they expected 31%, but 29% for Google was just fine.
[00:09:30] Speaker B: Well, they've been at 29% for a couple of quarters, so this is just the issue of that. And then finally, Amazon also had a bad day, and they missed on revenue, which is never good. Don't miss on revenue. Revenue was $147.98 billion.
AWS was up at $26.3 billion versus a $26 billion expectation. But Amazon suffered an overall lower average selling price, or ASP as they call it in the biz, due to pressure from Temu, which is the reason for the abysmal retail findings. So revenue missed and profitability was down due to lower ASP. That hurt Amazon. But what do we care? AWS is up at $26.3 billion.
[00:10:11] Speaker A: Nice. Have you ever bought anything from Temu?
[00:10:14] Speaker B: I have not. My kids have.
It's actually not bad. There was another app before Temu, which I'm trying to remember the name of, and everything I bought from that was just absolutely terrible and horrific. But the Temu stuff has been relatively okay. It comes pretty quickly. And, you know, again, it's made-in-China cheap stuff, but it was better than the previous app I was using for that kind of thing.
[00:10:39] Speaker A: I mean, everything's made in China.
[00:10:41] Speaker B: Yeah, but there's different quality in China depending on where you're at.
[00:10:47] Speaker C: I buy, like, cheap things I don't care about from it. Like the pads you put underneath a chair.
[00:10:52] Speaker B: Mm hmm.
[00:10:53] Speaker C: Yeah. That you probably end up should replace once a year because they fall off. And either way, you know, like, things like that. That like, yeah, as long as you have patience to get them. It doesn't bother me that much.
[00:11:04] Speaker B: Yeah. That's what I was thinking of: those are competitors to Wish. I used Wish many times during the pandemic and was many, many times disappointed in what arrived, so I kind of stopped using those. But my kids have bought a bunch of little stuff. In fact, this week they just got some things, like these rubber balls they're bouncing around the house, driving me crazy, and a couple other things.
One of the kids bought a full arm sleeve of temporary tattoos.
And I'm like, how does all this cost? He's like, $7. I was like, all right, that's fine.
[00:11:34] Speaker C: You're not even mad about it.
[00:11:38] Speaker B: I mean, they get allowance through Apple Pay, and they take Apple Pay at Temu. So that's what happens.
[00:11:43] Speaker A: All right.
[00:11:44] Speaker B: It's better than them spending it on other junk they could be spending it on.
[00:11:48] Speaker A: Yeah. I mean, I guess with the economy the way it is globally, it makes sense if people get smart and realize that they don't need something next day or in two days, if they can plan ahead and take a couple of weeks to get something for a third of the price, why not?
[00:12:01] Speaker B: Yeah, well, and they are able to track the shipping and all that. So I think he ordered it on a Sunday, and it came on Thursday, so it wasn't that long of a wait for him to wait for it. And you can buy some of that same stuff on Amazon and get in two days, but you're paying twice the price.
[00:12:18] Speaker A: Yeah, I was actually talking to somebody yesterday about buying from the Chinese resellers, even on Amazon. And their customer service is actually very good, because they value your reviews so much that if you've got the slightest problem, when you email and say, I've got an issue with this thing, I bought it three months ago, it's not covered by Amazon returns anymore, it stopped working, they're like: here, have a new one. What's your address? We'll get it shipped out right away. No hassles whatsoever. So it's an interesting twist of outcomes, really.
[00:12:53] Speaker B: Indeed.
I'll move on to "AI Is How ML Makes Money." Snowflake is announcing Snowpark Container Services is now generally available in all AWS commercial regions and in public preview on Azure. Now customers can get fast access to GPU infrastructure without needing to self-procure instances or make reservations with their public cloud provider. I can only imagine that if Snowflake is buying it from AWS and then reselling it to me, that's going to cost a lot more than if I just got it from AWS. But maybe you appreciate the Snowflake UI and the capabilities they bring to the table. It's interesting, because at some point I could see Snowflake taking something like this and saying, well, we're now going to arbitrage against the cloud providers to get better pricing. Because if I'm the one running the containers and I'm an abstraction layer, I now have pressure to push on the cloud providers. Or do they build their own data centers at some point in the future and then start undercutting other vendors?
[00:13:49] Speaker C: Yeah, the hardware first.
[00:13:52] Speaker B: Well, the servers are fine, the GPU's are tough.
[00:13:55] Speaker C: Yeah, the GPU is what I was referencing.
[00:13:58] Speaker B: Well, if intel and everyone else could ever start getting competitive chips to Nvidia, maybe that would be a thing.
[00:14:05] Speaker C: I mean, they may have a true multi-cloud story of: we don't really care about the ingress and egress fees, because we're running it on spot and we've played the game enough with all the cloud providers that we're able to move the data. And even that extra cost isn't that big of a deal.
[00:14:20] Speaker A: Yeah, I'd be surprised if they ended up spinning up their own data centers. I think having the data in the cloud, close to other customers' existing infrastructure, is probably a lot more valuable than the extra money they'd make by not paying for it.
[00:14:35] Speaker B: I don't know. If they were offering 10 to 20 percent less for certain services than what AWS is providing, and you're already a Snowflake customer and you already have your data lake there, I can see the play to save margin.
So we'll see. It's interesting one to keep an eye on because I definitely think there's potential for some interesting things snowflake or databricks can do in the future if they wanted to strike the hand that feeds them in some ways. So we'll see.
Moving to AWS: they have a great blog post this week introducing their recent adoption of the Open Container Initiative (OCI) image and distribution specifications for Amazon Elastic Container Registry. The latest version includes support for image referrers as well as significant enhancements for distribution of non-image artifacts. This now allows you as a customer to more easily manage your container images, with the ability to push image signatures, software bills of materials, attestations, and other content related to a specific image right alongside the images hosted in Amazon ECR. I appreciate that Amazon is adopting an open standard here, and the blog post goes into a lot of detail on how you actually implement these things. Just a good article that I got a lot out of, so I thought I'd share it.
[00:15:47] Speaker C: I mean, being able to push these other artifacts next to it is really key, especially as everyone needs to build these SBOMs out for all the software as a service that they have. And the attestations are always useful. So keeping everything in sync with the artifact, being able to say in this version we have this, and keeping everything in one place, I think will streamline a lot of pain that people have to deal with right now.
[00:16:13] Speaker A: Yep, totally agree. It'd be nice not to have to store that stuff somewhere else, in S3 for example, and link it up, because then you need a database somewhere else to connect the objects in the registry with the attestations someplace else. Yeah, that's pretty cool.
I wish it wasn't called OCI either. It just makes me think of Oracle all the time.
[00:16:35] Speaker B: I mean, I think they existed before Oracle, so technically Oracle should change their name.
[00:16:39] Speaker A: That's what I say.
[00:16:40] Speaker C: When I first saw the article in our notes, I was like, why are we talking about Oracle in the AWS section with ECR? What's going on?
[00:16:48] Speaker B: So that would be quite the get, wouldn't it?
[00:16:52] Speaker C: I thought you were screwing me, so yeah, I wouldn't blame you.
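For the curious, the OCI mechanism behind the ECR announcement discussed above links an artifact (an SBOM, a signature) to an image by digest, through a `subject` field in the artifact's own manifest. Here's a minimal sketch of that data structure in Python, with pared-down manifests and an illustrative media type rather than ECR's actual API calls:

```python
import hashlib
import json

def oci_digest(raw: bytes) -> str:
    # OCI content digests are sha256 over the manifest's exact bytes as stored.
    return "sha256:" + hashlib.sha256(raw).hexdigest()

# A pared-down image manifest, standing in for one already pushed to a registry.
image_manifest_bytes = json.dumps({
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
}).encode()

# An SBOM artifact manifest points back at the image via `subject`;
# that back-pointer is what a registry's referrers support indexes, so
# clients can ask "what is attached to this image digest?"
sbom_manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "artifactType": "application/spdx+json",  # illustrative media type
    "subject": {
        "mediaType": "application/vnd.oci.image.manifest.v1+json",
        "digest": oci_digest(image_manifest_bytes),
        "size": len(image_manifest_bytes),
    },
}

print(sbom_manifest["subject"]["digest"])
```

In practice, tools like ORAS or image signers construct and push these manifests for you; the nice part, as the hosts note, is that the SBOM then lives in the registry next to the image it describes instead of in a separate store.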
[00:16:55] Speaker B: Yeah, well, Amazon is announcing the general availability of the Amazon Titan Image Generator v2 model, with new capabilities, in Bedrock. You can guide image creation using reference images, edit existing visuals, remove backgrounds, generate image variations, and securely customize the model to maintain brand style and subject consistency. There are several new features over v1, including image conditioning (using a reference image along with a text prompt), image guidance with a color palette, background removal capabilities, and subject consistency for fine-tuning. And I decided to go play with it, and I'm going to tell you, it didn't go well.
A couple of things about this model. First of all, it's been a while since I've been in Bedrock, so Bedrock's got a lot of new features that I hadn't checked out. But you basically have to choose whether you're doing background removal or image creation from text; it isn't just a multimodal model that can do those things natively, you have to tell it, which is a bit of a challenge. But I said, okay, fine, let me just give you a prompt and have you draw something for me. I first attempted to take our profile photos from the website and put them in there, and it said, well, we can't create real people. And I'm like, darn you, AI safety features. So I had to go a little more generic, and I wrote a prompt, which is not good, but we'll get to that in a second: draw three podcast hosts sitting at a table. They are super annoyed, talking about their thousandth AI story, and long for the days of talking about new instance types or network switches. The hosts should be male; one should be bald with a goatee, which would be me; one should be balding with a goatee, which is Matt; and one should have a normal haircut and no facial hair. I didn't think about Jonathan being British, so I didn't put that in. And it produced this first image, which you guys can now see.
We all have a very annoyed face. Matt is cross-eyed. These will all be in the show notes, so don't worry, you can take a look at them if you're listening to the audio show. Jonathan definitely does not look anything like Jonathan, because he has a slicked-back, Elvis-style haircut. And it has added mutton chops to my goatee, and I've definitely lost about 150 pounds, so that's great; I appreciate that part of the AI. So I said, okay, fine, I'll take my prompt and move it over to Gemini. And then Gemini told me, yeah, we still can't generate images of people, because it would have made one of us Black, which would have been unfortunate. So that's still broken, by the way, since it's been months now since that was an issue. And then ChatGPT drew us as a cartoon but gave all three of us goatees, which is sort of funny because I said not to. And again, I lost 150 pounds here. Matt and Jonathan both have hair; one is gray and one has a total mop of curly hair going on, for, I think, Matt, the middle one there. I don't know which one is Matt in this particular thing. So then I said, okay, fine, my prompts are probably bad; maybe I can get this improved. So I asked Gemini to improve the prompt so it could be part of the show here. And it gave us a new prompt, which I put into Titan, and which ChatGPT then tried to generate. ChatGPT's second attempt was a little bit better. I mean, the background, the AI on the table, the computer stuff, I like a little better. Still, all three of us have a goatee, and I have also still lost 200 pounds in this one.
Jonathan is much, much bulkier. You've been working out, which, good for you, man. And then Matt's just leaning on his chin, sad about his Azure days, I think, is really what's happening there.
[00:20:12] Speaker C: It knew of all the Azure outages that we've been going through over here.
[00:20:15] Speaker B: Yeah, exactly. And then the Titan image generator produced the last one, in which, for some reason, two of us are bald. All three of us have a goatee or beard, and I think I'm the one in the middle with my mouth wide open, looking like a Corey Quinn photo. Matt is very concerned to the left, and Jonathan has his swooped-back hair, also not looking very British in that photo either. A couple things that were interesting. Number one is that Titan is definitely trying to recreate photo-realistic images, which was sort of lost on me when I first read through the article; I only realized it as I was playing with it. And it's still got a lot to be desired in general, I think. But yeah, this is a little bit of fun for you guys to check out.
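For anyone who wants to repeat the experiment, here's roughly what the Bedrock call looks like. The request-body field names (`taskType`, `textToImageParams`, `imageGenerationConfig`) follow the Titan Image Generator documentation as best I recall, so verify them against the current Bedrock API reference; the model ID is likewise an assumption.

```python
import json

def build_titan_request(prompt: str, width: int = 1024, height: int = 1024) -> dict:
    # Titan makes you pick a taskType up front (text-to-image, background
    # removal, etc.) rather than inferring it from the input, which is the
    # friction discussed above. Field names are per the Titan docs as best
    # I recall; double-check the current Bedrock reference before relying
    # on them.
    return {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": prompt},
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "width": width,
            "height": height,
        },
    }

body = json.dumps(build_titan_request(
    "three podcast hosts sitting at a table, annoyed, discussing AI"))

# The actual call needs AWS credentials plus Bedrock model access, so it
# is left commented out; the modelId string is also an assumption:
# import boto3
# client = boto3.client("bedrock-runtime")
# resp = client.invoke_model(
#     modelId="amazon.titan-image-generator-v2:0", body=body)
print(json.loads(body)["taskType"])
```

The response, when it works, comes back with base64-encoded images you decode and save; and yes, the safety filters that refused the hosts' profile photos apply before any of this runs.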
[00:20:59] Speaker A: I think it's funny how you specifically said three podcast hosts, and in the first picture there's two mics.
Sorry, there's three mics all in front of one person.
[00:21:08] Speaker B: Yeah, that is good too.
[00:21:10] Speaker A: And then there's no one with two mics, and the person in the middle has none.
The last one's pretty good. The last one is actually impressive. Not because of the looks on our faces or anything, but it does look a lot more photographic.
I mean, you can choose which style you like. I still think Stable Diffusion is probably...
[00:21:30] Speaker B: Which is also available to you in Bedrock. I did not compare it with Stable Diffusion because I didn't have that kind of time. But yeah, it's sort of interesting, though.
Everyone's so worried about AI taking over everything, and I just sort of look at this and I'm like, I don't know. I mean, I'm not a great prompt writer. The new prompt that it wrote, the improved prompt, by the way, was: three podcast hosts at a table in a cozy recording studio, surrounded by professional microphones and scattered notes. Their expressions clearly convey frustration as they discuss their thousandth AI story, longing for the days when they focused on new instance types and network routing. The first host is a bald male with a goatee, the second is a balding male with a goatee, and the third is a male with a normal haircut and no facial hair. The studio is filled with various tech gadgets and equipment, creating an atmosphere that reflects their weariness with AI topics.
So, yeah, just sort of funny. But, you know, the facial hair thing is interesting, because I specifically told it to go without a goatee on the one person. And normally it's pretty good about negative prompts, but in this particular case it was not.
[00:22:27] Speaker A: I think instead of saying first, second, and third person, you'd get more success if you said the person on the left, the person in the middle, the person on the right, because it understands position in the frame, but not...
[00:22:36] Speaker B: That's a good call. But not the number, yeah. Well, you should definitely try to take my prompts, Jonathan, and see if you can make them better.
I'll give you access to the cloud pod account and you can play with bedrock. It didn't cost me more than a couple dollars.
[00:22:50] Speaker C: You should actually type in our last couple months of podcasts and see how it generates us based on what we talk about and how we talk about it.
[00:22:58] Speaker A: What do you think this guy looks like based on what he sounds like, what he talks about?
[00:23:04] Speaker B: Unfortunately, Anthropic, which I know is where you're feeding most of our audio, doesn't do drawings yet. But one striking thing is the fact that Titan requires you to choose what you want to do. The multimodal models are so nice; having to think that hard about which one I want is not something I want to do. But that's also why those things are cheaper, because they're specialized in those ways. When you're trying to do just general stuff with a model, ChatGPT, Anthropic, et cetera, all have a big win there.
[00:23:35] Speaker A: Yeah, it's still awesome. I think the weird fingers show up in the images from the last one, Titan's. Yeah, still got weird fingers. But I've seen some fantastic AI-generated images which are virtually indistinguishable from real life now.
[00:23:57] Speaker B: Yeah, and the one from chat GPT, which is after the revised prompt, I'm pretty sure I have eleven fingers if I were to count them.
But it's sort of funny, because I see a lot of AI-generated stuff for the political campaign happening in the US right now.
[00:24:16] Speaker A: Surprise, surprise.
[00:24:17] Speaker B: And there's a subreddit right now that just came out with, like, hey, here's an AI-generated image from this campaign, here's what they used it for, and here's how you know it's AI. They point out all the flaws, which is kind of fun. So it's sort of like the old days of "is this Photoshopped or not," but now with AI. And I'm kind of enjoying that.
[00:24:34] Speaker C: Which subreddit? Because I definitely want to follow that.
[00:24:39] Speaker B: I'll send it to you; I don't have it off the top of my head. But yeah, it's basically "is it AI or not," or something like that, and they basically tell you how it's AI.
All right, moving to GCP. They announced at Next the Compute Engine X4 and C3 bare metal machine types, and now today they're telling you they're generally available. The machine types address unique compute needs with general-purpose and memory-optimized families. The new X4 instance series includes three new instance types to address extra-large in-memory database needs, such as SAP S/4HANA, and three new C3 bare metal shapes cater to a variety of applications, such as commercial and custom hypervisors that require direct access to CPU and memory resources. Underpinning both of these instances is Titanium, Google Cloud's system of purpose-built custom silicon and multiple tiers of scale-out offloads. By freeing up the CPU, Titanium provides performance, reliability, and security improvements for a variety of workloads.
[00:25:29] Speaker A: Awesome. Generally available. That means you can now ask your account rep if you can have some, and they can say no in any region you choose.
[00:25:35] Speaker B: Yeah, or they'll tell you that you can get them in one availability zone of a region so you can't get redundancy. That's my favorite answer.
[00:25:43] Speaker C: I still always want a good reason to play with bare metal. I've never had a good, solid use case. I know if you're big into Windows it helps with licensing and stuff like that, or if you want straight SQL, but I never had a good reason to. I always find ways around having to deal with it.
[00:26:00] Speaker B: Yeah, every time it comes up with someone saying it's an option, I come up with all the reasons why it's not a good idea and typically talk them out of it.
You lose so many advantages of having the cloud and virtualization, even though you don't get direct access to the hypervisor. To move things around in Amazon, you just turn the instance off and turn it back on, and it gets moved to a different host. Things like that just don't happen with bare metal.
[00:26:22] Speaker A: Do you still get a metadata service with bare metal? They still provide that? Through the... okay, that's pretty cool.
[00:26:30] Speaker C: Yeah, well, that's not true. I think the only time I've ever had to run it, and I don't know if it's bare metal or not, is when I ran Mac on Amazon at one point.
Is that technically bare metal or all.
[00:26:43] Speaker B: That is bare metal too?
[00:26:44] Speaker C: So I did have to. I did that at one point.
[00:26:46] Speaker B: Yeah, the Nitro card provides the metadata service even on bare metal. Okay, Google Spanner is expanding the types of capabilities it can support with several new variants. First up is Spanner Graph, offering an intuitive and concise way to match patterns, traverse relationships, and filter results in interconnected data, to serve common graph use cases such as personalized recommendations, finding communities, or identifying fraud. The new Spanner advanced full-text search builds on Google's decades of search expertise to bring powerful matching and relevance ranking over unstructured text, as well as now supporting vector search for semantic information retrieval, the bedrock of generative AI applications, building on twelve-plus years of Google research and innovation in approximate nearest neighbor algorithms. In addition, to meet the complex cost and compliance needs of enterprise customers, they're also giving you geo-partitioning, which allows you to deploy globally while storing parts of your data in specific regions to support fast local access and data sovereignty; dual-region configurations, offering multi-region availability properties while respecting data sovereignty; and autoscaling, which automatically adjusts the size of your Spanner deployment, allowing you to quickly react to changes in traffic patterns without the need to over-provision. These new capabilities are available in either the Spanner Enterprise or Spanner Enterprise Plus editions.
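The approximate-nearest-neighbor feature mentioned there is easiest to see against the exact search it approximates. This toy Python sketch does a brute-force cosine-similarity scan over a few made-up embedding vectors; an ANN index (in Spanner or anywhere else) exists precisely to avoid this full scan at scale, trading a little accuracy for much less work. The vectors and names are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query, vectors, k=2):
    # Exact k-nearest-neighbor search: score every vector, sort, take top k.
    ranked = sorted(vectors.items(),
                    key=lambda item: cosine_similarity(query, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Made-up three-dimensional "embeddings"; real ones have hundreds of dims.
docs = {
    "fraud-report": [0.9, 0.1, 0.0],
    "recommendation": [0.1, 0.9, 0.2],
    "community-post": [0.2, 0.8, 0.3],
}
print(nearest([0.15, 0.85, 0.25], docs, k=2))
```

In a database-backed vector search you would express the same idea in a query against an ANN index instead of scanning every row in application code.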
[00:27:55] Speaker A: Excellent. I finally have a reason to use Spanner now. I was going to use Neo4j, but that's a lot more complex, simply because of the marketplace agreements. I mean, the product, Neo4j, is great, don't get me wrong. But as far as native cloud services go, Spanner is so much easier to use through Terraform. And instead of building their own standard for the graph query language, they used an open standard, GQL, which is nice, because Neo4j doesn't support GQL yet; I think they're still working on it. That's really nice, because now I can pivot to something else if it sucks.
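For a feel of the GQL standard mentioned here, this is a rough sketch of what a Spanner Graph query might look like, built as a string in Python. The graph name, labels, and properties are all invented for illustration, and the exact GQL surface Spanner ships may differ from this sketch.

```python
# Hypothetical Spanner Graph (GQL) query for a fraud-detection-style use case.
# Graph, label, and property names below are made up for illustration only.
def build_fraud_query(graph_name: str, min_amount: int) -> str:
    # GQL's GRAPH ... MATCH ... RETURN shape, per the open standard Spanner adopts
    return (
        f"GRAPH {graph_name} "
        "MATCH (p:Person)-[t:Transfers]->(a:Account) "
        f"WHERE t.amount > {min_amount} "
        "RETURN p.name, a.id, t.amount"
    )

query = build_fraud_query("FinGraph", 10_000)

# Executing it would go through the regular Spanner client, roughly:
#   from google.cloud import spanner
#   db = spanner.Client().instance("my-inst").database("my-db")
#   with db.snapshot() as snap:
#       rows = snap.execute_sql(query)
print(query)
```

The appeal over a property-graph-specific database is exactly what's said above: the same Spanner deployment, the same client, one more query dialect.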
[00:28:31] Speaker B: You gotta get to work on your GCP account first though. So that's step one.
[00:28:35] Speaker C: I just hate how they do Enterprise, Enterprise Plus, Enterprise Premium. I feel like we're getting to, like...
[00:28:42] Speaker B: Speaking of enterprise, says the Azure guy who's got Ultra and Ultra Disk Premium.
[00:28:46] Speaker C: And there's, you know, even for, like, Redis on Azure, it's like Standard and Premium and Isolated. And I'm like, why?
So it drives me crazy as they go down these routes of, like, we built something, but now we want to charge more for it. We can't roll it in because we're going to charge extra and extra and extra, so we have to have all these tiers. And figuring out which tier works for you, you're starting to get back into, like, the Microsoft PhD of licensing with some of these things.
[00:29:15] Speaker B: I mean, the Amazon way is also sometimes problematic, though, because there's thousands of knobs you can turn, and some of those knobs cost a lot of money and some of them don't. And so if you don't want the risk of someone turning on the expensive knobs, with Enterprise or Enterprise Plus you have the ability to turn some of those things off without having to muck around with IAM permissions. So I mean, there's pros and cons both ways, but I do sort of prefer the Amazon way, because I feel like now I don't have to redeploy, I'm just enabling the expensive knob.
[00:29:42] Speaker A: Yeah, yeah that's fair.
[00:29:45] Speaker C: As long as the expensive knob doesn't require redeployment.
[00:29:48] Speaker B: Correct.
[00:29:49] Speaker C: Because there's a few of those. Gotcha.
[00:29:51] Speaker B: There's a few. But typically they also handle those for you. Yeah. Without too much downtime or with minimal to no impact. As long as your app has good retry logic.
Well, for those customers who do spend a lot of money on expensive things, SQL Server on Google Cloud is now giving you the general availability of Cloud SQL Enterprise Plus edition for SQL Server. That's a lot of SQL. Cloud SQL Enterprise Plus for SQL Server delivers new innovations that meet the needs of the most demanding SQL Server workloads, building on the core foundation of Cloud SQL. There are two new machine families for enhanced performance and higher memory per vCPU, a data cache for improved read performance, which is the one I'm most excited about, advanced DR capabilities, and a 99.99% availability SLA for business continuity. The existing version of Cloud SQL for SQL Server will continue with no changes to features or pricing, but will now be known as Cloud SQL Enterprise Plus... or sorry, Enterprise Edition, and this new version will be the Plus for SQL Server. The enhanced node types are the memory-optimized family, with up to 32 gigs of RAM per vCPU, and the performance-optimized machine family, with as many as 128 vCPUs. And for read-intensive workloads, Cloud SQL Enterprise Plus provides a configurable data cache that delivers high read performance, leveraging server-side SSDs as a way to persist the most frequently accessed data, lowering read latency and improving throughput.
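As a sketch of what turning this on might look like from the CLI, here's a hypothetical `gcloud` invocation built up in Python. The `--edition` and `--enable-data-cache` flags exist on the current gcloud surface for Enterprise Plus, but the database version and tier values here are assumptions; check `gcloud sql instances create --help` before relying on any of them.

```python
# Sketch: creating a Cloud SQL Enterprise Plus SQL Server instance with the
# SSD-backed data cache enabled. The version and tier strings are guesses
# for illustration, not verified values.
def create_instance_cmd(name: str) -> list[str]:
    return [
        "gcloud", "sql", "instances", "create", name,
        "--edition=ENTERPRISE_PLUS",
        "--database-version=SQLSERVER_2022_STANDARD",  # hypothetical version choice
        "--tier=db-perf-optimized-N-8",                # performance-optimized family (assumed name)
        "--enable-data-cache",                         # the read cache discussed above
        "--region=us-central1",
    ]

cmd = create_instance_cmd("my-sql-server")
# subprocess.run(cmd, check=True)  # would actually create (and bill for) the instance
print(" ".join(cmd))
```

Building the argument list instead of a shell string keeps the example safe to inspect without running it.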
[00:31:06] Speaker C: Your CFO will hate you if you launch 128 vCPUs.
[00:31:12] Speaker B: Maybe. It depends on what you've got to do with that thing. If you're doing stuff for the CFO, they might be very happy because you got done real fast. But I mean, having a built-in read cache in Cloud SQL for SQL Server, without having to do table pinning and all the other terrible things that you have to do to make SQL scale, or manage a Redis cluster or Memcached on the outside...
That's super nice. I have a workload for that in my day job that I already talked to my DBA about. Hey, this would be perfect for this use case.
[00:31:41] Speaker C: It is something that Azure has, which is called Application Intent. You can just pass that as part of the SQL connection string, and it will automatically route you to the read-only replica if one's there, and if not, route you to the writer, which is a nice feature that Azure has, too. It sounds like this is the same thing.
[00:32:02] Speaker B: Just except this is all in the same node. So yes, I mean, you can do read-only replicas, and then you just pass the intent into the connection string and it'll send you to the read-only side. But this is actually a caching layer, where they're man-in-the-middling, basically, the SQL command and returning that data to you. Okay, slightly different, but same idea.
I think Microsoft does have a solution similar to this that they make available in Azure SQL, for those who want to spend a lot of money. Bigtable has grown up and become a true NoSQL solution with the inclusion of GoogleSQL, an ANSI-compliant SQL dialect used by Google products such as Spanner and BigQuery. Now you can use the same SQL with Bigtable to write apps for AI, fraud detection, data mesh, recommendations, or any other application that benefits from real-time data. There's a quote here from John Kasana, VP of engineering at Plaid: "Seamless SQL integration and efficient counter functionality will empower us to build more robust and scalable solutions for our customers. We applaud Bigtable's commitment to innovation and eagerly anticipate leveraging these enhancements to simplify work with big, complex, and fast-moving data."
If a customer can tell you that you're innovating by adding a SQL layer to your database, that's just an easy win.
Take that every day of the week.
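To make the "SQL layer" concrete, here's a sketch of what a GoogleSQL query against Bigtable can look like. In Bigtable's SQL surface, a column family reads roughly like a map keyed by column qualifier; the table and family names here are invented for illustration.

```python
# Sketch of a GoogleSQL query over a Bigtable table. `_key` is the row key;
# `metrics` stands in for a column family, addressed like a map. All names
# are hypothetical.
def build_latest_events_query(table: str, limit: int) -> str:
    return (
        "SELECT _key, metrics['latency_ms'] AS latency "
        f"FROM {table} "
        f"LIMIT {limit}"
    )

sql = build_latest_events_query("user_events", 10)
# Execution would go through the Bigtable data client's SQL entry point or
# the console tooling; shown here only as the query text.
print(sql)
```

The win the hosts joke about is real: the same query skills (and the same LLM-generated SQL) transfer to a store that previously needed its own read API.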
[00:33:15] Speaker A: I guess the AI is better at writing SQL statements than anything else, and so they implement SQL.
[00:33:21] Speaker C: There's just so many examples of SQL statements out there that it's easy to get the AI to do it for you.
[00:33:29] Speaker B: For those of you who have security teams that will allow you to run arbitrary Terraform code against your Google Cloud account, the marketplace now provides an easy step-by-step guide to deploying a marketplace VM using a Terraform script from the Google Cloud Marketplace UI in a few simple clicks.
And so, yeah, if you trust the marketplace 100% and you want to just run arbitrary Terraform code in your account, just like you do with CloudFormation on AWS, you can now make that simple for yourself.
[00:33:55] Speaker A: What could go wrong?
[00:33:56] Speaker B: What could go wrong?
[00:33:58] Speaker C: So I went in and tried to figure this one out. There's a button, and I just kept going in a loop in the blog post, where it was like, press here to go do it, and it linked you back to the thing that says run your Terraform code by running terraform init, terraform apply. I was very confused by this. I didn't know if either of you two had played with this at all.
[00:34:16] Speaker B: So basically what they've done is, as a marketplace provider, you produce a module that you can then either run through a command-line deployment, by downloading the Terraform module to integrate into your CI/CD pipeline, or, if you just click through the GUI, it launches it into Google Cloud Infrastructure Manager, which is basically their version of Terraform Enterprise or Terraform Cloud, and that'll then do the deployment and apply process. So it really depends on how you want to do this, but it's very similar to "click this button to get CloudFormation automatically deployed in your AWS account," which, again, I don't recommend either.
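The command-line path described here boils down to the ordinary Terraform workflow run against the downloaded module. A small sketch, with the module directory name made up, and the actual execution left commented out:

```python
# Sketch of the CLI deployment path for a downloaded marketplace Terraform
# module: the standard init / apply cycle, run against a local module
# directory (the path below is hypothetical).
def terraform(*args: str, workdir: str = "./marketplace-module") -> list[str]:
    cmd = ["terraform", f"-chdir={workdir}", *args]
    # import subprocess; subprocess.run(cmd, check=True)  # uncomment to actually run
    return cmd

init_cmd = terraform("init")
apply_cmd = terraform("apply", "-auto-approve")
print(init_cmd, apply_cmd)
```

Running it yourself, rather than clicking through to Infrastructure Manager, is also the only path where you get to read the module before it touches your account, which is the concern raised next.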
[00:34:51] Speaker A: Yeah, it's nice that it tracks the deployments that you've made, so that if people have gone away and deployed a bunch of...
What was the example in the blog post? VIP instances or whatever, like F5...
[00:35:04] Speaker C: Load balancers or something.
[00:35:05] Speaker A: Yeah. I mean, just in case you really, really want to burn money. But you can track all the deployments from the marketplace, so you can see where they all are and who's deployed what. And presumably, if there are updated images, then the Terraform will be updated as well, so you could easily do in-place upgrades. So it's kind of nice functionality. But the opportunity to actually inspect the Terraform and see what it's doing, who it's running as, and which permissions it has would be kind of nice. It's a little vague at the moment.
[00:35:37] Speaker B: Yeah, there's a lot of gotchas to making Google Cloud Infrastructure Manager work, and to what IAM permissions it needs to have. Like, if you don't have the right things applied to your role, it won't work. So it's appreciated; it's an easy button, assuming you've done all the plumbing you needed to do in advance to make that work, and if you don't mind the security violation of running arbitrary Terraform code that you don't own.
[00:36:00] Speaker A: But of course, the irony there is that the security vendors are going to be the first people to use this to make their tools easier to deploy.
[00:36:09] Speaker B: Well, guys, I am sorry to tell you, though you might already have known this, because I knew: Google is a monopoly.
[00:36:16] Speaker A: No.
[00:36:17] Speaker B: Yeah. A judge ruled on Monday that Google violated antitrust law by paying other companies to make its search engine the default on smartphones. The ruling could force Google to change the way it runs its business and impact several other antitrust lawsuits involving Amazon, Apple, and Meta. The decision recognizes that Google offers the best search engine, but concludes that they shouldn't be allowed to make it easily available. "We appreciate the court's finding that Google is the industry's highest quality search engine, which has earned Google the trust of hundreds of millions of daily users" (this is a quote from Google, by the way). "Google has long been the best search engine, particularly on mobile devices; we continue to innovate in search; and Apple and Mozilla occasionally assess Google's search quality relative to its rivals and find Google to be superior. Given this, and that people are increasingly looking for information in more ways, we plan to appeal the decision. As this process continues, we will remain focused on making products that people find helpful and easy to use," says Kent Walker, president of global affairs at Alphabet.
[00:37:06] Speaker A: I find this really interesting because, personally, I think they'll win an appeal. I hope they win an appeal.
[00:37:13] Speaker B: Depends on the administration and office at the time.
[00:37:19] Speaker A: The weird thing is, it's common practice to pay for your product to be front and center. Just walk into a grocery store: those things aren't placed randomly by employees, they're carefully mapped out. Kellogg's pays for their things to be on certain shelves, a certain height above the floor, eye level for adults, eye level for kids. Like, there's a huge market in charging people for product placement. And I don't see having Google search be the default search on iPhones as any different from any other kind of product placement.
[00:37:51] Speaker C: I mean, they're essentially forcing Apple into: okay, Google has Pixel and Android phones, so Apple, you can't use Google.
It just feels like a weird thing to do.
Now you're starting to force other companies that are not Google to select what and how they use these tools, which feels weird, too.
[00:38:16] Speaker A: Yeah. I mean, I suppose when you first power up a new phone, it could say, okay, now choose your search provider, and there'd be a list of vendors and you'd have to specifically pick one, and it'd be presented to you once. But then, you know, what order do they go in? Do they go in alphabetical order? What order do they go in? But I think, I mean, Apple cares about their customer experience, and Google search is, and has been for a very long time, the best tool around. Why wouldn't Apple want Google search to be the default search on their phones?
[00:38:50] Speaker B: I guess the question... because, again, a couple of things about this. Apple's reportedly been paid $20 billion a year to make Google the preferred search engine. And Mozilla, apparently 80% of their revenue comes from Google.
So that's a big deal for the Mozilla Foundation. Basically, the argument is that a new competitor in search would not have the dollars to be able to compete with that. And I'm looking at OpenAI, who's talking about building a search capability, and I'm like, well, they have a lot of money and raised capital. I definitely think they could go to Apple and say, hey, I'm willing to pay you $25 billion to be the default search. Then Apple makes the decision based on whether it's good or not. And yes, I'll take your $25 billion, because I'm not silly.
[00:39:36] Speaker A: Yeah.
It's strange. Thinking about it, a new business in the space having a hard time competing is probably the only argument that makes any sense in terms of the ruling, I think.
[00:39:49] Speaker B: But again, I find it weird. The issue they have is around the fact that they're paying for placement in places, and I agree with you, the supermarket is actually a really good defense. But the ad market, which is the bigger area of monopoly in my mind, because they don't allow any other ad networks to be involved in Google search, that's a bigger monopoly to me than the search side. Because, again, with search you'll go wherever you want for search. If someone creates a better search engine, you'll go there. I used to use Yahoo, and then I used AltaVista, and I used Ask Jeeves.
All these different search engines at one point in time. And then Google came along, and it was amazing, and it did the best job, so I just started using that. I tried to use DuckDuckGo, because there was a point where it was like, oh, I shouldn't give Google my data. And it's like, well, I sort of like the things that Google's giving me from its search algorithms versus what I'm getting out of DuckDuckGo, because going to DuckDuckGo was like going back to Yahoo, and I didn't feel like it was as good. Consumers make a choice.
And it's not that hard to switch to a different search provider on an iPhone. It's like three or four clicks. I mean, I know where it's at; it may not be easy for a new user. So maybe the finding is that Apple, in their antitrust suit, has to make it available as an easy choice as part of the setup of a new phone, where you get to pick your search engine. But again, these are silly things that I don't think are huge monopoly problems.
[00:41:14] Speaker A: Yeah. You think they'll win the appeal or.
[00:41:16] Speaker B: No, I feel like they could. I don't think they realized that Firefox and Apple would be such a big point of what the judge would point to as their monopoly power. And so I think they didn't focus their defense enough on it, but in an appeal I think they'll be able to argue their way out of that.
[00:41:36] Speaker A: Yeah. Interesting. It's like Android and Java and Oracle all over again.
[00:41:43] Speaker B: Yep, all over again.
I mean, I do wonder what happens with Amazon and Apple's antitrust case based on this though.
[00:41:52] Speaker A: Yeah, it's weird. I think I've used it as well for things, and it's like, it is 20-year-old technology.
Google has got very good at giving you the answers to what you meant to ask, not what you did ask, whereas other search engines are very much just text indexes. I think Google applies some magic context to what you're actually looking for.
[00:42:17] Speaker C: And my best example of that magic context is, Google by now has to know I am somewhere in the tech world, somewhere in the DevOps world, somewhere in the cloud world. So if I go type "chef recipe" or something, it's not going to give me recipes that a chef uses to cook in the kitchen; it's going to target me towards Chef CM tools.
[00:42:42] Speaker A: I'm going to try that right now. "Chef recipe"... nope, recipes for dinner. Fun result.
[00:42:50] Speaker B: That's always the worst thing about being a Chef developer: trying to find cookbooks for things. You're like, oh, why did you guys choose this, of all the things, that choice of name?
[00:42:58] Speaker A: I will give them credit.
[00:43:01] Speaker C: I enjoy how far down that hole they went.
[00:43:04] Speaker B: Oh yeah, they, I mean they dug into it.
[00:43:06] Speaker C: They went to knives and recipes and cookbooks and everything else, all this stuff. Yeah, I enjoyed the fun of that.
[00:43:18] Speaker B: But also, no one knew what Chef was, either. So it was sort of... it's the risk of cute naming versus functional naming. All right. Azure is embracing the future of container-native storage with the new Azure Container Storage capability. This is a platform-managed, container-native storage service in the public cloud. Azure Container Storage joins their suite of container services, tightly integrating with Kubernetes and simplifying your stateful workload management across Azure's set of comprehensive storage offerings. Azure Container Storage supports ephemeral disks (local NVMe or temp SSD) and Azure Disks. With Azure Disks, you can take advantage of built-in resiliency by choosing between zone-redundant storage options, or multi-zone storage pools on locally redundant storage (LRS), to deliver a highly available solution across zones. Server-side encryption is provided by default with platform-managed keys, and network security is enforced per the respective backing storage options. You can further enhance security by providing your own customer-managed key.
[00:44:12] Speaker A: I'm confused.
[00:44:13] Speaker C: A nice, solid add-on. I feel like you can now actually have the ephemeral disks, no different than EBS-backed containers and all that stuff. So I'm a little surprised it wasn't there to start off with, but you now have it. I'm still not saying this is a good idea, but it's an idea.
[00:44:34] Speaker B: Stateful workloads are definitely a thing, though. Like, stateful workloads in containers. We lost that argument years ago.
[00:44:39] Speaker C: At this point, still fighting here. Still fighting as much as I can. It might not be winning, but I'm fighting it.
[00:44:46] Speaker A: Yeah, stateful containers and ephemeral disks in the same paragraph don't make a lot of sense to me, but I get the cost benefit if you don't need resiliency.
[00:44:59] Speaker C: I read a little bit about this a few weeks ago. Not this specifically, but just containers when it comes to training your own ML models, or going to LLMs and all that type of stuff, where some of it's just: the container needs to boot up, get some data to start its model on, and it's on the ephemeral disk, so it's sort of stateful. Because if the container crashes while it's running its section of the model, it can have problems. But it needs that; it wants that ephemeral disk to load any historical data or anything else it needs to run its future computations. I kind of get that ephemeral local storage attached to it, for that aspect of it. I still just don't like containers that need EBS.
[00:45:48] Speaker B: Well, I mean, there are also cases where you're training AI or LLMs where you want to load stateful data into the container to then make it able to access that disk faster. So there are other use cases, too, that aren't just "run your database on top of Kubernetes."
[00:46:06] Speaker A: Yeah.
[00:46:06] Speaker C: Shielding, you can go build that for us. Just.
[00:46:09] Speaker B: Sure, you're right on that.
Well, you know, earlier I showed my disdain for all the AI announcements we're continuing to do every week by creating three podcast hosts with Titan who were annoyed about talking about AI and not talking about cool things like network firewalls. And OCI delivered for me this week and gave me "announcing tunnel inspection for OCI Network Firewall." This feature allows for a new use case: using threat analysis capabilities with their native virtual test access point service, or VTAP. This combination allows for comprehensive traffic analysis through a dedicated out-of-band channel, enables detection of malicious sources or destinations, identification of inappropriate crypto traffic, and spotting of SSH sessions targeting known command-and-control domains. Packet mirroring, be still my datacenter heart.
[00:46:56] Speaker C: I miss just fun cloud announcements like this. It's a good solid feature that they are adding to the cloud to make a compliance security person happy. This is sometimes what I just miss of just straight core cloud features that we don't get all the time anymore.
[00:47:20] Speaker B: Yeah, I mean, this is what we created the cloud pod for: to talk about stuff like this and why you'd use it. And I don't know that I want to have an out-of-band traffic flow of my encrypted tunnel traffic, but hey, if you need this for security compliance, this is cool. It's great to have it, and you have the ability to do it now. But it has been a lot for the last year now, just so much AI. So I do hope... I saw Andy Jassy talked about needing to make sure you have a successful outcome from all this AI investment, and I saw some other people in the earnings world talking about how all these AI investments need to make money, and there are unclear paths to money for some of these things. I'm hoping maybe our knee-jerk into "everyone has to have all AI all the time" is coming back to some level of the middle, versus being pegged so far to one side or the other. Because I don't mind AI stuff; the old days of having a couple of ML announcements a month, mixed in with our other cloud infrastructure stuff, was fine. But yeah, it's just so much AI right now.
[00:48:24] Speaker A: Yeah, I think part of the problem with AI right now is like, okay, we've been over the image generation thing, and okay, we've done the large language models and yes, we can write some fake stuff. I think the exciting bit is going to be where people come up with some really cool use cases for it.
[00:48:38] Speaker B: But I do think we're limited by the type of AI that we've designed, this predictive, text-based AI capability.
It only has so many use cases before you run into things it doesn't do, what you really need it to do, or what people truly believe AI is, which is the sci-fi version of AI.
There is a limited utility, and I've kind of been saying that for a long time on this show: I do think there is an overhype problem. The bubble is too big for what AI can do. Companies saying they're going to fire all their designers because AI can design things... yeah, you can design people with twelve fingers. So cool.
There's still a need for people who are good at what they do. And so I think people have sort of chased something that isn't real yet, but it could be real soon. I think it's not as good as people think it is.
[00:49:29] Speaker C: I think there are going to be good use cases for it. I don't think the barber shop, you know, the local repair shop, needs an AI bot on their website to do whatever they can think of. But I think there are features in specific products. And, you know, maybe I'm biased, but I feel like some of the stuff that my day job is working on is pretty cool, and where it's going to end up is a good feature that actually will help people. Versus, like, we threw a bot in here, we can type words and make generated pictures. I think if you actually embed it into your technology, into your platform, and make it a part of it, versus, like, we've bolted this thing onto the side that kind of works, but not really... So, like you said, I feel like it's really about the implementation now, of what people do with it.
[00:50:21] Speaker A: Yeah, I mean, I can think of some really good examples, though, of where it would be useful. I mean, you go to the Napa Auto Parts or something: my car's making a funny noise, it's doing this, this thing is happening. You could offload that entirely to AI, and somebody could chat on the website, and it wouldn't cost you the cost of paying an employee to sit and chat with somebody for half an hour about this rattle noise they've got or whatever. So you could still use it as a sales funnel and help people diagnose things, without taking up a person's time.
[00:50:57] Speaker C: All I can think of when you say Napa Auto Parts is The Phoenix Project.
And, like, that's... I might have stopped listening after that, because I was like, oh, well, there they're talking about how to make employees more efficient and everything else by actually having technology work for them. That's all my brain went to.
[00:51:18] Speaker A: Yeah, I mean, I think.
[00:51:19] Speaker B: How many of those books have you read now, Matt? Like, I know you were doing The Phoenix Project, and then I think you said you were going to do the developer one, and then did you get to Investments Unlimited, too?
[00:51:27] Speaker C: We have... we've not done it. We're doing a couple chapters every other week, kind of, you know, actually reading them, talking about them as a group and whatnot, and slowly building up from there. So we are slowly going through them. I did not follow my own instructions to my team; I was like, nope, I'm too impatient, and just listened to all the books. So, yeah, I'm a bad person.
[00:51:52] Speaker A: Remind me not to join your book club.
[00:51:54] Speaker C: Yeah, I'm not patient. Well, I listen to the audiobooks, so I'm like, okay, cool, I'll go take the dog for an hour-long walk and just listen to an hour of the book.
[00:52:03] Speaker A: So, I mean, people do kind of make fun of LLMs today and kind of reduce them to next-word predictors. And I think that, while technically true, is kind of mischaracterizing or underestimating what they can really do for people.
[00:52:23] Speaker B: Because people... it's a gross oversimplification.
[00:52:26] Speaker A: For sure it is. But people communicate using words or text every day. And I think it's very close to being incredibly useful.
[00:52:35] Speaker B: And there are very common speech patterns and ways that people talk, and those things... it just doesn't do the highly creative stuff that people think it's going to do. And that's the part that I think they don't quite get. Like, yes, it can make writing an email to a customer much easier. It can give you boilerplate, it can give you scaffolding for coding, it can do all these things. But the actual part where you have to get creative and do something that is unique and new, it can't do that. It can't invent yet. Maybe someday it will; I just don't know that the text prediction models will get there at the level we want. But we'll see. We'll find out.
[00:53:09] Speaker A: Yep.
[00:53:10] Speaker B: I'm not willing to bet against it at this moment.
[00:53:12] Speaker A: No.
[00:53:15] Speaker B: All right, guys, we'll see you next week here at the cloud pod.
[00:53:18] Speaker A: Yep, see you later.
[00:53:19] Speaker C: See ya.
[00:53:23] Speaker B: And that is the week in cloud. Check out our website, the home of the cloud pod, where you can join our newsletter and Slack team. Send feedback or ask questions at [email protected], or tweet us with the hashtag #thecloudpod.