300: The Next Chapter: How Google's Next-Level Next Event Nexted All Our Next Expectations - and What's Next Now That Next Is Past

Episode 300 April 18, 2025 01:20:18
tcp.fm

Hosted By

Jonathan Baker, Justin Brodley, Matthew Kohn, Ryan Lucas

Show Notes

Welcome to episode 300 of The Cloud Pod – where the forecast is always cloudy! According to the title, this week’s show is taking place inside of a Dr. Seuss book, but don’t despair – we’re not going to make you eat green eggs and ham, but we WILL give you the lowdown on all things Vegas. Well, Google’s Next event, which recently took place in Vegas, anyway. Did you make any Next predictions?

Titles we almost went with this week:

A big thanks to this week’s sponsor:

We’re sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You’ve come to the right place! Send us an email or hit us up on our Slack channel for more info.

GCP

Pre-Next

02:35 Google shakes up Gemini leadership, Google Labs head taking the reins 

04:35 Filestore instance replication now available
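For context on what that announcement’s numbers imply, here’s a quick back-of-the-envelope calculation (an illustrative sketch, not from Google’s docs): a 30-minute RPO at a 100 MB/s change rate bounds how much changed data could be lost in a worst-case failover.

```python
# Back-of-the-envelope: how much changed data can be "in flight"
# (i.e., not yet replicated) at a given RPO and change rate.
# The 30-minute / 100 MB/s figures come from the Filestore
# replication announcement above; the function itself is illustrative.

def max_unreplicated_gb(rpo_minutes: float, change_rate_mb_per_s: float) -> float:
    """Worst-case unreplicated data (in GB) if the replica lags by the full RPO."""
    seconds = rpo_minutes * 60
    return change_rate_mb_per_s * seconds / 1000  # MB -> GB

print(max_unreplicated_gb(30, 100))  # 180.0 (GB of changes at risk, worst case)
```

That 180 GB worst-case window is why, as discussed on the show, this fits file shares far better than anything database-shaped.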

05:16 Multi-Cluster Orchestrator for cross-region Kubernetes workloads

06:26 GKE at 65,000 nodes: Evaluating performance for simulated mixed AI workloads

09:15 How we built the new family of Gemini Robotics models 

09:58 Tuesday Night

Was anyone else weirded out by the scheduling? Did any listeners actually stay until the end on Friday? If you did, we’d love to hear from you.

13:30 The AI magic behind Sphere’s upcoming ‘The Wizard of Oz’ experience 

https://www.youtube.com/watch?v=f01dsTigSmw 

*Show note writer Heather is a curator at a museum that showcases Hollywood’s early history – and is VERY interested to see how the film world feels about this AI rebuilding of such a beloved classic. Some interesting discussions are definitely coming! 

21:11 Next Day 1 Keynote

Ironwood: The first Google TPU for the age of inference

24:30 Ryan – “So I was sort of surprised because they did spend a lot of time talking about inference and this chip handling inference concerns. I thought that was real. I mean, it’s just not the way that we’ve been talking about these custom AI chips in the past, right? It’s definitely been all about model training and building all these things. And the inference is more about running these very large models. And so there did seem to be a huge focus on performance and end user experience with AI development all the way through the conference.”

Google Workspace adds new AI tools to Docs, Sheets, Chat and more.

26:04 Google Agentspace enables the agent-driven enterprise

27:24 Ryan – “Well, so it SEEMS really cool, until you get through the hard edges…a lot of it really relies on your utilization of Chrome Enterprise Premium, and so that’s a whole workspace ecosystem that if you’re not bought into you’ve got a whole lot of heavy lifting to make that work.” 

32:18 New video, image, speech and music generative AI tools are coming to Vertex AI.

AI Hypercomputer updates from Google Cloud Next 25

36:46 Agent Development Kit

43:30 Agent 2 Agent (A2A)    
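Google describes A2A as an open, JSON-RPC-style protocol over HTTP so agents can talk regardless of the framework that built them. As a rough sketch of the idea (the method and field names below are illustrative, not taken verbatim from the spec), one agent would send another a task request like this:

```python
import json

# Sketch of an agent-to-agent task request in the JSON-RPC style A2A uses.
# Method and field names are illustrative; consult the A2A spec for the real schema.
def make_task_request(task_id: str, text: str) -> str:
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tasks/send",  # A2A-style method name (illustrative)
        "params": {
            "id": task_id,
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            },
        },
    }
    return json.dumps(request)

payload = make_task_request("task-123", "Summarize this financial statement")
print(json.loads(payload)["method"])  # tasks/send
```

The point is that any agent, whatever model or framework is behind it, only needs to speak this wire format to participate.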

Google Unified Security

Cloud WAN

Meta Llama 4

Other Day 1 items:

We’re introducing a new way to analyze geospatial data.

48:20 Next Day 2 Keynote

52:16 Vertex AI Agent Engine

Data analytics innovations at Next’25 | Google Cloud Blog

Gemini Code Assist in IDEs

Software Engineering Agents – now in preview! 

Google Cloud Next 2025 Wrap Up:

1:07:22  Google Next Predictions

Closing

And that is the week in the cloud! Visit our website, the home of the Cloud Pod, where you can join our newsletter or Slack team, send feedback, or ask questions at thecloudpod.net, or tweet at us with the hashtag #theCloudPod.



Episode Transcript

[00:00:00] Speaker A: Foreign. [00:00:06] Speaker B: Welcome to the Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP and Azure. [00:00:14] Speaker C: We are your hosts, Justin, Jonathan, Ryan and Matthew. [00:00:18] Speaker A: Episode 300, recorded for April 15, 2025: the Next Chapter. How Google's next-level Next event nexted all our next expectations, and what's next now that Next is past? Yes. I nailed it. [00:00:31] Speaker C: First try. [00:00:32] Speaker A: First try. [00:00:33] Speaker D: Bravo. [00:00:34] Speaker A: I was not sure I could pull it off, but I'm very pleased about it. So, yeah, there we are. We're back. Back from Vegas. My liver has recovered; my sleep schedule has not. Ryan was there, although if you went to either of my keynotes, he did not make it to either one of them, which I'm not surprised about, to be honest. I promised the listeners he'd be there and he did not deliver, because he was hungover. [00:00:57] Speaker D: It's Ryan. [00:00:57] Speaker A: It's Ryan. It's fine. It's just what we expect. But I was there at both of my talks. They went great, I thought. I did meet a couple of listeners who came up and got stickers, so that was great. Good to meet them. And they can write all their complaints to [email protected] about him not being at the conference. He will gladly ignore all those emails, just like he does his work emails. [00:01:19] Speaker D: I was like, does he even have this set up anywhere? [00:01:22] Speaker A: He does. He does have [email protected] set up. Yeah. [00:01:25] Speaker D: But is it connected to his phone or anywhere he has a potential. [00:01:28] Speaker A: Oh, no. I think it forwards to his real email address, because he doesn't log into his Cloud Pod address ever. [00:01:34] Speaker C: Facts. These are all just facts. [00:01:36] Speaker A: Yeah, we love you, Ryan. We just accept these things. [00:01:41] Speaker C: Yeah.
[00:01:44] Speaker A: Well, it was a very busy week. There was some stuff that came out before Next, so we'll touch on some of those real quickly here, and then we can get into the meat of this week's episode. This is not going to have any AWS or Azure news this week. We will save all that for next week. This is officially our 300th episode, which we're very excited about, but to do it truly justice, we need to wait till next week, when we're not just coming off of Google Next. So we will talk all about 300 episodes next week and how it's going, et cetera. [00:02:12] Speaker C: I think it fits well with the Cloud Pod. [00:02:13] Speaker A: Right? [00:02:14] Speaker C: We're going to be late. [00:02:15] Speaker A: Yeah. Always the way it is. [00:02:18] Speaker D: Just like the cloud providers. We'll talk about stuff that's going to be released in the next year. [00:02:23] Speaker A: Yeah, exactly. [00:02:26] Speaker C: Now in preview: episode 300. [00:02:30] Speaker A: All right, well, first, before Next, Google shook up their Gemini leadership team, making the Google Labs head take over all of the Gemini work. This is an effort to continually streamline their AI presence. The DeepMind CEO, Demis Hassabis, is basically doing this to sharpen their focus on the next evolution of the Gemini app. There's a lot of Gemini news at Next; we'll talk about it here in a minute. But they're basically trying to get even more focused than they already were. This has been a slow collapsing of multiple AI efforts into one cohesive team and really turning AI into a platform team, which it should be. So it makes sense to me that they're doing that. [00:03:05] Speaker C: They didn't really frame that as, like, a combination. Like, you know, this is more like a firing, I think. [00:03:14] Speaker A: Like, I mean, you could maybe say that. I was trying to be kinder, you know, help the guy out with his next job. But, you know, it could be a termination. You know, that's fair.
[00:03:23] Speaker C: Well, I mean, it's interesting, because going through the article, a lot of the, you know, the new head of AI, which I'm forgetting... the new head of Gemini, there we go, has, you know, contributed to a lot of the major Google AI announcements that have come out recently. And, you know, the outgoing leader was part of the Bard, you know, debacle. And I don't know if it was... you know, I obviously don't work for Google, so, yeah, I don't know if there were any other things, but the way the article was written, there wasn't a whole lot of other announcements by the person who's outgoing. So, you know. [00:04:12] Speaker A: Yeah, I mean, it seems to be en vogue... sorry, in favor... this week, as Apple Intelligence also fired the head of, you know, the new Siri, which has been a disaster. So makes sense. You know, everyone's holding people accountable for not hitting their roadmaps, apparently. Well, in addition, Filestore instance replication was announced, which helps you meet your business continuity goals and regulatory requirements. The feature offers an efficient recovery point objective (RPO) that can reach 30 minutes for data change rates of 100 megabytes per second, which probably just eliminated a bunch of databases that you would like to have used this tool for. But it's not a database. You now have an option. [00:04:49] Speaker D: It's an impressive feat, 100 megabytes per second with a. [00:04:52] Speaker A: 30, until you get to your database, and then it's less. [00:04:56] Speaker D: Well, yeah, but this is for file shares. [00:04:58] Speaker A: True, true. [00:04:59] Speaker D: And if you're running your database on top of a file share, I feel like you're doing your job wrong. Just saying. [00:05:07] Speaker A: Agreed. And in a point killer for me, they introduced Multi-Cluster Orchestrator to scale your Kubernetes workloads across regions.
They announced the public preview of Multi-Cluster Orchestrator, a new service designed to streamline and simplify the management of workloads across Kubernetes clusters. Multi-Cluster Orchestrator lets platform and application teams optimize resource utilization, enhance application resilience and accelerate innovation in complex multi-cluster environments. You get intelligent resource optimization, enhanced application resilience and tighter integration with existing tools like Argo CD, et cetera, which is a nice improvement. I probably wouldn't have taken a point for this, just to be honest, because it's not that close to what I said, but I would have tried. [00:05:43] Speaker D: I actually was trying to remember what you said that was multi-clustered, and I was like, wait, this doesn't really fit. [00:05:49] Speaker C: Well, this wasn't in the keynote, right? [00:05:51] Speaker A: This is a... no, this is all pre-Next, so there's no point for it anyway, regardless. [00:05:56] Speaker C: But I think, yeah, it would have. [00:05:57] Speaker A: Counted if it would have been close. I don't know. As I said, "unification of major enhancement of Anthos and GKE Enterprise," and this isn't quite that. I would try to argue it for you, though. But yeah, and you guys would have rejected my point. [00:06:09] Speaker C: I would have argued against it then if you had gotten the point. But now that you didn't get the point, I think you should have got it. [00:06:14] Speaker A: Yeah, but it wasn't at the keynote, so no chance whatsoever. For the sadists out there, you can now run 65,000-node clusters, up from 15,000 nodes, which I think we talked about last time. This can also include your TPU-based clusters for Google Kubernetes Engine. If you need massive, massive Kubernetes scale, you can now do that on GKE, and I look forward to seeing your therapy bills. [00:06:38] Speaker D: I just want to know what customer requested this.
[00:06:41] Speaker C: So this isn't 65,000 nodes in a single cluster? [00:06:45] Speaker A: Even if it's not in a single cluster, it's too many nodes. [00:06:49] Speaker C: Oh, 100%. I was going through the Multi-Cluster Orchestrator announcement going, why? Oh my God. Because even that, I'm like, I don't know if we really want to start bridging all these things, but there's probably enhancements to be had, and it looks like they were addressing some of my concerns in that announcement with the regional sort of deployment of applications. So we'll see. But I mean, this seems to go against a lot of things that I've been saying for a while, which is, like, you know, consolidate your workloads, manage these things with smaller little pods, and distributing globally has very limited return for the amount of operational overhead. If you're at 65,000 nodes... yeah, I don't even know. [00:07:30] Speaker D: So is it 65,000 nodes per project, per organization? I thought this was multi-cluster. Maybe I'm wrong. [00:07:40] Speaker C: Oh, it is. It can support a 65,000-node cluster, up from 15,000. So it is a single cluster. Why? No. [00:07:56] Speaker A: Now he's mad. [00:07:58] Speaker C: I am mad. [00:07:59] Speaker A: He went from, like, "I'm okay with this" to "I'm now really mad that you're doing this." [00:08:04] Speaker C: Well, I mean, it just enables. [00:08:05] Speaker A: Yeah. If only there was video on this podcast. People would have enjoyed that moment of realization on your face that I just saw. [00:08:12] Speaker C: That was fantastic. [00:08:14] Speaker D: I mean, in the article there's a lot of, like, other interesting facts that they work through, like, you know, pulling images, pod startup, you know, and kind of all the other pieces of it when you have something at this larger scale, which is interesting. But if you're running something at 65,000 nodes... I really want to just go launch 65,000 nodes in a region just to see what Google does.
Also, as long as we're here: is there enough capacity? [00:08:39] Speaker C: Let's start with that. [00:08:40] Speaker A: Well, first of all, you need to make sure you have enough credits. [00:08:42] Speaker D: Right. [00:08:43] Speaker A: Except one, you have to go get quotas. Yeah. Then you gotta get the quota big enough, and then you can try to make Google pay attention to you. [00:08:50] Speaker C: Yeah, we'll definitely hit a stockout at that point. [00:08:53] Speaker A: Yeah. Well, maybe they're trying to run all these robots on GKE nodes, perhaps. The DeepMind team released a bunch of updates on their new family of Gemini 2.0 models designed specifically for robots. And if you are worried about the end times of Terminator, now is a good time to read these articles and understand the weaknesses of your robotic companions, if you ever need to take them out. I don't really care that much about that, but I think it's cool they're using AI to solve a lot of big problems. [00:09:25] Speaker C: I mean, I didn't read the article, but I assume it's, like, go for the eyes. [00:09:29] Speaker A: I assume so. Go for the sensors. That's what I would recommend. Don't assume they're in the eyes. That's the trick. You got to know where the sensors are. [00:09:36] Speaker C: Yeah. [00:09:37] Speaker A: All right, that gets us through pre-Next, and now things get interesting. So first of all, for this conference, they decided for some reason that they would run it from Tuesday until Friday. Technically Wednesday through Friday for most people; if you're an executive or you're going to a GCE event or any of these long workshop days, those are all on Tuesday, which meant really weird traffic patterns. So everybody shows up on Tuesday afternoon, and then they all leave Thursday night or Friday morning. It doesn't matter if you run the conference till 3 o'clock on Friday; they're going to go. That's just how it's going to work.
So I don't exactly know why Google decided to do that. I don't actually care for it. I thought I would: when they first announced it and I realized it, I was like, well, that might be nice, actually. And now that I've done it, I can tell you I don't like it. Tuesday through Thursday is the epitome of a week-long conference. That way you can travel there on Monday and you can travel home on Friday. That's how it should be. I'll debate you on it all day long, but I think that's the right schedule, and I'm going to stick to it, and I will put it in my feedback to Google when they send out the surveys for the event. [00:10:44] Speaker D: Wait, so AWS is Tuesday? Monday is all day. [00:10:49] Speaker A: Well, Monday, Monday is kind of. [00:10:50] Speaker C: There's Monday Night Live. [00:10:51] Speaker A: Monday Night Live is the beginning of the conference. [00:10:53] Speaker D: No, there's Sunday night. Sunday. [00:10:55] Speaker C: Well, no, no, no, no. Quiet with your partner nonsense. No, no, no, no. [00:11:00] Speaker D: It wasn't partners. There was, like, the wing-eating contest. That was at midnight on Sunday. [00:11:05] Speaker A: No, that's, like, Monday night or Tuesday night. [00:11:07] Speaker D: Oh, it's on Monday night. [00:11:08] Speaker C: Yeah. [00:11:09] Speaker D: Well, I know on Monday we did, like, the hackathons and other things. [00:11:13] Speaker A: I mean, it's been a while since I've been to re:Invent, but I'm not going to fight you on this as hard. But December 1st, December 1st technically was Sunday in 2020. Oh, this is 2025. This is the stream. All right, hold on, wait, sorry. December 1st. We use calendars. Monday Night Live is... Monday is the gem of the conference for Amazon, in my opinion. And Monday during the day, you may have partner things, you may have an executive thing, you may have security things, you may have a training certification thing. They have all that on Monday, which is typical.
And then basically, like, Friday they have, like, a bunch of rerun sessions for people who really wanted to get into one that was really full on Tuesday, Wednesday or Thursday. No one goes to the Friday sessions. I've done one one time. I was like, this is laughably silly. [00:11:56] Speaker D: I mean, I agree with you. I like the Amazon way. I'm just trying to liven up the show. [00:12:02] Speaker A: Sometimes playing the active part of contrarian is Matt. Do we know he's playing contrarian? No, we don't, but we assume he's not, and then we get mad at him, and that's how that goes down. [00:12:11] Speaker C: Can you adopt a British accent? It'd be really easy for us to tell it apart. [00:12:15] Speaker A: We could definitely tell you're being more contrarian if you're British. [00:12:17] Speaker C: For sure. [00:12:18] Speaker D: I'm not even going to try one. I was thinking about it. I don't think you could do it. [00:12:22] Speaker A: So now let's go back to the conference. So Tuesday, if you were a special invited person, you got to go to the Sphere. And I don't know who exactly got invites. I got an invite to it because I was part of the executive groups that were there, the executive talk track. My FinOps guy was there. I don't know how he got there. [00:12:39] Speaker C: I had an invite, but no one told me. So it was just languishing away in my inbox, and it went unnoticed. I was really upset about that. Yeah. [00:12:46] Speaker D: See the beginning part of this podcast about reading your email. [00:12:48] Speaker A: Yeah, yeah. So anyways, for those of you who got this invite, basically they said, come to the Sphere for some mystery event. Now, secretly I was hoping it was the Eagles, just because Ryan loves the Eagles and they do have a residency at the Sphere right now. And I thought that'd be kind of fun if Google had a surprise private concert of the Eagles, and if I could get Ryan there, it'd be even better.
[00:13:12] Speaker C: Oh, the headlines. Like, oh, that would have been bad. I would have gotten arrested. [00:13:16] Speaker A: Yeah, it would have been bad, but it wasn't that. So I arrived. Yeah, they said to get there at some time, which I don't recall, but my Uber driver took forever to get there. He was very mad at the traffic. He was like, what's going on? I don't understand why all this traffic's here. And we're like, it's a Google thing. And he's like, is it the concert? And I'm like, I don't know what it is. We don't know what it is. We're just showing up at the Sphere. And so I walked in, like, maybe 15 minutes after it was supposed to start. I got my free soda, I got my popcorn, I sat down, and it's just an endless sky in black and white. And I instantly knew we were talking about The Wizard of Oz. Now, if you hadn't been paying attention to anything in the media sphere the week before, there was an announcement that came out from the Sphere that I saw and ignored 100% because I did not care. And that announcement was that the original 1939 classic Wizard of Oz movie was going to be presented at the Sphere. And at the time that was announced, I said, oh, okay, cool. It's a movie theater, really. And if you've been to the Sphere, you should have known better; Justin didn't connect the dots in his head at the time. Because the reality is you can't take a movie that's a 4:3 aspect ratio and put it on a screen that's 16K in resolution. It's not going to work, because that's a really big screen. I mean, it's massive. I mean, you literally feel like you're in other places when they do stuff on the screen. And so when I walked in, other than the fact that it was in black and white instead of color, I would have thought I was in Nebraska or Kansas or wherever this movie was filmed, in a field of corn with a windmill in the background. So this is pretty impressive already: you know, they have this big screen and they have wind effects.
So as this tornado, which you know is coming, is approaching, and the storm... you know, basically it starts at night and then comes into daytime before it starts. And there's lightning and thunder, and you can tell there's a storm coming. But that was basically just kind of, like, well, that's just 3D-generated AI stuff. It's kind of cool. But then they get into the technical details of what they actually have to do to make the 1939 Wizard of Oz movie work in this projection. And they have to basically create the entire movie all over again, but with AI. And so they partnered with Google DeepMind to basically create, with Veo 2, which is their video AI system, this version of The Wizard of Oz. They took all the source material from Warner Brothers, all the, you know, the National... whatever; I think the Oscar foundation had a bunch of archive footage of it, all the movie, all the cutting room, you know, all the stuff on the cutting room floor. They scanned all of that into an AI model, and they basically built out a brand new AI model. And so when you typically will see this movie, they will have upscaled the characters, because again, they're filmed with 1939 technology at a 4:3 ratio, so it has to be upscaled to this size. And then they're also adding people into film that did not exist; that film does not exist in any way, shape or form. So there's a scene very early in the movie where one of the characters comes from off screen, which we consciously know is off screen in the movie, because he walks into frame and he puts his arm and hand on the shoulder of one of the characters, and he comes into frame that way. So they had to basically recreate this actor digitally, give him all the personality traits, the way he walks in the movie, his gait, etc., and then have him literally be standing in the frame before that happens and then walk in and do what he does in the movie.
And they had to do this for the entire movie. There's a close-up of Dorothy. Well, guess what? If she's dancing in this close-up, she's also dancing in real life. But they don't have the footage below her stomach. So they have to basically digitally create her legs and the ruby red slippers on her feet and show her dancing in the scene that was filmed basically 100 years ago. Amazingly impressive use of AI. I was blown away. And they showed us two scenes of the movie. They didn't show us the whole thing. They showed us Dorothy singing "Over the Rainbow" in the black and white section, and then they showed us the scene where they meet the wizard in Emerald City. And both were cool. A little uncanny valley, I will say. Again, it's not going to be out officially until August, so there's a lot of work to be done. They're very busy. They talked about it in the video they presented, and they had a whole, like, you know, walkthrough of all this stuff that I hope they put onto YouTube at some point so you can see the technology involved in making this movie. Or maybe they'll air it before they show the movie so people can really, truly appreciate this thing. It's a technical marvel that I am very excited to go see now in August. And a week ago I would have been like, yeah, yeah, whatever, Wizard of Oz, don't care. And that was a special sneak preview. And then, you know, Sundar kicked it off in that, and then TK at the very end of it welcomed us all to Next and said, have a great week. That was the beginning of the conference. [00:17:48] Speaker C: And I was already exhausted. [00:17:52] Speaker A: Well, I did drag you out the weekend before; we actually took you out partying all weekend because it was my birthday. And so we celebrated my birthday in style. We had a great time. But we made Ryan and me very tired before the conference even started.
[00:18:04] Speaker C: And you dragged me to the compute all-day session as well. [00:18:08] Speaker A: I did drag you to that too. It's because you were gonna sit around doing nothing. What are you gonna do? [00:18:11] Speaker C: It's true. [00:18:11] Speaker D: Recover. [00:18:13] Speaker C: Yeah. [00:18:13] Speaker A: Recovery is when you come home, man. Once you're in Vegas, you're in Vegas. You got to do it all. Only one thing can take down Ryan. That happens later. [00:18:21] Speaker C: Yeah. [00:18:23] Speaker A: All right. So that was the beginning of the conference. That's not as... I mean, pretty nerdy. I mean, I think just as nerdy as Monday Night Live, but much more focused. So I don't know how they'll top that next year. Maybe they're converting other movies. But then I was thinking about all the movies I want to see them convert to the Sphere format. Like, I would love to see Jurassic Park in this format, because a 30-story-tall T. rex running at me sounds awesome, with, like, the wind effect. [00:18:49] Speaker D: Of it running by you and everything else. [00:18:50] Speaker A: Yeah, yeah. Like, there's all kinds of cool movies that would be awesome in this format. So I assume if this is a success, we'll see other movies. There are some other things I heard later on through my people. Like, you know, one of the challenges is that cuts on that size of screen make you disoriented. So they do a lot more panning and a lot more digital effects to move you between scenes, because it's too jarring; it'll make you not feel good. So lots of things you learn about how the Sphere has to be designed and thought through for some of these types of things, but it'd definitely be cool to check out. [00:19:22] Speaker C: It's pretty wild. I mean, yeah, you start talking about some of those things where they have to literally change the way they show you things because of how unique that medium is. Blows my mind. [00:19:32] Speaker A: Yeah.
Well, I mean, right now there's only one of these screens in the entire world, and there is one being built in Dubai, I think is what I heard. So there's only a few opportunities. Actually, the movie from the thing is available, so you can see it. I will post that in our show notes: the little 30-minute video they showed us of the making-of and how they're doing it. And it's a partnership between Warner Brothers, the Sphere, some studio whose name I'm blanking on, which is disrespectful to them because they're doing a great job on this... oh, Magnopus. And then Google Cloud, to do all this work. [00:20:10] Speaker C: It touches on some of the tech and stuff behind the Sphere as well. Like, it was really... I watched it as well, all the way through, and I liked that a whole lot. [00:20:17] Speaker A: Yeah, very, very cool. All right, enough about Wizard of Oz, because you guys want to talk about cloud, I'm sure. All right, so getting into the day one keynote. Oops, sorry. Go ahead, Matt. You have something? I was going to. [00:20:28] Speaker D: Say we should name this show, like, "Off to See the Cloud" or "Off to the Next" or something. [00:20:31] Speaker A: Where were you at the show note title spot? I was coming up with ridiculous puns on what we did last week. [00:20:37] Speaker D: We probably still would have selected that. [00:20:39] Speaker A: Because it was so good, nailed on the first take. Just want to point out once again, I'm still proud of myself. It's been however many minutes of recording, and I'm still happy about that. All right, day one. Sundar comes out and, once again, drops the news: they announced something called Cloud WAN. I was like, that's random. And then that they were supporting Meta Llama 4. And then he basically handed it over to TK, who then went through a bunch of announcements with a bunch of guests on stage, including Ironwood, the first Google TPU for the age of inference.
Significant performance improvements over the previous inference chips. We'll talk about that in just a second. Google Workspace added a ton of new AI tools to Docs, Sheets, Chat and more. Agentspace, which is basically Google's answer to whatever Q does at Amazon, and ChatGPT Enterprise, etc. It lets you basically connect to all of your corporate enterprise systems and use ChatGPT-type interfaces, to Gemini of course, to access all those things. New video, image, speech and music generation AI tools are coming to Vertex; we'll talk about those in a second. AI Hypercomputer updates for Google Cloud Next: so if you want really big, expensive boxes, you can get them on Google Cloud. Next, the new Agent Development Kit, which is an open source framework simplifying the process of building sophisticated multi-agent systems while maintaining precise control over agent behavior. And it supports MCP. And then they created Agent2Agent, which, they say, makes them the first hyperscaler to create an open agent-to-agent protocol to help enterprises support multi-agent ecosystems, so agents can communicate with each other regardless of the underlying framework or model. This is a combination of MCP plus these connectors through this protocol, so agents can talk to each other without having to do anything special, regardless of whatever AI built them to begin with. And then Google Unified Security, which is a solution that brings their visibility, threat detection, AI-powered security operations, continuous virtual red teaming, the most trusted enterprise browser, and Mandiant into one converged security solution. And, like I said, Meta Llama 4. So, jumping back up to Ironwood: basically, this thing, when scaled to 9,216 chips per pod, can do 42.5 exaflops, which they commented on stage is 24 times the compute of the world's largest supercomputer, El Capitan, which offers just 1.7 exaflops per pod. And this is all on a chip. Like, crazy.
Each individual chip boasts peak compute of 4,614 teraflops, which is a monumental leap. They showed peak performance figures across the larger TPU generations, basically 1.35x up to 3,600x improvements in the FP8 peak flops performance benchmark, and they showed quite a few RAM enhancements, including 192 gigabytes of RAM and much more throughput. You get significant performance gains while also focusing on power efficiency, allowing AI workloads to run more cost-effectively. You get a substantial increase in high-bandwidth memory and dramatically improved HBM bandwidth, reaching 7.2 terabytes per second per chip, and enhanced inter-chip interconnect bandwidth, which is up to 1.2 terabits per second. So, a pretty large improvement. 14.6 peak flops per watt was the Trillium TPU; this new Ironwood TPU is 29.3, and they like to poke at that one a little bit, because Trillium, I believe, sounds like Amazon's Trainium, if I'm correct. And 5.2 was the v5p, which was the fastest TPU before that. So Google is ready for the inference age, they're saying, basically, with this new capability. They also highlighted how it's helping AlphaFold do even more than ever before in their quest to bring protein folding to the masses. [00:24:08] Speaker C: So I was sort of surprised, because they did spend a lot of time talking about inference and this chip handling inference concerns, and I thought that was real. I mean, it's just not the way that we've been talking about these custom AI chips in the past, right? It's definitely been all about model training and building all these things, and the inference is more about running these very large models. And so there did seem to be a huge focus on performance and end user experience with AI development all the way through the conference. So I thought it was really interesting. First time I thought of it, anyway. [00:24:48] Speaker A: Yeah. Well, they did declare this the year of inference.
In their opinion, this is the year that big workloads are being delivered with AI agents and all kinds of different AI use cases, and the ability to do inference at scale and economically is going to become a big driver of compute. And so that was kind of why, I think, they went that direction this year. [00:25:06] Speaker D: I mean, I think it also kind of makes sense, right? We have all these models, we have them all now. Yeah, they're going to keep growing, they're going to keep getting better. But at one point we've got to leverage them, you know, and we need to have the chips available to leverage them. So it's just a matter of time until they need them. So like you said, this is Google saying we're ready, come actually use our platform for these things. [00:25:33] Speaker A: Yeah. Well, the general availability of Google Agentspace. This product puts the latest Google foundational models, powerful agents, and actionable enterprise knowledge in the hands of employees. With Agentspace, employees and agents can find information from across the organization, synthesize and understand it with Gemini's multimodal intelligence, and act on it with agents. Lots of partners here: Wells Fargo, KPMG, Cohesity, lots of SaaS vendors involved. But basically, to do something with Agentspace, you're going to give employees access to Agentspace. Unified enterprise search, analysis, and synthesis capabilities allow you to discover and adopt agents, quickly correlate data together, especially with Agent to Agent, and deploy Google's built-in agents, such as the new Deep Research and idea generation agents, to help employees generate and validate novel business ideas. It gives you unified agent search directly from the search box in Chrome. So you can basically just ask Agentspace something like "summarize this financial statement" and it'll do that for you.
You can ask it to query Jira, you can ask it to go query your SaaS applications, whatever system you want. It'll basically do that and provide you a very simple rundown of what it finds. It can also then create audio podcasts of that content, just like NotebookLM, which is probably the predecessor to this. All available to you now in an enterprise-grade system with enterprise-grade data protection and security. I'm hoping my day job adopts this because this looks really cool. [00:26:54] Speaker C: Well, so this seems really cool until you get to the hard edges on this. This is something that I was looking into before. [00:27:04] Speaker A: I don't know, I think it was early access in December, GA'd at the conference. That's a good point. [00:27:11] Speaker C: Well, even now it's still allowlist only. You have to sign up for it and fill out a special form. And, you know, a lot of it really relies on your utilization of Chrome Enterprise Premium. And so that's a whole Workspace ecosystem that, if you're not bought into it, you've got a whole lot of heavy lifting to make that work. There's also, you know, a lot of these things... like, it's great to announce an AI agent that can ingest all the data in your enterprise, but the reality is that no enterprise is ready for that yet. No one is. I mean, they have figured out how to build individual maps of access so that an AI model can return results for only data you have access to. But it is not real likely that you have your data labeled correctly where it can be detected who has access to what. And so the controls on this are pretty minimal, in terms of, like, there's PII data and financial data and patterns that they can recognize. But this would be really hard to implement in a way where you're not getting access to, you know, all the financial and HR data, how everyone's paid, you know, if it's launched without any care.
[00:28:24] Speaker A: Well, yeah, like all good tools, you need to be careful how you roll it out. [00:28:30] Speaker C: It's just, it's a big lift to get those in there. [00:28:34] Speaker A: I mean, this is one of the big problems we had originally with Amazon's solution, you know, before Q, which I can't remember the name of now because they deprecated it in favor of Q's version. But that was one of the problems they had there too: how do you deal with the access model across all these tools and access layers, etc.? [00:28:50] Speaker C: And it's, you know, it's not an AI problem, it's a data problem. Right. That's the trick. And you know, if your data isn't labeled or tagged or organized in a way where you can deduce those things, a model isn't going to be able to understand that either. So we're all going to have to go through our enterprise data and care for it, which is going to... [00:29:12] Speaker A: Suck. The age of data governance might be upon us, right along with the age of inference. [00:29:19] Speaker D: Yeah, it's going to happen where somebody just tells one of these tools to run their organization and it ends up spewing out a bunch of internal private information on the Internet. All of a sudden everyone's going to lock down all these systems real fast. So it's just a matter of that first company slipping up and it becoming public, I feel like. [00:29:37] Speaker C: Yeah, I mean, I don't know. This is, you know, an internal tool. The whole thing is geared to be sort of, you know, for employee access. [00:29:46] Speaker D: But you mess up permissions and your, you know, low-level employee gets access to their K-1 before it gets... [00:29:54] Speaker C: I would hope that all the model execution is completely separate from any kind of production thing.
[00:30:02] Speaker D: What? I didn't realize that Google Chrome Enterprise had a premium tier, that it wasn't just the normal thing but its own product line. I never realized that they've been building... [00:30:12] Speaker C: That up, and they've been tying a lot of functionality and features into it, especially security tooling, where I'm at the point of just resisting because I'm angry about it, because I don't really want to dictate which browser people use for accessing workloads. But Google's sort of forcing it, and I think they're going to run into antitrust issues just like Microsoft did back in the day. And I think this is all about to come crumbling down in a bad way, because there are features and different access patterns for X and Google Cloud and all these things that you only get if you're managing your browsers using Chrome Enterprise Premium. [00:30:52] Speaker D: Well, Palo Alto also, I think, released their own web browser for DLP protection, which is like one of the big ones I see here. DLP, URL filtering, malware protection... I think Palo Alto recently released their own product to kind of do the same type of thing. [00:31:10] Speaker C: That's very interesting. [00:31:12] Speaker A: Their own browser? [00:31:13] Speaker D: I thought so. I thought it was like their own browser. Palo Alto. I mean, I'm sure it's... [00:31:18] Speaker A: Yeah, yeah, I think it is their own browser. It's based on Chromium, of course, but I assume so. [00:31:23] Speaker D: I assume everything is based on Chromium at this point. [00:31:26] Speaker C: I think so, yeah. [00:31:27] Speaker A: I think even Firefox is these days. I think so. [00:31:30] Speaker D: Isn't IE now too? Or Edge? [00:31:33] Speaker C: Edge, yeah. [00:31:34] Speaker A: Not the same thing. IE and Edge are different. [00:31:36] Speaker D: Yeah. [00:31:38] Speaker A: All right, well, let's talk about AI. So they have new video, image, speech, and music generative AI tools.
The first one, Lyria, which is Google's text-to-music model, is now available in private preview with an allowlist on Vertex AI. This means customers can generate complete production assets starting with a text prompt. They even did a demo of this in the keynote, and I was like, I don't... I don't understand. [00:31:59] Speaker C: Yeah, I always have a hard time with this, because music is an art form, and if you want, you know, to generate some pretty milquetoast music that isn't very unique, yeah, this is your tool. [00:32:18] Speaker A: But they all start sounding like the same, you know, electronica keyboard, you know, with the automatic beat track in the back. [00:32:24] Speaker C: Yeah, well, I mean, they're doing a much better job these days than, like, you know, back in the day with the... what was the AWS music instrument thing? They're using real instrument sounds now, so it doesn't really all sound like one type of sound. But it's still sort of like, you know, I don't care what prompt you put in there, it's not going to come up with anything that moves you. [00:32:47] Speaker A: All right now. Challenge accepted. [00:32:51] Speaker C: Yeah. [00:32:51] Speaker A: Get to work. Veo 2 has new editing and camera controls available in preview that allow enterprise customers to refine and repurpose video content with precision. So pretty cool stuff. They showed a demo where, you know, a person is in one spot in a frame, and then in another frame later he's in a different position, and it'll basically fill all the frames between those two positions automatically, which is kind of cool. Or they showed, you know, you put a GNX car into a video and they said, we want to do a pan shot of it doing a donut, and basically it generated all of that with a panning camera shot, which is pretty neat.
After the Wizard of Oz thing, I definitely want to play with Veo 2. I'm curious what I can do with this thing. Chirp 3, which is about to replace Jonathan, because it can create instant custom voices with just 10 seconds of audio input. So it only needs 10 seconds of Jonathan to be able to give us a new Jonathan. [00:33:46] Speaker C: Yeah, what our listeners don't know is he's not taking some time off from the Cloud Pod. We're uploading his consciousness into the machine. [00:33:56] Speaker A: That's why there's a lack of AI capacity out in the world right now, because it's all going to Jonathan's brain. [00:34:01] Speaker D: Do you guys ever watch the show Upload? [00:34:03] Speaker A: Yes, of course. [00:34:05] Speaker D: I couldn't think of what it's called. Upload, that's right. [00:34:09] Speaker A: What's that other one, too, with the stacks? What's that one? That's a Netflix show, but it was a book before that, where they upload it. [00:34:17] Speaker C: Oh, Altered Carbon. [00:34:19] Speaker A: Yes, Altered Carbon. That's a good one too. [00:34:20] Speaker D: I haven't seen that. [00:34:21] Speaker C: And the book is fantastic. [00:34:23] Speaker A: The book is so much better than the TV show. Like, the TV show's fine, but the book is really, really good. And then finally, the last AI tool: Imagen 3 has improved image generation and inpainting capabilities for reconstructing missing or damaged portions of an image, and makes object-removal edits even higher quality, making it easier for you to hide all those pesky rigs that are holding people in the air in superhero movies. [00:34:45] Speaker C: Yeah, much like the music... well, I don't want to say it that way because I was pretty down on the music, but, you know, these are neat little things for video editing and processing. I do think it's going to bring a whole bunch of capabilities that people who aren't, like, expert drone pilots or, you know, camera rig operators wouldn't have otherwise.
So it is going to be interesting there. And you can even see the music applications for that too, for background music behind a scene, or, you know, that kind of thing. But, you know, it is kind of interesting to see how it is sort of going to take over, and I think we're not even going to know that it's behind a lot of movie development. [00:35:26] Speaker D: You know, I mean, it could also be good for, like... a couple years ago I converted all my family's VHS to digital; even taking some of those that were damaged and kind of filling in those in-between gaps would be really nice. [00:35:42] Speaker C: Yeah. [00:35:44] Speaker A: Then you get into the AI Hypercomputer. This is AI-optimized hardware with their new 7th-generation TPU powering it. Of course, software advances for inference with their updated AI Hypercomputer software layer, helping developers optimize compute resources, as well as flexible consumption models from spot to committed usage, et cetera. They also announced an inference gateway, which is basically a fancy load balancer for inference, to allow you to pass a query on to the right type of model. So you might have some queries that you detect are very easy for a standard LLM to answer, versus one which requires more deep thinking or reasoning that can be answered by a different model. So you get the best cost optimization in your models. [00:36:26] Speaker D: So I read about that, and I was trying to understand it a little bit more. Does the load balancer do it? Or would you, like a traditional layer 7 load balancer, you know, have to kind of route it based on URL or whatever you pass to it? Or would the load balancer kind of know where it wants to send that data? [00:36:44] Speaker C: So it's based on the prompt, right?
So based on the incoming prompt, there's basically a router agent right at the beginning that's going to look at the prompt and be like, you know, based off of this, it's best and most efficient if I send this to the text-bison thing, because it's just summarizing a blob of text versus creating an... [00:37:02] Speaker D: image. With the load balancer? [00:37:04] Speaker C: Okay. Or allowing you to input the configurations as part of the service. [00:37:09] Speaker A: I should mention this runs on top of GKE. It's a GKE Inference Gateway. It provides inference-optimized load balancing based on load metrics, support for dense multi-workload serving of LoRA adapters, and model-aware routing for simplified operations. Look into all of that. Basically, it uses a bunch of calculations in the LoRA capability, as well as model-serving criteria that you define based on certain known patterns you're looking for, to tell it which way to route a particular query. So it's not AI for AI; you've got to know a little bit to make it work. But it's pretty nice if you're trying to do this on top of GKE. And maybe it's why you need 65,000 nodes. [00:37:53] Speaker B: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one- or three-year AWS savings plan with a commitment as short as 30 days. If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace. [00:38:33] Speaker A: Agent Development Kit. This is a Python agent development kit. You can look at sample agents they built for you. Just check it out.
They have a RAG agent, customer service, data science, FOMC research, LLM auditor, personalized shopping helper, and a travel concierge. So you can take a look at the code they built for these, and you can see how to build your potential agent in the future. It doesn't look too amazingly complicated, which is either terrifying to me or awesome. [00:39:03] Speaker C: So I haven't quite gotten to the "Ryan does a thing" level, but I did look into this a whole lot in the last few days when I was not sleeping, because at the conference what dawned on me was what an agent actually is and how I could think of it in my mental model of things. If I think of an agent as, like, an AI... not a function, but like a Python class, then I have the ability to upgrade and manage that class as an individual part of my script, and so I don't have to rewrite the entire thing. When you think about all the session state and context that you need to pass around, it becomes super important, because now you're externalizing these things into separate little modules that you can manage independently. Going through the development kit, it's really a framework for how you tie that all together. So it's exactly what it sounds like, you know, just like the Serverless Framework or some other example that I'm not going to think of in enough time to be interesting. And it's actually cool, because it really helped me understand a better way of creating an interaction with an AI agent, in my case, by simply breaking it out into its components and then having a very deterministic way of putting it all together. It's kind of cool. And then the fact that they're using all these kind of open source and unified, you know, protocols and context is awesome.
[00:40:45] Speaker D: I feel like the MCP protocol is really the key to a lot of these things actually being able to... [00:40:50] Speaker A: That's why everyone's rapidly adopting it, for that exact reason. Very quickly. [00:40:53] Speaker D: Yeah, I mean, the best analogy I read was it's USB-C for everything, you know, and that really clicked the first time I read it. But like Ryan was saying, I think I'm starting to understand more how to piece stuff together to make it be more what I want, rather than, oh, cool, there's this thing over here, using it in one particular place. I'm starting to be able to take that step back and be like, okay, I want to achieve X and I can now leverage these things. Like, I'm starting to understand what tools are in my tool belt. [00:41:26] Speaker C: Yeah. And how to leverage things that aren't going to be within the model. Right. Like the example of the weather bot that they use. It's calling an API to get the current weather. Right. A model's only going to have data that's not up to date; it's only going to be able to give you a guess on what the current weather is based on patterns. But now, because the framework separates things out, at least in my head, you're defining the API call as a tool, and you're telling the AI agent as part of your prompt to use the tool, get the results from the tool, answer my question. It makes these things really easy to manage, and, you know, this is the first time I was really able to see sort of a production-grade app being possible with these things. So kind of cool. [00:42:13] Speaker D: And I think a long time ago we talked about how we were all learning how to write, you know, queries in Google years ago, and, you know, in order to have effective results, you kind of learned how to leverage that tool in that way.
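The weather-bot pattern Ryan describes, registering a plain function as a "tool" and having the agent call it when the model can't answer from its training data, can be sketched without any particular framework. This is an illustrative sketch of the pattern only, not the actual ADK API; the stubbed weather function and the keyword match stand in for a real LLM deciding which tool to invoke.

```python
# Illustrative sketch of the tool-calling pattern discussed above.
# In a real framework (ADK, LangChain, etc.) an LLM chooses the tool;
# here a trivial keyword match stands in for the model's decision.

def get_weather(city: str) -> str:
    """A 'tool': wraps an external call the model can't answer itself.
    (Stubbed here; a real agent would hit a live weather API.)"""
    return f"72F and sunny in {city}"  # placeholder response

def summarize(text: str) -> str:
    """Another tool: trivial summarization stand-in."""
    return text[:40] + "..."

TOOLS = {"weather": get_weather, "summarize": summarize}

def agent(prompt: str) -> str:
    """Stand-in for the agent loop: pick a tool, call it, answer."""
    if "weather" in prompt.lower():
        # crude 'argument extraction': take the last word as the city
        city = prompt.rsplit(" ", 1)[-1].strip("?")
        return get_weather(city)
    return summarize(prompt)

print(agent("What's the weather in Denver?"))  # → 72F and sunny in Denver
```

The point of the framework is exactly this separation: the tool is an independently testable module, and the "prompt to use the tool" replaces the hand-written routing shown here.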
And I feel like this is kind of that next evolution of learning how to leverage AI and LLMs in that kind of manner, you know, of writing to their core competencies. [00:42:42] Speaker C: Yeah, definitely. I mean, for me it was the difference between writing a Python script that goes and does a thing, and writing a Python application with different classes and components maintained in different places. It's really cool, and the documentation is spectacular. Probably because it's written by AI or... [00:43:02] Speaker A: At least heavily edited by it. [00:43:03] Speaker D: Yeah, hopefully it's the other way: written by humans, or written by AI and edited by humans. [00:43:11] Speaker C: Hopefully. [00:43:13] Speaker A: The other part is Agent to Agent, which is basically a brand-new Internet protocol that Google has created, because of course they would. The open protocol, called Agent2Agent, has support and contributions from more than 50 technology partners like Atlassian, Box, Cohere, Intuit, LangChain, MongoDB, and many, many more; I won't read all of them. Basically, it's an open protocol with several design principles: embracing agent capabilities; building on existing standards including HTTP, SSE, and JSON-RPC, making it easier to integrate into your current IT stack; it's secure by default; it supports long-running tasks; and it's modality agnostic. In the A2A world there are two communicating parties: a client agent and a remote agent. A client agent is responsible for formulating and communicating tasks, while the remote agent is responsible for acting on those tasks, in an attempt to provide the correct information or to take the correct action.
This involves several key capabilities. Capability discovery: agents can advertise their capabilities using an "agent card" in JSON format, allowing the client agent to identify the best agent that can perform a task and leverage A2A to communicate with the remote agent. Task management: the communication between a client and a remote agent is oriented towards task completion, in which agents work to fulfill end-user requests. The task object is defined by the protocol and has a lifecycle; it can be completed immediately, or, for long-running tasks, the agents can communicate to stay in sync with each other on the status of completing a task. The output of a task is known as an artifact. Collaboration: the agents can send each other messages to communicate context, replies, artifacts, or other user instructions. And there's user experience negotiation: each message includes parts, which are fully formed pieces of content, like a generated image. Each part has a specified content type, allowing client and remote agents to negotiate the correct format needed, explicitly including negotiation of the user's UI capabilities, for example iframes, video, web forms, and more. And they had a real-world example in the blog post around candidate sourcing. Basically, hiring a software engineer can be significantly simplified with A2A collaboration: within a unified interface like Agentspace, a user (the hiring manager) can task their agent to find candidates matching a job listing, location, and skill set. The agent then interacts with other specialized agents to source potential candidates, and the user receives these suggestions and can then direct the agent to schedule further interviews, streamlining the candidate-sourcing process. And after the interview process completes, another agent can engage to facilitate background checks.
This is just one example of how agents need to collaborate across systems to source a qualified job candidate. [00:45:36] Speaker C: So it was right at the real-world demo where I didn't understand what was going on anymore, because I thought it was much more like, you know, a low-level protocol. We're talking about how these agents literally communicate. But then the real-world example is like every other AI agent demo you've ever seen. So I'm really confused. [00:45:59] Speaker D: I think it's the ability to pass information from one to the next. So in their example: go hire, you know, get me people that meet the criteria. Cool, Ryan Lucas is one of them. Pass Ryan's name directly to the next one. So it's kind of that scripting from level to level. In passing, all you need to say is "do a background check on Ryan Lucas" and it will pull his information from the prior one (his name, date of birth, and everything else it has) and immediately feed it into that other system, from the, you know, HR recruiting system to the third-party background check system. That's the way I kind of viewed it: now the agents are directly communicating and passing the relevant pieces of information, versus you having to say, okay, get me the JSON object that contains all of Ryan's information, pull out these three pieces of the dictionary, and pass it over to here. [00:46:54] Speaker C: I mean, that's what I think it is too. It's just funny because the real-world example was not that. Right? [00:46:59] Speaker A: Yeah, I mean, I think you need something simple that your brain would understand, because... [00:47:04] Speaker C: It's like, how do you demo a TCP handshake? It is that kind of level. It'd be interesting to see a little bit more details on this. And so I'll dive into the... or I probably won't. Who am I kidding? No, I'm not. I'm going to use the tools that use this.
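The capability-discovery step described above, agents advertising themselves via a JSON "agent card" so a client agent can route each task to the right remote agent, can be sketched roughly like this. The card fields and registry below are simplified illustrations of the idea, not the exact A2A schema, and the hiring-flow skill names are made up for the example.

```python
import json

# Simplified 'agent cards': each remote agent advertises its
# capabilities as JSON, as in A2A's capability-discovery step.
# (Field names are illustrative, not the real A2A card schema.)
sourcing_card = {"name": "candidate-sourcer", "skills": ["find_candidates"]}
checks_card = {"name": "background-checker", "skills": ["background_check"]}

REGISTRY = [json.dumps(sourcing_card), json.dumps(checks_card)]

def discover(skill: str) -> str:
    """Client-agent side: scan advertised cards for one claiming the skill."""
    for raw in REGISTRY:
        card = json.loads(raw)
        if skill in card["skills"]:
            return card["name"]
    raise LookupError(f"no agent advertises {skill!r}")

# The hiring example: the client agent routes each stage of the task
# to whichever remote agent's card claims the needed capability.
print(discover("find_candidates"))   # → candidate-sourcer
print(discover("background_check"))  # → background-checker
```

In the real protocol the card also carries endpoints and auth details, and the subsequent exchange happens over the task lifecycle (messages, artifacts) rather than a direct function call.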
[00:47:19] Speaker A: I might be playing with these, because I'm curious about some of my podcast workflow. [00:47:24] Speaker C: I mean, I'll be playing around with the tooling on top of it, for sure. [00:47:27] Speaker A: Yeah, I mean, the ADK I'm going to use for sure. But then I think Agent to Agent could be interesting, because then I could leverage multiple models and agents to go do different parts of the task that I do when I write show notes, and have them edit each other and, you know, work on different formats. I think it'd be cool. Yeah, add Jonathan's voice in, because we need him now. Anyway, so that's it for day one. I didn't ask you guys what you thought of the keynote, because I forgot to ask you in the beginning. So what did you think of the keynote in general, the vibe? I will tell you, I actually got into the arena this year. The one time I tried last year, it was full and I was turned away at the door. So I got up extra early, and I was already tired, I was already cranky, and I walked there and got into the thing, and I had a good seat, and it was very uncomfortable, and I will never go back. But it was cool to see it in person once. [00:48:21] Speaker C: Yeah. Pass. I watched the keynote from my hotel room with room service, coffee, and a Danish, and it was fantastic. I loved the keynote. I thought it went well. I thought the pacing was good. I thought that the announcements were good. All the customer interactions were video, which I really enjoyed, because those can be a little slow and drawn out. [00:48:47] Speaker A: I heard someone else complain about that, though. I was blown away, because they said... I think it was strange they didn't have any customers on stage talking about what they do with it. It was all pre-recorded, and, like, if you squinted, they could be talking about any cloud, not just Google.
[00:48:59] Speaker C: And I was like, I guess you can't win it. [00:49:02] Speaker A: I sort of hear what you're saying, but you can't win on these things. Either you get a customer who goes way too long and puts you to sleep, like at re:Invent, or you get this, which was really tight 30-to-60-second clips. I much preferred them. [00:49:16] Speaker D: I feel like I just fast-forward through the, you know, partner talk, the customers talking about them. [00:49:21] Speaker A: That's what I do when I watch the recordings from re:Invent. It's like, yep, skip. [00:49:25] Speaker C: Yeah, absolutely. So, you know, I really liked it. I thought that the length was good. But again, for my hotel-room comfort... I think I was in a robe. [00:49:35] Speaker A: Yeah, my knee was very jealous of your bed, because it was being jammed by the chair in front of me. Long legs and conference arenas are never fun, but it is a nice space. I kind of like the open, cavernous nature of it versus the "we converted a conference center into a keynote" that's all one level. This is nice, it has multiple levels. The production quality is quite nice too. Not as cool of lighting rigs as the ones I know you love, Ryan, but, you know, I was definitely into watching them come on and off the stage, the stage direction they were doing. It's well put together, it's well paced. I didn't think it dragged on. I wish there were just, like, two extra bullets on each slide, because I was looking at the photos I took of each announcement when I was going through doing show notes, and I was like, it's just words, like "Ironwood" and then "Agentspace," and I'm like, yeah, but what was that? So that's one area I would say they could do with maybe just a slight bit more content on the slides, to kind of anchor what it is. But yeah, it was definitely cool.
They did have a sort of awkward moment where they tried to convince us that TK was a Chappell Roan fan. [00:50:42] Speaker C: That was a bit awkward. [00:50:43] Speaker A: That was a bit awkward. But I'm glad TK is enough in on the joke that he's cool with them kind of making fun of him on that. [00:50:50] Speaker C: I think that's not a running theme necessarily, but I think that kind of skit has been in every developer... no, not developer, every keynote I can remember for Google. [00:51:00] Speaker A: Or all keynotes for Google. Yeah, those kind of have a little bit of that, you know, because I think he's been on the last couple. He's very animated in his approach, and so, yeah, I think that's just something he does. You know, sort of like Matt Wood, but more endearing. All right, day two. This is my day. I did my session on the show floor. It was fun, had some people show up. My talk went well. It was not recorded, so you'll never know if it was good or bad, unless you were there, Ryan. [00:51:35] Speaker C: I'll have to take your word for it. [00:51:38] Speaker A: Yeah, there are not a lot of announcements in the developer keynote, a lot of just expanding on what was talked about the day before. So they gave you a lot more examples of using the SDKs, using Agent to Agent. They did show and announce the Vertex AI Agent Engine, which is a fully managed Google Cloud service enabling developers to deploy, manage, and scale AI agents in production. Agent Engine handles the infrastructure to scale agents in production so you can focus on creating intelligent and impactful apps. It's fully managed, has quality and evaluation capabilities, has a simplified development process, and it's framework agnostic, allowing you to use the Agent Development Kit, LangGraph, LangChain, AG2, and LlamaIndex, all available for you.
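The "framework agnostic" point above mostly comes down to the hosting service caring only about a common calling convention, not which framework built the agent. Here's a rough sketch of that idea; the `Queryable` protocol and the `query` method name are assumptions for illustration, not Agent Engine's actual contract.

```python
from typing import Protocol

class Queryable(Protocol):
    """Minimal contract a hosting service might require of any agent,
    regardless of the framework (ADK, LangGraph, LlamaIndex, ...) behind it.
    (Hypothetical interface for illustration.)"""
    def query(self, prompt: str) -> str: ...

class EchoAgent:
    """Stand-in 'agent' satisfying the contract with no framework at all."""
    def query(self, prompt: str) -> str:
        return f"echo: {prompt}"

def serve(agent: Queryable, prompt: str) -> str:
    """The host only ever calls query(); frameworks stay interchangeable."""
    return agent.query(prompt)

print(serve(EchoAgent(), "hello"))  # → echo: hello
```

Swap `EchoAgent` for an ADK- or LangGraph-built agent exposing the same method and `serve` never changes, which is the whole appeal of a framework-agnostic runtime.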
During that keynote, there were also a bunch of other talks happening all over the place where they were doing smaller announcements, including (I lost my spot) data analytics with a serverless Spark engine, and IDE enhancements for Gemini Code Assist. So lots of improvements there, including a Gemini Code Assist troubleshooter to help you with your incidents. It'll actually look at your Google infrastructure and the logs it's producing, and your code, and potentially make recommendations to you on things you can do to fix your application and get out of an outage. A time-series-based foundational model, which is interesting; I don't really know why you need a specific model for time-series data, but they built one, so there's got to be a reason. And then several software engineering agents, all of it talked about in that keynote or mentioned off stage in one of the other rooms. [00:53:06] Speaker C: I mean, I don't think it's so much troubleshooting incidents versus just troubleshooting the development process. But, you know, it was pretty cool to see them sort of tie it all together. And, you know, I was struggling to understand what that time-series model was all about, and then I remembered they did mention it helps specifically with forecasting. So I think it understands your current time-series data set and then can apply a time horizon for forecasting. [00:53:37] Speaker D: That makes more sense. [00:53:38] Speaker C: That took a while, this whole show, to drag up in my brain, but we were discussing it before we started recording, so that's one down. I still don't know the details on what the serverless Spark engine is. I assume it's, you know, Spark jobs, so I assume it's just ways to query your larger data sets quickly.
Not sure. But the Gemini Code Assist stuff in the IDE, I love, because this is a thing that me and Jonathan have run into as we're integrating AI more into our development practice. They're addressing the problems we were having, which is: how do you coordinate different things? You want AI to write this and AI to write that, and then you have to tie it together. The human's gonna do that, and how are you planning and coordinating all of that happening? And so their Code Assist improvements... there's literally a Kanban board where your prompts for AI generation are happening directly within that Kanban board, and it's breaking things up into tasks that can be tracked, much like any kind of Kanban issue, through different stages, and you have the visualization there. So it's a really kind of neat way that they're building these things out. Yeah, I was pretty stoked by that one. [00:54:51] Speaker A: That was pretty cool. Yeah, definitely. The Kanban board was a unique idea. I really enjoyed that as a concept, so I'll play with that one. They did add Code Assist also to Android Studio. So if you're an Android developer, you get that now. They also have Gemini Code Assist tools, which is in preview, which helps you access information from Google apps and tools from partners including Atlassian, Sentry, Snyk, and more. Then Gemini Cloud Assist, which is integrated with Application Design Center, which basically helps you write infrastructure as code and accelerate application infrastructure design and deployment. And then Gemini Cloud Assist Investigations, which leverages data in your cloud environment to accelerate troubleshooting of issues and resolution.
As well, Gemini Cloud Assist is now integrated across Google Cloud services including Storage Insights, Cloud Observability, Firebase, Database Center, Flow Analyzer, FinOps Hub, as well as security and compliance related services like SCC, all in that product. So yeah, quite a bit of Cloud Assist stuff. And then we were talking about how previously Copilot was feeling a little dated. It's feeling real dated today, real dated, after these came out. So I expect we're going to see something big coming from those guys very, very soon. Oh, one of the things we didn't talk about from the keynote, I just saw it as I was scrolling through the 229 announcements: Google Distributed Cloud is basically partnering with Nvidia to bring Gemini to Nvidia Blackwell systems on premise via Dell hardware. So you can basically run Google Distributed Cloud in your data center with Gemini running on it, in both locally air-gapped and connected environments, which is a pretty cool feature. They're the first cloud provider to provide that for you. It's also a great way to potentially get Nvidia chips that you maybe were struggling to get before, if they're helping you do that. [00:56:33] Speaker D: So it's like an Outpost for Gemini. [00:56:35] Speaker A: Yeah, an Outpost for Gemini essentially, or whatever Google calls theirs. One of the other keynote things I saw that we didn't mention was Customer Engagement Suite. Lots of enhancements for that product. If you care about it, you're excited. If you don't care about it, like my two co-hosts, they don't care about it, which is fine. Scrolling through to see if there's anything else we want to talk about. Anything you guys want to talk about that I didn't cover? I mean, we covered a lot. [00:56:57] Speaker C: I'm realizing we didn't really go over Cloud WAN after the initial announcement, and I did deep dive into that to understand what it was.
[00:57:06] Speaker A: It's a cross-cloud network solution, a fully managed, reliable, and secure enterprise backbone that makes Google's global private network available to all Google Cloud customers. [00:57:13] Speaker C: Cloud WAN. Which is why you have to do the deep dive, because none of that is really helpful at all. It really is a branding of a bunch of little things. They've taken the sort of Cloud Interconnect model, where this is a specific tab in your dashboard, here's your Cloud Interconnect, and you go off and configure that with the link that you sourced and provided, and now they're taking that whole process and controlling all of it and making it software driven, even down to provisioning the link. So you can do cloud interconnects, you can do site interconnects, you can do all kinds of different things, and you can do it on demand, you can scale up capacity on demand. I think they undersold the announcement, let's say that, because it is actually, I think, neater than people realize. Although the product people, whose videos all look like they have a gun pointed at them behind the camera, look terrified and frozen. So I think it is a much bigger announcement than people realize yet, and I think it's going to change the way that we're doing multi-cloud network setups. That's pretty cool. [00:58:36] Speaker A: Well, there's a couple things about it that I think are important to know. One is that you're getting access to basically Google's global network, including all the fiber we love to talk about. [00:58:44] Speaker D: In one of the articles they deep dove into all the different fiber routes, like how many miles of fiber they manage and run, and all the stats were astounding. I tried to find it right now and I can't seem to find it again.
But it was a detailed list of everything. [00:59:00] Speaker A: It was remarkably impressive, including 400 gigabit Cloud Interconnect and Cross-Cloud Interconnect capabilities available later this year, which is 4x more bandwidth than the current 100 gigabit Cloud Interconnects and Cross-Cloud Interconnects you had before. So yeah, you're getting massive throughput between cloud providers through this stuff, which is pretty interesting, and you can scale it. [00:59:20] Speaker C: Down to 1 gigabit. [00:59:22] Speaker D: But what do you need 400 for? [00:59:26] Speaker C: AI models, of course. Same reason you need anything right now. [00:59:30] Speaker A: I need to move a lot of data out of AWS into GCP so I can. [00:59:36] Speaker D: That's how this became an AI announcement: move your AI data. [00:59:41] Speaker A: Sorry, sorry guys. Yeah, well, at least one of them made me chuckle a little bit because they didn't mention AI in any way possible, which I can't believe they didn't shove in there: Cloud CDN's fast cache invalidation, which delivers static content at global scale with improved performance, now in preview. I'm like, oh thanks, that's nice. [00:59:57] Speaker C: But. [00:59:58] Speaker A: And then they also... I don't know, it was kind of funny to me, mostly because I met with a VPC product manager and they didn't mention anything about this, and I'm annoyed: you'll be able to use Private Service Connect to publish multiple services within a single GKE cluster, making them natively accessible from non-peered GKE clusters, Cloud Run, or Service Mesh. Thank you. [01:00:17] Speaker C: Yeah, it's going to make my life so much easier. [01:00:19] Speaker A: Like if he had just said that at the beginning of our meeting, we would have had a whole different meeting. [01:00:22] Speaker C: It would have been a different meeting. I know. [01:00:24] Speaker D: What is that?
[01:00:25] Speaker A: So Private Service Connect is like private endpoints, whatever they call that on AWS, where they connect managed NATs basically to your network so you don't have to use your IP space for managed services. PSC is basically. [01:00:39] Speaker C: Yeah, it's just like... it's Private Link. [01:00:42] Speaker A: Private Link. Yeah, it's actually their version of Private Link. But instead of each product needing its own Private Link, you can now basically put a bunch of products behind the same Private Service Connect. Oh yeah. [01:00:53] Speaker D: Oh. [01:00:54] Speaker C: And it's not only Google services, it's your own services. You can put your own services behind a Private Service Connect, and hopefully this works in the same way for those services as well, where I don't have to, for every single internal application in every environment, develop my own Private Service Connect. [01:01:10] Speaker D: There's a pain I was dealing with at my day job just recently. [01:01:17] Speaker A: They're also adding Layer 7 domain filtering, finally, to Cloud Next Generation Firewall, coming later this year. [01:01:23] Speaker C: Finally. [01:01:24] Speaker A: And then also inline network DLP for Secure Web Proxy, also finally, powered by Symantec's DLP, and Symantec is one of the best third-party DLPs out there. So that's good. But yeah, lots of good stuff in the security realm. They had Google Unified Security, or GUS, which we talked about a little in the keynote thing. The alert triage agent, so you get agents for security folks as well; I saw a demo of this in a session video I was watching earlier today, and it looked pretty nice. New phishing protection inside of Chrome Enterprise. I like the Mandiant retainer so you can pay them in advance; when you get hacked, you already have a retainer in place. [01:02:02] Speaker D: The malware analysis agent I thought was kind of cool.
[01:02:04] Speaker A: Yeah, the malware analysis agent was definitely neat. It basically will help you do all the investigation without having to log into the box directly. [01:02:11] Speaker C: I missed that one. I'll have to go look. [01:02:13] Speaker A: Yeah, that's a good one. Lots of stuff in storage. Hyperdisk Storage Pools finally released, which gives you up to 5 petabytes of data in a single pool, a 5x increase from what you could do before, and Hyperdisk Exapools is the biggest and fastest block storage in any public cloud. Of course it doesn't work with any of the features you really want to use, like multi-writer or HA, which I complained about to someone. Anywhere Cache is available to you, Rapid Storage, Google Cloud Managed Lustre for whoever wants that, and Storage Intelligence, which gives you storage insights using, of course, LLMs. [01:02:47] Speaker D: I like that one. [01:02:48] Speaker A: I like that one. [01:02:49] Speaker D: So we have the intelligent tiering that AWS has had for years, but we use LLMs so it's better. [01:02:54] Speaker A: It's better. [01:02:55] Speaker C: Yeah. [01:02:56] Speaker D: That's what I read when I read that article. I was like, cool, good chat. [01:03:01] Speaker A: Let's see, anything else that we missed that we should talk about, going through my list. Bigtable SQL: you can run SQL on Bigtable. You're welcome. A bunch of talk about migrating SQL Server databases using Gemini. If you would like to do the lift and shift from Microsoft SQL Server to PostgreSQL, does Google have the tool for you. [01:03:24] Speaker C: Oh, do they? [01:03:26] Speaker A: DMS now has AI capabilities built into it that will help analyze your T-SQL syntax and built-in functions and provide translation directly to PostgreSQL and PL/pgSQL. The data mappings will be done as well, and then it'll also look at all the connection strings inside your application code that are pointing to SQL Server and update them to be Postgres as well.
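A toy sketch of the connection-string rewrite being described, handling one hard-coded ADO-style pattern with made-up hostnames and credentials. The actual Gemini-assisted DMS tooling does this with a model; this just shows the shape of the transformation:

```python
# Rewrite an ADO-style SQL Server connection string into a
# PostgreSQL URL. Hostname, user, and password are examples,
# and only this one key=value pattern is handled.

def mssql_to_postgres(conn):
    fields = dict(
        part.split("=", 1)
        for part in conn.strip().rstrip(";").split(";")
    )
    host = fields["Server"].split(",")[0]        # drop ",1433" if present
    return (f"postgresql://{fields['User Id']}:{fields['Password']}"
            f"@{host}:5432/{fields['Database']}")

src = "Server=db01,1433;Database=orders;User Id=app;Password=s3cret;"
print(mssql_to_postgres(src))
# -> postgresql://app:s3cret@db01:5432/orders
```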
So they said that this can do the majority of the work. You still need to audit it and QA it, et cetera, but this can provide a heavily automated migration process, and combined with DMS's ability to do slow migrations, you can basically set this translation layer up in between. You could be working on the conversion for months with your active real-time data; it's not an offline operation, you can do it online, which is kind of nice. My point to them was, I don't really want to move my bad stored procedures from SQL Server to Postgres. I'd rather move them somewhere else. And they said, that's a great idea, so they'd clearly thought about it. [01:04:20] Speaker C: Yeah, I wish they didn't make this announcement, because then I could still live in the happy land I was in before, where I didn't realize that you could write the same sort of level of complex stored procedures in Postgres. I wish I didn't know that information. I didn't need to. It doesn't help anyone. Now I'm just sad. [01:04:42] Speaker D: Well, it's good now that you know it, so that when you have to do a migration in the future of Postgres to, you know, somewhere else, you know to look for the terrible life choices that somebody else has made and pawned off on you. [01:04:53] Speaker C: Oh, you'll find them. I don't have to go looking for that. It'll be like, oh, why didn't this work? And I'm like, what is this? This is application logic in your database. Stop doing that. [01:05:04] Speaker D: It's the best place to put it, Ryan. Where else would you put it? [01:05:07] Speaker A: I mean, it gets you a licensing save, but it doesn't fix the tech debt problem. [01:05:14] Speaker C: And you know, I really wonder about performance, because there's a reason why people use Oracle and Microsoft SQL Server for those large computational sets. It's because it's way performant. That's why they spend all the money.
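Justin's "move them somewhere else" is the classic refactor: pull the business rule out of the stored procedure into application code, where it can be unit-tested and versioned with the rest of the codebase instead of being ported from T-SQL to PL/pgSQL. A toy sketch with a hypothetical discount rule (`sp_calc_order_total` is invented for illustration):

```python
# Business logic that lived in a hypothetical stored procedure
# (sp_calc_order_total), moved into the application layer.

def order_total(subtotal, customer_years):
    """Loyalty discount of 2% per year with the customer, capped at 10%."""
    discount = min(0.02 * customer_years, 0.10)
    return round(subtotal * (1 - discount), 2)

print(order_total(100.0, 3))   # -> 94.0
print(order_total(100.0, 10))  # -> 90.0  (discount capped at 10%)
```

The trade-off Ryan raises is real: logic in the database avoids round trips over large data sets, which is exactly why those engines got the money in the first place.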
[01:05:27] Speaker A: I mean, Postgres can do it too. Postgres is very performant at stored procedure logic. [01:05:31] Speaker C: It can. [01:05:31] Speaker A: It can. Yeah. [01:05:33] Speaker C: I've never heard of that. [01:05:35] Speaker A: Most people don't do it, but it definitely can. Back in the day, when it was competing directly with SQL Server and Oracle, many, many, many years ago, lots of startups ran very large Postgres workloads, including heavy stored procedures, because that's the only way we had to do it at the time. But then they all realized that with Java they could move it out, and they all stopped doing it very quickly. So it didn't last as long as it did in the .NET world, but it did exist. [01:06:00] Speaker C: All right, I'm glad I missed that phase of computing. [01:06:04] Speaker A: Yeah, yeah, you're welcome. [01:06:05] Speaker D: All I just heard was that we moved stored procedures into Java, and my brain just got painfully hurt. [01:06:11] Speaker A: So you didn't move the stored procedure as-is. You rewrote the business logic. [01:06:15] Speaker C: Yeah, yeah, yeah. [01:06:16] Speaker D: But I got stuck on Java and then I just got upset. [01:06:21] Speaker A: Sorry. I mean, that's what we used to do back in the day. We used Java. Or if you were at places like I was at, we used Ruby, which you guys loved. [01:06:29] Speaker C: And you see what that did to him. [01:06:31] Speaker A: Yeah, so I don't have any hair. [01:06:33] Speaker D: Because I have video of when we were discussing this. [01:06:39] Speaker A: So that's sort of the gist of it there. Well, let's get on to the most important area of discussion, and that is who won the Google Next predictions? [01:06:51] Speaker C: I don't know. I don't think it's all that important. I think we just give it. [01:06:54] Speaker D: I think it's Jonathan. I think Jonathan won. Yes.
[01:06:57] Speaker A: Well, I was very pleased that Ryan did not win with "they won't be announcing anything, no new service announcements, just enhancements for AI and Gemini, et cetera," because that would have been a really boring show today. We would have been done about 45 minutes ago if they hadn't announced anything. So I was really glad that Ryan was wrong on that one. Quite the contrarian position there. Ryan also guessed incorrectly on responsible AI, console services, or an SDK to enable or visualize your responsible AI creation or usage, and he was also wrong on endpoint security tools, like CrowdStrike, patch management, vulnerability-type tooling. So that's a bummer, Ryan. I'm sorry, zero points. [01:07:32] Speaker C: Yeah. [01:07:34] Speaker A: Would you like to defend any of these and argue? [01:07:36] Speaker C: Oh yeah. There's definitely, hidden in most of these announcements, the increased safety that you get in AI; the pre-model callbacks and the post-model callbacks are definitely ways to be much safer and responsible with your AIs. [01:07:52] Speaker A: I would like you to know, making sure that I was being as fair as possible with this: in the videos that are on YouTube, you have access to Gemini, and so I asked Gemini, for both the main and the developer keynote, if it could give me any place where responsible AI was mentioned. The only place it gave me was in the one keynote, which I think was the developer keynote, where they talked about being responsible with the transition of the Wizard of Oz movie, and that was the only place the word responsible was even mentioned. [01:08:24] Speaker C: So close. [01:08:29] Speaker D: I'll give you a quarter of a point, maybe a tenth. [01:08:33] Speaker C: No, I mean... yeah. [01:08:37] Speaker A: I'm sort of sad you didn't get an endpoint security tool, like a CrowdStrike or something to replace Qualys. But I mean.
[01:08:43] Speaker C: I knew that was a throwaway one, because with the Wiz announcement everything's gonna go that way. [01:08:48] Speaker A: You already had a throwaway one with the "there won't be anything announced." That's a throwaway too. [01:08:53] Speaker C: That wasn't a throwaway. I'm serious. It should have been a throwaway, but I'm bad at this. [01:08:58] Speaker D: All right, so we definitely shouldn't play golf scoring when we do this, because Ryan will win. [01:09:06] Speaker A: Lowest number of swings. So then that brought up me. I missed on AI agents specialized for Kubernetes or DevOps capabilities, which I realized after the fact when I was reading through this: oh yeah, that Cloud Assist thing already existed. [01:09:19] Speaker C: Forgotten about it. [01:09:20] Speaker A: And none of you guys mentioned to me, like, well, that kind of exists already. And then I'd be like, oh yeah, I guess it does. [01:09:25] Speaker C: I thought of it, but, you know, you didn't pause long enough for me to mention it. [01:09:29] Speaker A: Yeah, it's fine. I did get next generation of TPUs and GPUs, because that was a gimme. I knew that was going to happen, so I was very pleased that one came out, and it was optimized for multimodal, let me tell you, because they mentioned it a thousand times. And then unification or major enhancement of Anthos and GKE Enterprise, I got nothing on that. Like I said earlier in the show, in the pre-Next there was something kind of close-ish, but not exactly what I said, so I wouldn't have counted it anyways. But I might have made the argument if we were all tied. But Matt crushed us all. Congratulations, Matt. You missed on green AI. There's nothing green about it. And I googled this one too: there was no mention of sustainability or being more energy efficient in either of the keynotes. So I'm not going to give you green AI.
But I gave you three non-AI-specific keynote items, and I stopped counting at three because there were a lot more after this. Cloud WAN: nailed it, out of the park. I was like, oh no. And then Hyperdisk Exapools got announced on stage, and the next-gen Customer Engagement Suite, which just crushed it. And even the next one also would have been a non-AI-specific item for you as well, and that third item was an AI security thing that is not endpoint, more guardrails, et cetera, and that would be Google Unified Security, or our new best friend GUS. So congratulations, Matt. [01:10:42] Speaker D: The first. [01:10:44] Speaker A: I think this might be your second, is it not? [01:10:46] Speaker D: I think I won Google last year also. [01:10:47] Speaker A: I think you did, which is so ironic because you don't do anything in Google. [01:10:51] Speaker D: Maybe that's why we have to play. [01:10:54] Speaker C: I have to figure out how to play so I can win Azure. [01:10:56] Speaker D: So there's the Microsoft Build conference coming in May. [01:11:01] Speaker A: Ignite's the one that we would do. [01:11:03] Speaker C: No, it literally is split across Build and Ignite. I went researching this after. [01:11:08] Speaker D: Yeah, it doesn't make sense. Build is obviously more developer focused, but they do general Azure announcements in there for things like App Service and stuff like that. So think Beanstalk, App Runner, those types of services. Cloud Run. [01:11:25] Speaker C: I mean, you couldn't pay me to go to Ignite. I might go to Build. [01:11:30] Speaker D: Yeah. [01:11:30] Speaker A: Would I take the bullet for going to Ignite? [01:11:35] Speaker D: I know it's in Seattle, so I would go visit friends, you know. [01:11:39] Speaker A: And where is Ignite? Is it in Seattle? [01:11:41] Speaker D: I thought Build is in Seattle. [01:11:43] Speaker C: Oh, I thought Ignite was in Orlando. [01:11:48] Speaker D: Oh, you might be right.
I thought it was in Chicago last year for some reason. [01:11:52] Speaker A: The only thing about Microsoft is they rent out like whole parts of amusement parks for their parties, which is kind of cool, in Orlando. So that's almost worth going just for that. Not really, but it's almost worth it. [01:12:04] Speaker D: Ignite, looks like they haven't. [01:12:06] Speaker A: They haven't posted anything, but where was it last year? Chicago? [01:12:09] Speaker D: I don't know. Looks like it's in San Francisco. Looks like you guys are going, suckers. [01:12:13] Speaker A: I can tell you the one way to make sure I never go to a conference is to put it in Moscone, and I'm like, I'm out. I'm just saying. Like, are you going to RSA? And I was like, nope, nope. Yeah, sorry. Nor will I ever go back to Dreamforce again, because that was also terrible. What? [01:12:27] Speaker C: It's. [01:12:29] Speaker D: Oh, the whole city is a disaster during Dreamforce. I used to just hide at home. [01:12:32] Speaker A: Oh, yeah. Dreamforce is a nightmare. You don't even go near the city if you can help it. I mean, they do a really good job making Moscone look like the best part of San Francisco. They clean the streets. They get rid of all the homeless people. [01:12:42] Speaker C: It's just so expensive. And I live close enough where I feel bad spending that kind of money, but it's not practical for me to commute two hours. [01:12:50] Speaker A: It really isn't. [01:12:51] Speaker D: You're in that range where it's not worth the 600 a day for a hotel, but. [01:12:59] Speaker C: Yeah, I can't spend four hours a day commuting. [01:13:02] Speaker A: Even taking BART, it's like an hour to an hour and a half each way, and you're like, that's too much. And then you lose kind of the whole point of the conference: the dinners and the parties and having the informal conversations.
And those things have value, too. And you lose all that when you commute back and forth from a conference. Honorable mentions: we sucked this year. We didn't get a single honorable mention, which is unusual for us, because normally we nail at least a couple of those and kick ourselves because we should have put them in our main predictions. That would not be the case this year. So we all did well enough that we didn't screw that up, other than Ryan, who had neither. [01:13:39] Speaker C: Oh, wait, because I've lost more, if you can believe it or not. [01:13:45] Speaker D: Have you won yet, Ryan? [01:13:47] Speaker C: No. I think I got a point once, but it wasn't enough to win. [01:13:51] Speaker D: It's like one of those weird times somebody got like two points or two and a half. I remember one where we were like, why do you call this 1.5? [01:13:59] Speaker A: Yeah, there was one. I remember I had to go back and look at it. On to the tiebreakers. So again, thank you to Gemini in YouTube; I was able to ask it the tiebreaker questions. I asked Gemini the first time, how many times were AI, ML, artificial intelligence, or machine learning said in this presentation? It came back and said 118 times. And I said, that seems excessive. I mean, I was there, and I would have noticed 118 times. It was mentioned several times, but I did not feel it was that high whatsoever. Then I asked it again: hey, why don't you give me the timestamps and what the topic was? And so then it came back and it only gave me 38. And I said to it, Google, that's only 38, are you missing the other 80? And it came back and said, I'm sorry, Justin, I lied to you. I was incorrect in my data. But I was able to verify the timestamps and the subjects of those, and so I can tell you, in the opening keynote, those four words were said 38 times.
And in the developer keynote, they were said 63 times using the same prompt, which means the total was 101. Now, Ryan felt pretty confident at his number, and he was not going to win on that one. Matt felt pretty good at 52, and I felt pretty good for Matt, too, at 52, because I felt my guess of 97 was probably a little high. But no, it was four off. I almost hit that on the money, and even though I lost the competition, I would have been pretty happy if I had hit it on the money. I'm still pretty proud that I was only four off. [01:15:27] Speaker D: If you get it on the money, that's like, you know, you automatically win that show. Doesn't matter. [01:15:32] Speaker A: I don't know if that's fair, but I definitely think it's worth a point, maybe, in the main game. So if it wasn't tied already, it'd be an extra point to gloat about later. But if that ever happens... the odds of that are probably impossible. [01:15:47] Speaker C: Well, one of these days we're going to load up all the show notes into some giant AI thing that can build us a dashboard. [01:15:53] Speaker A: That's sort of what I'm working on for the 300th episode next week. Whoops. Working on some stuff. This is why it's not ready today, because I was busy last week. All right, gentlemen. It was a fantastic Google Next. I think in year two, a lot of the kinks were gotten out of the system by Google. People realized that taking Lyfts and Ubers was a bad idea unless they planned ahead. I also learned all the shortcuts, which I will not share, because I will use them in the future if I don't stay at the hotel. There are some key secrets you should know if you are taking Ubers to the conference facility. But yeah, overall, I think year two in Vegas was a tremendous success. I'm very pleased with how it turned out.
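As an aside on the tiebreaker: the keyword tally Justin asked Gemini for is deterministic if you have the transcript text, so no model is needed and no hallucinated totals are possible. A sketch with a whole-word count (the sample text is made up, not the actual keynote):

```python
import re

# Count whole-word mentions of the tiebreaker terms in a transcript.
TERMS = ["AI", "ML", "artificial intelligence", "machine learning"]

def count_mentions(text, terms=TERMS):
    total = 0
    for term in terms:
        # \b word boundaries so "AI" doesn't match inside other words
        total += len(re.findall(rf"\b{re.escape(term)}\b", text, re.IGNORECASE))
    return total

keynote = "AI is everywhere. Our ML stack uses machine learning daily."
print(count_mentions(keynote))  # -> 3
```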
And again, I got a lot of value out of the conference, which I was pleased by. Not enough that I'd go back to re:Invent, but definitely enough for this particular conference. [01:16:45] Speaker C: I'd probably go back to re:Invent just to see if I still think it's too big and I don't want to go, but, you know, you have to try every once in a while. [01:16:54] Speaker A: Yeah, makes sense. [01:16:55] Speaker D: I think I can answer that question already. [01:16:58] Speaker A: It's too big. I'm sure it's too big. I mean, I had heard, and again, this is not an official statement and I'm not an official person, but I heard rumblings, like friend-of-a-friend-of-a-friend kind of thing, that they had done a three-year deal at Mandalay Bay. So I'm curious what happens after next year. They already announced the dates for next year: April 22nd through the 24th, which again is a Wednesday through a Friday. I'm hoping when they get all the feedback that Wednesday through Friday sucks, it'll get shifted to the 21st. They have time still to do that, but we'll see all those people back in Vegas. I'm sort of sad it's not on top of my birthday next year, so your liver will not get as much damage this time around, Ryan. I'm unfortunately sorry to tell you, buddy. [01:17:44] Speaker C: Yeah. [01:17:45] Speaker A: But again, I mean, we can still go on Monday. We can still have fun on Monday night. [01:17:47] Speaker C: That's right. Yeah. [01:17:49] Speaker A: We can still have a mini party. How's that? It just won't be the big weekend extravaganza. Matt, we did miss you, but I know you're on watch and so you can't. [01:18:00] Speaker D: I am on watch. Yep. [01:18:02] Speaker A: And Jonathan will hopefully be back soon. We will share updates on that later. [01:18:08] Speaker C: Once he's fully uploaded. [01:18:09] Speaker A: Once he's fully uploaded to the AI cloud, actually.
Ooh, that's a good segue. Black Mirror is back. Have you guys watched the new season of Black Mirror? [01:18:18] Speaker C: Not yet, but I plan to shortly. [01:18:21] Speaker A: The first episode is right back into the depression that I appreciate from Black Mirror. Like the, oh yeah, I can see how this technology went wrong real bad, real fast, real quick. [01:18:33] Speaker D: Oh, no. [01:18:34] Speaker A: And I left the first episode sort of depressed about life. And it's sort of one of these things. Like the last season just didn't cut as deep after the previous administration that's now the new current administration, so I was a little worried that this would not have the right satire necessary, the kind I was looking for. Because House of Cards was great until Kevin Spacey turned out to be a terrible person, and then the last season was in the middle of the Trump administration, and it just didn't quite have the same bite when it was happening in real life. So that show really fell apart very quickly. I was worried Black Mirror wouldn't come back. But the first episode. [01:19:11] Speaker C: It's all the way back. [01:19:12] Speaker A: Huh? It's all the way back. I was depressed and was, you know, like, oh God, I couldn't imagine going through what that poor man went through in that story. And the second episode was pretty good, not quite as depressing as the first one. And I'm on the third episode right now, and I'll report back. But yeah, so good, so good right now. If you're a Black Mirror fan, it's back. [01:19:29] Speaker C: Check it out. [01:19:30] Speaker A: Mostly because I want them to keep renewing that series forever. All right, guys, have a great night, and we'll see you next week in the cloud. [01:19:39] Speaker C: All right, bye, everybody. [01:19:41] Speaker D: Bye, everyone. [01:19:45] Speaker B: And that's all for this week in cloud. We'd like to thank our sponsor, Archera.
Be sure to click the link in our show notes to learn more about their services. While you're at it, head over to our website, where you can subscribe to our newsletter, join our Slack community, send us your feedback, and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.
