[00:00:08] Speaker B: Where the forecast is always cloudy. We talk weekly about all things AWS, GCP, and Azure.
[00:00:14] Speaker A: We are your hosts, Justin, Jonathan, Ryan and Matthew.
Episode 292, recorded for the week of February 11, 2025: "VS Code: Friend or Foe? Azure Data Studio Murdered." Good evening, Jonathan. How's it going?
[00:00:30] Speaker B: It's going okay. Long week, but going okay.
[00:00:33] Speaker A: Yeah, it's been a long one. It is right before Valentine's Day and hosts are out traveling and doing things. So it's just the two of us tonight. But that's okay. We can get through this together.
[00:00:45] Speaker B: You don't have the just the two of us sound effect to play.
[00:00:48] Speaker A: Yeah. You want me to sing? Really?
[00:00:50] Speaker B: No, I thought twice about singing.
[00:00:53] Speaker A: Yeah, that's good.
Well, that sound means though. Jonathan, we're going to talk about earnings.
Everyone's favorite, of course. These earnings are all for calendar Q4; some companies, like Microsoft, have weird fiscal quarters for some reason I don't understand. But basically there was a pretty common theme across all of the cloud providers we'll talk about here, and that is capex.
Everyone's spending a lot of money on OpenAI, or sorry, on GPUs for AI, OpenAI being just one of those options. Lots of money is being spent, and earnings in general were all a little bit down after they said how much they were going to spend and Wall Street said, what about DeepSeek? And apparently the cloud providers said, yeah, we don't care about that little toy, we need GPUs. So first up was Alphabet.
There was a little bit of a letdown, with cloud revenue missing and the announcement that they were going to spend $75 billion in capex. Consolidated revenue was up 12% in the period to $96.5 billion, with the capex investment shocking the analysts, who thought they'd only spend $57.9 billion; the $75 billion caught them off guard.
Google Cloud was $12 billion versus an expectation of $12.19 billion. Their other businesses all beat expectations, including ad revenue and YouTube revenue, and overall revenue was a little light, but not too far off. But I think they were down like 8% today after. You know, Wall Street, what can I say?
[00:02:24] Speaker B: Yeah, I'm guessing ad revenue is going to be down again in Q1, Q2 because I think a lot of ad revenue is driven by the election season.
[00:02:31] Speaker A: Yep.
[00:02:32] Speaker B: So that's not looking too good for them. But yeah, it's still a lot of money. $75 billion is a lot of money to spend on capex. I don't know.
[00:02:43] Speaker A: Who has $75 billion worth of GPUs to even sell you? So that's a lot of money. That's also going to be data center build-outs and hardware purchases that aren't GPUs, but I assume the majority of it's going to be GPU spend.
[00:02:57] Speaker B: Yeah, I would think. They haven't gone into too much detail, but I think really at this point, with Nvidia leading in AI development hardware and TSMC being really the only chip manufacturer who can build those things, if Alphabet and Microsoft and Amazon aren't looking at custom chips at this point, I don't know what's going to happen. It's not sustainable the way it is.
[00:03:23] Speaker A: Yeah, not at these price points for sure.
Microsoft reported earnings next. They had revenue of $69.6 billion, which beat estimates by $790 million, and their EPS, earnings per share, was $3.23, up 13 cents from expectations. Intelligent Cloud revenue was $25.5 billion, an increase of 19%. But they got hammered for that, because in prior quarters they were above 20% cloud growth, and the market was not happy about that. And Microsoft said, we see your $75 billion, Google, and we're going to invest $80 billion in capex for AI and data center growth.
[00:03:58] Speaker B: Yeah, I wonder where they're going to spend it. I mean Virginia doesn't want any more data centers I guess Midwest probably at this point.
[00:04:05] Speaker A: Yeah, I think Midwest and places where cheap power is available is going to be the king. So nuclear options, hydro, all those areas are going to be big investments. Also international expansion is still I think a big area too particularly for Azure and Google and even Amazon. They're all announcing more and more regions, more expansion of data centers. Lots of laws that are getting passed for data sovereignty that they have to deal with. So there's, there's spend everywhere.
And then Amazon said hold my beer, I'm going to drop $100 billion in capex for Amazon's AI efforts.
Andy Jassy said that could grow faster if they were not hindered by data center capacity, which is actually interesting.
So basically they're saying that their revenue growth, which has been slowing down consistently quarter over quarter for at least the last couple of years, is partially tied to capacity, and I assume that's the capacity of GPUs. Their sales were $187.79 billion, beating analysts' estimates of $187.32 billion. EPS was $1.86 compared to the $1.50 per share expected. AWS was a little light compared to estimates, at $28.79 billion versus expectations of $28.82 billion. But hey, what's $30 million between friends? Pretty close, Amazon. Amazon also guided lighter than analysts expected, at $151 billion to $155 billion for the quarter versus expectations of $158 billion, so they were penalized in after-hours markets as well. So they're all penalized in their own ways for different reasons, but capex is a big factor for all three of them, with a lot of analyst takes coming out about all the money they're spending on hardware. But if hardware is truly slowing down growth, I say spend, baby, spend.
[00:05:49] Speaker B: It's going to be interesting. I'm starting to understand more now why Amazon at least split their investments between Trainium, which was really just for training, and Inferentia for inference. I think there's definitely more money in consumer usage of AI products, and that's probably what Amazon's going to shift towards rather than the training side of things, I would think.
[00:06:10] Speaker A: Yeah, I would assume inference becomes the bigger area of investment long term, but short term you need to train.
[00:06:18] Speaker B: Yeah.
[00:06:19] Speaker A: And I think a lot of their stuff, like Trainium, was really focused primarily on training first. So inference seems to be where everyone's spending most of the money these days.
[00:06:28] Speaker B: Yeah, I guess training is a one-time cost, or, you know, certainly not as much of an ongoing cost as providing a service. Build it once, sell it continuously, for sure.
[00:06:42] Speaker A: All right, let's move on to AI. AI is going great, well, maybe not going so great for Elon this weekend. So Elon Musk apparently made an offer to buy OpenAI for $97.4 billion. Sam Altman rejected that bid on Twitter, as you would, of course, using the platform the guy owns to tell him to go F off.
Altman said that Musk's effort was embarrassing and not in the best interest of the OpenAI mission to develop artificial general intelligence to benefit humanity. Altman also said that this is Musk's attempt to slow down a competitor of his. And this does cause some complications for OpenAI, who is trying to shift away from its nonprofit roots, as ideally, to do that, you need to sell the assets of the nonprofit to the for-profit business that you're going to create. And if this $97.4 billion is the market setting the price, that might have just gotten a little bit more expensive for OpenAI to make this transition. So we'll see how that pans out over time. The OpenAI board could also sell it for pennies on the dollar, and then the stakeholders of the nonprofit might have a different feeling about that. But again, a nonprofit is a completely different animal; I don't know how the rules work.
[00:07:52] Speaker B: No, it's interesting. I wonder.
It's interesting that he made a bid. I mean, I don't think he would have actually followed through on it personally, not now, with xAI and Grok 3 coming out soon and those other things. I agree with Sam Altman that it was probably just a distraction to mess with things. But he has drawn a line in the sand at $97.4 billion, so I'd love to know how he arrived at that number. It's certainly not based on their revenue.
[00:08:28] Speaker A: I mean, how did he arrive at $44 billion for Twitter? I was like, I don't know where he comes up with these numbers.
It was like some crazy multiple over what they were trading that day even. It was just, yeah, he made it up. It's what he could get funding for. So it's a big number. It sounds impressive and it makes things difficult for his competitor. So I get it.
Well, you know, a big sporting event happened over the weekend, Jonathan. For those of you who don't care about sports, you might have just watched the Super Bowl for the ads. Of course, the Super Bowl is there for people who like football, if you cared for either team; I was rooting for the asteroid to take out the stadium. But the ads were there, and so, while the drubbing of the Kansas City Chiefs by the Philadelphia Eagles, 40 to 22, happened, I was hoping for some good cloud commercials to talk about.
You know, a couple of years ago Amazon dropped one, and I thought maybe Google would. But the only thing Google dropped was some Android ads, and those don't count; we don't talk about that. But OpenAI debuted their first-ever ad at the Super Bowl this year. I heard Super Bowl ads were running $8 million for 30 seconds, and this was a one-minute ad that ran, I believe, in the third quarter, if I recall correctly. So potentially as much as $16 million was spent on this ad.
And it's a little interesting. It's got a lot of, you know, dot-matrix-printer-type graphics that go through different human evolutionary milestones, basically ending with the creation of OpenAI, implying it's the next evolutionary leap of humankind, similar to hunting, fire, the wheel, etc.
[00:10:11] Speaker B: It was interesting. I actually liked the look of it. The first time I saw it, I was like, this is a bit strange, but I like the halftone look. It kind of reminds me of newspaper print and news unfolding over the years. I think it was kind of neat. I'm glad they didn't spend the extra $8 million on another 30 seconds, though, showing the doom that's going to come: the poverty, the desolate wasteland of Earth after nobody has a job anymore.
[00:10:35] Speaker A: Terminator 5 coming soon at the end.
[00:10:39] Speaker B: I just wonder why they spent the money. I mean, they're a household name at this point. People refer to ChatGPT even when they're talking about something else. So it's strange that they would do that. I'm not quite sure who they're trying to appeal to.
[00:10:56] Speaker A: Yeah. Doesn't seem like creating a commercial really helps benefit humanity.
[00:11:01] Speaker B: No.
[00:11:02] Speaker A: So it does seem a little weird. But yeah, you're right. I kind of thought maybe Anthropic would have a Claude ad, or, you know, maybe Meta would mention AI in ads they ran, but they didn't run any ads either. So it was sort of sparse for tech companies on Super Bowl ads this year, which was interesting.
But. Yeah. Who is this for? Everyone knows who OpenAI is.
[00:11:25] Speaker B: Yeah. And it's not like they can really advertise for more customers at this point, because I think they're really constrained in terms of GPU capacity. I know Anthropic certainly is with Claude, and I would not be surprised if OpenAI is too, though at least OpenAI has Microsoft to back them up.
But yeah, I don't think they can really go out looking for new customers at the moment.
[00:11:47] Speaker A: Yeah, very interesting. So anyways, worth checking out the YouTube video in the link. There are some good comments on the video as well. My favorite was someone saying this ad cost more than DeepSeek took to train.
[00:11:59] Speaker B: Oh, that's hilarious.
[00:12:01] Speaker A: That was pretty good.
[00:12:02] Speaker B: Yeah, I didn't watch it live. I was out shopping for a couch at the time, and I just looked at my phone to see what the score was. At that point it was halftime and it was like 24 to 0 or something. I was like, yeah, okay.
[00:12:16] Speaker A: Like I said, I didn't care for either team winning or losing. They both could have lost and I would have been very satisfied. But yeah, it was a drubbing, if you will. The score doesn't convey how lopsided it actually was, because the 22 points the Chiefs ran up came in the fourth quarter at the end, when basically the Eagles had already gone home for the day. They were like, yeah, you can keep playing with the ball for a bit.
[00:12:39] Speaker B: It's my turn now. Yeah.
[00:12:43] Speaker A: Apparently, OpenAI is in the final stages of designing its long-rumored AI processor, with the aim of decreasing the company's dependency on Nvidia hardware, per Reuters. OpenAI plans to leverage TSMC for fabrication, because that's what everyone does, within the next few months, but the chip has not yet formally been announced. The first chip will use TSMC's 3-nanometer process, and the chips will incorporate high-bandwidth memory and networking features similar to those found in Nvidia's processors, which, for those of you who listened to last week's show, Jonathan highlighted as the big reasons why those Nvidia chips are so special. Initially, the first chips will focus on running models for inference rather than training them, with limited deployment across OpenAI. The idea is that mass production could start in 2026, and I would assume this hardware will end up in Stargate or Microsoft's data centers to bring down costs.
[00:13:29] Speaker B: Yeah, that's great. I'm glad somebody's doing it. I know he's talked about it for a couple of years. It was very vague back then.
I'm seriously concerned about TSMC, though, being really the only go-to place for chip fab anymore.
Especially with the asteroid in 10 years, which is potentially going to land somewhere. It would just really suck if it landed in Taiwan.
[00:13:51] Speaker A: Our only hope if AI takes over the world to save us.
[00:13:54] Speaker B: Possibly. Yeah.
[00:13:55] Speaker A: Yeah. So maybe it's a blessing.
[00:13:57] Speaker B: I don't know.
[00:13:58] Speaker A: Yeah, no, I'm actually shocked it took this long for them to announce that they were doing their own chip. Well, they technically haven't announced it; rumors have come out that they're doing one, and there's been some scuttlebutt about it, but this is a pretty firm news article from Reuters. So yeah, very interesting. I suspect all of the cloud providers and all the AI companies are thinking, we're going to be paying TSMC to build us chips.
[00:14:24] Speaker B: Yeah, they could name their price at this point, and I think they are; I think they're putting the prices up because there is such high demand.
[00:14:30] Speaker A: Yep, 100%. Well, and you know, I know that Intel was trying to build that chip fab down in Arizona or New Mexico, one of those places, and they have struggled to get that off the ground, and TSMC hasn't really been super interested in sharing technology with them. So yeah, you're right, TSMC is really the only game in town. But they should probably make sure they're distributing themselves around the world, because I'm not sure Taiwan is necessarily going to be good footing either, whether the asteroid takes it out or not.
[00:15:00] Speaker B: Yeah, that's quite a risk.
[00:15:03] Speaker A: It is quite the risk.
All right, let's move on to AWS. Fastlane for AWS CodeBuild has now come to macOS environments. We talked about Fastlane briefly on a prior show; it's basically an open-source tool designed to automate various aspects of mobile app development. It provides mobile app developers a centralized set of tools to manage tasks such as code signing, screenshot generation, beta distribution, and App Store submissions, which, if you've done mobile app development, you know involve a lot of pain.
It's fully integrated with popular CI/CD platforms and supports both iOS and Android development workflows. Previously, if you wanted to use Fastlane, you could install it on your own macOS installation on AWS, but Amazon has now taken away that undifferentiated heavy lifting for you. So you don't have to get finicky with Fastlane, and now you truly are in the fast lane with macOS in this solution.
[00:15:54] Speaker B: That's awesome. I do remember the pain of building iOS apps.
[00:15:59] Speaker A: Yeah, the signing was a problem. And the screenshots, and distributing betas through TestFlight, then submitting it all to the App Store, and who has the account to do all that? It was always a big pain in the butt.
Yeah, definitely. Nice to see. AWS Step Functions is expanding the capabilities of Distributed Map by adding support for JSONL, or JSON Lines, which is something I did not know about until this article. JSON Lines is a highly efficient text-based format that stores structured data as individual JSON objects separated by newlines, making it particularly suitable for large data sets. Don't delete a newline in the wrong place, though; that could get bad for you. This allows you to process large collections of items stored in JSONL format directly through Distributed Map, and optionally export the outputs of the Distributed Map as JSONL files. The enhancement also introduces support for additional delimited file formats, including semicolon- and tab-delimited files, providing greater flexibility in data source options for your Step Functions Distributed Map workloads.
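For anyone who hasn't run into the format, a JSONL file is just one complete JSON object per line. A minimal sketch in Python using only the standard library (the sample records here are made up for illustration):

```python
import json

# Each line of a .jsonl file is one self-contained JSON object.
jsonl_text = '{"id": 1, "status": "ok"}\n{"id": 2, "status": "error"}\n'

# Parse line by line; no need to load one giant JSON array into memory.
records = [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]

print(records[0]["id"])      # 1
print(records[1]["status"])  # error
```

That per-line independence is exactly what lets Distributed Map fan the items out to workers without parsing the whole file first.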
[00:16:59] Speaker B: That's really cool, actually, because thinking about streaming data like log data, everyone's moved to JSON logs, except now we just emit a text event with a valid JSON, but it goes in the same file. So JSON Lines is very much, I think, designed for log handling, log scanning, looking for patterns there. So this is really nice. It means we don't have to have a separate lambda function that reads in a 50 gigabyte file and breaks it into pieces first.
And being able to do that natively and distribute it out to different workers, it's awesome.
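The old workaround Jonathan describes, a function that reads one big file and breaks it into pieces for downstream workers, might look something like this sketch (an in-memory stand-in for the big file; the helper name is hypothetical):

```python
import json

def split_jsonl(lines, chunk_size):
    """Break an iterable of JSONL lines into fixed-size chunks of parsed records."""
    chunk = []
    for line in lines:
        if not line.strip():
            continue  # tolerate blank lines rather than crash on them
        chunk.append(json.loads(line))
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk  # final partial chunk

# Five records split into chunks of two: sizes [2, 2, 1]
lines = [json.dumps({"event": i}) for i in range(5)]
chunks = list(split_jsonl(lines, chunk_size=2))
print([len(c) for c in chunks])  # [2, 2, 1]
```

With native JSONL support in Distributed Map, this splitting step is handled by the service instead of a dedicated Lambda.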
[00:17:34] Speaker A: Yeah, I mean, that's the beauty of the distributed map is that parallelization feature you get out of it that, you know, so combined with this as a data input format, you really solve that big problem. As you said, bringing up the file.
[00:17:44] Speaker B: Yeah, it may be the first kind of useful-to-me feature in Step Functions in quite a while.
[00:17:51] Speaker A: It's good. I mean, I haven't really done a lot with Step Functions as of late. It's a pretty good product on its own, and its native support of Lambda has benefited from Lambda functions getting updates, quick starts, fast-start capabilities, et cetera. But Step Functions specifically hasn't done much lately, because it hasn't needed to.
[00:18:10] Speaker B: Yeah, I still think they missed an opportunity there. It's just so complex.
If you're building out a more-than-simple set of step functions to do data processing, handling the exchange of data between the Lambdas involved in the process is really clunky. The language that you use is Amazon's own format for how data gets brought in and mapped to different things and then shipped out again to the next step. I think it would have been a lot better if they could wrap this in something nice: you could write a single piece of code and then have Amazon do the mapping of the components of the code into the different step functions. I don't know. They still need to work on it; they've needed to work on it pretty much since they released it.
[00:18:57] Speaker A: Yeah, but definitely they seem like they've been too busy trying to fix cold starts.
It'd be good definitely for them to pivot back around to maybe making the UI better and the interface and the dev pipelines for it simpler to manage. I think I agree with you on your assessment.
All right, let's move to GCP.
Apparently, you can now buy BigQuery data sets on the Google Cloud Marketplace through BigQuery Analytics Hub, opening up new avenues for organizations to power innovative analytics use cases and procure data for enterprise business needs. The Google Cloud Marketplace offers you a centralized procurement experience for a wide array of enterprise apps, foundational AI models, LLMs, and now commercial and free data sets from third-party data providers. Combine that with Analytics Hub and you can enable cross-organizational zero-copy sharing at scale, with governance, security, and encryption all built in natively for you. I remember when Amazon came out with something like this many years ago, so, Google, thanks for finally catching up on this one; it makes sense in the AI era that you should have this. The free data sets are actually really nice, too. If only someone had downloaded all the data sets deleted by the current administration from all the government websites and put them here first, we would be in better shape.
[00:20:05] Speaker B: I think they're slowly putting them back again by court order, slowly but surely. I guess Google has the advantage here, though, because they don't have to copy the data; they keep one copy and everyone has access to it. Whereas Amazon, I don't think they've quite gotten there yet, have they?
[00:20:21] Speaker A: I don't know for sure.
[00:20:25] Speaker B: There's data in S3 like public S3 sharing for some data sets and things like that. But yeah. Oh well that's cool.
[00:20:33] Speaker A: Yep. I'll check out what data sets are there. I didn't have a chance to log in to my Google account to go poke around, but I'm hoping there are some good ones, maybe some DNA sampling and stuff like that you can play with in different use cases.
[00:20:48] Speaker B: There are a lot of cloud cost management tools out there, but only Archera provides cloud commitment insurance. It sounds fancy, but it's really simple. Archera gives you the cost savings of a one- or three-year AWS savings plan with a commitment as short as 30 days. If you don't use all the cloud resources you've committed to, they will literally put the money back in your bank account to cover the difference. Other cost management tools may say they offer commitment insurance, but remember to ask: will you actually give me my money back? Archera will. Click the link in the show notes to check them out on the AWS Marketplace.
[00:21:28] Speaker A: Google is launching the public beta of Gen AI Toolbox for Databases in partnership with LangChain, the leading orchestration framework for developers building with large language models. The toolbox is an open-source server that empowers application developers to connect production-grade, agent-based generative AI applications to databases, streamlining the creation, deployment, and management of sophisticated gen AI tools capable of querying databases with secure access, robust observability, scalability, and comprehensive manageability. You can currently connect to self-managed PostgreSQL and MySQL, as well as managed offerings like AlloyDB, Spanner, and Cloud SQL for PostgreSQL, MySQL, and SQL Server.
I mean it's a server, it's an AMI image, right?
[00:22:06] Speaker B: Right, exactly. They're just packaging some commonly used tools to make it easier for adoption.
It must be tough, really, because of the way AI has grown up next to Jupyter notebooks; people are so used to having it local, they just log in and play with things. I think it's a hard sell to make somebody switch from that, which works, to something in the cloud that they're now going to have to pay extra for. But I guess as more and more corporations adopt this kind of technology, knowing that there's a standard image and knowing that you can put tools on there around security is probably of some value.
LangChain is pretty cool though. I played with that.
[00:22:49] Speaker A: Yeah, I've looked at the documentation, I haven't played with it myself but it definitely looks like a nice out of the box solution for a lot of the, you know, again undifferentiated lifting that you might have been doing if you're trying to get into LLM.
[00:23:00] Speaker B: Yeah, if you're building agents, and who isn't building agents anymore? If you're building agents, you probably want to be using LangChain.
[00:23:10] Speaker A: So sometime last year, Google gave us Memorystore for Redis Cluster with the ability to manually trigger scale-out and scale-in operations. Now, to support the elastic nature of modern Memorystore workloads, they're excited to announce the open-source Memorystore cluster autoscaler, available to you on GitHub, which builds on the open-source Spanner autoscaler from 2020. The autoscaler consists of two components, the poller and the scaler, which monitor the health and performance of Memorystore cluster instances via Cloud Monitoring. And while I appreciate this, and the Spanner one, I'd really like them to just build this into the product. Why is this an open-source thing I have to run on my own server or infrastructure? But in fairness to Google, Amazon used to do this too; they would build these custom solutions and put them on GitHub, and after a lot of people downloaded them, some eventually became products within a couple of years.
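Conceptually, the poller/scaler pair boils down to a loop like this sketch (the thresholds, shard limits, and metric source here are invented for illustration; the real autoscaler reads Cloud Monitoring metrics and drives the Memorystore API):

```python
def poll(metrics_source):
    """Poller: fetch current memory utilization (0.0 to 1.0) for the cluster."""
    return metrics_source()

def scale(current_shards, utilization, scale_out_above=0.8, scale_in_below=0.3,
          min_shards=3, max_shards=32):
    """Scaler: pick a new shard count based on utilization thresholds."""
    if utilization > scale_out_above:
        return min(current_shards * 2, max_shards)   # double under pressure
    if utilization < scale_in_below:
        return max(current_shards // 2, min_shards)  # halve when idle
    return current_shards                            # in band: do nothing

# A hot cluster at 6 shards and 90% memory gets doubled to 12.
print(scale(6, poll(lambda: 0.9)))  # 12
# A quiet cluster at 6 shards and 10% gets halved to 3.
print(scale(6, poll(lambda: 0.1)))  # 3
```

The point of splitting the two roles is that polling is cheap and frequent, while scaling decisions can be rate-limited and bounded, which is what keeps the loop from flapping.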
[00:24:00] Speaker B: Yeah. It reminds me of our Lambda Spackle sticker that we made. Indeed: we don't have this in a product, but you can automate it with Lambda. It is strange that it's not built into the product, though. Then again, maybe I'm not surprised, because it's coming from the people who basically invented the complexity of Kubernetes, so maybe they just like keeping things separate.
[00:24:21] Speaker A: Maybe they do. Maybe they do. Maybe there's an architectural principle they have. Like we aren't going to do that, but we still have those Lambda Spackle stickers for sale on the website. People want a Lambda Spackle sticker.
[00:24:30] Speaker B: Yeah, I got a new laptop. I don't have any stickers on my new laptop right now. I need to.
[00:24:35] Speaker A: I have all the stickers in my office here. Taking them to conferences later this year.
[00:24:39] Speaker B: Awesome.
[00:24:40] Speaker A: Yep.
Gemini 2.0 is now available to everyone. This is the 2.0 Flash update. They updated the Gemini app on desktop and mobile as well as the web interface, helping everyone discover new ways to create, interact, and collaborate with Gemini. And with 2.0 Flash generally available via the Gemini API, AI Studio, and Vertex AI, developers can now build production applications with 2.0 Flash.
[00:25:07] Speaker B: It's quite a stretch to say build production applications. I mean, I guess you can build applications, maybe, if you're lucky. I play with Gemini too, and I played with Deep Research, Gemini 1.5's deep research offering, a few days ago. I think it's got a way to go; I don't think it's quite there with OpenAI's version of the same thing just yet. But I did pay for the Google AI stuff to have a play with it. It's pretty nice. It's very fast, I'll say that.
[00:25:41] Speaker A: Yeah. I mean the fact that they have it answering questions to you in Google searches, it has to be fast.
So I definitely see the speed from it. But that speed also comes with answers that aren't always as fully thought out. Claude will give me a really detailed, reasoned answer about how it wrote something or what research it did; Google can do that too, but typically it's a lot lighter weight in how it answers.
[00:26:09] Speaker B: Yeah, I get the feeling it's like the Friday-afternoon version of the AI, when it's got its eye on the clock and wants to go home.
[00:26:19] Speaker A: It's like a cloud pod episode recorded on Friday versus on a normal Tuesday where we're like rushing through it.
Yeah. The stats that they actually posted for this, I was surprised how not great they looked. If you look in the article, they have a bunch of the various benchmarks that exist for these things. On MMLU-Pro, Flash 1.5 and Gemini 1.5 Pro were 67.3 and 75.8, respectively. And if you look at Gemini 2.0, Flash-Lite is less than the Gemini Pro but slightly better than Flash; Gemini 2.0 Flash is 10% better than 1.5 Flash, and the Pro is 4% better. Is that a huge improvement on that benchmark? And the other ones are not overly impressive either. I mean, it's definitely leading in all of them, but it's not a leaps-and-bounds improvement.
[00:27:14] Speaker B: Yeah.
It's so strange trying to benchmark a tool so versatile.
[00:27:19] Speaker A: Right. It's hard to QA a tool that's this versatile. I don't know how they do that.
[00:27:27] Speaker B: No. And I think Google is putting a lot of effort into the safety aspects and not really talking about that a whole lot. But I suspect if they were less focused on that, like DeepSeek, and more focused on making it better, they'd probably do a very good job. I mean, they invented it. They invented transformers.
[00:27:49] Speaker A: They did. They did.
All right, let's move on to Azure. Azure has announced that they are murdering Azure Data Studio. Well, they're going to force it into early retirement. As of February 6, 2025, they are announcing the retirement of Azure Data Studio as they focus on delivering a modern, streamlined SQL development experience. Azure Data Studio will remain supported until February 28, 2026, giving you plenty of time to transition. This decision apparently aligns with their commitment to simplifying SQL development by consolidating efforts on VS Code with the MSSQL extension, a powerful and versatile tool designed for modern developers. And they tell you why: they wanted to focus innovation on VS Code, providing a robust platform, and they wanted to streamline tools, limiting duplication, reducing engineering maintenance overhead, and accelerating future delivery, ensuring developers have access to the latest innovations. That's really a "you benefit, Microsoft" list, but, you know, fine. And the transition to VS Code gives you a modern development environment and a comprehensive MSSQL extension, which lets you execute queries faster with filtering, sorting, and export options for JSON, Excel, and CSV; manage schemas visually with Table Designer, Object Explorer, and support for keys, indexes, and constraints; and connect to your SQL Server, Azure SQL, and SQL databases in Fabric using an improved connection dialog. There's newly streamlined development with scripting of object modifications and a unified SQL experience, and you can optimize performance with the enhanced query results pane and execution plans, as well as integrate all of this into DevOps and CI/CD pipelines with SQL database projects, all in VS Code. That's quite a bit of capability to move out of Azure Data Studio into VS Code with this MSSQL extension, honestly.
I'm glad to see that. And I didn't use Azure Data Studio so I'm not so sad. But it's interesting.
[00:29:34] Speaker B: No, I don't use it either. The push towards VS Code and Copilot, though, is crazy. Whatever Microsoft or anyone else retires, they're just going to say it's going to be VS Code.
[00:29:46] Speaker A: Well, I think it just makes sense, right? The days of charging for IDEs don't really make sense anymore. They get the benefit of the open source community supporting VS Code, writing plugins, doing all the things, whereas Visual Studio is an anchor. It's so big, it's so complicated. If you're trying to get people to do modern .NET development with C#, you don't need all that bloat.
They're still supporting WCF frameworks in Visual Studio which are up to 20 years old at this point. You don't need that in modern .NET web development.
It makes sense to me that they're divorcing themselves from Visual Studio. I wish they'd used a different name, though. I think VS Code vs. Visual Studio confuses people quite often.
It's interesting. One of the things I was playing with was Cursor. I don't know. Have you done much playing with Cursor?
[00:30:33] Speaker B: I haven't. I've used some other free tools that do similar things.
I've been tempted to.
[00:30:41] Speaker A: So I recently got into GitHub Copilot because it's now free for my personal GitHub account, and so I've been playing with it. And then, I think I've talked about this on the show before, I have Claude plugged into a VS Code extension that I use to basically do agentic AI things to my code. It refactored all of my Cloud Pod Terraform code, and I use it for writing test cases and various things when I'm playing with different hobbies.
So the one thing I noticed right away with GitHub Copilot was that it doesn't do much. It does autocompletion, and it gives you a chatbot capability so you can ask questions of your code, sort of, but only the code that's in the current repo; it doesn't do chat across all of your repos. And it can generate a little bit of code, but not really anything complicated.
[00:31:35] Speaker B: Yeah, I've had the same experience with a plugin for PyCharm. The one I've been using lately is called Aider, A-I-D-E-R. You sort of run it on the command line in the background and it watches all the files in a directory. It doesn't link directly into the IDE, but when you're in the IDE you put a comment asking the AI to, say, update this function to do something, and you put a marker symbol at the end of the line. Then the tool running in the background reads that, sends it off to the LLM, comes back, updates the thing, and makes a git commit for you, which is kind of cool. But I really miss the chat back and forth. I've had so many good times with Claude.
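The watch-mode workflow Jonathan describes can be sketched roughly like this. The `--watch-files` flag and the `AI!` comment marker are what recent Aider releases document, but the function and instruction below are purely hypothetical illustrations, so treat the details as approximate:

```python
# In one terminal, from the project root, start Aider in watch mode
# (it monitors the repo's files for instruction comments):
#
#   aider --watch-files
#
# Then, in any source file, leave a comment ending in "AI!" (make this
# change) or "AI?" (just answer a question). Aider spots the marker,
# sends the surrounding context to the LLM, applies the edit to the
# file, and makes a git commit for you.

def slugify(title: str) -> str:
    # lowercase and replace spaces with dashes; handle unicode too AI!
    return title.lower().replace(" ", "-")
```

The comment-as-prompt convention is what lets it stay IDE-agnostic: any editor that can save a file can drive it.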
[00:32:21] Speaker A: Yeah.
[00:32:22] Speaker B: Like chatting about the project. I'm like, don't write any code yet, let's just think about this and come up with a plan, and then we write the code at the end. Or you export the artifacts, you know, the project plan and the guidelines and the tech choices and all the other things, into a project, and then you have the LLM consume those things and then write some code. I feel like these tools are kind of a novelty in a way, and a temporary novelty. They are really just autocomplete.
But yeah, I think there needs to be a middle ground where you can add the extra context and have the conversations as well.
[00:32:56] Speaker A: I agree. So I mean, that was the thing that was interesting to me, because I would assume that if you were to ask people what's the leading AI developer assistant, people are going to say GitHub Copilot. Maybe I'm wrong, because again, I'm not doing a lot with GitHub Copilot other than comparing it against my experimentation with Claude. But digging into Cursor, Cursor is everything I want from GitHub Copilot in basically a customized IDE. It's VS Code under the hood, but they've put their own wrapper on it. You get really powerful chat, you get search across all your repos, you get the agentic capabilities I get with Claude via Cline. I can basically just talk to Cursor and say, I want you to write me an application that does this, this, and this and has these interfaces, and it'll write that code for you, which you can do with Claude as well.
I haven't tried the thinking exercise; maybe I'll try that as my next experiment. But it really just makes me feel like GitHub Copilot is falling far behind. And it's not cheap. I mean, Cursor's not cheap either: it's like $40 for enterprise users, but it's $20 for a hobbyist developer like you and me, which is what you pay for Copilot.
But you know, like, I think GitHub Copilot needs a major update soon, otherwise it's going to fall behind these other tools.
[00:34:11] Speaker B: Yeah, I almost feel like what we really need is... I like the integration directly in the IDE, so I can see code changes and see diffs back and forth when a change is being made. That's all good. But I feel like we need an extra little pane over to the side where you can chat to the AI in a different context, no pun intended. I want a place I can go and have a conversation in words about what we're doing, and then go back to the code and say, okay, now fill this in.
I think it's too much to expect one interface to provide all of the levels of conversation and interaction that you need to build a product end to end.
[00:34:53] Speaker A: Yeah, well, I'll keep an eye on that. We'll keep you posted here at the show if we see anything happen with Copilot. But I have to think that at Build or one of the other weird Microsoft conferences they're going to have to announce some major updates to it, for the dollars people are spending. Because even at the day job we're talking about how we should probably do a pilot of Cursor, because people are saying there's an advantage and we should be looking at it.
[00:35:19] Speaker B: So yeah, it's weird. There are a lot of products out there. Some are free, some are not. They all seem to cover a lot of useful things, but there's no one product that does all the useful things that we need. I use Claude projects, but I still find myself copying and pasting stuff back and forth all the time, and I've become like the assistant: Claude's doing some work, I'm the PM, Claude's my engineer, and I'm copying and pasting text back and forth, and it's slow. So we need AI that has agents that can access the disk, make files, edit files, do git commits, do all those things. I think it's not going to be long before we have a really good tool. I don't think it's going to be VS Code.
[00:36:05] Speaker A: Yep, agreed.
All right, moving off topic here, we got a couple things. You want to do the controversial one or the non controversial one first?
[00:36:14] Speaker B: Oh, we can do that. We'll do the controversial one first.
[00:36:17] Speaker A: Okay, that's right.
So the reason I'm talking about this is that I'm hoping one of our listeners knows of a plugin that I could put into Chrome to fix this.
But Google has updated the Gulf of Mexico to Gulf of America for those of you who are in the US. If you are in Mexico, you'll still see Gulf of Mexico, and if you're in the rest of the world, you'll see Gulf of Mexico and then, in parentheses, Gulf of America. So thank you, Google. You've gone very far in following executive orders from the President, which is great, I guess, if you want political favor. AP News learned that not calling it Gulf of America gets you kicked out of the press room. So, you know, maybe this is what you have to do in this administration, but I am not for this. And so I'd love a plugin for Google Maps, because I can't go to Apple Maps, it's not good, and on Android you're stuck with Google Maps. So I don't know what my alternative is, but I'm not a fan.
[00:37:11] Speaker B: No, I mean, thankfully, I know it's the Gulf of Mexico and I don't have to go look it up. And I think if you search the Gulf of Mexico, it still takes you there.
It's the principle more than anything.
[00:37:22] Speaker A: Yeah, it's more like when I'm scrolling, when I'm looking at a Caribbean destination to potentially go on a cruise to and it pops up, I would like to not be annoyed at that moment.
[00:37:30] Speaker B: Yeah, it's weird. It has a name that everyone knows, because everyone needs a common understanding of what you're talking about, and to just arbitrarily decide that we're going to call something something else is bizarre, especially on such a short timescale.
I don't know.
Yeah, no, I had a play. I went to Google Maps in a browser and went to the page inspector and tried to figure out if I could change elements back or what could be done, but it's buried pretty deep in the data that comes from their data store, I think.
[00:38:06] Speaker A: Yeah, it's definitely somewhere deep in the weeds of the data you're talking about. There's some master system that controls the place object and names it. And maybe this is not that crazy; I assume they already had support for this in other places where there are disputes about the names of things, which is why they were able to roll it out so quickly.
[00:38:27] Speaker B: Yeah, yeah. I tried changing my browser language. It didn't work; it's very much location based. I was like, oh, I'll change it to British English. What do you think it told me then? Still shows Gulf of America.
[00:38:41] Speaker A: All right, well, after that topic: NotebookLM, which is trying to put us out of business here at the Cloud Pod, is of course the research and thinking companion designed to help you make the most of your information. You can upload all your material, summarize it, ask questions of it, chat with it like Jonathan does with his code, or make it give you a podcast-style audio discussion, which we've played here on the show in the past. It's still pretty cool. But if you are a Google One AI Premium plan user, which is the version with higher usage limits and premium features, you now get NotebookLM Plus at no additional cost. So you get more video, more podcast audio, more summarization, and more documents that can be searched and indexed across the system, helping you do all the great things you want to do with your projects in NotebookLM. Those of you already paying now get the benefit.
[00:39:29] Speaker B: Yeah, I just signed up for a Google Workspace to get my custom domain name and all the goodies that come with it. And I signed up for the AI Premium plan so I could play with these things. So maybe we'll do some fun stuff on the podcast in the next few weeks.
[00:39:44] Speaker A: Yeah, definitely. Check it out here at the show.
[00:39:46] Speaker B: Yeah, and it's definitely improved since we first talked about it. I remember the first demos we did were a little stunted and kind of creepy. Unnatural.
[00:39:57] Speaker A: Well, the female co-host was a bit of a sidekick who wasn't in charge.
It would be good to check it out again. Maybe you can come up with some content to create a little podcast from, and we can play a clip on the show again. But yeah, it's come a long way.
[00:40:14] Speaker B: Yeah, definitely. The female voice now is much more involved in the conversation. It is not just the token person sitting there saying, oh, really? Oh, tell me more about that. You know, I think she.
[00:40:29] Speaker A: I mean, that's Ryan's role here on the show. You can't just replace that with AI.
He's happy to defend himself.
All right, well, that's it for another fantastic week here in the Cloud. We'll be back next week with more amazing cloud news for all of you.
[00:40:47] Speaker B: Wonderful. I'll see you later.
And that's all for this week in Cloud. We'd like to thank our sponsor, Archera. Be sure to click the link in our show notes to learn more about their services.
While you're at it, head over to our website, where you can subscribe to our newsletter, join our Slack community, send us your feedback, and ask any questions you might have. Thanks for listening and we'll catch you on the next episode.