[00:00:00] Speaker A: Welcome to the Cloud Pod, where the forecast is always cloudy. We talk weekly about all things AWS, GCP and Azure. We are your hosts, Justin, Jonathan, Ryan.
[00:00:17] Speaker B: And Matthew. Episode 338, recorded for January 13, 2026: T5 Gemma Says AI Will Be Back. Good evening, Jonathan, Ryan and Matt. Yeah, much better. That's much better.
[00:00:28] Speaker C: Much better.
[00:00:28] Speaker A: Definitely.
[00:00:28] Speaker C: Try two was better.
[00:00:30] Speaker B: Yeah, try two was much better. I was more excited about it. It was a rough week for our show titles this week. Yeah, it's just a rough week in news in general.
So it's just the nature of the titles.
Well, let's jump right into AI is How ML Makes Money this week, with Anthropic launching Cowork, a new feature for the macOS Claude Desktop app that extends Claude Code's agentic capabilities to general office work tasks. Users can grant Claude access to specific folders and use plain-language instructions to automate tasks like filling expense reports from receipt photos, writing reports from notes, or reorganizing your files. Cowork lowers the technical barrier compared to Claude Code by making AI-assisted file operations accessible to non-developer knowledge workers, including marketers and office staff. The feature was developed after Anthropic observed users already applying Claude Code to general knowledge work despite its developer-focused positioning, and I've done that many times already. The tool provides similar functionality to what was possible through Model Context Protocol integrations, but offers a more streamlined interface with Claude Code-style usability improvements. Users can now submit new requests for modifications to ongoing tasks without waiting for the initial assignment to be completed.
This is a strategic expansion of Anthropic's agentic AI approach beyond software development into broader productivity workflows, and I hope it can make my PowerPoints soon.
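For comparison, the Model Context Protocol route mentioned above meant wiring a filesystem server into Claude Desktop's JSON config by hand to grant folder access. A sketch of roughly what that looked like; the server package name is the commonly documented one, but the path is illustrative and the details should be treated as assumptions:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/me/Documents/receipts"
      ]
    }
  }
}
```

Cowork's pitch is essentially that a marketer never has to touch a file like this.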
[00:01:43] Speaker D: Yeah, it's funny you say that. This week is the first time I actually tried to use AI to generate a PowerPoint presentation, and it did not go well, but it did generate some cool images.
I love these types of things, because I find myself doing all kinds of things that this would apply to.
[00:02:02] Speaker C: The funny part is this was the first time this week I did that, you know, had it build me a PowerPoint, and it wasn't terrible. We have Copilot at my day job, so it was leveraging Copilot to kind of do it. I was actually still impressed with how it pulled data from all the different sources that Copilot had access to into something. Now, it completely butchered multiple bullet points on a five-slide PowerPoint, including the title slide. But it was much better than what I would have sat down and put together in five minutes.
[00:02:33] Speaker D: I do think, you know, with things like Claude Code and stuff, it's all about the work you put into it and setting it up.
[00:02:38] Speaker B: Right.
[00:02:38] Speaker D: Those workspaces or, you know, those agents and sort of the functions. Like, the more instructions and context you provide, the better it'll be.
So perhaps if I, you know, gave a whole diatribe about how I want my presentations to look, it would have.
[00:02:53] Speaker C: Gone better. If you, like, linked it to your company standard and whatnot. That's what I was talking with one of my coworkers about: linking it to your company standard, saying, here's my company standard, and then kind of piecemealing through what you're looking for. Then apparently it does a pretty good job. Saying, here's a bunch of information, go do something? Yeah, it made weird images appear.
[00:03:16] Speaker A: You mean, like a corporate style guide kind of thing. Like, this is our base template, these are our colors, these are our preferred pictures of people who don't actually work for us to put in all our slides, that kind of thing.
[00:03:28] Speaker C: Generic person picture.
[00:03:29] Speaker A: Exactly. Look at the diversity. Yeah.
[00:03:32] Speaker C: Screen slightly blurred on the background of the computer systems. You know, maybe a sixth finger pop in there. But don't worry about that, it's fine.
[00:03:41] Speaker B: Yeah, I must not have the new version with Cowork, because when I ask it to make me a PowerPoint slide, it still does the HTML-to-PPTX skill, which produces pretty boring PowerPoint slides, I'll tell you. I do see I have to restart to update, but I tried that once already and it didn't work, so I'll have to deal with that later, I think. I installed Claude through Homebrew, which is probably why I'm not getting updates like I think I should.
[00:04:05] Speaker A: Yeah, I kind of see it as a home assistant, but for work, at this point. I can just see it developing into having a decent voice interface, which Anthropic really haven't put much effort into so far.
And I mean, goodbye subsistence, I think.
[00:04:21] Speaker B: Yeah. I mean, the audio mode that Claude has on their mobile device, at least for iOS, is not super great. It's slow.
So.
Yeah, I appreciate that Claude is really good at coding and at certain business tasks that I need done.
But, you know, definitely when you look at ChatGPT or you look at Gemini, they do a better job at some of this other text, voice and image generation stuff; Claude just doesn't do a good job with it, and that's not their priority. And I get that, and so I'm okay with that approach. But I know every time, if I want to do something with images or I want to do something with audio, Claude is not the robot I'm looking for at the moment.
[00:05:03] Speaker A: Yeah, it's still pretty good at looking at images, though. It's just not generating images. Like, I've taken pictures of my trees in the backyard and, like, hey, what the hell is wrong with this apricot tree? It was like, gotta hire someone to come dig it out. I was like, oh, damn.
So it's really good at assessing images. But, I mean, perhaps that's a more useful case for the kind of business that Anthropic is going after.
[00:05:27] Speaker C: So I was doing bedtime with my daughter earlier, and every now and then I'll be like, okay, I don't want to read or she doesn't want to read, so we'll just have Claude tell us a story. So as we were doing it today, reading the next stories, I was telling Claude on my phone to generate the story. So first it generates me just a markdown file.
[00:05:50] Speaker B: I was like, no, no, no.
[00:05:51] Speaker C: Build me, like, a book. So it built me a book with no images. I had it build the images, and I'll show them to you guys here. But the quality of them looks about like my drawing abilities. Justin, I'll send them to you to add to the show notes. But it does something, at least.
[00:06:07] Speaker A: It's just pretty useless.
[00:06:10] Speaker C: Yeah, like it's better than what my ability would be. I mean, it's pretty much just fish that are ovals and triangles.
That's it.
[00:06:22] Speaker A: Yeah. I think there's going to be real value in a tool which can use the best models for the best things and be consumer facing. I'd love to have an app on my phone which will find the best model for whatever it is I'm trying to do at any point in time or even combine their powers.
Like Captain Planet. With your powers combined.
[00:06:43] Speaker B: All right, I've lost my show notes because I opened too many tabs of Claude.
[00:06:50] Speaker D: This is the problem.
[00:06:51] Speaker B: Chrome loses tabs all the time. All right, here we go. Google has released Veo 3.1 updates in the Gemini API and Google AI Studio, adding enhanced ingredients-to-video capabilities that maintain character identity and background consistency across generated videos. The model now supports native 9:16 vertical format generation optimized for mobile-first applications, eliminating the need to crop from landscape orientations. The updated model delivers professional-grade output with new 4K resolution support and improved 1080p quality using state-of-the-art enhancement techniques. All generated videos include SynthID digital watermarking for content provenance tracking. These capabilities are available today through the Gemini API and Vertex AI for enterprise customers. And don't make mistakes like I do and go try this and then get a $35 bill for three-second videos, which I did the first time I tried Veo out. So do be cautious with this one. Don't just go all out unless you're using the company account.
But yeah, having used this in the past, I do know that getting consistency across multiple frames that you need to stitch together was a bit difficult. So I'm kind of glad to see them starting to do some anchoring with images and different things to allow you to be more consistent across the videos that you're generating with Veo.
[00:07:58] Speaker D: Yeah, it's still cost prohibitive for me to try to use. I feel like I have ideas that would be neat but nothing that's like worthwhile like an investment.
[00:08:06] Speaker B: Right.
[00:08:07] Speaker D: So it's all just like silly ideas that I have like what could, what could I. What 30 second video could I use to torture my children kind of things.
Haven't tried it out yet.
[00:08:17] Speaker A: Yeah, I think it's asking for somebody to come up with a new, a new show idea or a new cartoon idea or something and create the characters and their profiles and then run series on YouTube and get thousands of subscribers and cash in on, on the ad money.
[00:08:29] Speaker D: I would like, you know, to buy into that, like, YouTube influencer money. Like, that sounds like the way to go.
[00:08:38] Speaker A: Huh?
[00:08:39] Speaker B: Well, we can start recording video and you know, we could launch, you know, be more serious about our YouTube channel this year.
[00:08:44] Speaker D: Well, but I would like someone to pay us. Like, I don't think they'd pay.
[00:08:50] Speaker B: Yeah.
[00:08:54] Speaker A: You need a thousand subscribers before you get a cent anymore. They changed the rules. It's very. It sucks, honestly.
[00:09:03] Speaker B: Only because you at one time had a viral video that you got paid for. I did.
[00:09:07] Speaker D: It does seem like quite the hustle for the people that are making it work.
[00:09:17] Speaker A: Yeah. I think even professional YouTubers are struggling now. So many YouTubers are doing collaborations with each other to promote each other's channels, and all the channels that I watch regularly are deliberately asking people and telling people about the algorithm changes and how they'd really like you to like and subscribe even though you're a subscriber. Like, I like it. I wouldn't watch it if I didn't like it, but I don't always think to click the like button, because I assume that watching it is good enough. But apparently it's not, so it's making it very hard for content producers to make a living anymore. Unless you're Mr. Beast, of course.
[00:09:50] Speaker B: That's that whole thing is crazy.
[00:09:52] Speaker D: Well, he's built an empire. Like, he's transcended just YouTube, right? I was watching. There's a Roku channel, the Mr. Beast Roku channel, that just comes for free on my Roku. Like, it's nuts.
[00:10:10] Speaker B: Snowflake is acquiring Observe to integrate AI-powered observability directly into its data platform, allowing customers to analyze telemetry data like logs, metrics and traces alongside their business data. The consolidation eliminates the need for separate observability tools and reduces data movement between systems. And the acquisition addresses the growing challenge of managing observability data at scale, which has become increasingly expensive and complex as organizations generate massive volumes of telemetry information.
Observe's approach stores data in a structured format that enables more efficient querying and analysis compared to traditional observability platforms. By bringing observability into Snowflake's platform, customers can correlate operational metrics with business outcomes using the same SQL-based tools they already use for analytics. The deal positions Snowflake to compete more directly with observability vendors like Datadog, Splunk and New Relic by offering native capabilities rather than requiring third-party integrations, which comes at an opportune time, since Snowflake had, like, a major outage two weeks ago.
So hopefully they can prevent that in the future.
[00:11:07] Speaker D: I don't know how to feel about this. I feel like Snowflake is a part of an application, but it's not the entirety of an application. Like I definitely see a use for this for you know, like data warehousing and visualizing this, but you know, like I don't think it replaces your traditional observability tools because you have too many data sources that are outside of Snowflake.
[00:11:30] Speaker A: I mean, I think that puts pressure to reduce prices by any of those other three that we mentioned.
[00:11:34] Speaker D: That would be welcome. Yeah, for sure.
[00:11:36] Speaker A: Yeah. I mean, I'm not even sure. I mean are they making an absolute killing on the observability services or is it really just a reflection of how expensive it is to operate those things at scale?
[00:11:47] Speaker D: I think it's the latter. With our history of logging, I'm like, oh yeah, no.
[00:11:53] Speaker A: Yeah, don't, don't, don't.
[00:11:55] Speaker D: I wasn't gonna say it by name. I know it triggers us. Can't afford therapy.
[00:11:59] Speaker A: I can't afford therapy anymore.
Yeah, it's kind of weird to diversify like that. I mean, I guess, you know, our use cases for Snowflake are not everybody's use cases, and maybe people do a lot more internal business analytics, and having metrics alongside those makes sense. Whereas for perhaps a backend use case, it doesn't seem to make.
[00:12:21] Speaker D: As much or it really is, you know, data warehousing but for your observability data.
[00:12:25] Speaker B: Right.
[00:12:25] Speaker D: It's not cold storage, but, like, a cheaper way, because, you know, paying for retention on something like Datadog or New Relic is expensive. And so I wonder if you could get a cheaper option this way but still sort of have, you know, maybe not as granular metrics, but you can sort of tone it down.
[00:12:44] Speaker B: It'd be cool.
[00:12:45] Speaker C: I don't know.
[00:12:46] Speaker B: Yeah.
In other acquisitions, Flexera is acquiring ProsperOps and Chaos Genius to expand its FinOps solutions with agentic and AI-enabled cost optimizations. ProsperOps brings automated commitment management for AWS, Azure and Google Cloud, with over $6 billion in annual cloud usage under management, while Chaos Genius focuses specifically on Snowflake and Databricks optimization, with reported cost reductions of up to 30%. This shifts Flexera's FinOps approach from passive recommendations to active autonomous execution through agentic AI. This means the platform can automatically purchase and manage cloud commitments and optimize data workloads without requiring manual human intervention. ProsperOps will continue operating as a separate brand while integrating with Flexera's existing FinOps capabilities. And the company was growing at over 90% and has generated more than $3 billion in lifetime savings for customers, suggesting strong market demand for automated rate optimization solutions.
I mean, I don't know if I want AI buying CUDs and savings plans.
I don't know that Amazon will accept "well, my AI did it for me" when I didn't actually need it. So, you know, be cautious of this. But I do appreciate they're giving you some automation capabilities.
This is an area that I've sort of felt is lacking in a lot of the FinOps tooling over the years. But, you know, again, I don't want to give it all the keys. I still want some controls.
[00:14:02] Speaker C: But I feel like a lot of the other tools always got you to the point of pressing the button to buy it too, you know. So, like, CloudHealth back in the day and everything else pretty much gave you a link there that said push this link to go purchase this savings plan or the reservation. So this is just kind of that next step. I mean, it definitely needs some pretty strong guardrails, you know, of what your business objective is. Like, don't go over 90% savings plan coverage, or, you know, look at the secondary market for short term if you see a random burst for a few months. But it's not a terrible idea to at least get that first 50%, you know: we have no savings plans, let's get something in the door and get us going. Until you actually look at your business and go, okay, we're expecting growth, or we're not expecting growth, and we know we're going to lose customers A, B and C, which will cause us to go down. At that point, it takes, I'm not gonna say no amount of AI, but a large amount of AI with, you know, the full business knowledge that we'd have to have in order to make those types of decisions.
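To make the guardrail idea concrete: a pre-purchase check like the "don't go over 90% coverage" rule described above could be as simple as the sketch below. All names, signatures, and thresholds are hypothetical; this is not any FinOps vendor's actual API.

```python
def approve_commitment(baseline_hourly_spend: float,
                       committed_hourly_spend: float,
                       proposed_hourly_commit: float,
                       max_coverage: float = 0.9) -> bool:
    """Approve a new savings-plan commitment only if total committed spend
    stays at or under max_coverage of the steady-state hourly baseline."""
    total_committed = committed_hourly_spend + proposed_hourly_commit
    return total_committed <= max_coverage * baseline_hourly_spend

# A $100/hr baseline with $80/hr already committed leaves room
# for at most $10/hr more under a 90% coverage cap.
print(approve_commitment(100.0, 80.0, 10.0))  # True
print(approve_commitment(100.0, 80.0, 15.0))  # False
```

An agentic tool would presumably evaluate a rule like this before every automated purchase, with the business-knowledge inputs (expected growth, churning customers) feeding the baseline number.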
I might have just done a massive.
[00:15:08] Speaker D: Savings plan to reservation thing like last year.
[00:15:11] Speaker C: This might be a little bit too close.
[00:15:14] Speaker B: I approved one earlier today, so I'm very familiar.
[00:15:16] Speaker D: I can see this being powerful with, you know, giving it a lot of context. But like it feels like the act of giving it a lot of context would be very difficult in this use case, right? It is. It's when you have, you know, a finops professional, that's really what their value prop is, right? They're going and doing the research. They're the ones that talk to the teams.
And so I don't know how to sort of put that all into an AI agent in a way that gets you to a point where you're getting more than, you know, those RI and reservation calculators that do it on demand. There are probably ways to do it. I just can't think of any, because it's so difficult to figure out. I hate making these purchases.
[00:15:58] Speaker A: I mean, it doesn't just have to be about purchasing savings plans. It could be many other things. It could be all the things that we built before with Thor's Hammer: going through and switching storage types to the right storage types, or downsizing things that are scaled too high. You could probably give it some guardrails, but it could probably do a lot outside of just spending millions of dollars on purchases to help save cost.
[00:16:23] Speaker B: That's fair. I mean, I think one of the, you know, problems I have with savings plans these days is you're making huge commitments for multiple years, and most of the cloud providers don't provide any type of approval workflow or capability. You know, so it's this poor FinOps guy who has to go in and click this button, and now he's just put the company on the hook for millions of dollars.
And, like, there were some early integrations like Coupa, but then they haven't seemed to invest in that feature. I don't think they actually still even provide it, or if they do, it's pretty dated and only works with Amazon, if I recall.
I'm just kind of surprised that this isn't more of a governance problem for a lot of organizations. But I know we built a manual process around it to at least have an approval trail, so he feels better about pushing that button. But yeah, it's something I'm kind of surprised hasn't been a more common ask and use case.
[00:17:12] Speaker A: Yeah, maybe that's why we always get on a Teams call or a Zoom call or something. It's like launching a nuclear weapon. You turn the keys at the same time just to make sure we don't spend the money on the wrong thing in the wrong zone.
[00:17:22] Speaker C: I mean, I remember the first time I had to go spend. We were a reseller; I had to go spend like $1.5 million. I was like, please, for the love of God, make sure I don't do anything stupid like the wrong region or anything else. Because it would have been, I assume, my company on the line at that point. So I was sitting there going, please don't break this. Please don't mess this up. Please don't mess this up.
[00:17:44] Speaker B: AWS this week is sleeping.
[00:17:46] Speaker D: Yeah.
[00:17:48] Speaker A: Moment of silence, I guess.
[00:17:49] Speaker B: Yeah, I mean they, they did announce some things. I just didn't feel that any of our listeners.
[00:17:53] Speaker D: I know, I didn't care about any of it.
[00:17:55] Speaker B: So, yeah, when you guys were all challenging me earlier, I was like, you mean you really wanted to talk about larger managed database bundles for Amazon Lightsail this week? And you're like, no. And I was like, exactly, exactly. Or Amazon MQ now supports certificate-based authentication (mutual TLS) for RabbitMQ brokers. Yeah, no, don't care.
[00:18:14] Speaker A: Oh man, we totally missed the show title Sleeping in Seattle.
[00:18:18] Speaker B: Oh yes.
[00:18:21] Speaker C: We need two show titles. One we come up with before.
[00:18:24] Speaker A: And one we come up with at the end.
[00:18:26] Speaker C: Yeah, well, one at the end. You know, like, the read-out needs to be: and our final show title is this.
[00:18:34] Speaker B: We'll work on that.
All right, well, since Amazon's sleeping, let's move on to GCP. Google is adding a new pre-configured monitoring dashboard to Gemini CLI that provides immediate visibility into usage metrics like monthly active users, token consumption, and code changes without requiring custom query writing. The dashboard integrates with Google Cloud Monitoring and uses OpenTelemetry for standardized data collection, allowing teams to track CLI adoption and performance across their organization.
The implementation uses direct GCP exporters that bypass intermediate OTLP collector configuration, simplifying setup to three steps: setting your project ID, authenticating with the proper IAM role, and updating the settings JSON file. This reduces infrastructure complexity compared to traditional OpenTelemetry deployments that require separate collector services.
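If the setup works as described, the settings JSON step is probably a small fragment along these lines. The key names below reflect the Gemini CLI telemetry documentation as best I recall it, with the project ID supplied via an environment variable; treat the specifics as assumptions and verify against the current docs:

```json
{
  "telemetry": {
    "enabled": true,
    "target": "gcp"
  }
}
```

With that in place plus the project ID exported in your environment and the Monitoring and Logging writer roles granted to the authenticated account, metrics should flow straight to Cloud Monitoring with no collector to run.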
Organizations can analyze raw OpenTelemetry logs and metrics to answer specific questions, like identifying power users by token consumption, tracking budget allocation by command type, and monitoring tool reliability through status codes. The data follows GenAI OpenTelemetry conventions, ensuring compatibility with other observability backends like Prometheus, Jaeger and Datadog if teams want to switch platforms. Maybe they'll be supporting that Snowflake thing we talked about earlier.
[00:19:39] Speaker D: That'd be good.
[00:19:39] Speaker B: The feature targets development teams using Gemini CLI who need to understand tool adoption patterns and justify their AI tooling investments through concrete usage metrics. So Claude has this as well, and I have played with it a little bit. They have it built into Claude Code with /stats, or if you're on the Claude Max plans, you can use their usage page to get similar data. And it's pretty cool when you can see, you know, your input and output token data and how much you're using it and how many tokens you're consuming per hour. And it's definitely a little bit of a challenge. I'm like, could I get that number higher?
[00:20:08] Speaker A: I think.
[00:20:09] Speaker B: Yeah. Which is really not the right answer, because that is money being burned at the same time. But it is good to have this type of data and tracking capability to provide justification for ROI. And also, it's just nerdy, and I love nerdy things.
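The ROI justification mentioned here is ultimately simple arithmetic over those token counts. A minimal sketch, using made-up per-million-token rates rather than any vendor's real pricing:

```python
def usage_cost(input_tokens: int, output_tokens: int,
               in_rate_per_m: float, out_rate_per_m: float) -> float:
    """Dollar cost for a period, given token counts and per-million-token rates."""
    return ((input_tokens / 1_000_000) * in_rate_per_m
            + (output_tokens / 1_000_000) * out_rate_per_m)

# 2M input + 500k output tokens at hypothetical $3 / $15 per million tokens.
print(usage_cost(2_000_000, 500_000, 3.0, 15.0))  # 13.5
```

Multiply that out across a team's monthly usage export and you have the number to put next to the hours the tooling saved.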
[00:20:22] Speaker D: As long as there's no metric for how stupid a question is. Because that. That I don't want.
[00:20:26] Speaker B: Nah, you can.
[00:20:28] Speaker D: You can identify me as a power user from my token consumption.
[00:20:31] Speaker B: Sure, sure.
[00:20:31] Speaker D: But that's only because I'm asking really dumb questions over and over.
[00:20:36] Speaker A: Yeah, there'll be some irony in Gemini telling you you should have Googled it, though.
[00:20:42] Speaker D: This would be.
[00:20:44] Speaker C: It just redirects you every time you type something in to Let Me Google That For You.
[00:20:48] Speaker D: That would be pretty funny.
[00:20:49] Speaker B: What can go wrong, you know?
[00:20:51] Speaker D: Put that in my instructions. Right. There's a 2% chance you'll just tell me to Google it.
[00:20:58] Speaker B: Continuing our acquisition spree, Alphabet has announced a definitive agreement to acquire Intersect, a company that specializes in data center and energy infrastructure solutions. The acquisition aims to accelerate the deployment of data center capacity and energy generation infrastructure in the United States.
If you can't get the capacity from the vendor, just buy them and then force them to do it.
[00:21:18] Speaker A: That's a good move around the customers.
[00:21:20] Speaker B: Yeah, exactly.
Who else is your customer? Oracle. Screw those guys. We're not building their data centers anymore.
[00:21:26] Speaker D: Yeah, it's pretty crazy, right? And I wonder, you know, it does feel like, with the capacity shortage, the purchases make it faster, but in my mind I'm like, that's not going to work.
Maybe.
[00:21:41] Speaker C: It's interesting because I know both Azure and AWS have their own internal data center teams that they use to build the physical data centers and everything else.
So it's interesting that Google doesn't. Or, you know, I guess they needed extra capacity.
[00:21:57] Speaker D: I think you. This would be alongside that.
[00:22:01] Speaker B: Yeah. I was thinking like, you typically have the, you have the design team, you have the project management, you have vendor manager, contract management internally. You know, you have the specifications and architecture what you want, but then you typically outsource the grunt work of like, pouring concrete and putting all the power in and running all the cable. And like, this is, this is all the way end to end with this company now, which I don't, I wouldn't have expected they'd had before. But this is probably their. Based on the photos on their website, I assume that they built a lot of Google's data centers over the years and so it's just a matter of this is now a way to streamline and drive efficiency versus having them bid on different things, etc.
[00:22:36] Speaker A: Yeah, maybe they wanted their vendor to support other things that they weren't doing. We want to use you as a vendor, but we want geothermal, we want battery storage locally, we want all these other things that you don't offer as a service, but we also don't want to find a different vendor.
[00:22:53] Speaker B: They do have a podcast if you're interested in listening to data center people talk.
[00:22:57] Speaker D: I mean, I used to be one, and I probably would love that podcast. So I might look.
[00:23:06] Speaker B: Of course, after you listen to us and leave a positive review on Apple Podcasts or Android, whatever Android uses.
All right. Google has announced the Data Tables feature for NotebookLM, a feature that automatically synthesizes information from multiple sources into structured tables that can be exported directly to Google Sheets. The feature is available today for Pro and Ultra users, with rollout to all users planned for the coming weeks. It addresses a common workflow challenge where valuable information is scattered across multiple documents, requiring manual compilation; Data Tables automates this process by extracting and organizing key facts into clean, structured formats without manual data entry. Use cases span professional and personal applications, including converting meeting transcripts into action-item tables with owners and priorities, synthesizing research data like clinical trial outcomes across multiple papers, creating competitor analysis tables with pricing and strategy comparisons, and building study guides organized by relevant categories. The feature represents Google's continued integration of AI capabilities into productivity tools, positioning NotebookLM as a research and synthesis tool rather than just a note-taking application. I love creating spreadsheets: my budgets, all my tracking of tasks I'm doing, vacation planning. It all lives in spreadsheets. And you're taking that away from me, Google. How dare you. Although I think Cowork might have just done it earlier too.
So AI is coming for my passion of creating spreadsheets. And Jonathan is also a spreadsheet junkie like I am, even more so than me.
[00:24:29] Speaker A: Yeah, I don't mind that. This is kind of cool, though. I mean, I think this is, like, the first of many deeply AI- and ML-integrated offerings now, because on the back end this is probably: let's just pull the data into Bigtable, and then use AI to aggregate it and build views and export it out to Google Sheets or something. It's kind of cool, though.
[00:24:50] Speaker D: I'm interested to try this out, just because I have been using NotebookLM more, you know. Like, I have a couple notebooks just for setting up a corpus of data that I can ask questions of, because I have a terrible memory. So, like, I have one where I'll dump a whole bunch of, you know, new security announcements in terms of technologies, and I'll just export it and put it into this notebook, and then I can query it with questions.
I've seen people do it for, like, podcasts to catch them up. Like, have it generate a podcast to catch you up on work for the week by reading your email to you, kind of thing, and a couple things like that. And so I think actually having it be able to put things in, you know, tables and probably graphs and that kind of stuff would make it more functional for me. So that's cool.
[00:25:41] Speaker A: As a security guy now you can take all those Excel sheets you've got with you know, vulnerability information in and aggregate it all into a list of things that actually matter.
[00:25:49] Speaker B: No, come on, they won't do that.
[00:25:51] Speaker D: Very anti infosec man. Yeah, that's not our job.
[00:25:53] Speaker C: No, it's more fun to sit in.
[00:25:55] Speaker A: A meeting. With Cowork, actually send it out to the actual relevant teams, you know.
[00:26:00] Speaker D: Well, you think I'm going to put.
[00:26:01] Speaker B: The research into the relevant.
[00:26:03] Speaker C: I have to sit in a meeting and go line by line to see the vulnerability.
[00:26:07] Speaker D: It actually is a dream of mine, too, because it is such a terrible thing, and Jonathan knows, because he's built parsing for vulnerability data before, and it's just so difficult. And so having something that can actually intelligently figure out who's responsible and, you know, make a contextual sort of decision on actual risk versus just using the CVSS score would be pretty awesome.
[00:26:36] Speaker A: It's a 10. It's a 10, but nobody uses it. Still a 10. You don't have it installed. It's still a 10.
[00:26:42] Speaker D: The scanner's picking it up.
[00:26:44] Speaker C: My favorite was We Cloud first.
My favorite was We Cloud first came out a bunch of the tools still identified IP address as the source of truth for the device and this crazy concept called auto scaling set up.
So trying to explain to someone that this Linux server, this Linux thing didn't have the Windows vulnerability on it and they didn't understand why IP address wasn't the unique identifier was a fun multi hour conversation.
[00:27:11] Speaker D: Still does that to this day.
I had that problem just weeks ago.
And they've tried to fix it by changing their data schema. I can see the attempts, but the core engine underneath is still IP based, and so every once in a while you get these things where the QID has a bunch of different things attached, and you're like, wait, how does it have both Windows and Linux vulnerabilities? That doesn't make any sense. I'm like, oh no.
[00:27:42] Speaker B: Yeah, because you, you've made bad choices.
[00:27:44] Speaker A: That's why everyone, everyone should start using IPv6. Then we can just use an IP address once and then discard it and never touch it again, and then we'll never.
[00:27:51] Speaker B: Know where the vulnerabilities are. You'll never find them.
[00:27:53] Speaker D: Now.
[00:27:58] Speaker B: Google's releasing T5 Gemma 2, a new generation of encoder-decoder models based on Gemma 3, available now as pretrained checkpoints at three sizes: 270 million, 1 billion and 4 billion parameters. These models use tied word embeddings and merged decoder attention to reduce parameter count while maintaining capabilities, making them suitable for on-device applications and rapid experimentation. T5 Gemma 2 adds multimodal vision capabilities using an efficient vision encoder for visual question answering and reasoning tasks.
It extends context windows to 128,000 tokens using Gemma 3's alternating local and global attention mechanism and supports over 140 languages out of the box. These represent the first multimodal and long-context encoder-decoder models in the Gemma family. In addition, they've also released FunctionGemma, a specialized 270 million parameter model based on Gemma 3 that enables native function calling for edge devices. So you can run this directly on the edge, or on Jonathan's fancy video card that can run models.
Yeah, yeah, yeah.
[00:28:56] Speaker D: Oh, nice.
[00:28:57] Speaker A: Looking for another one now?
[00:29:00] Speaker D: Yeah, yeah, yeah.
[00:29:01] Speaker B: Once you have one, you need more. That's how it works. Yeah. Next thing you know, you're spending thousands of dollars on electricity and you're like, this is gonna bankrupt me. And then you're having the AI trying to hack the power company. It all ends badly.
[00:29:13] Speaker A: I'm actually looking forward to playing with the T5 Gemma model, because the encoder part of it is what's going to make it really special. Transformers have always had these two parts, the encoder and the decoder. Most LLMs only use a decoder. And what that means is that as the attention is calculated for each token in the context window, it only ever attends to previous tokens in a message. So if you have a word, that word can only ever be related to something that you've already said in the conversation. But people aren't like that. People go back and forth and they refer back to things they said, or people just suck at communication most of the time. And so what the encoder model does is it looks at the entire message holistically.
It doesn't only look at the last word. By the time it gets the last word, it looks at everything and encodes the meaning of the entire text. And then from there it passes it to the decoder and the decoder starts generating text based on the entire knowledge.
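Jonathan's point about the two attention patterns can be shown with toy masks. A decoder-only model uses a causal mask, so each token only sees earlier tokens, while an encoder attends bidirectionally over the whole message. A minimal sketch in NumPy:

```python
import numpy as np

n = 4  # a toy sequence of 4 tokens

# Decoder-only (causal) mask: token i may only attend to tokens 0..i.
causal_mask = np.tril(np.ones((n, n), dtype=bool))

# Encoder (bidirectional) mask: every token attends to every token,
# so the meaning of the whole message is available at once.
encoder_mask = np.ones((n, n), dtype=bool)

# The first token in a causal model never "sees" a clarification
# that arrives at the end of the message:
print(causal_mask[0])   # [ True False False False]
print(encoder_mask[0])  # [ True  True  True  True]
```

That single difference is why the "oh, by the way, I actually meant this" case is easy for an encoder and painful for a decoder-only model.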
[00:30:15] Speaker D: That is cool actually. Like, I hadn't really thought of that. Cuz I always, you know, I laugh at like the vector search and the, you know, the probability that this, this comes after that and that's how it somehow turns that into like a usable answer. But it is sort of funny when it gets it wrong. That's a, a very, you're like, oh, you focused on the wrong thing.
[00:30:32] Speaker B: Got it.
[00:30:34] Speaker A: Yeah. Especially people. People will just say something and then go, oh, by the way, I actually meant this.
And for an LLM that's just disastrous, because it's put all this effort into doing something and then it gets your last token. Like, oh crap, let's start again.
But encoders. I think the original use is probably, you know, language translation. You take a foreign language string, you encode the semantic meaning of it, and then use a decoder to generate the other language's interpretation of it. Anyway, excited to play with it. And it's tiny, and it supports vision and multiple languages. So it's kind of cool.
[00:31:16] Speaker D: I've decided to make an AI bot that can yell at my children and so the vision is going to be particularly useful for me for exactly that.
[00:31:29] Speaker A: Yeah.
[00:31:30] Speaker B: Well, a few weeks ago I made some predictions, and one of them was about purchasing on AI, and apparently Google heard me, as they've now launched Universal Commerce Protocol, or UCP, an open standard for agentic commerce co-developed with Shopify, Etsy, Wayfair, Target and Walmart. UCP enables AI agents to interact across the entire shopping journey, from discovery to post-purchase support, working alongside existing protocols like Agent2Agent, AP2 and MCP. The protocol is endorsed by over 20 companies including Adyen, American Express, Mastercard, Stripe and Visa. New agentic checkout features go live in AI Mode in Search and the Gemini app, allowing shoppers to purchase from eligible US retailers directly within Google's AI surfaces. The integration uses Google Pay and PayPal for payments, with retailers maintaining seller-of-record status and the ability to customize the implementation. Global expansion and additional capabilities like loyalty rewards and product discovery are planned for the coming months.
The Business Agent will launch later this week as a branded AI assistant that appears directly in search results for retailers like Lowe's, Michaels, Poshmark and Reebok. US retailers can activate and customize this agent through Merchant Center, with future capabilities including training on retailer data, customer insights, product offers and direct agentic checkout within the chat experience.
Google's also introducing a Direct Offers pilot, available in AI Mode, allowing advertisers to present exclusive discounts and deals to shoppers during AI-powered searches. The system uses AI to determine when offers are relevant to display, initially focusing on discounts with plans to expand to bundles and free shipping. Early partners for this one include Petco, e.l.f. Cosmetics, Samsonite, Rugs USA and Shopify merchants, with Merchant Center adding dozens of new data attributes designed for conversational commerce discovery across AI Mode, Gemini and Business Agent. So yeah, this is the beginning of what I was talking about.
[00:33:09] Speaker D: I mean, I think it's important to standardize, right? Because in a web transaction where you're doing shopping, there are so many handoffs of different things, and as more and more AI and agent-based or agent-assisted transactions happen, being able to talk a common language is going to be super important.
Especially with anything about money.
[00:33:32] Speaker A: Kind of reminds me of Alexa years ago. You could order all the things from Amazon through Alexa. It was a terrible, terrible interface, and you'd get crap arriving at the door, like, who ordered this? And some kid was chatting to the thing. Yeah, now I want to buy some.
[00:33:46] Speaker B: Your son was, that's who ordered it.
[00:33:48] Speaker A: Or you know, whatever it is.
I don't know. It's bizarre. I mean, how many years ago was that? Like five years ago? I remember doing the same thing.
[00:33:58] Speaker C: Yeah.
[00:33:58] Speaker A: Was it that long?
[00:33:59] Speaker C: This was like 10 years ago. I was gonna say it was when.
[00:34:02] Speaker A: Like the echo dots.
[00:34:03] Speaker C: Yeah, when Capital One gave everyone the free echo dots.
[00:34:05] Speaker A: Yeah, that's, that's right. Yeah.
[00:34:07] Speaker C: That was like my, my, my first Alexa.
[00:34:10] Speaker A: I think it's kind of weird, though. It just kind of disappeared and they didn't do anything with it. Amazon could have expanded on that. And I feel like there should be some kind of retrospective in a couple of years looking back at the history of all this stuff. Like, Google published the transformer paper years ago and sat on it and did virtually nothing with it, as far as we know, until OpenAI came along and actually started building models and making money. Like, what were they doing? And then Amazon had all these chances. They had the retail pipeline, the transportation pipeline. They had Echo Dots in people's homes before Google Home was around.
[00:34:45] Speaker D: I do think Amazon tried to do things with it. I do think they tried and failed at several things, because, you know.
[00:34:50] Speaker C: Having they tried, it just wasn't there.
[00:34:52] Speaker D: Like, if you had one of the Amazon Shows, it would suggest purchases based off of your past purchases, you know, because it's tied into your Amazon ecosystem. And whatever they've tried, I don't think it has worked out, because the reality is I just don't know if that's how people want to purchase things. Right? Like, I do maintain a shopping list on the thing, but it's not going to be like, oh yeah, just get all those things on my list. That's not a thing.
[00:35:25] Speaker A: But now, now with this though I absolutely would, though I'd absolutely say, you know, I think so.
[00:35:30] Speaker D: I think it's just a way for that to communicate.
I think it's the same problem.
[00:35:34] Speaker C: It's just, I mean, unless you set up some sort of automation with this where you say, hey, this $50 item, automatically purchase it if it goes below, you know, $35, and set up a cron that runs daily that does all this and checks and then automatically purchases it through the A2A or, sorry, the UCP. And "UCP protocol" would be redundant, but whatever.
[00:35:59] Speaker A: I mean, I'd just like to be able to say, hey appliance, add this to my shopping list or whatever. And if it's not an important thing, it can just sit there until there's enough things to get free shipping, or there's a discount like I mentioned, and then be smart about it. Shop around for the best prices, submit the orders, and the stuff all arrives.
I mean, sounds nice. Wait, so is this how Google are going to make money in the future? Because obviously serving ads through AI is both controversial and a very lame customer experience. Are they going to start skimming half a percent of sales for sales they direct to these retailers through their AI interface? I guess that could be the new math.
[00:36:39] Speaker D: I mean, I think it'll be a long time before that, because they're still serving, you know, ad content with their Gemini-powered search results. Now, you definitely don't want it to do like a sponsored result and have.
[00:36:50] Speaker B: It read like the benefits of this.
[00:36:52] Speaker D: Choice that it's making. But I do think, they do say, they're still selling, or showing, the sponsored results underneath the AI response. I think that'll be around for a while.
But you know, I think they're looking at it. I think they're.
[00:37:07] Speaker A: Hell yeah.
[00:37:07] Speaker D: Yeah, you know, they gotta be smart because the world is changing.
[00:37:14] Speaker A: At least they're not inline ads that make you sit and watch the ad until the content loads. How do I perform CPR? You know, we'll tell you how to perform CPR after this 30-second ad. Like, great, thanks.
[00:37:30] Speaker C: Well, you'll have to buy the Claude Pro ad free version to not get the ads that pop up.
[00:37:36] Speaker B: Yeah, well, moving on to Azure. They are announcing in public preview the launch of the Dynamic Threat Detection Agent, an AI-powered backend service that runs continuously within Defender to identify hidden threats across Defender and Sentinel environments. The agent operates autonomously with no setup required, automatically generating alerts with natural language explanations, MITRE technique mappings and remediation steps directly into existing XDR workflows. The agent achieves over 85% precision across thousands of alerts and 28 threat types by combining adaptive GenAI detection with hyperscale threat intelligence from Titan and UEBA behavioral analytics. It runs a five-step investigation loop at machine scale, starting from high-priority incidents, building unified activity timelines, testing hypotheses with automated QA, and closing detection gaps with explainable alerts that include transparent reasoning traces. The public preview is free for Security Copilot customers and enabled by default for eligible organizations, with general availability planned for late 2026, when it transitions to Security Copilot's SKU-based consumption model. Starting July 2026, the agent will be included with Microsoft 365 E5 licenses that have the Security Copilot entitlement, and customers can disable it or monitor usage through detailed consumption reporting at any time. The service respects data residency by running region-local, and it integrates deeply with the Microsoft security ecosystem, using Sentinel to correlate third-party and native telemetry while surfacing Copilot-sourced detections in Defender. And now that you've been in security long enough, Ryan, could you explain to me what a MITRE technique is?
[00:39:06] Speaker D: Not simply in a way that would be entertaining at all. Like just.
[00:39:10] Speaker A: Yeah, it's one of those sores Hudson angle, isn't it?
[00:39:15] Speaker D: It's. Yeah. I mean, these things are great, right? Because we try to do this, but the amount of noise that you have to sort through as an analyst is crazy. And so GenAI is really going to be the only way to do these, and all they're really talking about is putting these things in formats that security engineers already know, so that it doesn't seem like a mystery. Or at least it makes sense.
You know, it's one of those things where they're bundling it. It's not going to be cheap, but the amount of data that it has to process is a lot, so I'm sure it's expensive to run. But the fact that it's just sort of running in the background automatically is exactly what you want.
Otherwise you're having to do tuning, and then it does become, you know, like, oh, you're doing automation, but only on this one SIEM search. And if you've got the playbook for running just that thing, it's great. But this is more dynamic. It'll take real behavior and hopefully do exactly what you want with it, which is sort through all the noise.
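The five-step loop Microsoft describes, from high-priority incidents to explainable alerts, could be sketched roughly like this. Every helper here is a toy stand-in; the agent's real internals aren't public:

```python
def investigate(incidents):
    """Toy sketch of the described loop: start from the highest-priority
    incidents, build a unified activity timeline, test hypotheses, and
    emit alerts that carry their own reasoning."""
    alerts = []
    for incident in sorted(incidents, key=lambda i: i["priority"], reverse=True):
        timeline = incident["events"]                    # unified activity timeline
        hypotheses = {e["technique"] for e in timeline}  # candidate MITRE techniques
        for tech in sorted(hypotheses):
            # toy "automated QA": only alert when a technique recurs
            if sum(e["technique"] == tech for e in timeline) > 1:
                alerts.append({"incident": incident["id"], "technique": tech,
                               "reasoning": f"{tech} recurs in the activity timeline"})
    return alerts

events = [{"technique": "T1078"}, {"technique": "T1078"}, {"technique": "T1059"}]
print(investigate([{"id": "inc-1", "priority": 9, "events": events}]))
```

The interesting part of the real product is that the "QA" step is an LLM judging evidence rather than a counting rule, but the shape of the loop is the same.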
[00:40:19] Speaker A: Oh yeah. I've got a question now. So if it only achieves, I say only, 85% precision.
[00:40:25] Speaker D: Okay.
[00:40:26] Speaker A: How do you know, of the 100 alerts you received, which of those were the ones it got right and which were the ones it got wrong?
[00:40:33] Speaker D: Well, there's a ton of false positives, right? Like, there's a ton of false positives today with any system. And so, you know, I would argue that 85% would be a huge increase over people-driven workflows doing the exact same thing.
[00:40:47] Speaker A: Oh, for sure.
[00:40:48] Speaker C: I mean, to me this is no different than, you know, hey, you're running a SaaS product or you're running your internal servers and you have alerts set up, and it's the same noise-to-signal ratio. Being, let's say, an SRE person, you may have it tuned so maybe you miss a signal or two, because you have enough in there. But in security, at least from what I've seen, you set it so you get more noise, always, because you don't want to miss that one signal that could happen.
So getting any less noise in there just helps increase your signal strength.
[00:41:22] Speaker A: Yeah, and you don't want to block a potential customer who's about to press a button to spend tens of thousands of dollars either.
I guess false positives are almost as bad as false negatives.
[00:41:34] Speaker B: I mean, they're both not great in lots of different ways. But the false positives are just as bad, because they cause a lot of drama that isn't always necessary.
[00:41:43] Speaker D: They ruin your credibility.
[00:41:44] Speaker C: As the security person, and I'm going to speak for Ryan here, well, they ruin your credibility. But also, everyone's happier at the end of a false positive than at, Ryan, why did you miss that one signal that came in, when you have a full breach and a ransomware attack?
Like, you're almost better off getting hit by those couple of false positives than missing that one signal that bankrupts your company.
[00:42:10] Speaker D: I mean it's definitely, you know, when you're doing analysis after the fact, it's definitely something you highlight because you want to know the gaps, you want to know how it got through.
And hint: it's the person who gave someone the password.
Click the wrong link but you still.
[00:42:26] Speaker B: Want to have gave Someone was tricked. I like to think I was tricked into giving my password to that hacker.
He promised me riches. I don't know.
[00:42:36] Speaker C: It was the Prince of Nigeria. They said I was gonna get a million dollars if I just give my bank account.
[00:42:42] Speaker B: Yeah, that's what he said. And our final Azure story: Azure Service Bus Premium now includes general availability of geo-replication, allowing customers to replicate messaging infrastructure across regions for DR purposes. This addresses a critical need for enterprises running mission-critical messaging workloads that require protection against regional outages. The feature provides active replication of Service Bus entities, including queues, topics and subscriptions, between paired regions, maintaining message ordering and metadata consistency. Organizations can now implement cross-region failover strategies without building custom replication logic or managing multiple Service Bus namespaces manually. This capability is exclusive to the Premium tier of Service Bus, which starts at approximately $677 per month for the base messaging unit. Customers should factor in additional costs for cross-region data transfer and the secondary namespace when planning their disaster recovery architecture. The geo-replication option complements existing Service Bus disaster recovery features like Geo-Disaster Recovery, which is a metadata-only failover, giving customers flexibility in choosing between cost-optimized metadata replication or full data replication based on their recovery time objectives. Now, I'm surprised this wasn't already part of Premium.
[00:43:46] Speaker D: So was I when I read it, too.
[00:43:49] Speaker B: But I'm also sort of intrigued by the fact that they think people's messaging strategies only involve two regions, because at least some of the Kafka architectures I've seen are multiple regions with active-active replication across them, for geo-distributed applications that need globally low latency for user populations everywhere. And I guess I just can't run that on this service. So, I guess, screw you, or wait for Azure Service Bus Ultimate.
[00:44:15] Speaker D: So my second thought on this is, oh yeah, I forgot that we used to call event-driven pipelines service buses, you know. So it does sort of feel antiquated just by its name alone.
[00:44:27] Speaker B: Yeah, exactly.
[00:44:28] Speaker C: It's missing features. That's all I gotta say.
[00:44:34] Speaker B: Just a few features.
[00:44:35] Speaker D: Mvp. MVP rollout.
Yeah, that's what it is.
[00:44:39] Speaker B: It's early. It's early days.
All right, well, that's it for this week in the Cloud Show. We have a great after show for you today about ces. So if you want to hear that, stick around till after the ending bumper. But we'll see you hopefully next week with Amazon stories.
We'll see if they wake up or not.
They're taking a long nap, enjoying their AI slumbers. So we'll see you next week here at the Cloud Pod.
[00:44:59] Speaker D: Bye, everybody.
[00:45:00] Speaker A: See you later, guys.
[00:45:02] Speaker C: Bye, everyone.
[00:45:06] Speaker A: And that's all for this week in Cloud. Head over to our website, where you can subscribe to our newsletter, join our Slack community, send us your feedback and ask any questions you might have. Thanks for listening, and we'll catch you on the next episode.
[00:45:32] Speaker B: All right, it is my favorite time of the year, which is CES, mostly because I love gadgets and technology. And so I'm always excited to see what craziness comes out of CES. And I always look at the Verge article as kind of my de facto summary of all the good things, because I've never actually been. It sounds like somewhere I'd never want to go, like going to re:Invent, but worse.
So I don't ever actually want to go in person, but I do like to live through all of the press that is there. And so I thought we'd talk about some of the things about CES this year. So a couple things that stood out to me this year: I thought we'd see a lot more robotics this year than we ended up seeing.
You know, the Boston Dynamics thing came out and they showed their humanoid robot, and there was some robot stuff, so it wasn't completely robot barren. But I just expected, with AI and all these supposed leaps we're having in AI technology and robotics, that there'd just be a lot more, and it was not really there. But, just like cloud, AI has taken over everything.
So there were all kinds of crazy things that had AI built into them for absolutely no real valuable reason that I could possibly tell, but they're there. And so, you know, that's kind of the thing about CES. And so I did geek out on some TVs that I might buy, you know, new thin-as-wallpaper type TVs that look cool.
New LG OLEDs, new headsets that look kind of nice. There's a 52 inch Thunderbolt hub monitor from Dell that I'll never pay for, but I'd like to look at it because it's $2,800, but I like to dream about it.
And so, you know, lots of good stuff. OLED RGB-stripe gaming monitors, which would be the upgrade to my 6K monitor for gaming. I love the 6K ASUS monitor that I already own; instead of buying the Apple one, the expensive one, this is what I bought, and I'm very happy with it. Some keyboards, some prototype stuff, every year. So, you know, you always have to kind of play the game of, like, what's actually going to ship.
Amazon got into the TV world with an art-style TV, because they saw the Samsung art TVs and they said, that's a great idea, we're going to do that too, but probably worse. So I can't wait to hear the actual reviews on that. And then, you know, not a lot of iRobot this year, but other robot vacuums, some that climb stairs, lots of combo floor-vacuum-and-mopping bots out there. So lots of good stuff. What do you guys think?
[00:47:48] Speaker C: I really like the refrigerator that you can talk to, to open and close.
Feels like there's no way that ends poorly in life.
[00:47:54] Speaker B: Yeah, I mean, like, how often have you, you know, been in a situation where you're like, damn, if I just had an automatic fridge opener, I would be in better shape? I don't have that use case, typically. I mean, if my hands are full, I guess, coming in from the car. But I'm not going directly into the fridge, typically, with those items.
[00:48:10] Speaker C: No, I normally put it down, put it on the counter, organize it, you know, which.
[00:48:14] Speaker B: Which is conveniently typically located right next to the fridge. Is it not the counter.
[00:48:19] Speaker C: Except for in the picture in the Verge article where there's this like, nice big, like, storage cabinet and then the sink.
[00:48:25] Speaker D: I personally just carry the bags in from the car and just chuck them in there. So this would help me. Which it would.
[00:48:34] Speaker B: Yeah.
[00:48:35] Speaker D: I mean, that's the real way. It's that my kids would just be like, open, close.
[00:48:40] Speaker B: Yeah, yeah.
[00:48:43] Speaker D: Yeah, exactly.
[00:48:44] Speaker C: And then the French door breaks.
[00:48:46] Speaker A: Yeah. So it's like Demolition Man. Illuminate and deluminate.
Open the door. Unopen the door. I'm actually surprised there was not as much smart home stuff as I expected, because electricity prices are going up constantly.
I think the market is already fragmented. We've got IoT devices. We've got Apple things, Google things, Amazon things. We've got LG. What else have we got? We've got TP-Link. I mean, there's a thousand companies making stuff and there's no really cohesive, I mean, Matter's kind of getting there, I suppose, but there really isn't a cohesive thing.
I want to put occupancy sensors in the rooms so I can turn lights on automatically. But now I've got to figure out how to wire ethernet to the thing to power it.
Then I've got to have a smart bulb, and then I've got to have something that is low-latency enough so that when somebody walks in, they're not tripping over things for five seconds until the light turns on. Like, why doesn't somebody just make a light that has, you know, a microphone, a speaker, an occupancy sensor and the RGB built into it? I would pay, yeah, $200 a room to illuminate my entire house with a device like that that connected with Z-Wave or something.
[00:49:55] Speaker D: Yeah, doesn't exist.
[00:49:56] Speaker A: But no, it's like, you know, you can get this thing from this company, and you can use this little cable to, like, reflash it with ESPHome and do this. I don't want to do that. It has to be something I can hand off to the next person who moves in.
It'd just be a nightmare trying to sell a place with some kind of hackish Home Assistant smart home.
[00:50:15] Speaker D: And yeah, I bought a sensor to play around with, you know, because I'm so sick of the motion sensors. Like, my kids are furious because they're reading a book and the lights just turn off. Because I don't have the lights turn on, but I do have things turn off automatically, because I'm sick of walking around my house turning off all the lights.
[00:50:32] Speaker C: Which nowadays, with LED, does it actually matter? And I say that knowing that I have this conversation with my family every single day.
[00:50:40] Speaker B: This is a conversation that every husband has in every family, because, A, we're all slightly autistic and we all hate overhead lights, I'm sure. So there's that. And then, number two, like, why are you not in this room if all the lights are on in it?
[00:50:54] Speaker D: I just assumed it was the law. I thought I had to by, you know.
[00:50:58] Speaker B: Yeah, it's part of the dad outfit, you know. You get the shoes, you get the cargo pants, and then you get the "I turn off the lights."
[00:51:07] Speaker A: You know, I'm the opposite way around. I swapped all the lights here in the house to LEDs and, like, you could literally light the whole house every day for maybe.
[00:51:16] Speaker D: Oh, I knew that.
[00:51:16] Speaker A: It's not, it's not a lot.
[00:51:19] Speaker B: That's why I said it. You have that autism part come into play on it too. And then you're just mad because the lights are on, because, you know, overhead lights are the devil. So, yeah, that's my take.
[00:51:27] Speaker A: Well, if it's a different part of the house, I don't care. If I'm in the room, sure, I care. But I'm not going to nag somebody to, you know, not leave their half-watt nightstand lamp on or something.
[00:51:38] Speaker C: Okay, but the question is at night do you have to walk around your house and turn off all the lights when everyone's gone to sleep?
[00:51:44] Speaker D: I just, I, I yell at machines now to turn everything off. But yeah, I think I do have to turn everything off.
[00:51:50] Speaker C: I mean I do that.
I have such an old house where the wiring can't support some of that so I have to rewire parts of it, which is a different project I'm working on.
[00:52:02] Speaker D: Well, no, it's just a constant battle, and it's just not good enough in terms of occupancy sensors. Like, the sensors that work really well, it's like you have this 20-foot USB cable and you have to somehow mount it in the upper corner of your room, and it's like, well, now I have to have this ugly spaghetti cable. Like, no thanks.
[00:52:22] Speaker A: Smoke detectors would be perfect. Like, you know, we don't all live in California. Some of us do, some of us don't. But, you know, there's building codes. Every bedroom is gonna have a smoke detector, every hallway, outside the kitchen. They're all over the house. Put some millimeter-wave sensing in there. Not passive infrared, because those ones suck if you don't move because you're reading a book. You know, put a sensor in there. Yeah, maybe we've got a Kickstarter plan coming up. I don't know.
It's like so low hanging fruit and.
[00:52:53] Speaker D: It can't be expensive. Like, looking at the electronics, some of these sensors are dirt cheap. So it's like.
[00:52:59] Speaker B: Yeah.
[00:53:00] Speaker A: Oh man, you just go on Alibaba or something. You can buy, like, an ESP32.
[00:53:04] Speaker C: Yeah.
[00:53:05] Speaker D: No, it's nice.
[00:53:08] Speaker A: Yeah. All right, well there's a project for the year.
[00:53:12] Speaker B: Yeah.
[00:53:12] Speaker C: All right, Claude, let's go make me.
[00:53:15] Speaker D: A manufacturing plan for smart smoke detectors that are Internet-enabled and actually still detect fire. Because that seems important.
[00:53:25] Speaker A: Yep.
[00:53:27] Speaker B: I mean, I do want to know about, you know, the Samsung display, a creaseless folding OLED panel. And then I find it funny, I see the picture and I can still see the crease details. And, you know, so I still wonder, like, this is rumored to be the year of the Apple folding phone, and I have zero, zero interest in said folding phone. But I know Jonathan's a fold fan, and I just, is this technology evolving? Is it super popular? Like, I don't see a lot of them in the wild now. I don't look for them often either, but I've never noticed one.
[00:54:01] Speaker A: I mean they're too expensive.
I mean, I can see the fold in mine, but only if you have the light just right. Sure, it's annoying because you get a line down the middle, but normally I've got a bright video or something when I'm using it open like that.
It's not perfect, but it doesn't bother me. But I mean, the fact that they're still $1,700 to $1,800 for a folding phone is probably why you don't see many of them in the wild.
[00:54:23] Speaker C: I just don't even trust myself with a seventeen hundred dollar device in my pocket.
[00:54:29] Speaker B: Yeah, I mean, I'm getting nervous. Every year the iPhone gets more expensive. So I'm like, well, you know, now it's thirteen hundred dollars to replace this thing. So warranties start getting important.
[00:54:41] Speaker A: Yeah.
[00:54:42] Speaker B: The keyboard that you can program the ten-key on. Because there's people who love the ten-key, like myself, and there's people who hate it, and the haters keep trying to make it go away. And I'm like, just stop, please stop messing with my ten-key.
I will use it forever. Till AI takes away my need to type.
[00:55:00] Speaker C: Never. Well, you still have to talk to her.
[00:55:04] Speaker A: Yeah, less and less. Instead of using the keyboard on the phone to type messages now, I just quickly hit the voice thing and have it dictate. It's gotten good enough that it's probably more accurate than my swiping on the screen.
[00:55:19] Speaker D: I'm still not totally on board with voice commands and everything like that, because it's fine when I'm by myself, but it gets awkward. There's five of us that live in this house, so very rarely are we in a room alone.
[00:55:33] Speaker A: Yeah, like, okay, Google, remind me not to have to hang out with this guy again.
[00:55:36] Speaker D: Exactly. Like, oh, that was inside voice. Inside voice, Google.
[00:55:42] Speaker B: A couple other things in here that are interesting, like this PocketBook InkPoster, the Duna and Tila E Ink posters. This reminds me of the dream, you know. I heard about Bill Gates when he first built his house in Seattle; the big story was that when you get to the house, they give you a sensor that you carry around, and as you enter each room, the art changes to digital versions of the art that you like. Back in the late 90s, that was cutting edge. Like, that didn't exist. The idea of digitizing all of your art and having it display based on personal preferences, that was crazy talk. But now you have an E Ink display that can do it for posters, and I'm like, that's a cool idea. I love the dream of it. And then I see the price tag and I'm like, six grand? Never mind. Screw that for what is basically a 40-inch E Ink poster in color. So it's not just gray, but that's kind of crazy. But anyway, I think you guys said at the beginning: less home automation, less robots this year. Other than some cool TVs that I'm sort of interested in buying, because it's been about five years since I bought my last TV and it's about time to update them, I'm kind of unimpressed with what I thought would be there versus what actually got shipped.
[00:56:52] Speaker A: Yeah, I don't care about Wi-Fi anymore. I mean, I don't, I don't see.
[00:56:57] Speaker B: Not once you have Ubiquiti, why would you? Right, yeah.
[00:57:04] Speaker A: Curling irons? Seriously. I mean, the thing is, I actually think if the price was half the price, it would have a huge market, because think about all the restaurants.
[00:57:13] Speaker D: You're paying for the digital signage.
[00:57:14] Speaker A: The digital signage, yeah. You'd save a fortune in electricity costs.
[00:57:19] Speaker D: I mean, yeah, I have two dashboards in my house that are using the same technologies you'd use for digital signage. I use them as ways to communicate with the rest of the family and that kind of thing. But yeah, if it was something that.
Yeah, exactly.
[00:57:34] Speaker C: Yeah.
Bring me.
[00:57:36] Speaker A: Do not understand.
[00:57:38] Speaker C: Yeah.
[00:57:39] Speaker D: I do have a button that says quiet and when I hit the button, it does flash the dashboard downstairs and makes a noise.
So like when we're recording and they're making too much noise, just like hammer on the button.
[00:57:51] Speaker C: Oh, that's what you're doing over there. Got it.
[00:57:54] Speaker A: Like patting on the floor.
[00:57:55] Speaker D: But that's the idea.
[00:57:57] Speaker A: It's kind of cool.
What do you use for the dashboard? Because I'm looking for something like that to put my home-chores kanban on, instead of posting it on a whiteboard.
[00:58:08] Speaker D: So I use an app called DAKboard, which used to be open source, but then it sort of became closed source. And now there's a subscription, which is pretty low and pretty reasonable. But I do find that over time they're sort of stripping features away.
But it is something that's very configurable, and it's nice. It hooks into, like, Home Assistant. It's got a lot of integrations and stuff that I can just throw on there.
[00:58:34] Speaker A: Is it touchscreen?
[00:58:35] Speaker D: It does, it does touchscreen.
So my one downstairs is touchscreen. One upstairs is not.
[00:58:40] Speaker C: They're not cheap.
[00:58:41] Speaker A: Oh, so you can run it. You can run it on your own hardware as well.
[00:58:44] Speaker D: So basically, I mean, they do sell hardware, but it runs on a Raspberry Pi.
[00:58:49] Speaker C: Ah, yeah. They have what looks like a little CPU box that you connect to a monitor, which looks like a Raspberry Pi.
[00:58:55] Speaker D: Yeah, it is, it is a Raspberry Pi.
[00:58:58] Speaker C: A pre-bought Raspberry Pi.
[00:58:59] Speaker D: Yeah.
[00:59:00] Speaker C: Or you can just do the SD card.
[00:59:02] Speaker D: It is, yeah.
And the software comes built into the.
I don't know what they call it, the Raspberry Pi software flasher.
[00:59:10] Speaker B: Like it's.
[00:59:10] Speaker D: Or you could just select the OS. Like, it's already in there. So it is. Yeah, it's really easy to set up.
[00:59:17] Speaker C: The wall display does look nice. I mean from the picture.
I mean it's $600. So yeah, you know, there's that.
[00:59:27] Speaker D: But yeah, no, it would be kind of nice. You know, I. I do think I like the world of like, you know, you've got E Ink everywhere and you can sort of use that as lighting. You can use that as sort of decor and change it all. Like I think it would be neat. But yeah, you do have to have Bill Gates money to do it.
[00:59:44] Speaker A: Yeah. Do you still use your reMarkable toy?
[00:59:47] Speaker D: Yeah, I do, actually. It's the only notebook I use anymore, so. And I've, you know, I've got.
[00:59:52] Speaker B: So you've almost sold me like three times. It's been in my cart multiple times. And then I.
[00:59:57] Speaker D: There's a color one now too.
[00:59:59] Speaker C: Yeah.
[00:59:59] Speaker A: I'm thinking about upgrading.
[01:00:01] Speaker C: Yeah, I'll buy your old one. Yeah, I've been like Justin, where I've been trying to convince myself to do it for a long time. But I don't know that my brain works that way, because I have, like, sheets of paper. I'm a crazy person with three sheets of paper on my desk that I flip between depending on what I need to do, what hat I'm wearing for what thing I'm doing at that point. And I just don't know that I could flip that fast.
[01:00:26] Speaker B: Yeah, I mean, it's, you know, it's.
[01:00:28] Speaker D: It's one of those things. It did take a little bit to get used to it. But now that I'm used to it, I don't think I could go back, because finding stuff is so much easier, and it's making my handwriting better, because in order to get the text recognition to work, I have to actually, like, pay attention.
[01:00:46] Speaker A: Do they have a special left-handed mode? You know, they do not.
[01:00:51] Speaker D: And it is horrible, because the menu button is right where you don't want it to be.
[01:00:57] Speaker A: Huh.
Some do, like flipping the thing upside down.
All right. At least I need to make a note to build the smoke alarm slash CO2 sensor with all that crap. That'd be kind of cool.
[01:01:14] Speaker D: That'd be cool.
[01:01:15] Speaker A: There it is right there.
Best recommendation ever from Ryan was the reMarkable. I love that thing.
[01:01:22] Speaker B: Yeah, I'm definitely interested in buying one. I just haven't pulled the trigger. And there's a bunch of competitors now too, you know. Amazon, I hear, is garbage. So everyone says that one's bad.
[01:01:31] Speaker D: My wife likes hers, but I don't know if she's the best judge.
[01:01:35] Speaker B: She used the reMarkable like I did. So, I mean, maybe it's built more for lawyers.
[01:01:40] Speaker D: Maybe.
[01:01:41] Speaker B: I don't know.
[01:01:43] Speaker A: Every time you pick it up, it costs you $350.
[01:01:50] Speaker B: Good lawyer joke right there to end the show. All right, guys, we'll see you next week here.
[01:01:56] Speaker C: All right.
[01:01:57] Speaker D: Bye, all.
[01:01:58] Speaker C: See you.