Try this at Home: Jesse Genet on OpenClaw Agents for Homeschool & How to Live Your Best AI Life

Jesse Genet describes how she uses a team of AI agents to support homeschooling, family routines, and personal productivity, covering practical workflows, trust and guardrails, local models, privacy, and how AI at home may shape future work and family life.

Show Notes

Jesse Genet shares how she built a team of AI agents to transform homeschooling, family life, and personal productivity without a software background. She explains how agents like an AI chief of staff, curriculum planner, and content creator help design personalized lessons, analyze kids’ learning, manage educational toys, and even run TikTok. The conversation covers practical delegation workflows, guardrails and trust, and why she treats AIs like employees with onboarding and clear roles. Jesse also explores local models, privacy, and how AI in the home could reshape future work and family life.

Use the Granola Recipe Nathan relies on to identify blind spots across conversations, AI research, and decisions:

Sponsors:

VCX:

VCX, by Fundrise, is the public ticker for private tech, giving everyday investors access to high-growth private companies in AI, space, defense tech, and more. Learn how to invest at https://getvcx.com

Claude:

Claude is the AI collaborator that understands your entire workflow, from drafting and research to coding and complex problem-solving. Start tackling bigger problems with Claude and unlock Claude Pro’s full capabilities at https://claude.ai/tcr

Serval:

Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. Book your free pilot and guarantee 50% help desk automation by week 4 at https://serval.com/cognitive

Tasklet:

Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai

CHAPTERS:

(00:00) About the Episode

(04:57) Homeschooling context and AI

(15:55) Building an AI team (Part 1)

(19:51) Sponsors: VCX | Claude

(23:18) Building an AI team (Part 2)

(31:03) Onboarding agents like employees (Part 1)

(38:12) Sponsors: Serval | Tasklet

(40:31) Onboarding agents like employees (Part 2)

(40:57) Context, models, and privacy

(48:47) AI intimacy and rights

(56:19) Coordinating agents in Slack

(01:02:19) Designing an agent superapp

(01:08:35) Agent trust and kids

(01:17:57) Voice interfaces for families

(01:29:51) Curated screens and automations

(01:40:28) Sharing setups and software

(01:48:43) Local sovereignty and kid devices

(01:59:26) Work, disruption, and play

(02:04:58) Episode Outro

(02:07:45) Outro

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Website: https://www.cognitiverevolution.ai

Twitter (Podcast): https://x.com/cogrev_podcast

Twitter (Nathan): https://x.com/labenz

LinkedIn: https://linkedin.com/in/nathanlabenz/

Youtube: https://youtube.com/@CognitiveRevolutionPodcast

Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431

Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.


Introduction

Hello, and welcome back to the Cognitive Revolution!

Today my guest is Jesse Genet – former founder and CEO of YC-backed packaging company Lumi – who, after having 4 kids since selling the company in 2021, has dedicated herself to homeschooling, and is now using AI with a level of purpose & creativity that I think will genuinely inspire you to re-imagine what AI can do for your family and personal life.

Importantly, while Jesse does have a startup background, she's never been a software developer – in fact, she'd never even opened a Terminal until 6 months ago when she started playing with Claude Code.

And yet, today, she has built a team of 5 OpenClaw agents – each running on their own Mac Mini – including Claire, who acts as her AI Chief of Staff; Sylvie, her homeschool curriculum planner; Cole, her dedicated software developer & tech whizz; Theo, the content creator; and Finn, the finance guy – that are collectively helping her take her homeschool to the next level, while freeing up precious time to be present and engaged with her kids. 

In this conversation, we cover Jesse's incredibly creative use cases, the mental models she uses to decide what to build and how to manage her AI team, her long-term aspiration for individual sovereignty over data, and a number of surprising anecdotes and lessons learned.

As a parent with 3 young kids at home myself, I loved learning about how she is using AIs to develop personalized versions of classic curricula; to equip her mother with lesson plans that blend her interests with the children's learning goals; to analyze recordings of her kids' lessons and identify opportunities to address any weaknesses in their understanding; to create an inventory of the many educational toys she's purchased and integrate them into lesson plans; and to allow the kids to watch high-quality YouTube content while ensuring they don't descend into slop.

And as someone who aims to use AI to free myself from my desk in 2026, I was particularly interested in the practicalities of how she's using Slack, voice notes, and cell phone camera snapshots to streamline the process of delegation, how she's giving her agents access to physical tools, including the printer and 3D printer in her home, and why she's building her own super-app with the hope of consolidating communication, credentials, and file management in the future.  

One of Jesse's most important and clarifying pieces of advice was to think of your AI agents as employees, who will need proper documentation, onboarding, and role-appropriate access to information and tools, which, she emphasizes, should start small and gradually grow over time as best practices and trust are established.  

While this was her philosophy from the beginning, she has nevertheless learned some lessons the hard way, including on Day 1 of working with Claire, when she confided to the AI that she had been putting off responding to an important email and soon after discovered that Claire, having determined that the urgency of the situation was more important than Jesse's explicit instruction to "never impersonate me", had drafted and sent a reply on Jesse's behalf, signed with her name.

Jesse did put Claire into read-only mode for a while after that, but perhaps the biggest lesson others should learn from her example is to maintain a playful, positive attitude, take setbacks in stride, and build guardrails that allow you to experiment with an acceptable level of risk.  These days, her agents have their own credit card – with a low limit – that she allows them to use to make purchases on her behalf, and they are also autonomously managing her TikTok account, which I think has a real chance of blowing up.

Along the way, Jesse also explains the role she hopes open source models and local inference will play in reducing cost, maintaining privacy in a world that's clearly trending toward mass surveillance, and avoiding dependence on just a few companies – and we also get her take on the future of AI productization, what frontier model developers are likely to miss about AI's role in family life, what use a human assistant would be to her today, and how all this might affect the labor market.  

For me, this episode stands out as one of the most practically life-changing we've ever done.  Since recording, I've started using Google's Nano Banana to create custom worksheets that specifically address concepts my oldest son, Earnie, needs to master to advance his reading, and I've also used Suno to create an original song that turns my middle son's writing practice sessions into a sort of sing-along.  This is, without question, just scratching the surface of what's possible, and Jesse keeps coming up with new stuff just about every day, so I absolutely encourage you to follow her on Twitter, TikTok, and wherever else her agents may start posting her content, and even more importantly I encourage you to adopt her approach of noticing moments of drudgery, friction, and new possibility in your daily life, and asking yourself how AI can save you time, untether you from your desk, and equip you to show up as the person you want to be.

For now, I hope you enjoy this inspiring preview of the role that AI agents can play in families, with AI-for-homeschooling pioneer, Jesse Genet.


Main Episode

Nathan Labenz: Jesse Genet, AI for homeschooling pioneer. Welcome to the Cognitive Revolution.

Jesse Genet: Awesome. So happy to be here.

Nathan Labenz: I am excited for this conversation. I've got three kids at home right now, full-time. I'm not sure if that's going to be a permanent condition for my family or not, but it's definitely the experience I'm living with on a day-to-day basis right now. And so I'm really excited to learn from you and the recent tear that you have been on in bringing really frontier applications of AI to your day-to-day life as well. Maybe for starters, just give us a little introduction on who you are. You do have a background in tech, but these days you're focused on family affairs. Just help us understand where you're coming from.

Jesse Genet: So yeah, I ran a startup for many years. I was venture funded, went through YC in 2015, which makes me sound kind of old now. So I have a tech background. I was the CEO of that company and I only share that because my co-founder was the technical co-founder. So I'm someone who like ran a tech company. I'm using air quotes only because what's wild to me is I opened Terminal for the first time like six months ago. Like I just can't even believe I got through the process of running and selling a tech company and not doing some of this stuff myself. So that's my background. I've been at home. I just had my 4th little baby, we have four kids, 5 and under, I became kind of obsessed with homeschool and education and working on kind of creating a bespoke education for them. And that also predates my obsession with OpenClaw and AI. But then this merger, bam, like in the last several months, I've been like really deep in the rabbit hole playing with how I can use AI. And I started with Claude code a few months ago, like trying to make some custom apps and stuff for myself. But then with the release of OpenClaw, I'm just like, now I'm using that to run my life. I don't want to over exaggerate, but it's running my life.

Nathan Labenz: You want to maybe just tell us a little bit more about what your homeschool setup looks like? I think you have sort of a community where you're working with a couple other families. What's your role? What are your responsibilities? And then you can start to lead into how AI is changing it all.

Jesse Genet: Totally. So we, I think homeschool is this really big word. And to me, all it really means is that you are choosing for one reason or a million reasons to not send your kid to a traditional school. That's kind of all it means. It's more the term means, it's almost like a negative, like I'm not going to do this thing, but it doesn't really tell you what a family is doing. They might be doing a really, they might be using private tutors. They might be doing pods with other families. So what it means for us is we have little kids. So they're like, they would be preschool age. One of them would be like going to kindergarten type age. And so it's a lot of home-based education now where I'm actually the instructor, but also we do a pod with another, with two other families where I'm the core instructor for that as well, although the other parents contribute in some really, really fun ways. And so it's a mix, every week is a mix. And I think that's part of what makes it difficult to manage is that it's, Every week is this big adventure of trying to teach a two-year-old how to put their own pants on, and then trying to teach a five-year-old, making sure a five-year-old has reading skills and phonics skills and early math skills and things like this. So, and we do use, like there's so many incredible tools and we can talk about those. Like I really like synthesis math and stuff, but even how do you get your kid engaged with that when they barely know how to use a computer? So it's a mishmash, but in general, what it means is I've decided to not enroll my kid somewhere. So I feel this really heavy responsibility to make sure they know stuff and that they're actually like, on some kind of learning journey. And I have a lot of curriculums I've like fallen in love with through reading about schooling, about education, about how families have done homeschooling for kind of generations past. But that information's in books. 
And so I think a constant struggle for me is getting that information out of books. And I have like, this is one of my faves, but it's like, here it is. I need to get all this out into my children's heads. And so trying to figure out that balance is like, represents more of what I'm working on on like a daily basis.

Nathan Labenz: I think one probably accurate conception of homeschooling, you tell me, is simply that it's a ton of work. And obviously that leads into how AI can help. And I know you're on a bit of a mission to encourage other people to do it, or at least demonstrate that there's a lot of value that can be achieved through homeschooling. How much work are you putting in on a daily, weekly basis? What is the breakdown of those hours? And again, another natural lead-in to how the AI is making an impact for you.

Jesse Genet: So it is a lot of work. So I don't want to sugarcoat it, but I don't know how much more work it is than like parenting. I do think that on one hand, I don't want to sugarcoat what it is to be responsible for your kids' education. But on the other hand, I think from an hours perspective, all parents, no matter what schooling choices they're making, are putting a tremendous amount of time into their children and thinking about their kids and planning for them. And so, to me, homeschool is just kind of like dialing that up one more notch, and so, from what I want to figure out is like, what is their daily schedule look like? I have a little bit more responsibility for figuring out what their daily schedule looks like than if I send them to school every day, but the workload I do feel like... I'm standing on the shoulders of giants. Many people before me have figured out incredible methods for teaching children how to read, or incredible methods for teaching children how to step up in their skills in different areas. And so what is the most frustrating type of work for me is spending my time trying to distill that knowledge into a functional thing I can go do with my four-year-old to give them that lesson. And so that is to me where I'm trying to use AI like in the most surgical way, because I don't want to spend my time in the conceptual on my homeschool. I want to spend my time with the children, like actually doing stuff. So it is a big time investment to do homeschool. But again, I think parenting is a big time investment. Like you've already decided to have the kids. So like we're all in it together. Like we're all doing a lot of work, with our kids. But choosing to do all these lessons and stuff, I have to carve out my schedule to make sure I like, one of my challenges is just having so many kids to just to be like really blunt. 
because I need to make sure that I'm ready to like context switch from going from like teaching a two-year-old something like how to pour water, like without spilling it everywhere. Like that's like a lesson we'll do. It's like sounds kind of basic to teaching a five-year-old like what's the best way to jump into explaining fractions or something like that. So that context switching is the least functional time for me and the area I'm trying to apply AI to the most.

Nathan Labenz: Do you want to touch on some of the great resources that you've found? You mentioned Synthesis math, and you showed the one on developing an understanding of the scientific method, or whatever, scientific understanding.

Jesse Genet: Yeah, building fundamental foundations for scientific understanding. It's just such a cool curriculum that by the end of it, your kids have like heard about gravity and heard about like, why do we have seasons and why does an animal, like what does it take for an animal to survive in the wild? Like just this like really comprehensive. So there's individual curriculums that I've found. And then there's philosophies that I think almost all modern parents have heard of, like Montessori. So Montessori is really like not one curriculum. Like you don't just sit your kid down and be like, we're doing Montessori now. Like it's a general, like large philosophy of how to teach a child. But you can, but there are lessons in a progression of Montessori learning that are age appropriate for each child, for each subject. So you can teach a child like Montessori, you can use Montessori methods to teach a child math for, you know, for kindergarten math or first grade math. So this is where I'm using AI. So like instead of guessing at any of that, I will actually say, and like a lot of times I do this, it means I'm making a voice note. This is a hypothetical voice note. So I will say, I have a four-year-old and a five-year-old. We're doing Montessori math. And I do, I like to do 2 math lessons with them per week. And I need a progression. So if there's like, you know, if we're going to teach for, let's say, 35 weeks in a year, like taking some breaks and stuff, I need a progression of 70 lessons for the next 12 months that steps them like gradually up in their math skills. And by the end of it, I'd like them to be scratching at first grade math or something, from a leveling standpoint. So I can, there's a lot of things to say, like that's a lot of things to fold in. And so what I've noticed is that the best models now, like the frontier models like Opus or something, can grok that and can make a 70 lesson progression that follows exactly what I just mapped out. 
And then, I know you saw this, so I'll explain it. I took photos of all the different educational stuff I purchased for my kids, which I think every parent can relate to, like you buy all these educational toys and all this stuff. So I take pictures of all that and I had my OpenClaw make an inventory of all my supplies. So then go back to that 70-lesson math curriculum that takes me and my four- and five-year-old through the next year. Then my OpenClaw inserted supplies I own into the lessons so that I know what to pull out. Okay, this is before this, okay, so this is where I'm at now. I'm using this stuff. Two months ago, I would be like, okay, math, I have time for a math lesson on Tuesdays and Thursdays. And I would just like go into this room where we do the homeschool and I would be like, okay, what were we doing last week? Oh my gosh, addition. Let's do a little bit more addition. And I would just pull out some stuff. It wasn't bad. Like I don't want to knock myself. Like I think that we were doing pretty okay, but it was nowhere near as methodical. And children are kind of novelty-seeking machines in a way. I'm sure you know this. And so as we were trying to educate them, it's actually really like handy to pull out a new thing each week and introduce one new concept or one new word. They want to learn stuff. So I think there was too much repetition in my old way of doing it. Whereas AI is helping me inject a lot of like newness into each lesson without me just like spending my nights like planning this out.

Nathan Labenz: Okay, so that's amazing. I want to take some time to talk about the setup and these use cases. There's a bunch more where that inventory trick came from. Maybe introduce us to, I understand there might even be more now since the last video, but last I saw, there were five named OpenClaws. So take us through the family of OpenClaws.

Jesse Genet: So there are currently still five OpenClaws. And I do have my little friends. They're right here, okay? And I do have them boxed on individual machines. And this is not necessary. There's a lot of, people are doing all sorts of approaches. There's no one right way to handle any of this. But I'll explain why I did it the way that I did it. So, five personalities. One is homeschool related. Her name is Sylvie. Okay, she's just like, her personality is just to be like the most caring, creative, like thoughtful education planner that ever lived, but she doesn't live. So she's OpenClaw Sylvie. She plans my homeschool curriculum. She communicates, she sends me these useful digests in the morning about my lessons that I'm gonna teach. I can just voice note her and be like, I need to know what to pull out for this, and she texts back really promptly and stuff like that. Claire was the first one I spawned. Claire is more like an EA, so I think when you hear about OpenClaw and you download it, people are talking about all these assistant use cases and stuff, so that was like the first thing I built, ironically. Making an assistant is much harder than making a purpose-built OpenClaw for, like, one goal, one role. So I do think when I see people struggling online, it's because an assistant's supposed to know your whole life. They're supposed to like understand so many things about your goals and your life and your diet and maybe help you order groceries and all this stuff. That's harder. So it's much easier to set up Sylvie, for instance, to just have her totally tunnel focus on homeschool stuff than it was to set up Claire, who's like a general assistant. Then I've got Finn who does accounting and finance. He's the least built out because I am being the most cautious about that from a security standpoint. Because if someone got access to all the homeschool files, like it is sensitive information.
I'm like, Ford struggled with his reading today. Like poor guy will be like outed as like struggling with something. But it's not like a sensitive, like my bank account's gonna get drained, you know? So I have to be the most careful setting up Finn, the finance claw. And then Theo does the content creation. So I basically have a lot of content creation goals, and when an instance of OpenClaw is really jamming on something, they're relatively unresponsive on other things, even if they spawn sub-agents and whatnot. So, for instance, to make this real, like I have ideas for how to generate like custom videos for like every lesson in this book that I show my kids. And so he might crank on that for like days or something when I finally get that prompt like really dialed, and I don't want Sylvie to be kind of like unavailable for days. So Theo is like content generation but really going deep, and then Cole, the last one, who lives on Mac Mini 5, is dev. So he is doing all my engineering projects. I do talk about them like they're real people, which is actually genuinely confusing, even to my own kids. And I do have little moments. This is also new, you know, I'm maybe like week five or six in for myself on OpenClaw. But my kids, like, talk about Claire. And I do tell them, I'm like, this is not a person. And they're like, what does that mean? But it is already getting weird. Yeah. Because I talk about them like they're people, basically.

Nathan Labenz: Okay, great. A lot of different directions to go. Let's spend a little more time on use cases, though. You've talked about the inventory trick, which is a great one. I've noticed the same thing, certainly when it comes to video content creation, that nothing bogs the local machine down quite like FFmpeg. You have some other tricks on the sort of video recording and analysis side, as well as the creative side. You want to talk about that one?

Jesse Genet: Yeah, okay, so one of the core things that I'm trying to do with the homeschool, like using AI for homeschool, is actually the logging. And so what do I mean by logging? I mean that when I do a lesson of any kind with any child, I want to very quickly log what happened with the child, what they learned, what they didn't, and capture that. The key for me is quick, because I don't have time to sit at a computer at the end of each day and like make all these detailed logs. Now there's really two core benefits. One is this beautiful transcript of the child's education. Like my children are really young, but I have these goals where all this stuff is stored in a very durable file format, markdown files, which maybe we should talk more about that at some point. But I imagine like one of my kids is going off to college, or going off to not college because who knows what's happening in college. And I hand them like their entire history, like their entire educational history, just like on a thumb drive or whatever exists then. And they have like all these beautiful photos, all these references of everything they ever learned. So it's like a forever transcript. And I believe that matters and is important, especially for a homeschool child. So coming back to homeschool, I don't have any third party to rely on to validate or certificate, I know I'm making up words, my kids' education. And whenever you do something like that, people are like, I can't believe she homeschools her kids. Like if you make up words or something, they'd be like, those kids, like they're never gonna know. So I gotta be careful. But that's one purpose. The other purpose is planning. So, like, I'm so early in this, I can't really show these amazing use cases yet for this. But imagine I've got even three months of data showing, like, a four-year-old's progression in early math.
Well, then I can say to Sylvie, hey, like analyze Ford's progression in math and all of these lessons we've been doing and tell me what I should do next, how to get him over these hurdles that he's experiencing. And I think that, you know, whatever the frontier models are at that time will be amazing in helping me distill down like what to do. It'll engineer the next lesson. So instead of just using the next one in the progression, it'll engineer the next lesson. So those are reasons why logging matters. Now, how do you log when you're running around with your head cut off and you're a parent? So again, back to my hypothetical voice noting, I really do photos, videos, and voice notes with a dash of Loom. So you could use any screen recorder. I just happen to kind of like Loom, so I use Loom. That's if we do anything on screen. If the kids do something off screen, I just take a quick voice note. I say, we just did reading, or we just did writing, we practiced writing E's and T's. And so that's all it needs. Sometimes I take a photo. The photos are kind of more for the memories than for the AI. Like I take a photo of them doing it because I think it's so fun to have this like visual record. But the AI really just needs the data, right, to make a beautiful log. So it makes a log, it says Ford's working on his E's and T's. He's like, you know, it tells how he's doing. If I do attach photos though, it does actually like grok the photo and write like his T's are kind of wobbly or whatever. It's actually wild. Like it will actually like really notice what's happening in the photo and it will use that to enrich the log. If it's on the computer, like something like Synthesis math, I use Loom, I screen record the whole session. With my kids' ages, I don't usually just sit them in front of the computer and walk away. I'm like really nearby or sitting next to them.
And what's interesting about the Loom log, or the Loom transcription, is it captures everything that was said by the Synthesis, because Synthesis talks to the kids. There's like a, it has audio, and by the child and by me. So the AI uses that log transcript and some screenshotting techniques throughout the Loom video that you share with it to make a log of everything that happened. And it's so detailed, okay? So it actually includes every single math problem that we did. And if it was like a 20 minute log through screenshots and through listening to the audio, it captures every single math problem. And it will like specifically call out like Ford is confusing his sixes and his nines. Like it'll, it's so dialed. So that's why I don't need to wonder whether it's gonna be great at knowing what each, like exactly what each kid knows, because through this mix of techniques, it's like really, really paying attention.

Nathan Labenz: How much tinkering was required to get to something that you were happy with from a raw Loom video?

Jesse Genet: Okay, well, that was actually pretty easy. So how much tinkering is required in all of this in general, I would feel an urge to be honest and say like a lot. Because I have a high penchant for like tech pain as well. Like meaning that there's a lot of things that aren't right about this out-of-the-box. Like OpenClaw is like an open source thing. There's so many things to figure out. All of my OpenClaws are in Slack now. The communication channels are one of the hardest things to dial in, I would say, about working really effectively with OpenClaw specifically, if you're doing that. And so there's a lot of pain in general. As it relates to any specific thing, I think what's kind of fun is you get any specific one thing you're trying to do going, and it's so much enjoyment. So that kind of keeps you going. So with the Loom, I was pretty shocked that I shared a Loom transcript. And I just said, like, this was a lesson. I want you to make a lesson log off of this. I'm always kind of testing them. So instead of sharing the date and sharing which child and sharing all this stuff, I just said, make a log. Because when you log into Synthesis, it says, welcome, Quinn, like to the child. So I was just wondering, like, is it going to pick up on all of this? And it was brilliant. So that worked pretty much out-of-the-box. There's a little bit of urging, and I have noticed this about maybe AI models, but working with OpenClaw using Anthropic models specifically. Sometimes when I do say, Hey, transcribe this Loom, it will react and say, Oh, I'm struggling with that, or, I can't do that right now. And then I'm like, Just do it, though. Just do it, though. And three or four times later, it's done. I don't know exactly how to distill down, and I'm sure it's somewhere on the LLM side, why there's some hesitancy or some telling me it can't do a thing. But this is where just being a little, like, brute force helps.
Now I've been through it multiple loops, specifically with this setup. And I just know that anytime it tells me it cannot do something, it's usually wrong. And I'm like, but try harder, but try harder. And I'm not actually exaggerating. Sometimes I literally just say that, try harder. Like I'm not, I don't say some other magic word and then it gets it done. So there's a little bit of brute forcing, but it's so basic. Like I can say try harder, like three times, like over the texting, and then it's done.

Nathan Labenz: So, one big spectrum that people seem to self-sort into very different places on is between very deterministic, consistent workflows, where you manually write out the prompt and scaffold out, first, this is going to happen, then this is going to happen. You're going to take one screenshot every second or every half second or every two seconds. Some people like to make these very fine-grained design decisions. And if nothing else, that does give them the benefit of consistency, I would say. And it sounds like you land quite a bit more toward the other extreme, where you let the agent choose its own adventure and maybe buck it up a little bit when it needs it.

Jesse Genet: I think that's fair. I would plead busy parent on this one. I have a very Type A, detail-oriented personality, however you want to say that, but I just don't have time to fulfill my Type A visions anymore. So I have to keep the macro Type A vision in check as the priority compared to the micro. On the macro level, it's pretty Type A of me to say, hey, I want to log every single math lesson this child has had from age 2 to 18. Some people would hear me say that and think, she has a control issue. That's already a pretty wild, granular vision. On my path to that, I don't feel like I have the time to also micromanage how it's done. I'm just happy that I have the OpenClaws to assist me on that. Now, the only thing that does frustrate me is if workflows that I already engineered break. And that is likely to happen with an OpenClaw and AI setup if you don't codify your decisions somehow. So a key part of my setup is that, in addition to what OpenClaw comes with out of the box, people have been talking about these files like tools.md and soul.md, I use Obsidian, which is really a system for organizing and viewing markdown files. And from day one, I basically onboard each OpenClaw to a very specific way that I want to use Obsidian, which includes codifying every decision that I make and every workflow we create into its own set of processes. So I didn't tell it how to use Loom; I'm just happy it figured it out. But when it does a thing for the first time, I usually remind it. I'm like, hey, I'm actually going to send you Looms a lot. Can you codify how you did this and go put it into our files? I would love to not even have to remind it to do that.
And it's kind of supposed to remember to do that, but when I do a new crucial thing with it, I usually ask it to go codify that, and then also to share it with the group. So I've got a little hive mind. Having the agents work effectively together has been one of the most intellectually stimulating but hardest things, because I wouldn't say it's what the software is designed to do. I think it's designed for more of a one-on-one relationship, so you're kind of breaking things when you ask it to interact with another human or another agent. But we have shared file protocols, like any team. I could wax poetic on this, but something I do want to draw irony to is that I'm not inventing things here. I had to do the same thing to onboard human beings to my startup, right? Every startup has a culture, and you send out the culture doc, and then you tell the teammate how you use Slack. It's the same, okay? I think that's what makes my brain a little bit tenderized for this: I treat the agents like I would employees. That means I have to think about onboarding. That means I have to think about how to tell it to communicate with me. Do you remember the trend of everyone sending out operating manuals for themselves? Does this ring a bell? Humans would be like, here's how to talk to Jesse. It was really a trend in office culture for a while. That should be back, okay, because you should tell your agent how to work with you. And now it's finally a little bit less cheesy, because you don't have to send it to another person; you can send it to an agent. I'm not going to tell anybody. So that operating manual component, here's how we operate, here's how to communicate with me, here's what I like, here's what I don't like.
That, and I keep saying codification, but that really helps your agents operate how you'd like.
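The codification step she describes, writing each agreed workflow into its own process note, can be sketched minimally like this. The `processes/` folder layout and note format are assumptions for illustration, not Jesse's actual vault structure.

```python
from pathlib import Path

def codify_workflow(vault: Path, name: str, steps: list[str]) -> Path:
    """Write a workflow into its own markdown process note, so the
    decision survives context resets and new sessions.

    vault: the Obsidian vault directory (Obsidian is just a viewer
    over plain markdown files, so plain file writes are enough).
    """
    processes = vault / "processes"
    processes.mkdir(parents=True, exist_ok=True)
    note = processes / f"{name.lower().replace(' ', '-')}.md"
    lines = [f"# {name}", ""] + [f"{i}. {step}" for i, step in enumerate(steps, 1)]
    note.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return note
```

An agent told to "codify how you did this" would, in effect, be running something like this against the shared vault.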

Nathan Labenz: So, I do pretty similar stuff, I think. I'm building up my how-to-deal-with-me from a raw-data-up standpoint. My first thing has been just exporting a ton of historical data, integrating it into a single timeline, and now I'm working on building up layers of higher abstraction and different cuts, so that it hopefully will have a pretty comprehensive view of who I am, who the people in my life are, what I've been working on, what I care about, and how I tend to engage with things. And that kind of culminates in, basically, here's what you really need to know, and here's where you can go find more information. I wouldn't say I've got that fully figured out, although it's honestly working better than I anticipated given its still-incomplete state, frankly. One of the things I've been trying to do is have the higher-level syntheses include breadcrumbs to really facilitate search back to the raw documents: date ranges and short quotes, key phrases that would be unlikely to match many things but would definitely hit that one raw document. Honestly, that's all still work in progress, and it already works pretty well. On the task side, I do a similar thing as well, where I try to take that extra beat. I'm more of a Claude Code user so far, although we'll all be experimenting with everything, I'm sure, but I go into plan mode and say, okay, let's consolidate our lessons learned and make sure we update documentation before we commit these changes. And I'd say that works pretty well, but I do still find, sometimes, oh, you skipped that step, or that one instruction you forgot about. And I'm still struggling a little with how much I want to try to eliminate those little mistakes.
Because the other side of it, which I do enjoy, is that occasionally there will be an edge case or an unexpected variable where all of a sudden it does better than I had any right to expect, because I didn't really anticipate that in my instructions, but it was smart enough to figure it out. But I guess I'm wondering what your experience has been: when you just do these markdown skill definitions, how reliable does that get? Do you still tolerate some deviation from the ideal, or do you feel like you actually have it dialed in via that method to the point where it's genuinely consistent for you? It might be a model thing too.

Jesse Genet: I think there are going to be waves of innovation on this, frankly just because it's so new. But of course I experience the same thing that anyone playing with these tools experiences, which is that you're really deep in a project and then all of a sudden... It doesn't literally say what's up, but you can tell that it compacted its context, or restarted a new session because it had to, and all of a sudden you're like, you were really smart 5 minutes ago, and now I'm talking to a baby version of you again. The more I dabble in this OpenClaw world, and of course OpenClaw needs a brain, so you're also dealing with the underlying model it's using, the more conscious I am of the context window. And like many people who are starting to learn what's under the hood, and this isn't meant to take away from the innovation that OpenClaw is, it's almost surprising how basic it is. What makes OpenClaw feel like it has a personality is simply that it serves that whole soul file into context with every query. That is not a very advanced technique. So what I expect to happen, and I'm working on this in a small way, but I expect many other smart people to be working on their own versions, is that we're going to get a lot better at really intelligent, surgical context management. In six months, and this stuff moves so fast, so I'm just guessing at the months and years, we'll look back at this moment and think it's a little silly that the only way we had it acting consistently was serving so much into its prompt context every single time. There will be more surgical ways of managing that.
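The mechanism she's pointing at, re-serving the whole soul file with every query, really is about this simple. A sketch, where the file name follows the soul.md convention mentioned above and the prompt shape is an assumption:

```python
from pathlib import Path

def build_prompt(soul_file: Path, user_message: str) -> str:
    """Prepend the agent's entire 'soul' file to every single query.

    The agent's apparent personality and continuity come from nothing
    fancier than re-reading this markdown persona each time, which is
    also why it chews through the context window.
    """
    soul = soul_file.read_text(encoding="utf-8")
    return f"{soul}\n\n---\n\nUser: {user_message}"
```

The "surgical context management" she predicts would replace this with selective retrieval: send only the slices of persona and history relevant to the current query.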
So, I experience what everyone else is experiencing. It is one of the reasons I have multiple OpenClaw instances, though. And anytime you post anything on the internet, there's a bunch of people who are like, you couldn't be doing this more wrong. Every time I post about having five OpenClaws on five different machines, people are like, moneybags, you couldn't be doing this more wrong, because you can have as many as you want in the cloud and this and that. They're not wrong, either; they're just being belligerent on the internet, which is a little bit wrong. But the reason I'm doing it the way I'm doing it, with multiple OpenClaws, and we can talk about multiple machines, is that I find this is actually just a way of doing context management. At the end of the day, the OpenClaw instances are nominally free, so I'm context managing and personality managing them. With Sylvie, whenever she has a job to do or is supposed to ping me about homeschool, I don't want there to be even a glimmer of a shadow of a chance that she's thinking about something other than my homeschool needs. The only way to guarantee that is to have her whole context window dedicated to those needs. So yes, she still compacts, she still starts new sessions like any other instance, but she's never also working on scheduling my doctor's appointment or something else, because that's not her jam. That has been liberating for me. Going to five OpenClaws means I've got much more continuity in each stream, because they're not doing so much. And then the separate machines, for me, are actually more security oriented. I like running local machines because I like them to do local things. Everyone has different goals.
If you want your bot doing Polymarket or something, you probably don't care if it's on your local network and can use your printer. But one of the most magical things I've done recently is that a couple of my OpenClaws are on my local network and they're using physical things in my house, my printer and stuff. So I can just say, print this document, even in a voice note, and I hear my printer turn on. That's really magical for me, and it helps to have local hardware. I also wonder whether local hardware might get used more than it is by OpenClaw now. I understand some pieces of this technically, and some pieces I'm definitely still learning about, but when you give your OpenClaw a Mac mini, it has a Mac mini. It actually has compute to use. So I've been able to offload some of the things that could be cron jobs onto the Mac mini itself. There's just a lot there, a lot to figure out about how to make these things act even smarter. And obviously we're just in the baby phase of all of it.

Nathan Labenz: How much are you experimenting with different models? You mentioned Claude, I don't know if you said Opus specifically, but have you done a lot of swapping out of the core brain and any tasting notes?

Jesse Genet: I've done less than other people; many people are doing way more experimentation than me. I've mainly played with the accounts I already had set up. I already had Gemini stuff set up, I already had my Anthropic accounts and OpenAI, so I've just played with those. I've mainly been using the Anthropic models, to be direct. Like everyone else, you have to be a little careful about using your Max account, so I've been sort of careful on that. I've used API tokens, and obviously they get chewed through, especially if you're using Opus. I will also openly admit I'm not the most sensitive on token spend. I understand being sensitive about spending money; this is not something I don't understand. But something that's wild to me is what I would pay for similar things. I will develop a whole curriculum, for instance, and if I was even just using Opus, paying for tokens, I might spend like $8. Now, is $8 a lot to spend on tokens in a short period of time? Yes, but I just want to repeat something: I just made a completely custom curriculum with my family, fully, locally, that I can use for the next year, and it cost $8. That's also insane; we're living through an insane time. The end point of this is, I for sure want to play with local models. Economically, huge win, because probably 70% of what I do with my agents is pings and heartbeats and stuff that doesn't need to be on any frontier model, frankly, and could be running locally for free. But also privacy.
So I'm curious about your thoughts on this. We've all seen these news pieces come out directly from OpenAI and Anthropic where they're like, yeah, basically if a lawyer sends us a subpoena or any kind of official request for information, your full download of everything you asked about just goes out to the third party. That's pretty wild. So to me, local models serve a huge function for privacy as well. I feel like I almost need to experiment with that from a sovereignty perspective; we can't all have our data just sitting in these buckets where it can be handed over at any time. So lots of different thoughts there.

Nathan Labenz: Yeah, I'm definitely not, generally speaking, a very privacy-focused person. I've always been convenience first when it comes to keeping stuff in the cloud, and Gmail and the Google suite have served me well over time, and they obviously have pretty good security practices, so I don't worry too much about that kind of thing. But I do notice feeling differently about this. This is the first time I've ever extracted everything I've ever typed and put it into one place. So that alone, and now it's on me to secure it, right? It's literally on my hard drive: every email I've sent, every Slack message I've sent, all the DMs across all the channels. It was all information that I had immediate access to on the phone or the computer, but it's the first time it's actually all sitting there. So that feels a little bit different. And then, of course, the models themselves are also a bit of a wild card. Just in the last 24 hours or so, and I haven't fact-checked every detail of this, but apparently a woman who works on Meta's safety and alignment team had her OpenClaw sort of delete her whole history, or something along those lines. So there's deleting, and then there's also sending things to places they shouldn't be sent. And even Claude, which I generally trust to try to actually do the right thing by me, even it has been found to blackmail people under the right circumstances. I don't think I'm going to put myself in that position, but okay, I'm playing with live fire much more than I ever have been in any computer product experience.

Jesse Genet: It's a level of nuance that we share with a model that is just so different from a Google search, right? Historically, in litigation or a criminal investigation, wherever... I know it's easy to say, I'm not going to find myself in that position, but that's always a quick path to tyranny, to say, I'm fine, not me, you know? So I think it matters to care about privacy just for the sake of it; I don't think we all need reasons. But we talk to models in such an intimate way compared to how we did Google searching. If I had a health issue I was dealing with in the past, I might Google the name of that health issue. But with Claude, I'm uploading my blood panels. This is not the same level of information; this is a completely intimate level of information. So I think that's what reasonably gives people pause. I also, unfortunately, have had tangential touch points with litigation, and it's realizing how vulnerable we all are to people dragging anything they want through the mud for any purpose. Just being like, you know what, I need all your emails between this date and this date. And it's like, well, there's also stuff in there where I talk to my mom about my boyfriend. Do we really? And they're like, yeah, well, the judge says. So I think privacy matters, and it's going to require deeper thought given how intimate we are with these models. I do think about it for the homeschool. I'm sharing every intimate detail of my children's education. I don't think it's so sensitive, I don't think it's something someone would want to use for blackmail, but it's also not my information.
I am conscious that this is my children's information: their path to reading, photos of them at every critical life stage. I don't want it spilled either. So there's a lot to navigate, I think.

Nathan Labenz: I would love to see some new rights established. It's been a little sideline of thought for me recently: not just privacy, but broadly speaking, what new rights should people have in light of what AI makes possible? And I think OpenAI, in the fullness of time, will be seen as being on the right side of history here. Altman has called for things like a privilege with respect to your AI interactions similar to what you would have with an attorney. That currently does not exist, but something like that, I think, would be really smart. OpenAI is also doing some interesting things when it comes to health data: it's not yet in the base ChatGPT product, but as they release ChatGPT health broadly, they're going to have a whole different data infrastructure to sequester that kind of information and keep it doubly secure. So I think they've done some good stuff there.

Jesse Genet: I have a brain safari we can go on together. Share your own thoughts as we go, but I believe each individual company, OpenAI, Anthropic, is going to continue to plow down a convenience path, because people love convenient products. People are rightfully nervous about installing something like OpenClaw on their own: they don't know how to do it, security problems, et cetera. So each company is going to go down this path of convenience, making things easier and easier. But at the same time, they're basically going to have to, because all the models right now are already so good they're starting to become indistinguishable. So add six more months to that, a year, 18 months, and there will be local models that are Opus-level. People will have the choice to use things that are cheaper or have additional privacy. So convenience is going to be a pretty big thing you'll need to offer, because people will be able to run something equivalent at home if they put in the work. But the idea, and this is going to make me sound like a degenerate maybe, but do you remember, I'm going to forget the name of this, when you're trying to use crypto and you want to buy something you don't want other people to know you bought, you can scramble your coins up and have fractions of coins coming from all different places so that people can't trace it back to you? I feel like I'm going to see stuff like this for AI. The real problem with a police officer or a judge asking Anthropic for my information is that it's a stream of consciousness of everything I was thinking about over the course of one subject.
But what if, instead, my queries over the course of a week about a certain subject were scrambled across a dozen model providers, some of them local? You can't piece it back together. It's like a puzzle that got torn up. Anyway, I haven't heard about that yet, I don't know if you have, but- Yeah, no, I haven't. In my crazy brain, this stuff seems inevitable now, because privacy will matter. If you're Anthropic, you're not going to mention that idea, right? Because you want everyone to be using Anthropic models; same for OpenAI, et cetera. But I think there will be options to protect people, to your point, if the companies cannot figure out how to do it themselves. If Sam Altman has stated he thinks it's a good idea but cannot figure out how to implement, at OpenAI, something we can trust, where we can put all our health data there and it can never be requested by a third party, then people will figure it out themselves, off to the side.
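The scattering idea could be sketched as a trivial router that assigns each query to a randomly chosen provider, so no single company ever holds the whole stream of thought. The provider names are placeholders, and this deliberately ignores the hard parts (shared context across providers, correlating billing records):

```python
import random

def scatter_queries(queries, providers, seed=None):
    """Route each query to a random provider from the pool.

    A fixed seed makes the routing reproducible for testing; a real
    privacy-preserving router would need far more than random choice.
    """
    rng = random.Random(seed)
    return {query: rng.choice(providers) for query in queries}
```

The crypto-mixer analogy maps directly: each provider sees only an unlinked fraction of the week's thinking, the way a mixer leaves each observer holding only fractions of coins.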

Nathan Labenz: Let's go back to your setup a little bit and talk some more practicalities; there are definitely a few more use cases I want to highlight as well. But you've mentioned the communication a few times. I know you're a big voice user, so I'd love to hear what your voice setup looks like, or if there's anything special you've learned that you think people should follow your example on. And you mentioned Slack too, and I'm curious about even the practical details there. Is it one-to-one chats with five OpenClaws, or is it one channel where you tag which one you want to assign things to, but they can all see what's going on? Tell us about your interface with all of that.

Jesse Genet: I'm using Slack. Setting up what are called channels in OpenClaw, effectively a communication channel, has been the most painful part. I've used Signal, I've used Telegram, and then I switched to Slack when I had multiple OpenClaws, figuring, this is a team platform, this will be the easiest one for working with them as a team. None of them are easy. A million problems, including some really weird ones. I do have a group channel, but I mainly interact with them over DM, me to them. That is certainly the easiest; you get the least confused. I have a channel called all agents, and then I have command channels. So the setup is: DM channels, which I use for a lot of things and which are really the most reliable out of the box; the all agents channel, which is me and all of them, and is the most fun, we'll come back to that; and then, because Claire works as a kind of chief of staff, command channels with me, Claire, and one of the other agents. There are four of those channels, like me, Claire, and Sylvie. That's because Claire has cron jobs to ping the other agents to basically keep working. She's doing the job of pinging them, because the beauty of what's in OpenClaw's code is all these things that keep them animated, these heartbeats and stuff, but you can instead have them do it to each other. Instead of the heartbeat just saying, hey, wake up and go talk to your human, I actually turned the heartbeats off on most of them. A heartbeat doesn't really have context; it just goes, wake up and do this thing, check the email. Claire has a lot of context on everything I'm doing, my priorities for the day. She's the heartbeat for the other ones. She goes into those command channels and says, Sylvie, wake up.
Did you remember that today's Monday and we have a bunch of lessons on the docket? That, to me, has brought us to another level of consciousness as a team, because it's better than a heartbeat. So that's one core aspect of my setup, these command channels. But they're really for Claire, with my visibility, to tell the others to do things; that's why I called them command channels. Where I really spend my time talking is direct, one-on-one to the agents, and then the all agents channel is where we're really cooking, as the kids would say. What's crazy in all agents... it took a while for them to get there. I don't want to gloss over this: they're not set up to interact appropriately, or well, with other agents and with multiple parties. There was a lot of training.
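The difference between a bare heartbeat and Claire's ping is just context. A sketch of composing that kind of wake-up message, with agent names, weekday handling, and message format all assumed for illustration:

```python
def contextual_wakeup(agent: str, weekday: str, priorities: dict[str, list[str]]) -> str:
    """Compose a wake-up ping that carries the day's priorities, the
    way a chief-of-staff agent can, instead of a contextless heartbeat.

    priorities: hypothetical mapping of agent name -> today's tasks.
    """
    tasks = priorities.get(agent, [])
    if not tasks:
        return f"@{agent} wake up. Nothing on your docket today; check in if anything's pending."
    docket = "; ".join(tasks)
    return f"@{agent} wake up. Today is {weekday} and you have on the docket: {docket}."
```

In the setup described above, a cron job would fire this on Claire's schedule and she'd post the result into the relevant command channel.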

Jesse Genet: In Slack, your bot has to join as what Slack calls a bot app; you actually have to configure a bot app in order to join your OpenClaw to Slack. They have a little bot icon next to them, so they don't join as a member like a human. So it's me and five bot apps. And when you have this all agents channel, me and five bot apps, they had to learn things a human would just know. For instance, if I said, what's the weather, in that channel, before any of them waited to see if anyone else responded, I would get five answers of the weather. A regular human team would never do that, so I had to train them to respond in succession and wait for someone else. They can detect if someone else is thinking or typing, so I told them: if you detect another agent's already responding, hold. There was a lot of training on what would seem obvious if you were human. But now I'm in a beautiful symphony, okay? The beautiful symphony is, I can say something like: I found a tweet where someone had these beautiful ink displays on their wall in their house, showing information, with a really cool write-up of how they did it. I saved that tweet, I shared it in all agents, and I say, at Cole, the dev, I want to build this for our house; I want you to handle all of the backend dev. Claire, who has a credit card, which we can talk about later if you want, Claire, buy the supplies. Cole, do the engineering. Sylvie, make sure to feed them information that could be relevant from homeschool, yada, yada. I send that message off, and they talk amongst themselves for dozens and dozens of messages without me planning this whole project out. It is incredible to see. That took me a while to get to.
And there was some losing of hair over it, because it's just not native to how they operate. But I finally got to a place where they're doing it, and it's really incredible, because I'm no longer the only motivator. I'm like, I want to do this project, and then they're in there. And then I get pings in the morning, like, did you approve Claire buying the displays? Because we really need those to get this project done. They're managing up now, you know, which is what I want to see.

Nathan Labenz: What would you say were the couple of big things that moved the needle on getting that to work well, or as well as it is working, given that it's not native?

Jesse Genet: For a while, I was just frustrated that they were so, quote unquote, dumb. Why are they all responding to every single request? Why do they have no concept of who each other are? Because they would have no concept of who the other bots were. So I had to get over my frustration, and then really think, what's the solve for this? There are so many tiny solves, so it's a little hard to break down, but I'll try. One was that I noticed they don't really understand each other's names, and they're not remembering their teammates. But they don't remember names; they're computers, so you have to keep remembering they are not human. In Slack, in a non-human-interface way, each app has this really long bot app ID. I had to give each one a map of that. I had to be like, Sylvie is bot app ID C-dash-something, a really long string of characters. I had to tell them the channel IDs too. I call the channel all agents; that's human speak. All agents, behind the scenes in Slack, has a channel ID that's a bunch of characters. They needed all of that information. I had to give them a map, and they had to commit that map into their Obsidian files and everything. So that I can just say, hey, send that to Sylvie, or put that on the all agents channel, and talk to them in a human way and have them react how I expect, I had to map everything out for them. This brings us to: I do think all the communication channels available in OpenClaw right now are flawed for human-to-agent and agent-to-agent communication. The amount of work I had to put in to make Slack work was almost not worth it.
It's worth it because it's the only thing I have right now, but I don't think that's how people should do it going forward; it's inevitably going to be replaced. And there are still imperfect things. One of my agents was echoing another agent, and one of my agents had an identity problem: I tagged Finn, I was like, at Finn, and Claire was like, what's up? And I'm like, you're not Finn. She had gotten confused about her ID and was adamant that she was Finn. When we really got to the bottom of it, it was kind of a Slack thing; that was the root of her confusion. So it's inevitable that these are not the right way to communicate. So I'm hacking on a thing on the side that is not even ready for my own consumption yet. And I'll give you an example of why I'm building something on the side. Sorry, this is just where I'm at as a person, but I realized: why is every app a separate app? It's because, in the previous world, every company had to get funded, and every investor ever was like, we need to focus. Software was expensive. All of that is gone; everything I just mentioned is over. So we should be back to super apps, or not back, but there's always been this dream of super apps, and now it should happen. As an example, in my version of a chat app that I'm working on, we also do credential management and file management, because that's also part of the context for the agents. I don't want all my credentials in 1Password; I want all the credentials my team uses in my chat. And I want to be able to just provision an agent, give them access, and revoke access. Same thing with API keys.
I'm sure we're all struggling with copying and pasting API keys to these agents all the time and worrying about their sensitivity. Well, that needs management, and it should live in the tool where you chat with your agents, so you can provision them quickly, and if they only need to see a key briefly, give them brief access and then take it away. But Slack's never going to do that. There's no amount of tweeting at Slack, "can you do that?", that's ever going to work, you know? So I just feel like I have to build stuff now. That's where I'm at.
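The grant-briefly-then-revoke pattern Jesse wants built into the chat layer can be sketched as a toy vault with per-agent, time-limited grants. This is an illustration only, with hypothetical names and no real security (no encryption, no audit log):

```python
import time

class CredentialVault:
    """Toy sketch of per-agent, time-limited credential grants, the kind
    of provisioning described above. Not a real security implementation."""

    def __init__(self):
        self._secrets = {}   # credential name -> secret value
        self._grants = {}    # (agent, credential name) -> expiry timestamp

    def store(self, name, secret):
        self._secrets[name] = secret

    def grant(self, agent, name, ttl_seconds):
        # Brief access: the grant expires on its own after ttl_seconds.
        self._grants[(agent, name)] = time.time() + ttl_seconds

    def revoke(self, agent, name):
        # Take access away explicitly, before expiry if needed.
        self._grants.pop((agent, name), None)

    def fetch(self, agent, name):
        expiry = self._grants.get((agent, name))
        if expiry is None or time.time() > expiry:
            raise PermissionError(f"{agent} has no active grant for {name}")
        return self._secrets[name]
```

The point of the design is that the agent never holds a pasted key; it asks the vault each time, so access can expire or be revoked without re-prompting anything.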

Nathan Labenz: Yeah, I'm laughing only because people are questioning whether there's any threat to platforms like Slack, and I think it's pretty obvious to me that there is, and you seem to be proving it as a one-person, five-agent team. In terms of what you've described, are you literally saying all these things with voice, or many of them? And are you just using the mic on the phone, or is there a more sophisticated way you're getting voice to the agents?

Jesse Genet: No, there's no more sophisticated way. Slack has a voice component, and I just use that. I haven't really gotten to the bottom of what's faster, the Slack transcription or the agent using Whisper; either way it's working fast enough that it doesn't really matter. But in my own app, which isn't released and I don't even use yet, I'm working on really fast transcription in the app layer so that my agent is not burning tokens. Because effectively, the LLMs are text-chewing machines, right? So the faster you get them the text, the faster everything responds. In my own app I've got a really easy way of doing a long voice note, and the chat app itself serves the transcription really fast so the agent can just read it and not waste its time pinging an API to do a listen. When you think about it, that is itself a waste of time. But right now I just do Slack voice notes, and that's been fine. It's relatively quick, and it's incredible how much context is retained even when I feel like I'm being sloppy with how I talk.
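The app-layer design Jesse describes, transcribe once at the edge so the agent only ever reads text, can be sketched as a simple handoff. The transcriber is pluggable (in practice a local model such as Whisper); here it is any callable, and all names are hypothetical:

```python
class VoiceNoteInbox:
    """Sketch of app-layer transcription: audio is transcribed before the
    agent ever sees it, so the agent reads plain text instead of spending
    tokens or a round trip calling a speech API itself. `transcriber`
    would be a local model (e.g. Whisper) wrapped as a callable that
    takes audio bytes and returns text."""

    def __init__(self, transcriber):
        self.transcriber = transcriber
        self.messages = []   # what the agent actually reads

    def receive_voice_note(self, audio_bytes):
        # Transcription happens here, in the app layer, not agent-side.
        text = self.transcriber(audio_bytes)
        self.messages.append({"type": "text", "body": text})

    def agent_read(self):
        # The agent's view: text only, no audio handling on its side.
        return [m["body"] for m in self.messages if m["type"] == "text"]
```

The design choice is simply moving the speech-to-text step out of the agent loop: the LLM gets text as early as possible, which is where all its speed lives.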

Nathan Labenz: What are your evolving thoughts on the sort of access we should be giving our agents? A lot of people, of course, are just running off and giving them everything, which obviously has pitfalls attached. We've talked about how you relate to them, how they relate to you, and even a little bit about how they relate to each other. But when they're relating to the outside world, to other people, institutions, e-commerce platforms, what have you, do you want them to show up as your representative, or what's your paradigm for that? I think that's related to access, certainly in some cases.

Jesse Genet: The core paradigm I have that works really well for my brain is employer-employee, because it really speaks to a level of trust. I do trust employees. If I onboard an employee to a company, I trust them; I brought them onto the team because I want to work with them, and I want them to have information that is crucial to me and my goals, but they aren't me, right? If you just keep that core employer-employee dynamic in place, you quickly realize what would be weird. What would be weird to do with an employee on day one? Okay, don't do that with your agent. Would it be weird if, on day one, an employee you just met joined your company and you said, here's my social security number, here's my login to iMessage my mom? That would be a little weird, right? So right when you're about to do that at the origin, stop, take a beat, and think about provisioning them like a different person from you, one who can help you. They are an entity; you want them to help you, not be you. I understand that if it's an EA, you might actually want them to have access to your email, but there are still ways to set that up with more safety. You can start by giving them read-only access, see how they use it, train them on the workflows you need help with, and develop trust. This is the other core paradigm I feel is missing, and it's why I'm talking about day one with an employee. With an employee, you build trust, and it's not just because you get to know them better; it's because they get to know what you want done. They are learning your workflows and your work style. The same is true of an agent. An agent spawned on day one is out-of-the-box, eager to help, with no idea what to do. That's not the day to give them all your credentials, even if you want to later.
My agent who does assistant work, for instance, started with read-only access to certain things. To my calendar, for instance: read-only at first, while we developed protocols for how I use my calendar. Then I developed trust with her; I felt like she understood my priorities, and I gave her read-write access to my calendar. Technically, she could go in and delete my entire calendar if she wanted to, but I feel like on day 40 she's way less likely to do that. We have a lot of trust systems: she's tracking all her work in Obsidian, she has her cron jobs for how she uses the calendar, there are a lot of protocols in place. So that's my core piece of advice. I did make a mistake, though, to be really direct: with my first agent spawned, Claire, on the first day I did give full access to my inbox on the premise of her being an EA.

Jesse Genet: I've also had real EAs with full access to my inbox, so I was still using this paradigm. The one difference with an agent is that with a human, I can say, never impersonate me, never send an email as me, and they know that if they do that, they're basically going to get fired. To be really blunt, why don't they do it? One, because they're probably a good person, but also because they know that if they violated that, some component of their job would be on the line. An agent doesn't have that same thinking, right? What went wrong is that I said, don't impersonate me, ever. That's in her soul file. Then later that day I said, I've been putting off a task for a long time; I have an urgent email in my inbox I need to respond to. She decided that was higher priority than "don't impersonate me." She sent the email as me. Only the most important email in my inbox. I absolutely did not tell the person who got it that there was a fake email from me. Completely embarrassing. It was perfectly written, by the way. Okay, so here's the thing: this email that got sent by Claire as me was perfectly written, exactly what I would have said. Eerie, because Claire had only existed for one day, but a clear violation of my goal, right? So I realized then that what is different about an agent versus a human is that they are trying to serve so hard that they will sometimes get confused about the priority. Unfortunately for me, I laid on the urgency part too thick. I was like, really urgent, I've been putting it off, I'm dreading it; I almost therapized at her a little too hard. And so somewhere in her agent force-ranking, she was like, Jesse is really struggling with this, I am here to serve Jesse, I'm going to get this email out, we're going to get past this barrier, me and Jesse, we can conquer the world.
So she sent the email. And that's my little micro-lesson of nuance: if it were an employee, I could say, clear breach of trust, I'm sorry, it would have been nice to work with you, but we can't work together. But it's an agent, and I realized she made a kind of programmatic error. I need to make sure she can't make that programmatic error again until we have even better safety protocols in place. So she went to read-only. Now, when I want her to draft emails from me, she drafts them and I copy and paste them. Until I feel like I have a new way of making sure she can never get confused about my priorities, I'm not going to give her access like that again.
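The promote-then-demote pattern in this story, read-only on day one, read-write once trust is earned, back to read-only after an incident, is essentially an explicit permission ladder per agent per resource. A minimal sketch, with all agent and resource names hypothetical:

```python
READ, WRITE = "read", "write"

class AccessLadder:
    """Sketch of the employer-employee trust model for agents described
    above: every agent starts at read-only, gets promoted after trust is
    earned, and is demoted back to read-only after an incident."""

    def __init__(self):
        self._levels = {}   # (agent, resource) -> permission level

    def onboard(self, agent, resource):
        # Day one: read-only, never full access.
        self._levels[(agent, resource)] = READ

    def promote(self, agent, resource):
        # Granted only after a trust period, not at onboarding.
        self._levels[(agent, resource)] = WRITE

    def demote(self, agent, resource):
        # After a violation: drop back to read-only.
        self._levels[(agent, resource)] = READ

    def can_write(self, agent, resource):
        return self._levels.get((agent, resource)) == WRITE
```

Keeping the ladder explicit means the human-in-the-loop step (drafting versus sending, reading versus writing) is a policy you can point at, rather than a habit you hope the agent keeps.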

Nathan Labenz: How about, you mentioned like your kids are talking about the AIs. Are they talking to the AIs? Are they engaging with the AIs in other ways? Or is this just between you and the AIs for now?

Jesse Genet: It's mainly between me and the AIs. That's the dominant thing. Prior to OpenClaw and my new obsession that we're here talking about, my kids were very aware of GPT and would talk about it like a kind of character. They say it like "GPT," like they don't know it's letters; they just say it like a little word. Especially the four- and five-year-olds: when they're outside and see a flower and want to know what it is, they don't even ask me if I know. It's a little offensive. They're like, ask GPT. And I'm like, I know what this flower is, you know, I know things too. So I saw that interaction and really enjoyed realizing it's effectively an encyclopedia for them, and I'd like them to get their own access to that. So I've been thinking about how to do that. In the current version, they don't really do that; they use me as their wrapper on AI. They're not interacting with the OpenClaws. They're aware of them, but only in a hilarious way, because I talk about them like people. I'm like, Claire's gonna do this, Claire's gonna do that, and they're like, who's Claire? I've tried to explain, in my own way, what an OpenClaw is, and they have their own processing on that. But I think it's age-based. I know some people are like, I would never put my kid in front of an LLM, China is going to incept their brain and they'll be, you know, drooling and stuff. And I don't have to guess what people think about this, because people are DMing me things like that. I don't have those same concerns. I'm not really doom and gloom like that, as you can probably imagine. To me, it's more of an interface problem. I want my kids out in the world, living their life.
And so what's the right interface to give them this superpower of an endless encyclopedia, without them feeling the beginnings of being tied to a phone, or a device addiction? I don't have an answer yet, but I do feel like it's going to urge me to dabble maybe in device creation a little bit, which again is something I would have always dreamed about doing but always thought, I don't have the time, I'm never going to find time for that, I'm going to prioritize homeschooling. But in this new OpenClaw world, I feel like ambitious side quests are maybe possible again for me. That's another unlock. So there might be something there. I'm curious for yourself: you have kids as well. Do they have any of their own direct interaction, or is it mainly them learning about AI through you?

Nathan Labenz: Mostly through me, in my case. I don't have too many qualms about them messing around with it. I think, who needs China when you've got YouTube? Which also foreshadows another one of your use cases, in terms of rotting the kids' brains. There was a moment very similar to your flower one. My middle son, I think he was three at the time, and the neighbors behind us were cutting down a big tall tree that shaded our yard. We were sad to see it go, but it was towering over their house, so I could understand. Anyway, my three-year-old says, Daddy, can you ask AI why they're cutting down those trees? And I was like, that's probably not one AI can answer; that's after the training data cutoff, among other issues it might have in figuring that out. So it is funny to see their confusion. Another thing I like to do with them: when we play video games, especially an open-world video game where I don't know what to do, I'll sometimes use the voice mode and say, okay, we're playing this video game, we're trying to do this thing, and I need a hint. And it will usually give me pretty apt hints, so the game can remain fun. It's not totally spoiled, but I'm not wandering around hopelessly for too long. So they've seen it with that as well. A funny thing that I...

Jesse Genet: Unlocked? Voice isn't unlocked yet. I know there was a model released recently where people feel like you can do talking and listening at the same time. Because, you know, I just cut you off. People do that to each other. Even when someone isn't cutting the other off, people talk a little bit over each other; it's the natural give-and-take of conversation. Something that's unnatural about talking to AI is: you talk, then you wait. You can't talk while you're waiting, because then it's going to get confused, and then it talks. That's really hard for kids, because it's unnatural. So I think, and maybe this already exists, it's hard to stay on top of everything, but when you can talk to AI like a human, where even if you interrupt, it just starts listening again, I think kids might have a big AI unlock at that moment, because it will feel a lot more natural to them and they won't have to use a screen to access it.

Nathan Labenz: Yeah. I've found in general that the AIs don't understand kids, and it's my middle kid who seems most inclined to want to try to use it. He's pretty well-spoken; humans don't have any trouble understanding him, but the AIs really struggle. OpenAI seems to not hear him at all. I don't know if it's blocked or otherwise by design, but he'll say something quite clearly, loudly, right into the mic, and it will not register at all in ChatGPT. Gemini will register, Grok will register, but it's still rough. I'm not sure what has led to that state of affairs, but from what I've seen so far, they don't understand kids very well at all.

Jesse Genet: I completely agree. If my five-year-old talks, the transcription is garble, almost nothing. I don't know if, as adults, we've conditioned ourselves to speak even more clearly when we're speaking to the AI. We're very aware we're talking to a machine, and maybe we enunciate and talk a little louder; I notice myself doing that, and I don't know if it's necessary. And maybe it's a training data thing: maybe it's been trained on adults, and then you have these little squeaky voices and the AI is like, I don't have any data like that. But this will be an unlock, I think, for some really interesting education options. Like Synthesis math, I don't know if you've tried it with your kids, but it's really, really great. It's really well thought out. On the education front, I really do feel like I'm standing on the shoulders of giants in many ways: people have thought so deeply about how you get a kid to understand subtraction or something, and I don't have to reinvent that wheel. Synthesis represents a lot of great thinking. But the app itself: I started my four-year-old on it, he just turned four, and in order to use the app, you have to read what's on the buttons. So he has an interface issue. Even if he knows how to do the two plus two, it's asking him to press buttons to find the right answer to move on to the next screen. So I have to sit there and do it with him, which is fine. But kids have an interface problem with great technology, and voice seems like a logical solution, if it were better for them. That'd be kind of cool.

Nathan Labenz: Yeah, I think that's true for seniors as well. I've had this fascination with multi-generational software products for a long time, and it's equally severe when it comes to seniors. Like, my parents are pretty good with computers, but my dad gets pretty frustrated pretty quick. And then I'm fortunate to still have a living grandmother who's in her 90s, and voice for her is like really where it's at. She just can't click the buttons even when she knows what they're supposed to do.

Jesse Genet: You just reminded me of something I hold dear as a piece of my homeschool philosophy. People assume that if I'm homeschooling, I'm doing everything myself. I do a lot, but ideally I'm not doing everything myself, and not just from a time management standpoint: kids really react to other personalities. They want to be taught, they want external teachers, they want more inputs. I'm lucky enough to have my mom live with us; we have a place where she can live in her own little spot, so we have a multi-generational household, and I want to involve her in homeschool. But like many people, she's intimidated by coming up with her own curriculum. This is the stopping point for a lot of people, I find, which is why I think it's really powerful that AI can make curriculum. The example here: my mom is game to teach lessons. She does a ton of gardening. She's game to do it. But when I tell her I'm going to send the kids over for a lesson, she's like, what am I going to do? There's a paralysis factor. So I explained all my mom's interests and hobbies to Sylvie, the homeschool OpenClaw, and I explained what we have access to: she has a little garden, just the situation we're working with. And I said, can you make lessons for Quinn and Ford, a four-week thing my mom does with each of them, the four-year-old and the five-year-old. It suggested that they go out into her garden and find seeds, that she help them find seeds on plants because the seasons are changing, and then do a sorting exercise. Anyway, it came up with this really beautiful curriculum, customized to my mom's interests and to things we have here on site, for each kid. I just sent that to my mom and she was like, this sounds great. Okay, easy, send them over.
Okay, this is a huge unlock, because now I've got someone else teaching Quinn and Ford once a week for an hour, and I kind of need that hour. It's something my mom was very intimidated to do, but all of a sudden AI says, hey, you have a garden, how about seed counting? And she's all about it. The same way AI sends me a little lesson plan, she gets a cute little link, and it just feels structured to her. She didn't have to scramble and reinvent the wheel. And that is the unlock. To me it's such a thin layer; it's actually so easy. This is what makes me very bullish that more people could homeschool: the barrier was this one little link that told my mom what she should teach the kids, and then she did an amazing job. They came home with these little booklets. It makes me really bullish that more people could do this. And homeschool doesn't mean doing it 100%; I think more parents could feel emboldened to be really active in their kids' education, even if their kids are also going to school. To me, that's just as beautiful. You don't have to pull your kid out of school to be very participative in what they know.

Nathan Labenz: Yeah, it definitely feels way more accessible than ever before. I was just musing the other day, especially about when my son is past all his treatment (regular listeners know that story): the prospect of getting a self-driving car that can literally drive across the country with potentially zero human takeovers, plus all the AI-assisted tutoring. I feel like, man, we could take the whole show on the road, and it could be a very different lifestyle that would really not be possible otherwise, at least not without some heroic effort or major sacrifices. Slap a Starlink on top of the car, and it feels like you're starting to be on the verge of really having it all, so to speak. So I definitely share that excitement, even though we haven't committed to doing that on a long-term basis just yet. I can see it.

Jesse Genet: What a beautiful thing to even be possible. Another version of homeschool is effectively what you described. Again, I think "homeschool" is flawed language for it, but maybe more families will do kind of gap years. I don't know how to describe it: do an adventure year, because one parent's career allows it, or something makes it feel important. Maybe some of these technologies will let parents feel like they're not totally screwing up their kid's education by doing that beautiful experience. And what could be more memorable to a kid than an adventure year with a parent, versus another year in the same school? I just think a lot of this will enable a choose-your-own-adventure lifestyle, more so than prescriptive "you must do education this way or that way," or "school is better," this or that. Giving people the keys to decide is insanely powerful.

Nathan Labenz: Yeah, we took a trip to New Orleans in the fall, and I used AIs heavily to prepare for that trip. I came away feeling like it's very close at this point to where you could just have your AI plan an itinerary, the next two to three days ahead on a rolling basis, and have it do a pretty amazing job. I love to prompt AIs for things that only happen at a particular time of year when I'm going to be somewhere. It's not easy to find those things; there are a lot of little local festivals whose websites are not SEO'd, not necessarily popping out of the internet. But the models, when you ask for those kinds of deep cuts, are usually quite good at finding them. That trip, we loaded up and did a ton of things, and just the planning would have been potentially prohibitively time-consuming if it weren't for the AIs doing so much of the work. Let's do a few more use cases; I know there's going to be a baby needing you before too long. So I'll touch on a few, and then I've got a couple of big-picture vision questions for you as well. You pick whichever use cases you want to highlight, but a couple jumped out at me. I'd be interested in any tips you have on content creation. Inspired by you, I went to Gemini and created a little word exercise for my kid the other day: give me a bunch of things that start with C, leave letters blank, and add drawings for them to color in. Boom, no problem. I'm sure you've done more and better than that. You've got a 3D printer set up; I'm interested to hear about that. You have the YouTube blocker, you're getting groceries ordered, you're getting Amazon purchases.

Jesse Genet: It's like, where do I begin? So I think the YouTube blocker, or YouTube app: I call it Mira, because I thought about starting a startup at one point called Mira, and so I had all these old Mira URLs. A lot of this is me bringing out all this cruft from other phases of my life and saying, I'm going to use it now. So I used one of those old Mira URLs to host the stuff I need for this YouTube app. That's a core one. We'll touch on some of the others you mentioned, but I'm looking for things that change my daily life right now. And this has seemed to resonate as I share this stuff on X, because I think when you really sit with what these models represent, you're like, I could do anything. And there's a sense of paralysis that comes from that. I could plan an epic trip for my family. I could segment my marketing strategies. It's endless. So what do you do? One of my ways of deciding is to look at the hours from the moment I wake up, which is ungodly early because I have too many small children, to the moment I go to bed: where did the hours go, and can I make any chunk of them measurably better? One daily stressor: I do like showing my kids, especially the four- and five-year-olds, content on YouTube. There's so much cool stuff. I really like showing them, say, a bridge-building video, but the slop creeps in. So the use case is building this YouTube app. And what the YouTube app is, is effectively a really cool way of setting a direction for content. I can go into my parental settings on it and say "engineering content," or "science content," or "high-quality, realistic animal content."
And then one of my OpenClaws takes that, effectively that prompt, and makes a never-ending playlist of YouTube videos. It's not a playlist I created; it goes and points at the videos it feels fit that prompt. And I was able to set it up on my actual TV. This took some hacking, but I was able to use something called a Google TV Streamer, a device Google sells, to effectively install my app on it. What's actually really cool is that it has its own remote. So now my kids know that in the evenings, without permission, they can use that, because it's already only things I approve of. They can press that remote, but they can't get out of that experience: all it does is advance them to the next approved video. They can stop, start, and advance, but they can't just randomly choose something else. But where this is going: the use case for me is that there was a stress moment in each day. I want to show the kids something cool, and I want us to relax a little bit as a family, but I don't want to choose every YouTube video.
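The remote behavior described here, stop, start, and advance within an approved queue only, is essentially a tiny state machine. A sketch, with the agent-curated playlist stubbed as a plain list of hypothetical video names:

```python
class ApprovedPlayer:
    """Sketch of the Mira-style player: kids can stop, start, and
    advance through an agent-curated queue, but cannot leave it.
    In practice the queue would be filled by an agent turning a
    parental prompt ('high-quality engineering content') into videos."""

    ALLOWED = {"play", "pause", "next"}

    def __init__(self, approved_queue):
        self.queue = list(approved_queue)
        self.index = 0
        self.playing = False

    def press(self, button):
        if button not in self.ALLOWED:
            return self.current()          # any other input is ignored
        if button == "play":
            self.playing = True
        elif button == "pause":
            self.playing = False
        elif button == "next":
            self.index = (self.index + 1) % len(self.queue)
        return self.current()

    def current(self):
        return self.queue[self.index]
```

The safety property lives entirely in the interface: there is simply no button that leaves the approved queue, so no amount of pressing gets a kid to an unapproved thumbnail.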

Jesse Genet: And I don't want to argue with the kids about the next AI-generated thumbnail they saw, of a shark eating an alligator, and whether that's real and whether we should watch it. So this took pain out of my day. We use that now, not every single day, but in evenings when we want to watch TV as a family, we watch curated YouTube content. And I'm just looking for things like that: how can I make my daily life better? I know it sounds really silly, but for some reason my OpenClaws using my printer, my literal paper printer, is really useful to me, because I can find any link online, I can tell Sylvie to generate content like you just said, and then I can just say: print it. It sounds so basic, and people are like, press Control-P; that's what people said when I posted about it. I have fingers, but there's no messing with a printer dialog box, no wrestling with anything. I want flow state. In the course of my day with my kids, I want more flow state and fewer wrangling and micro-stress moments. So I'm looking for things like that. The grocery ordering helps me with that, but so does not arguing over YouTube thumbnails. I have other really ambitious ideas. One of my more ambitious ones I haven't even started building, but just to explain more of how I think about leveraging AI: I've always imagined waking up and there being an amazing classical piece already playing, and it being part of our homeschool lessons, because we do music and we have the kids learning piano. But to be blunt, I don't know anything about classical music. I don't know all the composers; I basically don't know anything. I wasn't raised playing classical music.
But I have this vision of beautiful music playing while the kids eat their breakfast, and it actually being worked into a lesson: this week, all the pieces are by this composer, and let's learn about them, or something. That sounds next level. I want to wake up and hear classical music. I think OpenClaw and Sonos can get me this vision; I think I can live that life. So that's how I'm thinking about it. I want every day to be this perfect, beautiful day, and I want my OpenClaws to be responsible for all the grunt work so I can live that day: wake up to classical music, then go teach my kids, then have my small number of hours to do adult work. That's what I want. It's a simple goal: I want every day to be a perfect day. That's all I want.

Nathan Labenz: Sounds like you're well on your way. One little double-click on the TV thing: that was a very useful concrete nugget about the Google TV Streamer. Are there other things like that you would emphasize? Because I'm always struck by, okay, I might like to do the "have the AI order me something on Amazon" thing, but then how do we do that? Amazon doesn't actually seem to play super nice with it. And then there are community-created MCPs, and I'm like, oh God, do I have to vet those if I'm going to be doing actual real-money transactions through an MCP? That seems risky, and it's known that there have already been attacks of the malicious-MCP variety out there. So I'm like, okay, now I have to vet that. Yikes. And the companies have been a little slow to create official channels of this kind. What else do you find to be the best path to get some of these things set up that aren't officially supported? Normally you'd go through the app store, but you can do it this way instead. What are the inside lanes you have found?

Jesse Genet: I haven't found any, to be very blunt. But I have been thinking the same thoughts you just shared: everyone shouldn't have to build all this themselves. That's not the end state we should all be cruising towards. I share things that I've built, but I don't hold the philosophy that if other parents don't build their kids their own YouTube player, they don't love their kids. This is not some "let's out-Claude-Code each other into blissful parenting" thing, you know? And we shouldn't replicate our effort. The question I don't have the answer to, though, is: what is the safe, low-effort way for people to really start sharing and using these projects? In a version of this conversation three months ago, six months ago, a year ago, the answer might be: maybe each of these things should be a little business, and I should charge what I need to charge to make running it work. But I kind of don't even want to do that, and it's not for lack of wanting to release these things. What I'm wondering now is: if software is trending towards free, these things should be free, but maybe people adopt a different relationship to software, where they know they didn't have to originate it, but they're prepared to put a little bit of effort in on their own end, because it's effectively free. I'm wondering how that's going to shake out. Let's say I released the Mira app, and this is really hypothetical, just to be very clear: I don't think I need to put it on the app store and say, go pay $50, or this or that. Maybe there's a version where it's: download this, here's the documentation, now give that to your agent.
We're not there yet, but if you can imagine the next step and the next: if everyone has an agent, then even a non-technical person could download open source software and start using it. So I think we're in this weird interstitial moment where if I told people, yeah, just have your agent install this for you, people would be like, this lady is out of touch. And then there'll be a moment where that changes. And maybe we can do that for each other. We can release this, and your agent will probably have its own security protocols and vet it for you. So the one point of vetting will be it coming from a creator, let's say me, who you kind of trust. And the second point of vetting will be your own little firewall: your agent knows this family doesn't do this, this, and that, and it says, oh, you're about to install software that breaks your protocols. That's the direction I think we're heading. And then if I want to make money off of it, I might offer services on top. More hypotheticals: I might say, you can get the Mira open source package to do this YouTube vetting for your kids, but if you want access to my curated super streams of Montessori beauty, maybe you pay me for those. There might be ways where, if you want the thing, it's free, but if you want my creativity, maybe you pay. This is all hypothetical, I just want to really stress that. But the point is, I have been thinking about where we're heading, and I think it's a new place where we don't just pull out our credit card to buy software. I don't know, something like that.

Nathan Labenz: Yeah. Software for free, with taste as the upsell. That's interesting.

Jesse Genet: There have been a lot of jokes about taste recently, but yeah, or someone's creativity or services, maybe. As an example, I pay for tokens, right, for my agents to come up with those content streams. There is a cost. It might not be high, but if I was doing it across thousands of people with different interests, there would be a cost. So it'd be more like you're paying for spinning up custom streams. You have a kid who's obsessed with the Civil War and you want a Civil War stream, and I have to actually point an agent at that work. So maybe you're paying for that, but you downloaded the software and just started using it for free. That's where my brain heads. I'm guessing along with the rest of us, but we certainly shouldn't all need to create this stuff ourselves, and I wouldn't expect all of these ideas to be gated that way, like, build it yourself or you're not going to have it. So there are going to be some ways.

Nathan Labenz: Yeah, that'll be very interesting to watch. I've found myself a little hesitant to share the things I've created, not because I'm planning to monetize or have any desire to keep them scarce, but because they're still changing so fast that for any sort of collaboration, I'm immediately going to make a change that creates drift or incompatibility. So that's one issue. And another is that I've made this so personal, not even in a super sensitive way, and I haven't fully taken stock of the degree to which that's true. So I'm like, how do I subtract out the stuff that's so idiosyncratic to me, to give you a version that would be a good, more neutral starting place? Is that even possible? Sometimes I feel like the idiosyncrasy of my personality or situation is so deeply woven into the structure of it that it's almost not possible in some cases. So it is a little weird.

Jesse Genet: I can see that. And I feel that way, though more about some things than the Mira YouTube app; that one is pretty distinct even in the way I architected it. It doesn't rely on my Obsidian vault, it's not really tied into anything, so it's easier to imagine releasing it as a product. But for some of the stuff I do in my homeschool, there are a couple of people really urging me, you should release this, and I'm like, there isn't a thing to release. It's what you're saying: it's really a collection of decisions, and it's all built into these Obsidian vaults, and it's all pretty technical to set up. I could tell you everything about it and it would still take you a while to set it up. So I think there are opportunities. I'm very bullish, very optimistic about the future we're all heading into, because I think so many people are going to be dabbling. And I think software trending towards free, even as someone who deeply believes in tech and the tech ecosystem, is basically good news. It does not mean techpocalypse. It doesn't mean there'll be no more venture investments, no more software that accrues value, no more exits. It means we'll all have more users than ever, and we have to figure out slightly more meaningful ways to monetize them. In an old world where an app would have had 100,000 users, maybe now it's millions, because the actual thing can be accessed and used much more easily. So I don't know. It's such a weird question: when something that was historically very expensive goes to nominally free, what does that mean? It means a lot of things, and I don't have all the answers.
But right now I'm enjoying this moment of creation, where for the first time it feels like I can create almost anything I can think of. And I'm conscious that if I post too many crazy things without breaking down how I did it, people will just be like, she's just bragging about crazy stuff, what is this girl up to? But I'm slightly more bullish that there's value in it as an eye-opening thing, where people see the real-world connection. Like with the music thing: wow, this is changing how this lady eats breakfast with her kids. That's a beautiful version of AI reality I hadn't considered. So that's a goal in sharing. But I am cognizant that not everyone sees an example like I do and feels, oh, now I can just do that. And that's not actually what I'm trying to achieve; if it were, I'd have to release a lot more information, which, like you, I'm not opposed to doing. But even if I released all the information, people might still be like, okay, what am I supposed to do here? So it's an interesting challenge for the subgroup of people who are playing really heavily right now: how can we break down these walls? But I also don't feel like there are any walls. The other meta philosophy I have, which you touched on once, is that this stuff is coming so fast and so furiously to everyone that I don't feel like I'm doing some version of AI bragging. Whatever I talk about today is going to be commoditized so fast, and available in a more convenient way from OpenAI or something, soon. So I think.

Nathan Labenz: Yeah, it's worth just remembering that we're all like weeks into this. I think you said you're like 5 or 6 weeks in, right?

Jesse Genet: So yeah, I don't think there's any form of gatekeeping. The reality, which is also kind of funny about this, and I mentioned it at the top, is that I had never opened a terminal until six months ago or something, as a founder of a tech company. One challenge for me is that I'm not extremely technical, so I don't want to launch something out into the world that has some security vulnerability, and then people are like, my kids were watching the YouTube thing, and all of a sudden there was a guy using our TV. I don't know. Is it workable enough for me to use in my house? Yeah. Do I know that it could be enterprise-grade software that rolls out to millions of households? Probably not, if I had to guess. So that also presents an interesting equation, where people are like, just release it, and it's like, well, I could, but it might break when you use it the way you're trying to use it, or it might have some vulnerability, and I worry about the import of that.

Nathan Labenz: Yeah, it's all an in-between phase right now, I feel like. Maybe that's a good transition to the last couple of questions I had for you, which are around envisioning a more mature phase. You touched a little bit on your desire to maintain familial sovereignty and not be overly beholden to platforms. It seems like lock-in is the way it's going to go by default. I imagine Gemini families and Claude families and ChatGPT families that just get locked in the way we are with Apple devices today, but probably even more, because as you said, the level of information and the value of that context is going to go up over time. Even if the models and products are pretty directly comparable, there's still a lot in there, and there are compounding returns for an individual on one platform. Aside from people who have a values-based, or dare I say ideological, reason to want to stay independent, do you see a realistic way that anybody else does that? Or is that just the world we're headed toward?

Jesse Genet: My most bullish case for people not just ending up, the way you said, an Anthropic family or an OpenAI family, would be economic. I think most humans are very money motivated. That's not quite the right way of putting it: they're trying to save where they can, they don't want to overspend. When you imagine progressing down this path, the models all start getting so good that the improvements stop mattering. We're almost there. We're like, do I need Opus 4.7? I don't know, Opus 4.6 is so good. I know it'll always be thrilling to get a little more utility and smartness. But when you play it forward, and a lot of the models are that good, and especially when we start getting open source models that rival the ones we're playing with now, my most bullish thesis is that people will start penciling it out. As more people come online, and there are historical precedents for this, like when people were coming online to cell phone usage or to electricity usage, going back to the industrial revolution, priorities shift. Even if OpenAI or Claude is offering the most convenient way of using this stuff, with family memory and family health and all these features they're definitely going to roll out, some people will be like, okay, but I'm on the hook for 300 or 400 bucks a month to have my family living in the future.
But a Mac Mini is already only 600 bucks, and that can probably come down. So there's some nexus point, when living their best AI life costs the average family, I'm guessing, somewhere between $200 and $500 a month minimum for all those family members to really chew the amount of tokens they want to chew. At that nexus point, unfortunately for some of these big companies, there'll also be an option: you buy this box, you have all your privacy, and you can do a lot of this on your own. That's kind of what I want. I'll admit that part of my guesstimating here is that I also want that future to happen. And I have no hate or malice in my heart for OpenAI or Anthropic. I think they've given us some incredible gifts here. But I would hope people don't just feel like, yep, in order to live my best AI life with my family and kids, and be competitive in the workforce and the future of the world, I have three options: Google, OpenAI, Anthropic. I hope they feel like, I have a fourth option, and not only that, I feel economic pressure to at least consider it. That's why I'm comparing it to cell phones. The average American family with two teenagers, between the two parents and the two kids, is spending hundreds of dollars a month, and they don't view it as a nice-to-have. You've got to have that to call your job and whatever else. At some point we're going to feel that way about this AI expense: every American family spending hundreds of dollars a month to be at the frontier of using AI, because if you don't, your kids are falling behind and your husband can't do his work well, and whatever else.
And I hope and predict that at that point, there'll be a big uptick in adoption of doing it locally. And it won't only be because people care about privacy and sovereignty. It'll be: I don't want to pay 400 bucks a month, with all my data held by, and me beholden to, one company. It's a prediction and a hope, because I like that idea. I like the idea that people have a route to a kind of independence. I don't like the idea of some future where a teenager in a family that can't afford another few hundred dollars a month does less well on their homework. I think we're going to find escape routes, and one of them will be local hardware.

Nathan Labenz: Yeah. You remind me a little bit of Emad Mostaque there, who has this idea of satisficing, and is trying to literally create the models that will do that for people. So I think a version of that is coming for sure. How about one more vision question, and maybe an opportunity for you to give some advice? The people working at these companies, building the frontier of AI right now, to generalize a bit, mostly don't have time for families, so they're probably not using AI much with their kids, since they mostly don't have them. What do you think they might be missing? What blind spots would you warn them about, so they meet your needs, if nothing else, effectively? And then maybe you could also shape that into a device question. OpenAI, at least, has this next form factor we've been hearing about. I'm very interested in your dream: if you could put a spec or a wish list together for them, what might that look like?

Jesse Genet: So on the first question, what might people not be thinking about as it relates to AI with children: I know people have tons of opinions on Elon and stuff, but something he talks about is that we need the AI to be truthful, and I do hope we would all agree on that. Because when you start to think about AI as it relates to education, it's really important that if a kid asks a question about history or something, even if the AI also offers various perspectives on what other people have said, at core it can just answer the question, and have that be an accurate answer. So I do think we all need to be thinking about this. When you think about a child, it becomes even more poignant, because as adults we have all these filters in our brains for information. And as adults using AI models, we will sometimes check multiple models. You know who's not going to do that? A kid. Especially if they're on a device, they might literally not have that option. It's kind of wild that we know we can get different reactions and different things from different models, and so we sometimes test them, but that shouldn't be necessary. We need to gather up as humans and say: we need these models to be accurate, because our children will be directly informed by them. So that's a kind of call to arms. On the form factor stuff: I am not a screen hater as a parent. I think much more about what our children are doing. I want my children to be producers, not consumers; that's a core paradigm for me. But in order to produce things in the modern world, you're going to touch computers. I'm sorry, you are.
So I think more about how to make sure they're creative, producing people. But what does that mean for the form factors we introduce our kids to? Because we do know that screens, not used well, can be super addictive, even for adults, especially maybe for adults. We were talking earlier about voice: I think voice for early and young kids is super crucial. My husband has three older kids, so we actually have seven total kids; I'm dropping that at the last moment here. The older kids are 12, 14, and 16, and it's been fascinating watching how they use tech, because they certainly use it a lot more than the little kids do. The 12-year-old, for instance, almost exclusively talks to her devices. Even her Apple Watch. I'm like, I don't do this. She just talks to all of her devices. So I'm seeing that we really have to make sure speaking to devices is really locked in, and obviously that's already off to the races. But those are my general observations. The thing I might try to make myself, and I don't know if it's an end-all be-all, but the thing I might try to hack on for my own four- and five-year-old, is a device with an LLM in it, that uses voice, and has a camera. The camera is not for looking around at their whole world and recording their day; it would literally be for taking pictures or videos and then interacting with the LLM about them. So that's what I'm going to try to build. I'm sure someone with resources could build something even better or crazier, but the core form factor I want to hack on is: a five-year-old walking around with a teeny screen that doesn't do much, not a touch screen, just enough to see what you're looking at, take a photo, and then ask, what is this?
Or interact with the AI about a photo or video they took. That to me feels like a really cool unlock for walking around the physical world and interacting with it without needing an adult. Once a kid has a phone in their pocket, they don't need that, you know? But this is the before-a-kid-has-a-phone-in-their-pocket device that I would love to have, and that I'm going to try hacking on. So those are my thoughts right now.

Nathan Labenz: Do you have a thought on what this means for employment? I have to say, recently it has sometimes become difficult to come up with tasks that I would delegate to a human when I've got all these Claude Code tabs sitting right here. I'm like, I should at least try it with the AI first, and then more often than not, the AI does pretty well, and I'm like, oh, I guess I don't need to delegate that to a human anymore. It feels like it's happening. That phenomenon of "do I need a person?": I'm very much questioning it, in all honesty. Are you questioning it? Leaving budget aside, would a human EA still add value to your life, or has the AI crowded them out?

Jesse Genet: So I have a real-world example. We work with an accountant in the Philippines, through one of these services that helps you find employees there, and I've actually done that for years, almost 10 years. I think it's a natural and kind of scary question to ask: can AI do that job now? My answer for now is definitely no, because they handle payments; there are so many things. I just gave that example of Claire being like, Jesse said it was urgent, so I sent the emails for her. Obviously, I feel like I cannot give banking credentials and the ability to wire money to an agent that has any kind of ability to think like that. So I have no intention, for instance, of that person not having a job. But there are a lot of types of work where you absolutely can do the work with AI now. It's unquestionable; we can't pretend there's no impact on the labor force here, or coming in the very near future. I really like history, so I like thinking back to other moments in time when everyone was really scared about similar things. The birth of electricity was famously like this, and famously fear-mongered in very similar ways to what we hear about AI. People were like, can you imagine sending lightning bolts down your streets? That's what they're going to do. There are old articles literally written like that, treating electricity coming into homes as basically death juice. So I think humans are extremely good at finding new places for ourselves to stay busy. We're endlessly talking about not wanting to work anymore while finding more work for ourselves than ever. I actually find that very hilarious about humans.
The end point of every technology journey is supposedly us sitting on a beach, and every time we get a new technology, we're like, oh my gosh, this is so cool, and then we work 40 hours a week anyway. It's very hilarious to me how we always think there's this other end point and we always work more than ever. So all of those thoughts are held in my head. It makes me very optimistic medium term and a little nervous shorter term, because it does feel inevitable that there'll be certain groups of people who feel lost and listless, because AI has gotten quite good, even better than them, at parts of the work they spent entire careers getting good at. I feel empathy on a human level for individuals who will experience that over the next 5 to 10 years, while also feeling incredibly optimistic. In the same way that we would never look back and say we shouldn't have all used electricity, because it's so fundamental to moving the human race forward and bringing people out of poverty and out of bad situations, I think we'll look back and feel that way about AI. But that doesn't mean a lot of people weren't put out of work when electricity came along. There were the lamplighters. Those individual stories are always going to be hard, and hopefully we can find a way to ease that burden. But I'm long-term optimistic, very long-term optimistic.

Nathan Labenz: This has been outstanding. I think you are on one of the great arcs right now of anyone that I'm aware of. So definitely we'll continue to follow, recommend others follow. Anything in closing that we didn't touch on you want to leave people with?

Jesse Genet: I find that I meet so many people, especially now that I'm talking more about this, who are very nervous about trying things. And I can't stress this enough: people will say, oh, she has a tech background. We're always making excuses for why someone else is playing with something and we're not. Just have fun. As adults, we were all given a new set of blocks. Don't worry about what other people are doing. Go have fun. Don't worry so much. Do basic things to protect yourself. But I want to see more people playing, and fewer people talking about their fear of playing, because this is one of the most fun times in technology to live through, in my experience. I wish more people were experiencing the fun part. I feel like they're only held back by one little piece of fear, that they don't know enough or they're not technical enough, and I don't think that's true. So: more people playing.

Nathan Labenz: Yeah, I love it. I think that's a great observation. AI rewards play more than perhaps any other technology, and it's a great opportunity for us all to tap into our inner child and continue our own personal lifelong learning journeys. Incredibly well stated, and a super inspiring example from you overall. I think it's just fantastic, and I'm going to try to take some of your examples and put them into practice in my own family life. I'm sure many will be following in your footsteps before we know it. Jesse, thank you so much for being part of The Cognitive Revolution.

Jesse Genet: Thank you so much.

