My Positive Vision for the AI Future, from the Existential Hope Podcast

This episode explores building a positive vision for the AI future, discussing how AI could transform daily life, from self-driving cars to personalized tutoring and improved health. It also covers fostering societal expectations and the impact of fiction on AI's development.

Show Notes

In this special crossover episode from Beatrice Erkers' Existential Hope podcast, The Cognitive Revolution's host explores the crucial, often-neglected question of building a positive vision for the future in the AI era. The discussion delves into what a new social contract and daily life could look like, covering transformative applications from self-driving cars and personalized tutoring to democratizing access to expertise and radically improving health. Listeners will gain insights into fostering higher societal expectations for progress, understanding Eric Drexler's "comprehensive AI services," and the profound impact of fiction in shaping AI's future.

LINKS:

Sponsors:

Google AI Studio:

Google AI Studio features a revamped coding experience to turn your ideas into reality faster than ever. Describe your app and Gemini will automatically wire up the right models and APIs for you at https://ai.studio/build

Shopify:

Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive

Tasklet:

Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai

PRODUCED BY:

https://aipodcast.ing

CHAPTERS:

(00:00) Sponsor: Google AI Studio

(00:31) About the Episode

(02:43) Introducing Nathan Labenz

(03:50) A Positive AI Future (Part 1)

(12:23) Sponsors: Shopify | Tasklet

(15:31) A Positive AI Future (Part 2)

(15:31) Excitement for Self-Driving

(19:25) Visions for 100 Years

(24:58) Near-Term Transformative AI

(31:08) AI's Impact on Sectors

(41:16) Comprehensive AI Services

(52:16) Balancing Hope and Risk

(56:51) Raising Societal Ambitions

(01:01:30) Branching Sci-Fi Narratives

(01:06:00) Lessons From Podcasting

(01:09:52) Outro

SOCIAL LINKS:

Website: https://www.cognitiverevolution.ai

Twitter (Podcast): https://x.com/cogrev_podcast

Twitter (Nathan): https://x.com/labenz

LinkedIn: https://linkedin.com/in/nathanlabenz/

Youtube: https://youtube.com/@CognitiveRevolutionPodcast

Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431

Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk


Transcript

Introduction

Hello, and welcome back to the Cognitive Revolution!

Today I'm excited to share a crossover episode from the Existential Hope podcast, hosted by Beatrice Erkers, where I recently had the pleasure of appearing as a guest.

If you've followed this feed for any length of time, you've heard me say that the scarcest resource is a positive vision for the future.  What sort of social contract should we be working toward in the AI era? What do we hope daily life will look like?  And what sources of value will continue to resonate in a post-scarcity world?

These questions, in my view, get far too little attention from the AI community today, myself included, and so Beatrice's invitation was a welcome challenge to spend some time developing and articulating some of my own ideas on the topic.  

As you'll hear, we cover a lot of ground in this conversation, exploring the mundane-but-transformative deployment of self-driving cars, the potential for AI-powered individual tutoring for all students, my hope that access not just to expertise but also to experiences could be democratized, the promise of radically improved health through AI-accelerated science & medicine, and more. 

We also touch on Eric Drexler's "comprehensive AI services" vision as a possible way to square our desire for superhuman-quality services with our need for safety and control, my sense that we should encourage higher expectations for progress across society as a whole, and why I think that writing fiction is one of the most powerful ways in which an individual can shape the future.  

With that in mind, I encourage everyone to check out the Tomorrow's AI and AI Pathways projects that we discuss, and to challenge yourself to help flesh out how these different AI futures might actually unfold in a positive way – and of course subscribe to The Existential Hope podcast – where true visionaries, including Nobel Laureate David Baker and Adam Marblestone, CEO of Convergent Research, share their positive visions for the future.

For now, I hope you enjoy this exploration of my own partial vision for a positive AI future, with Beatrice Erkers from the Existential Hope podcast.


Main Episode

Beatrice Erkers: I am very happy today to be joined by Nathan Labenz, who runs The Cognitive Revolution podcast. We were just talking briefly before starting this recording about how many episodes you've done. It's so many. So it's a bit intimidating, I think, to interview such an experienced podcast host. Feel free to direct me if you have any prompts.

Nathan Labenz: Not at all. Well, thank you. I'm excited to be here. Looking forward to this conversation. And for my part, I'm basically just obsessed with AI and trying to understand it as well as I can. And of course, it's such a horizontal technology, touching all aspects of life and society, that there's a never-ending number of angles to come at it from. So eight episodes a month honestly isn't really enough to get after all the angles that I would like, but it's probably as many as anybody could reasonably produce, and certainly I don't expect anybody to listen to all of them. But it's been a really fun learning journey for me. I don't consider myself to be very charismatic, in all honesty, so I mostly just pinch myself that anybody wants to listen to it at all. But I'm looking forward to this conversation here today with you.

Beatrice Erkers: Yeah. Well, today's angle is going to be, you know, the theme of this podcast, Existential Hope. And I know you've said that the scarcest resource is a positive vision for the future. So I think that's what we're going to try to dig into today, especially in relation to AI, which I think is the most interesting question right now. So let's start: if you woke up 10, 20 years from now and we had a really good positive future with AI, what would you see around you?

Nathan Labenz: It's a hard question. And I say that all the time: the scarcest resource is a positive vision for the future. I really do mean that. And I don't think that I am particularly advantaged in terms of having a crystal ball. Another one of my common jokes is that my crystal ball gets real foggy more than a few months out. So I'm in kind of uncomfortable territory trying to see farther into the future than that. But here we go. I think we are going to see just about everything change. The reason I called my podcast The Cognitive Revolution is by kind of obvious analogy to the industrial revolution and the agricultural revolution. You go back into the before times for these previous revolutions, and what people were doing before versus after is just totally different, right? I mean, at some point we were small bands of hunter-gatherers, always on the move, always searching for food, living literally hand to mouth. Then we figured out how to grow food, and that created a whole different thing and a much bigger scale at which people could come together and live together. And those economies of scale created the beginning of the technology exponential, which looked really flat for a long time but seemingly was already an exponential even then, when people didn't know it. And the same thing happened again with the industrial revolution. Mechanizing farming took us from a scenario where 80, 90% of people were growing food to today, when I think only about 2% of people in the developed world are needed to grow food, because machines can do a lot of that work. So I think the same thing happens for cognitive labor. And if AI were to stop progressing today, I think we already have powerful enough AI to automate the majority of cognitive work. We don't have everything wired up in the right way. We don't have all the data structured in the right way for AI to consume it. So there's a lot of plumbing sort of work, implementation sort of work, that would need to be done to realize that dream of automation. And that would take, I think, five to 10 years, and probably longer, because I always tend to underestimate the timeline to implementation. But I think we do have sufficiently powerful AI that we could automate a majority of cognitive work already. And then the question is: well, what are we going to do? In the past, everybody moved from roving around to settling down and farming, then from farming to factory jobs, and then from factory jobs to these sort of white-collar cognitive jobs, at least for a significant part of the economy. What do people do if AIs can handle the majority of that? And I don't know. I think one candidate answer is that the caring economy, broadly, could be the next big thing. I think that's a little bit challenging even there, because you do see these studies and survey results that often show people prefer talking to AIs for a lot of things. AI doctors, for example, tend to get higher ratings on bedside manner than human doctors, because they do have some unfair advantages, right? They can be infinitely patient. They're not time-bound in the way that human doctors are time-bound. So they'll answer all your questions, and they don't really have any constraints on how much time they can spend with you.
So I wouldn't say that's an exactly fair head-to-head comparison, but there are some real advantages there. I'm not sure how that shakes out. Teaching is another area where, for as long as the world is at all recognizable, we will want humans to be role models for the next generation of humans. But we already see schools where AIs are now starting to be responsible for all the content, and humans are moving into a more guide, coach, mentor, motivator sort of role, where the AI gives you the lessons and grades your homework and the humans are there for these softer skills. Is there enough demand for that that people will need to do that for jobs, and that those jobs will absorb all the people who are probably going to get displaced from the jobs they're currently doing? I don't know. I think that seems like a stretch, and it's kind of a rocky transition, but that's at least one answer. I think another answer is that we might actually get that life of leisure that people have dreamed about for a long time. Famously, Keynes said, 100 years ago or whatever now, that by the time we got to today, we should be working 15-hour weeks. We're obviously not, but maybe that's another way things could go. Like many things in AI, I have very mixed, ambivalent feelings on what Zuckerberg is up to, but I think he has had really interesting things to say here. In the post where he introduced the personal superintelligence concept, one of the things he pointed out, which I thought was really interesting, is the macro trend that people are just spending less time working and more time socializing, creating, and consuming media. Maybe consuming media is a little bit crowding out the connecting and socializing, which isn't necessarily all to the good. But that big shift from work to leisure maybe accelerates, and maybe people start to do a lot of stuff in VR and AR, and maybe Neuralink becomes a very broad technology, and these experiences, especially if they're literally connected into your brain, could start to be extremely compelling. So maybe we're spending a lot of time in VR is one answer for the 10-years-from-now future. Beyond that, it's really hard to say, right? Classically, people note that we didn't know what the cell phone was going to bring us. Nobody had Uber in mind when we introduced the iPhone. So what are the apps that are going to be built on the AI technology foundation? It's very early and very hard to say, I think. But one of the things I think could be really interesting is a sort of radical egalitarian mode of access to frontier technology. I often use the example of doctors again. It's a scarce resource today. Not everybody can become a doctor. It takes a long time, a lot of training, it's very expensive, yada yada. Not everybody can access a good doctor. With AI technology, that should change, and people should be able to get quality medical advice regardless of their means. So that's exciting. And I was reminded, in thinking about this, of the Andy Warhol quote where he goes on about how the great thing about American consumer culture, this was back in, I think, the '60s, is that everybody can get the same stuff.
The president drinks Coke and movie stars drink Coke, and you can get your own can of Coke, and you know that it's the same as the one they're drinking, and it's all the same. Even if you were richer, you couldn't get a better Coke. That's been true of the iPhone. I don't think that'll be true of AI in exactly the same way, because I do think there will probably be frontier, very high-powered systems that frankly not everybody needs on a daily basis, but that there are still uses for. But if you think about that VR world, one of the big differences right now between the haves and have-nots is just the sort of experiences they can access. And that could perhaps really collapse, where the exciting life of adventure that is currently only available to the select few could perhaps be made scalable, through some combination of Neuralink-type connections and VR, all delivered in a low-energy, low-resource way that could scale to potentially everyone, in the way that Coca-Cola did years ago. So I'm probably going to get a lot more wrong there than right, but those are at least some musings about just how different the future could be. And it's coming at us fast. The industrial revolution took like 200 years, depending on how you want to count. The electrification of the United States took 60 years, from around 1880, when electric power first became available, to 1940, when basically everybody finally had electricity. A huge difference there was that they had to actually build out the wires to everybody's house. Now we already have the wires that deliver the AI to the point of consumption. And so you can have these centralized upgrades, where from one version to the next, the capability leap in what everybody at scale can access can flip much faster than in previous generations. So I think it's going to be a wild ride, exciting and also a little bit scary.

Beatrice Erkers: It's both a very exciting and probably challenging time to be a human, coming up at least. I feel like there's a lot of threads to pull on. Thank you for being so concrete about these things. Is there anything where you're like, personally, I would just be so excited for this?

Nathan Labenz: Well, I've dreamed of self-driving cars since I was a kid. I used to sit in the backseat with my mom or dad driving. And first of all, so often we'd be sitting at a red light and nobody's going the other way. And that bothered me so much, even as a kid. And even then, it felt like if we had a little more will, that was probably solvable even without AI. You don't need AI to change the light when it's clear that nobody's coming, right? So maybe that will be a theme as we get into this, of what it's going to take to be successful: a little more social will, collective will to demand better. Higher expectations from the public, I think, is one thing that could be really critical to realizing the good future and making sure we don't get bogged down as we have at times. But now we've got self-driving, and it works. Waymo is amazing. I don't know if you've used one, but it's been a while actually since my last Waymo ride: already fully autonomous, summon it with the app, it shows up, nobody's in there, you get in, it drives you where you want to go. What was really striking to me was how quickly I got bored with it. I had been thinking about this for literally decades, but I found myself checking my phone five minutes into the ride and needed to intentionally remind myself: hey, you've been looking forward to this for a long time. Put your phone away and actually try to observe this moment and savor the first experience of a real self-driving car. But it was so good that it just felt like a background reality literally within minutes. The safety data seems to suggest that it is already a lot safer than human drivers. The price point at which it is selling, in the San Francisco Bay Area at least, is quite a bit higher than Uber. So it seems that people are willing to pay more for the self-driving experience, whether that's for the safety or because they don't want to talk to the driver. I'm not sure exactly what's driving that difference, but the difference in price seems to be pretty well-established; it has been sustained for a while at this point. But yeah, that's a huge one. I dream of going to visit my grandmother, who lives four hours away, and being able to either work or ideally sleep on the way there, just do an overnight in the car. Also, you start to imagine the different form factors too, right? If I truly don't have to pay attention, then you can have a very wide range of car types. You could have sleeper cars that you just get into and go to sleep, and you get up and get out of bed at your destination, and it's like a mobile hotel room. That alone would be an incredible improvement, and it should come with 30,000 fewer road deaths annually in the United States. And I think there's a million road deaths annually across the world. So it's going to take a while for that to be built out. But yeah, I've been waiting for that one for a long time.

Beatrice Erkers: Yeah, I agree. The first time I went in a Waymo, I was also just like, wow, and then quickly got used to it. But I think it's really one of those things that also just feels like it makes sense. When I went in one for the first time, it just felt like, oh, why aren't we already doing this? I actually did a special episode recently on autonomous vehicles and what we need to do to get them coming as soon as possible. Imagine Friday night going to bed in your car and waking up Saturday morning and being out in nature or something like that. That would just be amazing as well. If we zoom out a bit, though, one thing I think is also interesting to think about with existential hope is really big visions for the future, just thinking big about how good the future could be. In the previous prompt I gave you 10 to 20 years; I'll give you 100 years if you want, or even longer. Of course, your crystal ball gets fuzzy. Or foggy, maybe, is the term. But if you get to dream, what do you think would be a best-case scenario, especially in relation to AI?

Nathan Labenz: Yeah, that's a tough one for sure. I don't like the term war to describe what's going on in AI, because I want to make sure AI developments are nothing like war for as much and as long as possible. But the fog of war of even just what's going on with AI right now, I find to be a really hard thing to penetrate. I mean, even among people who are obviously extremely informed, very knowledgeable, even titans of the field, there are these super fundamental disagreements around what currently exists and what's going to happen in the immediate term, and the farther you go out into the future, the more radically difficult it gets, I think. But I'm on board with the people who would hope that we cure all the diseases, things that were totally fantastical. Until quite recently, that stuff just seemed to be totally limited to the realm of dreams; it was like, well, it's nice of you to dream that, but okay. Now, I do think, with AI's ability to grok what is going on at many different levels of biology, the potential for us to actually hit something like a Kurzweilian escape velocity, where every year your life expectancy increases by more than a year, no longer seems totally far-fetched. Even just in preparing for this, I was looking at a recent paper about the creation of novel antibiotics. We haven't had many antibiotics created in a long time, but out of a single group at MIT, multiple new antibiotics with new mechanisms of action that are effective against antibiotic-resistant microbes were just discovered. I think it's another kind of interesting sign of the times: that sort of thing, I swear, would have been all anybody could talk about if it had happened when I was a kid. And now, with something as big as that, I find most people, even people who are relatively plugged in, just haven't heard of it, because there are so many other things. And I increasingly have these blind spots too, even though I've created for myself the job of just trying to keep up with all this stuff. So yeah, curing all the diseases still sounds a bit hubristic perhaps, but it does seem increasingly not totally fantastical. And I would certainly like to live longer than my current life expectancy would suggest, and healthier, which is obviously critical. It's sort of a straw man, but people who aren't exposed to this kind of thinking much often say, well, that'll suck, you'll be older, you'll be decrepit for all these later years. And obviously that's not the real hope. So that's a big one. Becoming a multi-planetary species, I think, is also a great aspiration for humans. Elon has become hard to defend in some ways, for sure, and I won't defend all of his actions by any means, but the general idea that we should aspire to get off of planet Earth and get out into space, I think that makes a ton of sense. It's really interesting too, and I don't think we're going to have good answers on this for a while; this might be one of the last questions we get any traction on: do we think that non-carbon forms, or something that's truly very different from us in terms of substrate, could carry on our values, our consciousness, our intent, our volition, whatever, into space for us? I really don't know.
Another way to come at that is: do AIs have any moral value? Are they moral patients? Do they experience anything? Does their experience matter? I'm really radically uncertain on those questions. It does seem like, if you were to ask what is the best way for us to project ourselves into space, getting away from the current form of our body would probably be a natural part of a lot of design plans for that. But I'm really unsure whether we could create something on a totally different substrate and feel confident that it matters in the same way that we are confident that we matter. So I don't know. But anyway, there's a long time to figure out some of those details, and we'll see what comes. But I think the goal of getting out into space is very worthy, and we should definitely dream those kinds of big dreams. And can the AIs help us get there, or do the AIs sort of take the baton and actually go out and do that? There's this idea of the worthy successor, which I think is, on the one hand, a dangerous idea that we should not lean into in the immediate term, without having a lot of these difficult questions answered far more than we have them answered today. But if we did have those answers, if I really felt like I understood where consciousness comes from, and believed that these things had it and that they're having positive experiences, then I could imagine a sort of worthy successor that would genuinely be worthy, and might be a lot more suited to travel through space over great distances and great lengths of time. But yeah, I don't know, that's all pretty fuzzy stuff, I suppose.

Beatrice Erkers: No, yeah, well, I mean, fuzzy, but I think they're also concrete ideas, and all very interesting. And I agree, hard to be confident on. But just on the consciousness part, I feel like it's one of those things that, even if we obviously cannot be certain of it, it's just such a big if-true that it's worth thinking about a little bit already, just because of that. And then to scale it back a little, zooming back to the here and now: is there anything you think is maybe a bit underestimated in terms of near-term AI applications, something that's maybe boring but transformative? Is there anything like that that you've come across recently?

Nathan Labenz: Yeah. I mean, with the inference-time scaling paradigm, folks like Dario and Sam Altman have been talking about this for a while, but it's hard for people to make the leap with them. And even for me, I'm always trying to keep up with what the true frontier visionaries are thinking and envisioning. In the boring-but-transformative category, this just came up in a couple of different threads: the idea of the spreadsheet, right? People used to sit there with big pieces of paper and pencil and do the calculations and have to erase and fill in again or whatever. And then you had the spreadsheet, and you could just make a change, and your change could propagate through all the calculations in an auto-updating way. I think this idea of auto-updating, or things that are running in the background for us, could, even in a world where life is still mostly very recognizable, give us a ton of value. For one thing, just imagine a second opinion for everything. I kind of live this way myself today. If I'm going to send an important correspondence, or if I'm working on some sort of deal, or I get a contract from someone and I've got to sign it, I'll take that contract, run it through three or four AIs, and be like: I'm this party, here's the previous communication, here's the contract I just got, what should I be concerned about? And now the AIs' outputs come so fast and in such volume that sometimes I'll then take the three or four of those, put them into another window, and be like: okay, give me a single comprehensive summary of all four of these, to dedupe those points and make sure I don't miss anything. Then, once I work through that and come to some idea of whether I'm ready to accept it, okay, great. But maybe I have some points I want to discuss; I'll bring those back to the AI as well. It's not about having the AI tell me what to do, although I do think that will be coming: we're going to have more and more autonomous agents, and we'll certainly have more and more decision-making delegated to AIs. But for the moment, it's about having that second check, or third, fourth check, on all the things that I do. And that just gives me the ability to move a lot faster, with a lot more confidence and a lot more accuracy in what I'm doing. I think you could also see that in all sorts of matchmaking, whether that's economic or romantic or even just getting together with friends. Why don't we hang out with friends every night? One reason is that coordinating that stuff takes a lot of time. And by the time you're done with your workday, you're like, well, I don't know, who's available? And do I have time for this? And can we even figure it out? But the AIs can definitely do that sort of thing if we set them up to run in the background, across a pretty wide, and certainly growing, number of different kinds of matchmaking problems. So I think that will be really interesting too. Greasing the wheels of commerce, greasing the wheels of dating markets: all these things are relatively high friction. This is another example where exactly how you get to the good equilibrium is going to be an interesting challenge.
We are seeing this now in hiring. One of the examples I always give to business owners of things they should be doing with AI that they're probably not yet: they should have an agent that is going out and searching, all the time, for candidates they might want to proactively reach out to. Every CEO of a startup or mid-size company or whatever, if you asked, hey, should you be doing more proactive recruiting, would basically say, yeah, ideally we should, right? But who has time for that? Similarly with cold outbound sales prospecting: should you be doing it? All else equal, if you could just add on 10 good targeted emails to possible new customers every day, would you do that? Yeah, you probably should, no doubt. But again, who has the time? Well, the AIs do have the time. There is the question now of how we deal with all that volume on the receiving end. Companies are starting to report that it is getting harder to separate the real resumes, so to speak, from the AI-generated ones. There are interesting ideas about maybe you should have to pay a dollar to apply to a job, to limit the spray-and-pray approach, something like that. I think these are the kinds of new mechanisms that are going to take some time to develop. We're going to have to encounter some of these problems, live with them for a minute, and then figure out solutions. But I definitely firmly believe that there's just a lot of value to be unlocked in matches that are not made, deals that are not struck, just because people don't have the time. And there's another big principle: some things are hard to do but easy to verify. I think a lot of deals fall into this. It's hard to find the next customer. It's hard to find the engineer you want to hire. It's maybe hard to find the person you'd be interested in going on a date with, whatever. But when it's presented to you, a lot of times you can recognize it pretty quickly. If we can get the AIs to propose good ideas to us, and then we can quickly verify them, I think there's just a huge amount of value to be created by automating away a lot of that friction.
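
As a concrete aside for readers who want to try the "second opinion for everything" workflow described above: the shape is simply to fan a document out to several models, then have one more pass merge and dedupe the reviews. Below is a minimal sketch of that pattern, not anything from the episode itself; the `ask` helper is a hypothetical stand-in for whichever chat-completion API you use.

```python
# Minimal sketch of the "second opinion for everything" workflow:
# fan a contract out to several models, then merge the reviews.
# `ask(model, prompt)` is a hypothetical stub, not a real library call.

def ask(model: str, prompt: str) -> str:
    """Send `prompt` to `model` and return its reply (stub)."""
    raise NotImplementedError("wire this up to your chat-completion API")

def second_opinions(contract: str, prior_comms: str, models: list[str]) -> str:
    prompt = (
        "I am one party to the deal below. Given the prior communication "
        "and this contract, what should I be concerned about?\n\n"
        f"Prior communication:\n{prior_comms}\n\nContract:\n{contract}"
    )
    # Step 1: collect an independent review from each model.
    reviews = [ask(model, prompt) for model in models]
    # Step 2: one more pass to merge and dedupe overlapping points.
    combined = "\n\n---\n\n".join(reviews)
    return ask(
        models[0],
        "Give me a single comprehensive summary of these reviews, "
        f"deduplicating repeated points:\n\n{combined}",
    )
```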

Beatrice Erkers: Yeah, you could find the best friends or partners ever if you were able to have almost everyone in the world scanned, so to speak, by an AI.

Nathan Labenz: I'm doing this a little bit in my family. My wife and I have three kids, and there's always a question of what we're going to do on the weekend. Can we get these kids out of the house? Three boys, and they go wild if they don't get out of the house. So it has become a priority for me to find something to take them to do on, ideally, every weekend day. But the search for that is another one of these things; it's kind of tough. ChatGPT is pretty good at, hey, just what's happening in Detroit this weekend? ChatGPT now even remembers my family and knows what I've been interested in in the past. It's really good at doing a certainly much more comprehensive search than I would do on all the little weekend festivals and this and that. And so we are actually getting out more, as a result of the search cost for something interesting to do having dropped significantly. And I would say that's a 2025 thing. In 2024, a bit. But now it's getting good, to the point where I can get the kids to bed on Friday night and be like: okay, AI, what should we do tomorrow so these kids are not bouncing off the walls by mid-afternoon? And more often than not, I get a pretty good answer.

Beatrice Erkers: That's really interesting, and a really concrete use case. Yeah, I definitely trust it more with travel planning and stuff like that these days as well, just for advice and comparing options and things like this. Is there a specific sector that you think is maybe a bit more ripe to have AI fully integrated and shaping its trajectory? I mean, science, healthcare, education, governance: both in terms of what the potential is for AI to transform it, but also which ones you think would actually be able to deal with some change right now.

Nathan Labenz: Yeah, I mean, I think it's all of the above, really. And I think it's just a question of timing, both on the development side and probably on the adoption side. The kind of canonical first answer is software engineering, and that seems to be driven by the fact that the AI developers themselves are software engineers; they're interested in solving their own problems. It's also driven by the fact that it's, at least comparatively, really easy to validate the outputs of the models in software work, because it's just text. The text gets compiled, it gets run, you get an error message really quickly. I just saw, in the last 24 hours I think, that Replit, which is a platform that I love for building software, introduced their Agent 3. And one of the big things that's different about their third-generation agent from the previous ones is that it not only writes the code to build your application, but it will then spin up a browser and become your QA agent. So the thing goes through this full loop: create, actually try to use it yourself, find the issues, and not just issues like did it compile or was there some immediate runtime error. It actually takes on the role of the user, uses the browser, and clicks in the same way a human would use the software, finds issues that way, and then comes back and tries to fix them. The tightness of that loop matters. I think one good heuristic for how quickly things will come to different industries is just how closed that loop is and how fast that feedback process can spin. And increasingly, the pattern of use that you'd have as an end user of a system like that looks kind of like the training paradigm. I wouldn't actually sign onto the idea that pure scaling of pre-training is over; my sense is that narrative has been a little bit overblown. But clearly there has been a big shift from just training on all the internet with pure next-token prediction to, okay, now we also need to sculpt the behavior of these things and teach them to be good at particular tasks of interest. And the way that's happening, especially in software, looks almost indistinguishable from, well, that's a little too strong, it's distinguishable, but it looks a lot like what you're doing as an end user: they give the AI a task, and it takes a number of attempts to solve that task. If it can get at least one attempt where it gets it right, then that's enough of a signal for it to get reward and learn to steer more toward the right solutions than the wrong ones. And that can all happen in a pretty tight feedback loop. Compare and contrast that with, say, medicine. I alluded to these antibiotics. There, you've got at least some part of the feedback loop that's just a lot slower; you can't run it nearly as fast. And again, maybe this loop will get closed too, because a big part of how they're developing the antibiotics in the first place is with these in silico experimental setups. It's like: okay, we have some idea of a target in a bacterium that, from general biology knowledge, we know would kill the bacterium if we could disable it. But how could we do it, right? So now you can generate huge numbers of candidate small molecules, or larger molecules; you've got a wide range of space to explore here. Generate huge numbers of these molecules, then run a simulation to see: does this bind?
Does it seem like it would work? Some other tools allow you to ask: would it bind to other random things and maybe cause collateral damage, or is it hyper-specific to this particular target that we have? And you can get pretty far that way. I should look this up, but I think the ratio in the paper of things they put forward as candidates to ones that actually worked was not super high; in other words, they were able to get a pretty high hit rate out of the in silico experiments. It's still going to have to go through, at least for now, a clinical trial process and the ultimate feedback of fewer people dying from bacterial diseases. That's going to take a while. So a lot of things will be rate-limited, I think, by those kinds of bottlenecks. If there's any bottleneck in the system, it'll slow down the iteration relative to pure software engineering. But the other thing that will start to happen is you'll have these in silico experiments or simulation environments, right? And that's also happening in self-driving a lot. They don't just train on actual data. They also augment that data in a ton of different ways and create all sorts of scenarios that they maybe never encountered in the wild, but which you would definitely want to be able to handle. I've seen examples where they'll show a helicopter landing on the highway in front of the car. That may never have happened in any training data they had, but they still wanted to make sure that you wouldn't drive directly into the helicopter that just landed in front of you. It's not to say that these offline bottlenecks have no workaround, but they're certainly a lot harder. I guess the way I think about what's really ripe is: where are there the fewest of those bottlenecks? And the bottlenecks could be social too. In education, right now, I'd say there's never been a better time to be a motivated learner. If you turn on ChatGPT's study and learn mode, go into voice mode, and give it access to your screen... For me, learning about biology, that's by far the best way to do it, because there's so much background knowledge, so many terms. I'm trying to understand what's going on at the intersection of AI and biology, and on the biology part, there's just so much. But when the AI can look over your shoulder and you can just casually say, hey, what's this? Or why does this even matter? Why are they even talking about this? Truly, there's never been a better time to be a motivated learner. And then the bottleneck maybe becomes: do we have a system that is designed to create and encourage motivated learners? What exactly is the purpose of the school system? What exactly are the incentives, and how do people in those systems understand their own interests? Those may prove to be important bottlenecks as well. But if you want to learn something, AI can help you dramatically accelerate that already today.
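
An editorial sketch of the closed-loop heuristic described above: generate candidates cheaply, verify them automatically, and fold failures back in. This is the common shape behind both the coding-agent loop (compile, run, and QA as the verifier) and the in silico screening loop (binding simulation as the verifier). The `generate` and `verify` parameters below are hypothetical stand-ins supplied by the caller, not any real tool's API.

```python
# Minimal sketch of a generate-and-verify loop. Per the heuristic above,
# the tighter and faster this loop can spin, the riper the domain is for
# AI. Both callables are placeholders the caller must supply.

from typing import Callable, Iterable

def closed_loop(
    generate: Callable[[str], Iterable[str]],  # prompt -> candidate solutions
    verify: Callable[[str], bool],             # cheap automated check
    task: str,
    max_rounds: int = 5,
) -> str | None:
    prompt = task
    for _ in range(max_rounds):
        for candidate in generate(prompt):
            if verify(candidate):  # hard to produce, easy to verify
                return candidate
        # Nothing passed: fold the failure back into the next attempt.
        prompt = task + "\n(Previous attempts failed verification; try again.)"
    return None  # the loop is only as fast as its slowest verifier
```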

Beatrice Erkers: Yeah, for sure. I didn't even know that there was a study and learn mode in ChatGPT.

Nathan Labenz: It's relatively new. Yeah, I'm hoping to get the product manager for that onto the podcast to talk about it more. And there are others: Khan Academy was an early pioneer of this, and I did an episode not too long ago with the founder of a school system called Alpha School, which has become kind of famous recently. They do academics in two hours in the morning, and then the afternoon is entirely enrichment: projects, field trips, group work, whatever. Kids get to explore their passions and their interests. And this is the same thing I alluded to earlier, where the AI is entirely responsible for delivering the content. They're not soft on academics at this school. You still have to learn all the same stuff. In the US, we have the Common Core curriculum, which, I don't even know a lot about it, but it's the officially sanctioned this-is-the-stuff-you're-supposed-to-learn standard, and it's what public school kids are measured on and so on. And the founder of this school system is kind of like: I want to show that what we're doing works on that level. We're not going off and creating our own curriculum. Our kids are scoring super high on the same exact tests that all the other kids are taking, but we're doing it in two hours. AI is doing all the content, AI is doing all the evaluation, and the adults at the school are now playing these other roles of mentor, coach, guide, et cetera. And two hours a day seems to be enough to cover what traditional classroom delivery covers. What do they call it? The sage on the stage; they've got good little phrases for this in education. The sage-on-the-stage model just isn't that efficient. So yeah, it's cool. My kids are a little young for this now, but I think they will have a radically different educational experience than I did, that's for sure.

Beatrice Erkers: Yeah. I heard about the Alpha School. That seems amazing. I'm very glad that someone is already doing it, so that your kids can have it soon, hopefully, as well. I also wanted to comment on the antibiotics thing; I forgot to comment on that before. That was something that I had completely missed, for example, and I'm very interested in this space, so that's amazing. And it also just reminds me: so this podcast is the Existential Hope podcast, it's part of the Foresight Institute, and we were co-founded by Eric Drexler and Christine Peterson. And Eric Drexler, in his old book Nanosystems, from the early '90s I think, wrote about basically, what do you call it, the design space. He was foremost thinking about molecular machines and what we could be doing. But I think that in silico and these things are just really, really promising and interesting, and hopefully the bridge between the world of bits and the world of atoms. So hopefully we'll see a lot more of that soon. I wanted to go toward the Drexler idea, because another thing that I've heard you mention a few times on your podcast is this idea that he had of comprehensive AI services: this web of more specialized AIs rather than general agents. Do you have any takes on comprehensive AI services? And I'd also be curious to hear if you have thoughts on how that compares to, for example, something like a tool AI approach, or scientist AI, if you've heard that one recently. How do you think about these sorts of approaches, both how you think they differ and which ones you think are most promising?

Nathan Labenz: Yeah, it's a great question. I love the comprehensive AI services vision. One thing I've observed in a couple of different realms of life is that anything in pure form is dangerous. That could be sugar, purified out of naturally occurring sugar-rich food, or it could be cocaine, purified out of coca leaves. You go to the Andes and people chew coca leaves their whole lives, and it's not a problem. But you purify it to cocaine and you're immediately dealing with something that's pretty dangerous. Heroin from poppy seeds, right? There are a lot of examples of this. And I generally think that what seems to be stable in nature is some sort of ecology, some sort of equilibrium, some sort of buffered system. The way we maintain homeostasis in our bodies is with a lot of buffers, so that any insult that comes into the system runs through multiple layers of defenses, and hopefully we have enough of those layers, and they can each push back in their own way to neutralize, and ultimately be resilient to, that threat. So the idea of the singleton and all this stuff is contentious; there are pros and cons to everything. But the idea of a singleton, or some sort of superintelligence that can do everything and is way more powerful than everything else, doesn't feel stable to me. That's not to say it couldn't ever be achieved, but I don't like the idea of a superintelligence that is better than all humanity at every task, because I have no idea how we would control such a thing. I just have to imagine it would probably spin out of control, and it might achieve its goals, to the degree that it has goals, but I have a hard time imagining that we could be in a stable, enduring equilibrium with such a thing for a long time. So I tend to prefer the idea of a more competitive, interactive, buffered system. And that, at least for me, is at the core of this comprehensive AI services idea. It's safety through narrowness. That's not to say the AIs aren't really good at what they do. They could be superhuman at what they do, in the same way that we have superhuman chess players that can only play chess, and superhuman protein-folding AIs that can only fold proteins, and you really don't have to worry, because you know what kinds of inputs they can accept and what kinds of outputs they can generate. You don't really have to worry that they're going to do something surprising to you. They could be surprising locally, within their domain, like, oh my God, I didn't expect that this protein would ever work this way, but they're going to stay in their lane. And I think that would be a really good design decision, if we could manage it: to have AIs that are potentially superhuman in their domain, but are in a pretty fundamental way limited to that domain, so that they don't run off, do an end-run around whatever guardrails we've tried to put in place, and surprise us in really negative ways. It is not a given that this solves everything; the gradual disempowerment crowd would maybe say, okay, even then, what's the role for the humans in that picture? And I think there are definitely still some hard questions to answer there.
Just in the last few months, we've seen three new memes, new phrases coined: gradual disempowerment being one, the intelligence curse being another, and the abundance trap being the third one that I've just recently come across. And they all seem to be getting at this idea that if the AIs are doing everything, what are we going to do? And what incentive are the relevant entities, whether they be governments or maybe AIs or, who knows, corporations, going to have to invest in the people, if we're not really needed to do the economically required work in the way that we used to be? So I wouldn't say that the comprehensive AI services idea solves all of this; I think people still have some really good questions. There's also the question of delivery models: you could have multiple delivery models for this, or you could have comprehensive AI services that are decentralized in their creation and decentralized in their ownership. An episode of the podcast coming out in two days is about user-owned AI through a crypto scheme, and "scheme" makes it sound, I think, a little less than it is; "protocol" is maybe the right word: one that would allow people to contribute to training and have a claim on the future revenue from inference from models they contribute to, in a decentralized way. That's fascinating stuff as well. It does create risk. People, of course, worry about weapon creation; that's the canonical one. If everybody has an AI that can, if asked, create a bioweapon and help you distribute it, then probably some people are going to do that. And then what are we going to do? Can we be ready for that? One way to answer it would be with a superhuman biodefense agent as part of the array of comprehensive AI services; part of being comprehensive would be having great biodefense, I suppose. One other take on the whole comprehensive AI services thing: it makes me quite uncomfortable that the plan at the frontier AI developers seems to be basically to try to get the AIs to be able to do the AI research, as soon and as well as possible, and to use that to accelerate the AI research, which is already going super fast. They're like: well, we've got 500 really good researchers here at DeepMind or at OpenAI or wherever, but if we had AIs that could perform at that level, we could have 5 million. That would be amazing. And I'm kind of like: oh God, that seems like a recipe for potentially creating something super powerful, but also kind of losing control over what it is we're creating, creating something that could be such an incredibly refined and powerful form of intelligence that it pierces through all the buffers we currently have. And time and speed matter here: this comprehensive AI services thing sounds like a slower-developing plan. I think that would probably be good, but I'm not sure how we get there from the trajectory we're on, where we have multiple players credibly approaching that tipping point where the AIs start to do the AI research.
OpenAI reported that o3 was able to do 40% of the pull requests of real OpenAI work that actually get put into their code base. And exactly what that measures, I mean, there are always these caveats and debates: how meaningful is that? What should we really understand that to mean? But it was like zero to 5% in the previous generation of models, so it clearly represents some meaningful scale-up. So yeah, I don't know. How do we get to comprehensive AI services that could come online in the timeframe before some of these Manhattan Project-style things fly a little too close to the sun? That's maybe the toughest question for me on the vision. And that's maybe where we do need some sort of regulation. People who dream of abundance are often quite allergic to the notion of regulation. Certainly, regulation has denied us a lot of abundance that I think we rightfully should have at this point, cheap nuclear energy being one super flagrant example, and probably a Warp Speed-style push for these new antibiotics would be another. But boy, the risk of getting AI to do all the AI research before we even really know what we're trying to create is one thing where, without wanting to be too much of a party pooper, I do think the government might have a little role in constraining that sort of thing, trying to change the incentives, because right now they're all racing each other. And it's hard to imagine how they get off that track and do something more buffered, more stable. Right now, it doesn't even seem like they have much of a plan for that, to be honest with you. And by the way, we're also seeing that the AIs are scheming more and starting to be deceptive. They're also becoming more situationally aware: when we do these evals, we're starting to see more and more that they actually recognize they're being evaluated, which means we have a harder time trusting that what they do in the evaluation process is even representative of what they're going to do outside of it. And the general vibe is kind of: hopefully we'll solve that along the way, maybe the AIs will be able to help, don't worry about it. And I'm definitely not comfortable with all that.

Beatrice Erkers: Yeah.

Nathan Labenz: How do we get from there to the comprehensive AI services vision? That's a tough one, but I would love to see something more like that become kind of the default vision.

Beatrice Erkers: Yeah. So we recently did a little worldbuilding exercise on what it would look like to actually have a tool AI future, which I think is kind of the same as comprehensive AI services, depending on exactly how you define them: basically thinking about what it would look like to have AI that's mainly focused on being a tool. So it's limited, maybe, in some of its agenticness and some of its generality, but very, very useful to us. And the main thing we thought could potentially put us on that trajectory, because I agree it's not the trajectory that we're on right now, would be some sort of legal or insurance-driven way to get there, as in, insurance companies probably don't want to cover systems that are too opaque or that no one is liable for in any way. So yeah, that's what we thought was the most probable one, if anything. But yeah, I like that.

Nathan Labenz: I'm going to do a podcast on that before too long. I actually just made a very, very small personal investment in the AI underwriting company, and they are trying to realize that vision. It would maybe be extra nice if there were a mandated insurance requirement, because one thing the companies could do today is just not buy any insurance and not have to deal with it. So yeah, if we required insurance and brought in that whole mechanism of trying to model out the risk and price it and whatever, some things might be uninsurable. And if they're uninsurable, maybe they can't happen. I think that could be really good. I'm not really an investor for financial returns. I mostly just throw very small amounts of money into things that I believe in and want to see exist and want to be on the team for. But in that sense, I am personally invested in that notion. So I do really like that.

Beatrice Erkers: Well, great to hear that someone is already working on it. One thing that I also wanted to talk to you about: you feel like one of not that many people who are seriously trying to balance both the transformative positive opportunities of AI and the risks. So I'm a little curious to hear, what is it like to balance that tension? How do you keep yourself from sliding too much into one or the other, and try to stay real about it?

Nathan Labenz: It honestly comes very naturally to me, and I kind of just feel like the updates that we get on a regular basis require it. I don't really know how I could have any other worldview than this classically ambivalent one: super excited about the upside, with, I hope, a healthy fear of the downside. There is some actual fear there for sure. But you just see these Eureka moments. I have one presentation that I call "Eureka moments, bad behavior." You see these antibiotic things. There was one from a Stanford group under Professor James Zou, where they created what they called the virtual lab, in which a human gives a problem to an AI, and the AI gets to spin up its own other agents. They can give them various tools; the AIs are getting quite good at using tools. And something like 1 or 2% of the overall tokens in this process were human. The rest were all AI. And they ended up designing new treatments for novel strains of the COVID virus. I contrast that with the antibiotic thing, where the human scientists were using these very specific purpose-built AI models; in this virtual lab setting, the AIs were using those purpose-built models, doing a similar job to what the human scientists were doing. And ultimately, in both cases, they were creating new treatments for diseases that had evolved to evade our previous treatments for them. So that's amazing, right? I mean, again, how can you not be super excited about that? But then, as I scroll through Twitter, and I do find Twitter, honestly, still to be the best place to stay up to date, for better or worse, the next post will be like: deception is on the rise in the latest models, and we're starting to see these scheming behaviors. And how do you look at something that has that power, but also reflects back to us some of the worst tendencies that we have, and not feel these kind of dual-track feelings? I don't know; to me, that just seems like the only place to be. I just did an episode of the podcast with a guy who is an executive coach to a bunch of people at various AI companies, including Sam Altman. His name is Joe Hudson. And I asked him a similar question. And he said that, in his experience, everybody at the frontier companies has this mindset. He said that from the outside, what you see on Twitter is a little bit misleading, because you've got booster accounts and you've got doomer accounts. And I don't rule out that either of those could be right. Especially, I do think we should not dismiss the doomers. But his view was that everybody he's ever met at these frontier companies, and he's got a personal testimonial from Sam Altman on his executive coaching website, has that dual-track mindset. They're all seriously grappling with the ramifications of their work. They're all asking themselves: are we doing the right thing? So I think that's encouraging, relative to maybe how it is commonly understood. How we reconcile that with the racing is another interesting question. I also asked him if he ever expected that we'll see an AI developer stand down because they feel like we can't go any further in a responsible way. His answer was no. He said they're problem solvers, and they will just look at that as another problem to solve. And they will not stand down.
say we can solve this one just like we solved the last thousand problems that we came across. So there do seem to be some contradictions in that overall report from the kind of mindset and cultural perspective of what's going on at the companies. But at least at that first level, I was glad to hear that there's at least serious engagement with both sides. And for me, that's just like, I don't know, maybe not a great answer, but I feel kind of compelled to that worldview by the developments that I see on a regular basis.
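
To make that orchestration pattern a bit more concrete, here is a minimal sketch, in the spirit of the setup Nathan describes rather than the Stanford group's actual code. The agent roles, the round structure, and the `ask_model` placeholder (which you would swap for a real LLM API call) are all illustrative assumptions:

```python
# A minimal sketch (NOT the Stanford virtual lab's actual code) of the pattern
# described above: a human poses one problem, a principal-investigator agent
# coordinates specialist agents, and nearly all subsequent tokens are AI-to-AI.

from dataclasses import dataclass, field

def ask_model(role: str, prompt: str) -> str:
    # Placeholder: swap in a real LLM call here (any provider or local model).
    return f"[{role}] draft response to: {prompt[:60]}..."

@dataclass
class Agent:
    role: str
    transcript: list[str] = field(default_factory=list)

    def respond(self, prompt: str) -> str:
        reply = ask_model(self.role, prompt)
        self.transcript.append(reply)
        return reply

def virtual_lab(human_problem: str, rounds: int = 3) -> str:
    # Hypothetical roles chosen for illustration only.
    pi = Agent("principal investigator")
    specialists = [Agent(r) for r in ("immunologist", "computational biologist", "ML engineer")]

    notes = human_problem  # the only human-written tokens in the whole run
    for _ in range(rounds):
        # Each specialist reacts to the current state of the discussion...
        contributions = [s.respond(notes) for s in specialists]
        # ...and the PI synthesizes them into the next round's agenda.
        notes = pi.respond("\n".join(contributions))
    return notes  # final synthesized proposal

if __name__ == "__main__":
    print(virtual_lab("Design binders for a novel SARS-CoV-2 variant."))
```

The structural point survives the simplification: after the initial problem statement, every token in the loop is model-generated, which is how the human share of the transcript ends up at a percent or two.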

Beatrice Erkers: I agree with you. Actually, you put it very simply, that it comes naturally to you, and when I think about it, it comes naturally to me as well. So maybe that's a good answer, simply put. But like you said, that's not really the impression you get on Twitter, depending on what bubble or rabbit hole you end up in. What we're trying to offer with this podcast, obviously, is thinking about the positive trajectories, because the negative ones are easy to agree on and easy to envision; there are very concrete ideas about how things could go poorly with AI, and with new tech generally. Do you have any thoughts on why that is, and what's missing from the discourse that could help us aim better toward the more positive futures?

Nathan Labenz: It is hard. Eliezer Yudkowsky has this famous, at least in my mind, idea, which I think goes back to Vernor Vinge; boy, I'm not as deeply read in science fiction as I should be. The idea is that if you could predict what the thing would do, you would be as smart as the thing, and if it's genuinely smarter than you, then one of the ways that presents is that you can't predict what it's going to do. Eliezer then adds: but you can predict that you will lose to it in a competition. Going back to the superhuman chess player: you can't predict the moves the superhuman chess player will make, because if you could, you would be a superhuman chess player yourself, but you can predict that you will lose to it, because it is indeed a superhuman chess player. We also touched a little on the nobody-envisioned-Uber trope with the iPhone, and I think that's very real too. The collective process of invention, innovation, and remixing everything is superhuman relative to any individual. So both sides of this are pretty hard to envision, and we should probably expect to be surprised. We should probably expect the future to be quite weird, quite alien, quite surprising. And what can we do about it? I do think higher standards in politics. Of course, we have all these culture war preoccupations, and I've often wondered why nobody is running on a super pragmatic platform of: here are five ways we're going to make daily life better for everybody. On my list, self-driving cars would be one of those things; we're going to get those out to everybody. Everybody hates their commute, right? As far as I understand, commute length is highly correlated with unwellness: the length of your commute and your grumpiness level seem to be strongly correlated. If that's true, and everybody kind of knows it, why is nobody prioritizing these daily things? You hear the occasional exception, but I don't think most people love their commute. Similarly with energy: why has nobody said your electricity bill could be 10% of what it is, if we just built some power plants to make it so, and we wouldn't even necessarily have to make the environment worse to do it? This is amazing stuff. Why is nobody pushing it? I don't feel like I have a great answer, but Tyler Cowen famously said that one of the highest-impact things you can do is help individuals raise their personal level of ambition, and maybe there's a societal equivalent of that. Can we help society raise its level of expectations? "Where is my flying car?" But for real: where is it? And why is nobody even talking about it? Why has our leadership lost all connection to making material life better? Everything we hear is about how we're going to split up the pie a little differently. You almost never hear about how we're going to make a more prosperous future that can give more to everyone. Even though, in some theories, that's basically central to a democratic country working at all: the idea that because there's a little more every year through economic growth, there's the grist to make all these deals and make living together possible, because it's not a zero-sum game and everybody can feel like they're winning. We've kind of lost track of that. I don't know why, and I'm not sure how to fix it, but if people were encouraged en masse to demand better, that seems like it could really help. And most of these things, if we don't screw it up and we avoid the downside scenarios, I think the upside things are pretty well on track to happen. So I don't know. It's a great question. I don't have a great answer, though.

Beatrice Erkers: Well, that's good news, that you think they're on track to happen. And a few very interesting points there. The point about raising ambitions feels to me a bit like what the US, and especially the Bay Area, has done with tech: raising the ambition and expectations of what startups can do. I also just like the idea that, instead of thinking about how to split up the pie differently, it's more interesting to think about how to grow the pie. That's another really nice and useful Drexler idea, the Paretotopia: everyone gets it a little bit better, or at least no one gets it worse. If we could grow the pie, that would be a great way to go at it. In relation to that, I'd like to ask: if you could rewrite the sci-fi canon around AI, what kinds of stories or narratives do you think we should do more of? Is there anything you'd like to see more of in general?

Nathan Labenz: Yeah, I do confess I'm not as well read as many thinkers in this area; other people should have better answers to that question, I think. Obviously more positive visions, which kind of goes without saying, but maybe also more branching scenarios. The AI 2027 scenario that recently made waves was notable for multiple reasons, but one big one was that it had multiple endings. Maybe one twist on the idea of helping society demand more, raise our expectations, raise our ambition, is to make it somehow clearer to people that this is up to us. We get to decide what we're going to do, even if it's not an individual decision. France built a bunch of nuclear power plants, right? And today, for all the things that are not going super well in France right now, they have relatively abundant nuclear energy that is not contributing to the carbon problem. We could have had that, and we don't. So these hinge points in history, where things could go one way or another, and showing just how different the future ends up being based on whether you do or don't make certain decisions or deploy certain technologies: maybe that could help people adopt a more possibility-oriented mindset, instead of seeing everything as one linear story. Maybe everybody sort of sees everything as entertainment now. I don't want to get too attached to this idea, because I'm just coming up with it, but you do see these moments where people, even in the midst of history, relate to it as a viewer, as a passive consumer of content. Is there any way to create content that could snap people back out of that mindset? If we've gotten so used to consuming video content that we start to treat even real life as something that just unfolds for us, in a way we don't influence, what if Netflix had something genuinely interactive? AI could really enable this, too. The ability to create these forking scenarios: why doesn't it exist today, at least with high-production-value stuff? Probably because it's really expensive to create even the single mainline story, so branching stories that people will only explore a few branches of are maybe not economical. But AI could perhaps make that sort of thing economical. What if you had to sit there in your Netflix queue and make decisions, and then get a world out that was meaningfully influenced by the decisions you made? Could we teach people that decisions have real consequences and that your agency really matters? Could we incept that idea through an entertainment medium? I don't know, but that's the biggest idea that comes to mind: trying to bring to the fore some sense of the contingency of history, and the agency that people have, if only collectively, to decide what the future is going to look like.
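
The branching structure Nathan is gesturing at is easy to sketch; the hard and expensive part is filling the tree with high-production-value content, which is where he suggests AI could change the economics. Here is a toy sketch, with invented scenes and choices, purely to illustrate how viewer decisions select among pre-authored (or, in his version, AI-generated) branches:

```python
# A toy sketch of a "forking scenario": a story as a tree of scenes, where the
# viewer's decisions select the branch. All scenes and choices are invented
# placeholders for illustration.

from dataclasses import dataclass, field

@dataclass
class Scene:
    text: str
    choices: dict[str, "Scene"] = field(default_factory=dict)

def play(scene: Scene, decisions: list[str]) -> None:
    # Walk the tree, applying the viewer's decisions in order.
    print(scene.text)
    for choice in decisions:
        if choice not in scene.choices:
            break
        scene = scene.choices[choice]
        print(f"-> {choice}: {scene.text}")

# Two hinge points, four endings: small early decisions compound into very
# different futures, which is the contingency-of-history point being made.
story = Scene("2030: a breakthrough in AI-driven drug design.", {
    "deploy broadly": Scene("Treatments reach everyone; trust in institutions grows.", {
        "keep investing": Scene("Ending A: compounding abundance."),
        "declare victory": Scene("Ending B: progress stalls, but gains are locked in."),
    }),
    "restrict access": Scene("Black markets and resentment spread.", {
        "course-correct": Scene("Ending C: a rocky but recoverable path."),
        "double down": Scene("Ending D: the pie shrinks and politics turns zero-sum."),
    }),
})

play(story, ["deploy broadly", "keep investing"])
```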

Beatrice Erkers: Yeah, that's actually a really interesting point, like the branching scenarios. Didn't Black Mirror do something like that a few years ago? But yeah, I agree. It'll probably be coming.

Nathan Labenz: Yeah, there have been a couple experiments like that, certainly.

Beatrice Erkers: Yeah, but none that has made a huge impact, maybe. Also, since it's very much on topic: we've done a bunch of world-building experiments with the Existential Hope program at Foresight, and one thing we did recently, it's not quite a branching scenario, but it lays out two options for potential AI futures. One was the tool AI future I mentioned earlier, and the other takes more of a d/acc approach. I think it's interesting because it shows there are different paths we could take, and it is to some extent up to us. And I know FLI also did something recently called Tomorrow's AI, where I think they explore different options too. So there are some things like that coming. But we're basically at time, so, wrapping up, I wanted to shift gears a little and pick your brain on podcasting. After all the episodes you've done, what are the main lessons you've learned? Do you have any recommendations for someone hosting a podcast?

Nathan Labenz: Not really, to be honest. I sometimes call myself the Forrest Gump of AI. What I mean by that is I'm just kind of stumbling my way through, often finding myself in interesting places, usually as kind of an extra in notable events. But I haven't been that strategic about it. Really, the main thing I try to do, with apologies to Tyler Cowen again, is have the conversation I want to have. When I started this, I was not a content person, honestly, at all. I had never created much content before, and I don't think I would be doing it now if it weren't for my AI obsession. The way the podcast started for me was that my friend Eric was starting a podcast network, and he said, "Hey, all you do is talk my ear off about AI. Why don't we record a couple of these and see if it becomes a podcast?" I was like, "Oh, I don't know. I don't know how to do that." If you watch my feed on YouTube, you can see the production value remains relatively low, but he said, "We'll take care of everything for you. All you have to do is talk. If it works, it works. If it doesn't, it doesn't." So I said, "Okay, I'll try it." The mindset I went into it with, which I think has served me reasonably well, although I certainly can't say it will generalize from my situation to others, was basically: I just want to learn as much as possible, to get people to teach me stuff and have interesting conversations. I live in Detroit, so I'm not at the epicenter of AI, which is obviously the Bay Area, and this way I can be more plugged in and have conversations I wouldn't otherwise get to have. If I can do that and nobody listens, but I get value from it, that alone could be a win. So I really went into it with the attitude that if I'm having conversations I want to have and I'm learning from them, that's enough for me to be happy with the way I'm spending my time, and anything else is basically gravy. The audience is not that big. Metrics are tough; honestly, it's sometimes not clear how many people are listening. But I know when we put out an episode with Zvi and his audio quality is bad, because I get a bunch of messages telling me we need to get Zvi a mic. So there are at least some people listening. I'm kind of haphazard in that respect, almost strategically not strategic, or strategically focused on my own personal growth, and then I let the chips fall where they may. I also came into it with the luxury of having a number of different things going. I wasn't trying to make it my full-time job, and it's still not my full-time job, so I'm not in a position where I'm forced to think too much about which episodes do well and which don't and what the numbers look like. I can't help but do a little of that, but mostly I try to stay true to the original idea: I want to learn as much as possible. These conversations are a good regular cadence for meaningful learning and patching my blind spots. The rest has all just been letting the chips fall where they may, to be honest.

Beatrice Erkers: You know, I think to some extent that's great advice, because it's encouraging to hear that you can just do it for the joy of it and keep thinking about the things you're curious about.

Nathan Labenz: I think Joe Rogan actually would describe himself pretty similarly from what I've heard. For years, he was just shooting the **** with his comedian buddies, and then it was some jiu-jitsu buddies or whatever, and then it blew up. But I think he did a lot of episodes before he became huge, and mostly he was just getting high and having fun, I think.

Beatrice Erkers: That's true. Same with Tim Ferriss or something.

Nathan Labenz: I don't get high before podcast recordings, but I am having fun.

Beatrice Erkers: Yeah, yeah, it's true. It's a wide range. And with you, I guess it's a wide range within a narrow topic. But like you say, it's a very broad technology, obviously. It touches on everything. But yeah, I think that's all we have time for, Nathan. Thank you so much for coming. It was really, really nice to chat to you about all of this. So thank you.

Nathan Labenz: My pleasure. Thanks so much for the invitation. This has been really fun.

