The Path to Utopia, with Nick Bostrom – from Clearer Thinking with Spencer Greenberg

In this special cross-post episode of The Cognitive Revolution, Nathan shares a fascinating conversation between Spencer Greenberg and philosopher Nick Bostrom from the Clearer Thinking podcast. They explore Bostrom's latest book, "Deep Utopia," and discuss the challenges of envisioning a truly desirable future. Discover how advanced AI could reshape our concept of purpose and meaning, and hear thought-provoking ideas on finding fulfillment in a world where technology solves our pressing problems. Join us for an insightful journey into the potential evolution of human flourishing and the quest for positive visions of the future.

Originally appeared in Clearer Thinking Podcast: https://podcast.clearerthinkin...

Check out the Clearer Thinking with Spencer Greenberg Podcast here: https://podcast.clearerthinkin...

Deep Utopia Book: https://www.amazon.com/Deep-Ut...

Apply to join over 400 Founders and Execs in the Turpentine Network: https://www.turpentinenetwork....


SPONSORS:

Oracle: Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds; offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive

Brave: The Brave search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference. All while remaining affordable with developer first pricing, integrating the Brave search API into your workflow translates to more ethical data sourcing and more human representative data sets. Try the Brave search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR

Omneky: Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with the click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off https://www.omneky.com/

Weights & Biases Weave: Weights & Biases Weave is a lightweight AI developer toolkit designed to simplify your LLM app development. With Weave, you can trace and debug input, metadata and output with just 2 lines of code. Make real progress on your LLM development and visit the following link to get started with Weave today: https://wandb.me/cr


RECOMMENDED PODCAST:

This Won't Last.

Eavesdrop on Keith Rabois, Kevin Ryan, Logan Bartlett, and Zach Weinberg's monthly backchannel. They unpack their hottest takes on the future of tech, business, venture, investing, and politics.
Apple Podcasts: https://podcasts.apple.com/us/...
Spotify: https://open.spotify.com/show/...
YouTube: https://www.youtube.com/@ThisW...


CHAPTERS:
(00:00:00) About the Show
(00:00:22) About the Episode
(00:02:58) Introduction to the podcast
(00:03:26) Dystopias vs utopias in fiction
(00:07:29) Material abundance and utopia
(00:14:57) AI and the future of work
(00:20:10) AI companions and human relationships
(00:22:48) Sponsors: Oracle | Brave
(00:24:52) Sponsor message: Positly research platform
(00:25:53) Surveillance and global coordination
(00:44:18) Sponsors: Omneky | Weights & Biases Weave
(00:45:48) Sponsor message: Transparent Replications project
(00:47:10) AI governance challenges
(00:49:36) Deep Utopia book's purpose
(00:53:09) Global coordination strategies
(00:59:13) The vulnerable world hypothesis
(01:05:18) Bostrom's meta-ethical views
(01:08:32) Listener question on meditation
(01:10:08) Outro


Full Transcript


Nathan Labenz: (00:00) Hello and welcome to the Cognitive Revolution where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week we'll explore their revolutionary ideas and together we'll build a picture of how AI technology will transform work, life, and society in the coming years. I'm Nathan Labenz joined by my cohost, Erik Torenberg. Hello and welcome back to the Cognitive Revolution. Today, I'm pleased to share a special cross-post episode from the excellent podcast, Clearer Thinking with Spencer Greenberg. Spencer is a mathematician, entrepreneur, and researcher who explores ideas that matter through in-depth conversations with leading thinkers. And in this episode, Spencer speaks with philosopher Nick Bostrom. Bostrom is perhaps best known for his book Superintelligence, which took on the challenge of analyzing the potential dangers associated with smarter-than-human AIs while the very possibility still seemed remote. But this conversation is actually mostly about his latest book, Deep Utopia: Life and Meaning in a Solved World. So why is it so difficult for us to envision a truly desirable future? Bostrom argues that our current lives are structured by what he calls an exoskeleton of instrumental necessities. That is, the things that we need to do to survive and thrive are the things that we've evolved to value. In a world where advanced AI could handle all these tasks, things like purpose, achievement, and the satisfaction of overcoming challenges could all potentially disappear, leaving us with a profound new challenge. How do we find purpose and meaning in our lives when there's nothing that we truly have to do? As you'll hear, Bostrom faces this challenge head on, attempting to sketch the outlines of a truly desirable technological utopia, touching on concepts including quiet or subtle values, things like aesthetic beauty, acts of creativity, and commitments to honoring ancestors or upholding traditions. These are currently overshadowed by more pressing concerns, but perhaps could come to the foreground in a technology-enabled future. They also discuss artificial purpose, or the creation of arbitrary goals and constraints that could give our lives structure, not unlike the sense of purpose that some find in video games today. They also assess the possible need for global coordination to achieve such a future as well as the associated risks of pervasive surveillance. I've often said that the scarcest resource is a positive vision for the future, and this book is a great prompt for all of us to begin to think more deeply about how we might find continued meaning in our lives in a world where today's most pressing problems are solved. As always, if you're finding value in the show, we'd appreciate it if you take a moment to share it with friends or post a review on Apple Podcasts or Spotify. And we always value your feedback and your topic or guest suggestions. You can submit those either via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. Now, I hope you enjoy this thought-provoking conversation about the potential evolution of the very meaning of human flourishing between Spencer Greenberg and Nick Bostrom from Clearer Thinking with Spencer Greenberg.

Josh Castle: (03:08) Hello, and welcome to clearer thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you joined us today. In this episode, Spencer speaks with Nick Bostrom about the potential of AI driven utopia and the search for meaning in a post instrumental world.

Spencer Greenberg: (03:33) Nick, welcome.

Nick Bostrom: (03:33) Hey, Spencer.

Spencer Greenberg: (03:36) I don't know about you, but I found that there's a lot more interesting vivid descriptions of dystopias out there than of utopias. Do you agree with that?

Nick Bostrom: (03:46) Yes. I think that's true. I wonder what that says about us humans, but it does seem to be a pattern. Like most people could probably rattle off a bunch of different dystopias, you know, Brave New World, 1984, The Handmaid's Tale, but the average person would have trouble, I think, naming even a single utopia.

Spencer Greenberg: (04:07) Yeah. And it's interesting because it seems important to figure out where we want society to go, not simply what we want to avoid.

Nick Bostrom: (04:13) Yeah. And it gets worse once you actually look at the utopias that have been attempted: by and large, you wouldn't actually wanna live in any of them. They often seem too pat, or they become a little bit like a plastic toy, often cloying. And so it's actually hard to think of a true utopia. What's easy is to take the current world and imagine some small improvement. Like the current world except without childhood leukemia seems just better. But once you start to add a whole bunch of different improvements, then eventually you get to the point where it no longer seems attractive. It's almost like plastic surgery, right? You could imagine if somebody has some big wart or something on their face, probably you improve their appearance by removing the wart. But if you just keep going with more and more supposed improvements, you kind of enter Michael Jackson territory.

Spencer Greenberg: (05:14) Yeah. One thing I've observed that's relevant to that is that people seem to agree much more on what we don't want than on what we do want. Right? Like, everyone can agree we want less illness, we want less poverty, we want people to have fewer issues when they're growing up, etcetera. But then you say, okay, what do we want society to be like? Do we want everyone to be sitting around and having intense pleasure all the time? Do we want people to still be working hard to achieve their goals? Do we want lots of little micro societies that are completely different? That's when people seem to really start to disagree.

Nick Bostrom: (05:49) Yeah. But it's not just that we disagree with each other. It can be hard even as an individual to conceive of a utopia that would be attractive just to you yourself. That is already a pretty difficult challenge. And I think more so if we consider not just a few rearrangements of the current condition, like a little bit more money and a little bit better medicine, but more profound transformations of the human condition, such as we might attain at technological maturity.

Spencer Greenberg: (06:22) Why do you think it's difficult for people to think of utopias that even they themselves would think are ideal? Is it because people have multiple values, and as you try to build a utopia those values come into conflict, or is it some other reason?

Nick Bostrom: (06:33) I suppose our self-conception, the goals that we have, and the kind of applied values that we pursue are conditioned on our current situation. So we have various projects going on, things we're trying to achieve, but all of that is based on where we are now. And so if you imagine a very different situation where none of that applies, it might seem that we are a little bit at sea; it doesn't relate to us, and it seems almost like we are contemplating somebody else's life. So that could be part of it: we are anchored in our immediate surroundings and our current predicament, and anything that takes us too far out of that is, in some sense, also losing ourselves and our life.

Spencer Greenberg: (07:19) One thing you talk about in your book, Deep Utopia, is that the regular world imposes these constraints on us and gives us these things to strive for. And then suddenly, if we imagine a world where production levels are so high that we really don't need to strive for anything, we kind of lack those constraints, and it kind of leaves us out at sea.

Nick Bostrom: (07:39) Yeah. I have this metaphor of the insect with the exoskeleton that holds the squishy bits together and gives it shape. And analogously, there are the various instrumental necessities that we face in our lives and that all humans so far on this planet have faced. It's always been part of our environment that there are a whole bunch of things we need to do. You know, you need to make money to pay the rent, you need to brush your teeth to maintain a healthy mouth, you need to do this, that and the other. Our lives are sort of lived inside these constraints. And so just as the insect has an exoskeleton, our souls have a kind of exoskeleton of these instrumental necessities. If we one day attain a post-instrumental condition, as I call it, then there is a question of what happens to our souls. Do we just become contented pleasure blobs? Or would there still be something that gives structure to our psyches and our lives?

Spencer Greenberg: (08:45) It seems that if you go back a few hundred years and you ask people what utopia looks like, there's going to be a lot of focus on just material abundance. Wouldn't it be wonderful if we just had enough food to eat all the time, and great, comfortable shelter, and it was always the right temperature, and so on? And yet, to kind of a shocking degree, many people in the world have those things today. Not everyone, of course. There's still people living in poverty. But many people do live this way, and yet it still feels very far from utopia.

Nick Bostrom: (09:13) Yeah. So it seems like pretty much every culture has some notion of this land of plenty, the Land of Cockaigne; there are different names for it. But I think if you imagine some medieval peasant living under great material deprivation, with back-breaking labor from morn to dusk, imagining a condition in which there was plenty of food and you could rest as much as you wanted, like a continual feasting, would already be enough to give you a wow. That's like a fantasy. But now I think that wouldn't really do it for most people. We already have fridges stuffed full of food. And, you know, although we are not yet liberated from all economic work, some people are, and even the ones who are not now have holidays and weekends and stuff. And we're almost struggling with the opposite end of this: the kind of boredom or lethargy or lack of purpose that can come from having it too easy. And that creates a different source of misery and need.

Spencer Greenberg: (10:31) I'm not certain this story is true, but a friend of mine went and visited a tribe, and through a translator, she was asking them about their afterlife. The way that the translator explained their afterlife was that, basically, it's just like normal life except the cows give more milk. And I thought that was so surprising. Right? It's like, it's just, oh, wait. It's just things are just a little bit more comfortable.

Nick Bostrom: (10:55) Yeah. Now if you did end up in this afterlife with a lot of milk, you might then, in that life, dream about what would come after that life. And then you might think, well, you know, the trees also give a lot more fruit and the fruits are sweeter. And you could repeat this: oh, maybe your bed is even softer and more comfortable. But if you iterate far enough in that direction, then you do eventually get to this notion I call a solved world, and even a plastic world. These are two different concepts I introduce in this book, Deep Utopia, which characterise a much more radically profound and in some sense improved condition that we might attain, I think perhaps not long after the machine intelligence transition, if things go well. And I think this really forces us to confront some pretty fundamental questions in philosophy, like what ultimately gives value to life, what gives meaning to life. Some values are easy: clearly we could have a lot more of them in this condition. But there are also some other values that seem to be at risk of being undermined in such a solved world.

Spencer Greenberg: (12:15) Yeah. You mentioned this question of what do we do when we have abundance. And an interesting place to start there is to look at what we have done. As we've increased production more and more, in theory, people could work a lot less because, you know, we've got all of this food, we've got all of this comfort. Why are we working so hard? So what historically has happened as production has increased?

Nick Bostrom: (12:39) We do work a bit less. So we've taken some of our increased productivity out as leisure. Some of that is we have longer childhoods and education, but we also have weekends off and we work shorter hours and maybe work less hard than the peasant did. And then longer sick leave and maternity leave and paternity leave, and then longer retirements. But we've only taken out some of our increased productivity as leisure; most we have taken out as increased consumption. John Maynard Keynes, the famous economist, wrote this essay almost 100 years ago now, where he predicted that a century hence people would work way less. He kind of extrapolated the then-recent increases in productivity and thought that if that continued another hundred years, we would be like 4 to 8 times richer, and then we would scarcely need to work, and what would we be doing all day long? Now, as it turns out, we do work maybe 25% less or something, but overall greed has triumphed over sloth. And so we just spend more. And a lot of that is on positional goods. So you buy more and more expensive luxuries to try to one-up the other. And the fancier the clothes and the cars that your neighbors have, the more you have to spend to keep up or overtake them. So there is an element of zero-sumness to our consumption rat race. And so that's where a lot of our increased economic wealth has gone, I think, in addition to sort of absolute improvements and also some increases in leisure.

Spencer Greenberg: (14:29) It's funny because with positional goods, you could spend an unlimited amount of money on them, right? Because it's all just relative?

Nick Bostrom: (14:35) So you could have the billionaire with his mega yacht. And then there's another billionaire. Both of them want to have the biggest yacht in the world, but it's impossible, no matter how much economic growth there is, for them both to have their preference satisfied. So if you have a 200 meter long yacht and somebody else builds one that is 210 meters long, then you need to upgrade yours to get 20 meters more, and that could keep going. So a solved world is not defined by the idea that all preferences can be satisfied or that people wouldn't want more money. I'm sure some people at least have unlimited appetites. But that doesn't imply that people would still be working, because although there might be desire for more, it doesn't mean that you could get more by working yourself. If AIs can do everything better and cheaper and more efficiently than humans can, there would just not be any demand for human labor. Now I think there are some exceptions to that, but as a broad pattern, I think you could have something approximating full unemployment if AI succeeds.

Spencer Greenberg: (15:53) How linked are these kinds of arguments to AI in particular? Do you see a way that one day humanity could get to a utopia even without superintelligent AI?

Nick Bostrom: (16:04) In principle, yes. You could get, I think, most or perhaps almost all of the same affordances without advanced AI. You could imagine, for example, that there are a lot of intellectual tasks that need to be done in the economy, right? Tasks that humans now do. Maybe given unlimited time you could develop software that, for each of those tasks, would figure out how to automate it using non-AI techniques. But I think realistically, if we do get there, the way we will get there will be through AI. It just seems a lot easier to build one AI that can learn to do all these different tasks than to make a specific software program for every particular task.

Spencer Greenberg: (16:49) In your book Superintelligence, you analyze risks from AI in great depth. But I think for this conversation, let's assume that AI doesn't go horribly wrong. Right? Like, it doesn't, you know, kill all humanity. Let's say it's still under the control of humans to a reasonable degree. Even with that assumption, it seems like people may be very worried that, okay, if AIs start producing as well as humans at almost every task, does that leave anything for humans to do, and what kind of society does that look like ultimately?

Nick Bostrom: (17:21) Yeah. I'm ultimately optimistic about the possibility of having a really desirable and wonderful type of existence in a post-instrumental world. But I do think getting there goes through some significant challenges. The book isn't trying to sell Utopia; it's not an argument for, oh, here is how great it would be. It rather tries to think through the full implications of what it would mean to have a solved world. And there are, at least at first sight, quite disconcerting implications of that: would it really be attractive to live in this world? It seems some values at least would potentially be undermined here. And I do think ultimately the kind of existence that would make sense here is significantly different from our familiar human existence. It might require us to give up some values or reconsider them in a fairly fundamental way.

Spencer Greenberg: (18:26) So what are some of those values we might have to reconsider?

Nick Bostrom: (18:29) Well, take purpose. So some people think that it's important for human beings to have purpose in their life and that their life is diminished if it lacks purpose. Now, if you think of purpose as something worthwhile that you have to exert effort over some period of time to try to achieve, and that draws on your different capabilities, it's at least at first sight not clear that that would still exist in Utopia, inasmuch as, at least for a very wide range of different things you might want to achieve, there would be shortcuts. It would be easier just to ask your AI system to do them. And so future lives might have less purpose and meaning. It might seem that there is nothing that we are needed for. A lot of people today take a kind of pride in being a breadwinner, or in making a positive contribution to society at large perhaps, or at least on a smaller scale. They think they benefit their family in some way by being around. Or at the very least themselves: there are various projects you can undertake to try to educate yourself or get fit, or you redecorate your home to get a nicer home. All of these little projects that we are engaged in are a kind of big constituent of what we're up to, of our lives. If all that went away and it was all just on tap, it might seem to create a kind of purposeless, meaningless existence.

Spencer Greenberg: (20:10) A friend of mine said to me fairly recently that she still prefers talking to me about her problems over talking to an AI, but that that's not true of all of her friends. So for some of her friends, she'd rather talk to the AI. And there was something a little scary about her saying that. It's like, well, okay, so maybe next year she'll prefer to talk to the AI rather than talk to me. Right? And then you start to think, well, if you really push this thought experiment forward, what if there's really nothing you can do better than the AI? Right? And what does that mean for all the things you care about trying to do in your life?

Nick Bostrom: (20:46) Yeah. So that would be one instance. Other people right now need you in different ways, whether it's for conversation or support, or because they're children or relatives or friends. And yeah, if AIs just become better as social companions, that would remove one way in which we can be practically useful and of help and value to other people. And then if you generalize that, you start to get a sense for this challenge. The book doesn't try to shy away from that; it dives straight into this and tries to take it on in its full force.

Spencer Greenberg: (21:24) So how do you think about starting to have a great utopia, even if it's going to be fundamentally different from what we are used to trying to do in our lives?

Nick Bostrom: (21:34) Well, we can start from a kind of empty slate and then consider what values we could add there in Utopia. I should first say, although the book doesn't dwell much on this, that there is a super important thing in the actual, all-things-considered situation, which is just the opportunity to get rid of a whole bunch of negatives that currently plague the human condition, and indeed the animal condition as well. I think that alone, just getting rid of all the bad stuff, would possibly be reason enough. But let's set that aside and just consider what positive values could exist in Utopia. First we have pleasure. Now, I think it is very easy to dismiss that as, yeah, sure, they could have some super drug and kind of be blissed-out junkies, or maybe direct forms of pain manipulation, and to kind of sniff at that. I think actually that alone is a much more serious and possibly attractive proposition than a lot of people would give credit for. And I think the people who are very down on this form of hedonism might easily change their mind if they actually got to sample some of this pleasure that would be in the offing at technological maturity.

Hey. We'll continue our interview in a moment after a word from our sponsors.

Josh Castle: (23:06) Whether you're a marketing manager, a product engineer, a CEO, a researcher, or a social scientist, you sometimes need to know what lots of people think about a thing, or you might want to have people enroll in a study or experiment. But recruiting study participants can be time consuming, error prone, and expensive. Well, good news. Positly is here to help. Positly addresses the common pain points that researchers encounter when recruiting study participants. It aims to solve common research problems and dramatically improve the speed, quality, and affordability of human subject research. With Positly, researchers, marketers, and product developers are empowered to produce better results by accessing high quality participants through an easy to use web interface, making it easy to run surveys on thousands of people in mere hours, and it can now be used to recruit people in over 100 different countries. To learn more and to give your research project superpowers, visit positly.com. That's positly.com.

Spencer Greenberg: (24:15) People seem to find pleasure more palatable when it's linked to bigger things. There are a lot of people who say, well, if all life was the most base pleasure, like the pleasure you might get from, you know, doing heroin or something like that, then maybe that feels worse. But if, let's say, it was viewing beautiful art, but it was so beautiful that you were in this incredible awe state, maybe that would be more appealing.

Nick Bostrom: (24:40) Yeah. And so this is one thing that we can add, that I think we should add. But it is, I think, worth dwelling very briefly at least on just the raw pleasure itself. It's kind of intellectually uninteresting and trite, but it might ultimately be the most important single thing. Although I do think there are additional elements that we can add. And so you could combine the pleasure, as in positive hedonic tone, with experience texture. This would, for example, be the appreciation of beauty, as you mentioned. You could take pleasure in contemplating beauty or in understanding truths or in admiring goodness. So you don't just have the pleasure, but you also have some other, maybe more complex and rich, mental content attached to the pleasure. And already that seems perhaps a lot more appealing, and indeed some traditional conceptions of heaven consider it as a kind of state of contemplating perfect goodness in the form of God and then experiencing love and happiness as a result of that. The experience is the kind of conjunct of the thing being contemplated and then a kind of emotional response to that. And both of these things clearly would be possible in a solved world to extremely high degrees. Extremely high degrees of pleasure, and also extremely clear and strong and sophisticated or well targeted forms of contemplation or other mental content. So those two we can put in the bank: you have pleasure and you have experience texture. And we can then ask, can we add further elements to these? One thing we don't yet have, right, is any type of activity or pursuit or purpose. At the moment we have kind of blissful experiences of contemplating various things. But you could then think, why wouldn't we also be able to engage in various types of activities, if we think various forms of activities are intrinsically valuable? You could create artificial purpose. We already do this today: when we play games, we set ourselves some goal and then we try to achieve it. Some arbitrary goal. Like you want to try to get this golf ball into a sequence of 18 holes using only this little inconvenient implement, the golf club. These are random goals you just set yourself, and then once you have these goals, you can engage in the activity of golf. So similarly, in Utopia people could set themselves arbitrary goals, and that would enable the activity of pursuing those goals. Importantly, you would include in the goal constraints on how you are supposed to achieve it. So just as if you set yourself the goal of doing well on the golf course, it's part of the goal that you are only supposed to use a golf club to propel the ball, as opposed to picking it up with your hand and placing it sequentially in each hole. Similarly, in Utopia there would be all these technological shortcuts you could use to achieve a particular outcome, and so you would have to bake into the goal the idea of achieving the outcome only using a certain limited set of permissible means, if what you want is an activity that you yourself have to engage in.

Spencer Greenberg: (28:38) Right, you can't just tell the AI to do the thing for you, that defeats the purpose.

Nick Bostrom: (28:42) Yeah. But you just make that part of this arbitrary goal you adopt. And so then we can add one more element. So we have pleasure, right? We have the experience texture, and now we also have artificial purpose that allows us to engage in various forms of activities that could be intrinsically valuable. And already I think it starts to seem a lot richer. A future of contemplating beauty and playing games, while greatly enjoying ourselves doing these things, seems maybe at least more attractive to many than being this kind of blissed-out pleasure blob, kind of a junkie enjoying some super drug, sprawled out on a flea-infested mattress in a dark room, where there's nothing else going on except the pleasure itself. Now we already have something more, and it's starting to look better, I think. And I think that we can even add a little bit more to this.

Spencer Greenberg: (29:43) Right. So what element would you add next?

Nick Bostrom: (29:45) Well, some people might think that, although nice, these artificial purposes are still not quite the same as real purposes. You don't really need to have the ball go into each of these 18 holes, or to defeat the boss monsters in this computer game. They are kind of fake purposes in some sense, as opposed to now, when there is a whole bunch of stuff in the world that actually needs doing. And so you might think it can add value to a life if it has realer purposes, or more natural purposes. Purposes that exist not just because we arbitrarily made up some goal for ourselves. And now, could we have such things in Utopia? Well, I think yes, there would be some opportunities for this. Perhaps the easiest way to see it is if you imagine you have two people, A and B, and let's suppose that A just happens to have a preference that B's preferences be satisfied, like you care about the other person. And then, if B happens to have a preference that you do a certain thing on your own, you now have a real purpose: if you want to actually achieve your goal of satisfying B's preferences, the only way you can do that is by yourself doing this thing that B wants you to do.

Spencer Greenberg: (31:14) So essentially because we have preferences that other people do specific things, not just that they cause those things to occur by asking their AI agent to do it.

Nick Bostrom: (31:23) Yeah. So you could even give a purpose to a friend by adopting a preference or goal, or setting yourself up in such a way that you will be happier if they do a certain thing. Once you've done that, they have an actual, real purpose to do that thing, if they care about you and your preferences. So in this particular reductionist case, with the A and B setup, it's a little bit hokey perhaps. But I think subtler versions of this are actually quite common, in that we have various shared cultural commitments, and commitments to various traditions, for example, that we might just want to uphold. And those traditions might call for us ourselves to engage in various forms of practice. And the tradition wouldn't count as having been continued or honored if what we did instead was create a bunch of robots who went around performing the rituals or whatever. In order for the tradition to have been successfully continued, for certain types of tradition, it may require our own involvement.

Spencer Greenberg: (32:35) Right. An example of this, I think, that can be poignant here is imagine you're writing a speech for someone's wedding. If you were to just ask an AI to write it for you and put no effort into it, and the AI spits it out, even if it's a really good speech, there's a way in which it feels like you didn't satisfy the obligation, and whoever asked you to speak at their wedding might be disappointed that you didn't actually put your own thought into it.

Nick Bostrom: (32:59) Yeah. And I think these are fairly ubiquitous, actually. More broadly, I suspect that there is a whole bunch of aesthetic reasons for doing various things that would come into view in this condition of a solved world. If you think about it, the aesthetic significance of something often depends on who did it, and why they did it, and the means by which they achieved it, as opposed to just the particular artifact that results. And so our lives might become more like artworks. And to achieve a particular expressive content of those artworks, it would in many cases call upon us to do things on our own steam. More broadly, I have this notion of quiet values or subtle values. I think there might exist a whole bunch of these that are more or less obscure to us currently. Just as during the day, if you go outside, you don't see any stars, right? But it's not because they are not there, it's because the sun is there and it's so much brighter. Similarly, in our current lives there are these pressing instrumental needs, horrors going on in the world, things we have to do in our own lives, various kinds of catastrophes that happen; these are the louder values. We need to fight injustice and prevent pain, all these things that fill our conscious minds. And so we don't see what I think is there, which is a constellation of these subtler values. But if all these pressing, urgent moral needs one day were all taken care of, then I think this richer canopy of subtler values would come into view, or potentially come into view. And just as during nighttime our pupils expand to take in more light, so in this future it would be appropriate for us to become attuned to place more weight on these subtler reasons for doing stuff, including aesthetic reasons, broadly construed. Subtler things like honoring your ancestors, or upholding traditions, or achieving various kinds of aesthetically beautiful shapes in your life and in the way you relate to other people, could just constitute a larger portion of our reasons for doing stuff. They might be really important; it's just that their importance would be calibrated up, because the things that are currently important would no longer be there, and it would make sense to care more about these subtler things.

Spencer Greenberg: (36:01) It seems that a lot of meaning we get in practice comes from our relationships with other people. And that even if we're in a situation where there's nothing that we can really add because there's agents that can do it better than us, we could still have deep relationships. It could be very fulfilling. How does that kind of work into your vision of Utopia?

Nick Bostrom: (36:19) Yeah. I think a lot of these purposes that could remain in a solved world would broadly arise from sociocultural entanglements. Other people, and not just other individual people, but also other cultural phenomena and commitments and our participation in traditions and communities of various sorts. To the extent that there are still kind of natural purposes remaining, I think a lot of them would come from that source. It is not completely obvious how people will choose in this respect. So, to anchor it a little bit more to the here and now: we have these increasingly capable social companion bots that are being, or will be, created in the near-term future, right? We already have the chatbots, but they will become more multimodal, with maybe visual avatars and voice. You can already see the beginnings of this, and I imagine some of these might become very compelling to some people. One might wonder (okay, it might just not have time to play out if AI timelines are really fast), but if you imagine AI frozen in its current state, or a couple of years from now, and we have that level of technology, would people start to spend more and more of their time interacting with these social artificial intelligences as opposed to real humans? And would that be good or bad? The first answer that pops into most people's minds seems to be that it's bad, that it's much better to spend time with real people. I wonder whether that's one of these generational things. Us old fogies say, oh no, you've got to spend time with real people, that's how we did it. But if a new generation grows up with this kind of stuff, maybe they'll just think we're the ones with the hang-ups. And it can be really hard in situations like that to form an opinion, or rather, it's easy to form an opinion, but hard to form an opinion that reflects more than just your own idiosyncratic upbringing and personality, and that actually goes down more to the bedrock of value. That's a really hard evaluation to do.

Spencer Greenberg: (38:45) I don't know if others will find this compelling, but to me, it matters a lot if those I'm interacting with are conscious. In other words, that they're actually having internal experiences. For example, they're feeling pleasure and pain and so on. If I was talking to AIs that were not conscious, to me that would seem to sap a lot of the meaning out of it.

Nick Bostrom: (39:05) I mean, it might be that after a while, if you were in the habit, you would kind of forget about this question. Just as some people think consciousness doesn't exist; there are eliminativists about consciousness, right? Some philosophical views hold that it's a confused concept and that actually nobody is conscious, that we should just get rid of the very notion because it's a philosophical confusion. I think they still get on with their lives pretty much in the normal way. So I'd imagine that eventually people would just settle into something that was not very tightly coupled to some abstract philosophical belief. Of course it's also possible that some of these digital companions are conscious or will be conscious. You could have artificial persons that are conscious; it's just that they have maybe been designed to be more optimal as social interaction partners, and therefore kind of more compelling. Just as some people are more charming or compelling as social interaction partners, while some are just annoying and tiresome and full of themselves, puffing empty air and irksome, and others are really wonderful human beings that you want to spend time with. Similarly, some of these AI companion bots might just become much better in the same ways that some humans are better at being friends. So this is maybe a good time again to remind the listener that I'm not advocating specifically for this. I'm not trying to sell a particular vision here, I'm just trying to look at what it actually is and take it on. I obviously also understand the aspects of this that seem repellent, like the idea that we would have these highly optimized AI companion bots that we would spend all our time with instead of interacting with human beings; there is at least an initial kind of yuck reaction to that. But I wanna not just stop at that initial yuck reaction, but kind of dwell in the discomfort and then see if one can understand precisely why it is there, and whether ultimately it makes sense or whether it's just a kind of prejudice.

Spencer Greenberg: (41:17) So changing tacks a little bit, assuming that humanity continues advancing AI, it gets incredibly advanced, and we're able to keep it under control, what do you see as some of the kind of concerns about how it could lead to a utopia versus a dystopia and what we should be thinking about there?

Nick Bostrom: (41:34) Well, there's a whole bunch of practical difficulties between where we are now and attaining anything like a solved world. So we have the alignment problem, of course, then various versions of the governance problem, and also the problem of the ethics of digital minds: we want the future ultimately not just to go well for human beings, but also for other morally considerable beings that we will hopefully share the future with, including animals and maybe some of these digital minds, whether because they are conscious or because they have other attributes that give them various forms and degrees of moral status. So that's a lot there. The book just brackets all of that in order to actually get to the point where we can ask the question: what then? Because I think at some point somebody should probably ask that, even though most of our time should be focused on making sure we actually get there, as opposed to destroying ourselves beforehand.

Hey. We'll continue our interview in a moment after a word from our sponsors.

Josh Castle: (42:47) Science is built on replication. Our confidence that a particular hypothesis is true increases the more times we can conduct experiments and get results that are consistent with the original research. Unfortunately, psychology and other social science fields have been undergoing a replication crisis for the past several years, meaning that researchers have tried but failed to replicate experimental results from the past few decades. And this is deeply troubling because it calls into question many of the things we thought we knew about how humans work. To help solve this replication crisis in psychology, the team at Clearer Thinking has launched a project called transparent replications that seeks to celebrate high quality research while also shifting incentives toward more replicable, reliable methods. They accomplish this by conducting rapid replications of recently published psychology and human behavior studies in prominent academic journals with the aims of celebrating the use of open science best practices, improving reliability, and promoting clarity. Once the transparent replications team has completed a replication, they make their results freely available on their website for anyone to read. To read those results and other essays by the team, visit replications.clearerthinking.org.

Spencer Greenberg: (44:07) The alignment problem is really about getting AIs to do what we want and not do things we don't want. How would you describe the governance problem?

Nick Bostrom: (44:15) Well, it's a broad category. One aspect of it is making sure that we humans don't use these AI tools, even if we assume they are aligned, for bad purposes. All kinds of other very general-purpose technologies that have been developed have been used both for good and for bad, right? So you have people engaging in warfare, some people oppressing other people, all kinds of mischief. And so with AI similarly, a very powerful and very general tool, there are all kinds of opportunities for misuse, both for conflict and for oppression and for other types of malfeasance. So broadly speaking, you might think of the governance problem as how to ensure at least that the preponderance of uses are positive. It also interacts with the alignment problem, in that you might potentially need various forms of governance, regulation, and oversight to ensure that the alignment problem gets solved in time and that alignment solutions get implemented. So these interact in various complex ways, and it's a little bit arbitrary how you break them out. I think of the alignment problem as a technical problem that people who are good with math and computer science need to figure out by being clever. The governance problem is more like a broader political challenge, where it's not so much that there is a clever little answer somebody comes up with, but more a continuous effort by many people to try to achieve a more benevolent and cooperative condition in the world. And then the third, which I call the ethical problem of digital minds, is partly philosophical and kind of computer-science-y: figuring out which minds actually have what kinds of morally relevant attributes. But it then also quickly becomes a challenge for human empathy and ethics, and ultimately for governance, to ensure that whatever we figure out regarding how these minds ought to be treated actually gets implemented in practice. I would say that it's not a book that has the structure of: here is a thesis, here is the argument for the thesis. It's more designed to be an experience, to try to put the reader into a position to think for themselves seriously and deeply about these questions, with the right kind of attitude, an open-minded form of curiosity and benevolence. Because ultimately I think somebody somewhere, if things go well, will need to make up their mind about what we actually want for the future. And it's a really hard deliberation. You would hope that they don't just take out some cached thought, or do some sort of off-the-cuff thing, or project onto the future some random little feature that is just a function of their current level of neurotransmitters. I think it's possible that the answers people might give depend quite a lot on how they come at this problem, the attitude with which they enter this deliberation. And so the book ultimately has a secret purpose: if some group, whether it's some people in an AI lab, or a government, or some humanity-wide deliberation process, has to make this decision, it would be good for there to be something to read in preparation for going into such a deliberation.
And I'm hoping this book will, A, equip them with certain concepts and put various questions more clearly into focus, but also help prepare a certain kind of attitude of benevolent generosity and open-mindedness and playful contemplation that I think is likely to make that deliberation go better than some of the alternative attitudes with which one could come at it.

Spencer Greenberg: (48:13) Hopefully, if we start approaching Utopia, the people involved will read your book and it will stir some important reflection on what we actually want Utopia to be like.

Nick Bostrom: (48:24) Yeah.

Spencer Greenberg: (48:26) Alright. Before we wrap up, let's jump into a rapid fire round. I'll ask you a bunch of questions and try to get relatively short answers from you, although they're gonna be complex questions. So first question for you. So you wrote this book Superintelligence. Since you've written it, a lot has happened in the AI world. We've seen large language models like ChatGPT. We've seen many different breakthroughs in AI. And I'm wondering, have those developments in the technology changed your view considerably about AI, or do you kind of stick to your guns to what you wrote in Superintelligence?

Nick Bostrom: (48:59) Yeah. In broad brushstrokes, I think it's held up really well, and I haven't changed my mind, except there's more granularity and we can see more specifics about the particular shape. Like the idea that current AI systems are very anthropomorphic, very human-like, with human-like idiosyncrasies, is a bit surprising. The idea that to get the best out of one of these large language models you almost have to give it a little pep talk sometimes, right? Think step by step, this is really important, my job depends on your answer. That you actually get the AI to do better by doing that would probably have seemed a bit ridiculous 10 or 20 years ago. That's like anthropomorphizing the AI. Yeah. That's where we are. I'll stop there because you wanted short answers.

Spencer Greenberg: (49:47) So some of what you write about in both in Superintelligence and in Deep Utopia involve global coordination, which seems like something that the world struggles with. What do you think are some of the most promising strategies for improving global coordination?

Nick Bostrom: (50:01) Well, I mean, one way: if the world ends up being a singleton, which is this concept I have of a world that is coordinated at the highest level, perhaps the most likely way for that to happen is through the AI transition. And then either one actor gets enough power to just impose itself on the world, or maybe post-AI technologies allow for easier ways of coordinating and solving coordination problems. But probably what you meant to ask was more like what can we push on today to improve global coordination. And unfortunately it's quite hard to find really high-leverage things in that space. I mean, there are little bits and bobs here and there that one can point to perhaps, but probably if there were an easier way, it would already have been done.

Spencer Greenberg: (50:50) So one potential means of global coordination that you've written about is the idea of a kind of single state actor that controls the world globally, maybe monitors everyone and everything to make sure that really dangerous technology isn't used. Obviously, when people hear about that kind of idea, it's kind of terrifying. It sort of sounds like, you know, a one-world dictatorship. Do you see ways of preventing dangerous technology that don't involve such close monitoring, or do you see ways of involving such close monitoring that don't come across as so authoritarian?

Nick Bostrom: (51:25) Yeah. It's kind of funny how sometimes people accuse me here. So I had this paper, The Vulnerable World Hypothesis, some years ago, and one concept introduced there is what I called the freedom tag, which is a kind of surveillance device. Imagine a kind of neck bracelet people wear that records everything they hear, with omnidirectional cameras, and that's continuously uploaded to some sort of freedom offices or whatever. Some people then think, oh, he's so ominous, he's advocating the freedom tag. Obviously I named it the freedom tag on purpose, to really emphasize its Orwellian character. Now, it is fairly plausible, however, whether it's good or bad, that there will be more and more transparency and ability to surveil what people are doing in more and more detail, and also eventually what people are thinking. And already current AI technology, I think, would be able to do a lot here. So for a couple of decades it's been possible to record everybody's phone conversations, right? And everything they write on their social media, etcetera. And government security organizations like to do this. But so far the only way you've really been able to use it is that, if there is a particular person of interest, you could assign a human analyst to read through what they have said and written. Now, even with current AI tools, I think you could do mass analysis, analyzing what everybody is saying and writing, and therefore what they are thinking about the government, and do sentiment analysis and stuff. And you probably could, with current or very near-term technology, get pretty accurate results from that. And then you could imagine coupling that to some sort of social credit score system that would penalize people who express wrongthink. You could have an automated AI that digs through everything they've ever said and done to try to dig up some dirt, or some bad thing they said, you know, decades ago, and then reduce their reputation accordingly. So the potential, I think, already exists in this technology for a social equilibrium to emerge in which there is much less obscurity and forgetfulness, and where it's possible for one system to see and have fine-grained ways of differentiating its reaction to everybody. And so, I mean, I don't need to say that there are obvious ways for that to be dystopian, but maybe there are also some ways for it to not be dystopian, and good.

Spencer Greenberg: (54:23) Well, you know, I don't want to leave people thinking that you're pro authoritarian single-state government, unless you are. So could you just paint, momentarily, what a good version of that looks like, where we're all being monitored all the time?

Nick Bostrom: (54:36) Well, I mean, think of somebody living in a small community, like a kibbutz or a little village. There are probably a lot of attractive features about that. And people probably knew a lot about what each person was like and what they said. And so it doesn't have to be a totalitarian nightmare, I guess. If you imagine this scaled up, maybe you would have a world free of crime. A world where you can't be a fraud or a jerk who goes around ripping off, taking advantage of, and exploiting one person and then moving on to the next, because the track record would be obvious for everybody to see, and so there could be stronger incentives to do kind and prosocial things. It's very hard. We don't have the kind of political or social science that allows us to make very clear predictions about what happens if you change some of these fundamental parameters of the collective information system. For what it's worth, my gut-level attitude is more on the side of the punks: these individuals sticking it to the man and the apparatus and the state bureaucracy, hoorah for that. But if I step back and reflect on what would actually make the future go best, I probably don't really know. It's really hard to tell. So I'm more on the agnostic end of that, I think.

Spencer Greenberg: (56:00) You mentioned the vulnerable world hypothesis, this idea that every time we develop a new technology, it's kind of like drawing a ball from an urn. Sometimes it's a white ball, a good technology that helps the world. Sometimes it's a gray ball, with a mix of good and bad aspects. But maybe sometimes it's a black ball that ends life on Earth. Obviously, we haven't drawn a black ball yet; we haven't ended life on Earth, but we might. And what I'm wondering is, what's your best estimate of how likely we are to draw black balls? Is there anything we can say about that, or is it just too hard to possibly know?

Nick Bostrom: (56:35) Well, it looks like AI timelines are fairly short. So if there's going to be a black ball, it probably comes out of something enabled by intermediate levels of AI, like maybe some bioweapon-design AI tool. Before we get superintelligence, you could imagine something that allows the world to get destroyed, or some applications of AI that disturb social dynamics, like some of these surveillance or automated propaganda types of things. Those would be the most likely bets. I don't know exactly what the probability is. AI itself is interesting because it's slightly different from other existential risks, in that it is also something which, if it goes well, could protect us against a whole host of different existential risks. And even determining exactly what counts as an existential catastrophe is difficult with respect to AI. With a lot of other things, the world blows up, there is nothing after, and it's pretty clear it's an existential catastrophe. With AI, it's more like there is a spectrum: the world gets radically transformed, and on the other side of that, what exactly exists, and how valuable is it? We presumably don't want to insist on there being human beings in exactly their current forms running around on this planet for millions of years doing the same old human things; that itself would seem a little bit of a letdown, I think. On the other hand, if it's a paperclip maximizer, maybe we think that also fails to realize a lot of the potential for value. But that's to say that the concept of an existential risk has a value component as well as a descriptive component, and in the case of AI existential risks in particular, the value component becomes particularly prominent.
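To make the urn metaphor a bit more concrete, here is a minimal Python sketch of how per-draw risk compounds across many technology "draws". It is purely illustrative: the per-draw probability and the number of draws are invented placeholders, not estimates from Bostrom's paper or from this conversation.

```python
import random

# Toy sketch of the "urn of inventions" metaphor from the vulnerable world
# hypothesis discussion. The numbers below are entirely made up for
# illustration; nothing in the conversation assigns real probabilities.

P_BLACK = 0.001          # hypothetical chance that any one new technology is a "black ball"
DRAWS_PER_CENTURY = 200  # hypothetical number of major technologies developed per century

def survival_probability(p_black: float, draws: int) -> float:
    """Probability of never drawing a black ball across `draws` independent draws."""
    return (1 - p_black) ** draws

def simulate(p_black: float, draws: int, trials: int = 100_000) -> float:
    """Monte Carlo estimate of the same quantity, for comparison."""
    survived = sum(
        all(random.random() >= p_black for _ in range(draws))
        for _ in range(trials)
    )
    return survived / trials

if __name__ == "__main__":
    analytic = survival_probability(P_BLACK, DRAWS_PER_CENTURY)
    estimated = simulate(P_BLACK, DRAWS_PER_CENTURY)
    print(f"Analytic survival probability over one century: {analytic:.3f}")
    print(f"Monte Carlo estimate:                           {estimated:.3f}")
```

The only point of the sketch is that even a small per-technology risk compounds over many independent draws, which is why the question of how many balls remain in the urn matters.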

Spencer Greenberg: (58:22) So you came up with this very influential idea, the simulation argument, which essentially argues that we might be more likely to be living in a computer simulation than most people acknowledge. I'm wondering, have your probabilities of how likely we are to be living in a simulation changed over time, and where do they sit right now?

Nick Bostrom: (58:42) Well, I tend to punt on giving an actual number. Many have asked, but none have been answered. I guess maybe it has crept up a little, but not much. I think for other people it might be reasonable to increase their probability in the simulation hypothesis. If you think about the simulation argument, I think the paper was from 2003 or something, and I circulated it a few years before that. At the time it might have been a bigger imaginative leap for people to conceive of a level of technology that would make it possible to create realistic computer simulations with conscious beings in them. I think the decades of technological progress since then should make it easier. Virtual realities have higher resolution, and people play these immersive computer games; that should make it easier to imagine how, if we continue to make progress, we would eventually get to something super realistic. And with AI as well, it just seems like a shorter step from where we are now to where we would actually have digital minds that are fully as sophisticated as humans. So there are fewer opportunities to hop off the train between where we are now and where the capability exists for running ancestor simulations than there were back in the early 2000s. In that sense, I think it would make sense for probabilities to creep up a bit as well.

Spencer Greenberg: (1:00:15) Are there any concrete ways that you behave differently or live differently because of this possibility that we are living in a simulation?

Nick Bostrom: (1:00:22) I think maybe a greater sense of humility with respect to the ultimate things. Take the other extreme as a contrast, some kind of archetype like the classical atheist, materialist, evolutionary worldview, the Richard Dawkins type, right? There's a fairly well-defined inventory of the world and of where we are in it: we are on this planet, there are a bunch of stars, it all began long ago, then we die, and when we die we rot and that's the end. All of those would seem to be pretty confident implications of that kind of worldview. Whereas if you take the simulation argument seriously, and the simulation hypothesis in particular, it suggests it could very easily be the case that there are many more possibilities. There might be a lot more in the world than is dreamt of in this kind of naive scientific picture. There could be other simulations, there could be a sort of basement reality under the simulations, there could be whole hierarchies of simulations and simulators who designed this, there could be afterlives of various kinds. There are a lot more possibilities conditional on the simulation hypothesis. And since we know very little about that whole space of possibilities, it induces, I think, a kind of epistemic humility that can then translate into an almost spiritual humility, a sense of our own smallness and how much we are in the dark with respect to the ultimate things that shape our ultimate destiny.

Spencer Greenberg: (1:02:03) Final question before we wrap up. You're sometimes described as a utilitarian, although I think that you don't actually identify as one. How would you describe your metaethical views, and your views on questions like whether there are objective moral truths, and so on?

Nick Bostrom: (1:02:17) Yeah, I don't have a good label, and in general I always struggle with labels. They seem very confining, all these isms that people love to subscribe to. It always seems a bit of a strain to me. I tend to think in multiple channels, in sort of superpositions that maybe eventually collapse, and then I get more conviction on particular views. As for my attempts to articulate a metaethics, I recently wrote this paper, Base Camp for Mt. Ethics. It's not a proper paper, it's more like some thinking notes, so it might not be useful to anybody, and it's kind of obscure, but it's an attempt to outline one direction that I'm thinking in. I also like the idea of a moral parliament that I came up with. This is the idea that when you face some practical moral problem, rather than, say, picking the most probable moral theory you can think of and doing what it says, you should instead assign probabilities to different moral theories, and then imagine that these theories each get to send delegates to an imaginary parliament, with the number of delegates proportional to the probability you assign to the theory. Then you imagine these delegates of the different theories deliberating and bargaining and compromising under ideal conditions, and you should do what this parliament recommends. The idea is that even a moral theory you think is less probable, but that happens to care particularly intensely about some matter, might get its way in that case, in return for conceding to other moral theories in other cases that it thinks are less important. And so you could, I think, get a greater level of wisdom and a lower propensity to fanaticism by thinking in terms of this moral parliament model. It's really more like a metaphor than a formal model, but that's better than just picking your favorite moral theory and running with it.
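For readers who want to toy with the moral parliament idea, here is a minimal Python sketch under strong simplifications: the theories, credences, and "stake" values are invented for illustration, and the parliament's deliberation and bargaining is collapsed into a simple credence-weighted score, which is much cruder than the bargaining process Bostrom describes.

```python
# A toy sketch of the "moral parliament" idea as described above: assign
# probabilities (credences) to moral theories, give each theory delegates in
# proportion to that probability, and score options by how strongly each
# theory cares about them. Bostrom stresses the parliament is a metaphor for
# deliberation and bargaining, not a formula; everything below is invented
# purely for illustration.

from typing import Dict

# Hypothetical credences in three moral theories (should sum to 1).
credences: Dict[str, float] = {
    "utilitarianism": 0.5,
    "deontology": 0.3,
    "virtue_ethics": 0.2,
}

# Hypothetical "stakes": how strongly each theory favors (+) or opposes (-)
# each option, on an arbitrary scale. A low-credence theory that cares
# intensely about an issue can still win on that issue.
stakes: Dict[str, Dict[str, float]] = {
    "utilitarianism": {"option_a": 2.0, "option_b": 1.0},
    "deontology":     {"option_a": -10.0, "option_b": 0.5},
    "virtue_ethics":  {"option_a": 0.0, "option_b": 1.0},
}

def delegates(credences: Dict[str, float], seats: int = 100) -> Dict[str, int]:
    """Allocate parliament seats in proportion to the credence in each theory."""
    return {theory: round(p * seats) for theory, p in credences.items()}

def parliament_choice(credences: Dict[str, float],
                      stakes: Dict[str, Dict[str, float]]) -> str:
    """Pick the option with the highest credence-weighted support."""
    options = next(iter(stakes.values())).keys()
    scores = {
        opt: sum(credences[t] * stakes[t][opt] for t in credences)
        for opt in options
    }
    return max(scores, key=scores.get)

if __name__ == "__main__":
    print("Seats:", delegates(credences))
    print("Parliament recommends:", parliament_choice(credences, stakes))
```

With these made-up numbers, option_b wins even though the highest-credence theory mildly prefers option_a, because the lower-credence deontological bloc opposes option_a intensely, which loosely mirrors the point about intensity of concern mattering, not just probability.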

Spencer Greenberg: (1:04:24) Nick, great to speak with you.

Nick Bostrom: (1:04:25) Thanks for having me. Good to talk to you, Spencer.

Josh Castle: (1:04:32) Thanks again for listening. We always love to hear from our listeners. So if you have questions or comments for us, just send us an email at clearerthinkingpodcast@Gmail.com. This episode was edited by Ryan Kessler and transcribed by We Amplify. Uri Bram is the podcast's factotum. If you like our show, then we'd really appreciate it if you could rate and review us wherever you get your podcasts and tell your friends about us on social media. We also hope you'll subscribe to our email newsletter called One Helpful Idea. Each week, we'll send you one idea that we think is really valuable that you can read about in just 30 seconds along with that week's new podcast episodes, an essay by Spencer, and announcements about upcoming events. To sign up for that newsletter or to find show notes, transcripts, and more info about the show, visit podcast.clearerthinking.org.

Josh Castle: (1:05:25) A listener asks, what types of meditation have you tried and which ones have seemed most impactful to you?

Spencer Greenberg: (1:05:31) I think I've tried over 30 types of meditation. That doesn't mean I've gone deep in them; I'm certainly not a meditation expert, but I've done a lot of experimenting. One of my favorite types of meditation is where I try to notice a good feeling in my body and mind, and then I try to let it grow. This is kind of related to what people sometimes call jhana meditation. And I like to just do it sometimes in the morning, for 30 seconds, just kind of let this good feeling grow, and I find that really nice. I've also done a bunch of meditation where you focus on your breath. I did that almost every morning for about a year, and I tracked some variables around it. I found that interesting, and I found that it made me more aware of my state changes, like when my emotion would shift or things like that. So I think it helped in a kind of introspective way, but I didn't really notice a lot of other benefits, other than maybe making me feel calmer. I've also tried a lot of wacky meditations: meditations involving visualizations, meditations involving changing sensations in my body. That was actually how I first got interested in meditation. I realized one day, before I knew anything about meditation, that I could create a feeling of pins and needles in my arm if I focused on it. And it would get really, really convincing, to the point where I thought, well, maybe my arm just has pins and needles, and then I would move my arm and the feeling would suddenly go away. That was really surprising and interesting to me, that I could do that with my mind. And then I started thinking, what else can I do with my mind, to the interior of my body and to the way that I perceive things? And I started exploring meditation through that.

Nathan Labenz: (1:07:03) It is both energizing and enlightening to hear why people listen and learn what they value about the show. So please don't hesitate to reach out via email at tcr@turpentine.co, or you can DM me on the social media platform of your choice.
