Balaji Srinivasan on Polytheistic AI, Human-AI Symbiosis, and Prospects for AI Control
Balaji Srinivasan discusses AI gods, human-AI symbiosis, and the future control of AI with Nathan on the Cognitive Revolution Podcast.
Watch Episode Here
Video Description
Balaji Srinivasan, investor, former CTO at Coinbase and GP at a16z, and author of The Network State, joins Nathan to discuss polytheistic AI and AI gods, human-AI symbiosis, and how AI will be controlled. If you need an ecommerce platform, check out our sponsor Shopify: https://shopify.com/cognitive for a $1/month trial period.
SPONSORS:
Shopify is the global commerce platform that helps you sell at every stage of your business. Shopify powers 10% of ALL eCommerce in the US. And Shopify's the global force behind Allbirds, Rothy's, and Brooklinen, and millions of other entrepreneurs across 175 countries. From their all-in-one e-commerce platform, to their in-person POS system – wherever and whatever you're selling, Shopify's got you covered. With free Shopify Magic, sell more with less effort by whipping up captivating content that converts – from blog posts to product descriptions using AI. Sign up for $1/month trial period: https://shopify.com/cognitive
Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.
NetSuite has 25 years of providing financial software for all your business needs. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform ✅ head to NetSuite: http://netsuite.com/cognitive and download your own customized KPI checklist.
X/SOCIAL:
@labenz (Nathan)
@balajis (Balaji)
@eriktorenberg (Erik)
@CogRev_Podcast
TIMESTAMPS:
(00:00:00) - Episode Preview
(00:06:07) - Evolution of AI Systems
(00:09:39) - Challenges of AI Robustness
(00:29:54) - Intersection of AI and Politics
(00:52:14) - Regulation
(00:52:44) - Slippery slope of precedent setting
(00:54:00) - Role of AI in political chaos
(00:54:41) - Emergence of decentralized AI
(01:05:39) - AI's future as amplified intelligence
(01:13:00) - Polytheistic model of AI
(01:32:00) - Role of AI in Habitat Degradation
(01:33:32) - AI's ability to do differential diagnosis
(01:34:04) - ABC of economic apocalypse: AI, Bitcoin, China
(01:36:00) - Evolution of AI in chess and medicine
(01:38:22) - Difference between AI existential threat and economic disruption
(01:40:19) - Impact of cryptography on public infrastructure
(02:07:40) - Role of tribalism in AI
#balaji
Full Transcript
Balaji Srinivasan: 0:00 They don't care about AI safety. What they care about is AI control. Do I think we eventually get to a configuration like that? Maybe. Where you have an AI brain at the center of civilization, and it's coordinating all the people around it. And every civilization that makes it is capable of crowdfunding and operating its own AI. You know, our background culture influences things in ways we don't even think about. So much of the paper clip thinking is like a vengeful god will turn you into pillars of salt. The polytheistic model of many gods as opposed to one god is we're all gonna have our own AI gods, and there'll be a war of the gods. Man-machine symbiosis is not some new thing. It's actually the old thing that broke us away from other primate lineages that weren't using tools. Then the question is, what's the next step? Which is, AI is amplified intelligence. It is that the AI-human fusion means there's another 20 Elon Musks or whatever the number is. That's good.
Nathan Labenz: 0:56 Hello, and welcome to the Cognitive Revolution, where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week, we'll explore their revolutionary ideas and together we'll build a picture of how AI technology will transform work, life, and society in the coming years. I'm Nathan Labenz, joined by my cohost, Erik Torenberg. Hello and welcome back to the Cognitive Revolution. Today, my guest is Balaji Srinivasan. In tech circles, Balaji needs no introduction. But for folks from other backgrounds, Balaji is a serial startup entrepreneur who's founded and ultimately sold highly dissimilar technology companies, including Teleport, which helped people move around the world to realize opportunities; Counsyl, which provided genetic testing for couples planning to have children; and earn.com, a paid-email-on-the-blockchain startup, which ultimately sold to Coinbase, where Balaji became CTO. Along the way, he's also taught statistics at Stanford and been a general partner at Andreessen Horowitz as well. Today, as an independent thinker, investor, and author of The Network State, Balaji is extremely prolific in both text and audio formats. And as you'll hear, whether for the first time or the fiftieth, he is an incredibly creative thinker who relentlessly develops and iterates on new paradigms for understanding a fast-changing, often chaotic world. He's also a very associative and interdisciplinary thinker who constantly adds dimensions to any analysis. Such horsepower can be hard for a podcast host to rein in, but I personally find it extremely stimulating. So in this conversation, I tried to strike a balance between letting Balaji go off as only he can do, contributing what I hope are worthy versions of core AI safety arguments and supporting results from recent research, and occasionally steering us back toward what I see as the most critical questions for the AI big picture. If there's one area where Balaji and I disagree most consequentially, it's on the question of how independent AI systems are likely to become over the next 5 to 10 years. Balaji thinks that AI systems need to be at least symbiotic with humans because physical computers can't replicate themselves without human support, while I think there's at least a significant chance that we get AIs that are so independent of humans that their behaviors and interactions become the primary drivers of world history. In Balaji's own words, he does expect massive economic and social disruption from AI, but doesn't think that quote-unquote "can't turn the killer AI off" scenarios are likely, at least for a long while, due to factors like: the existence of adversarial inputs that can paralyze AIs, particularly those with open model weights; the observation that even decentralized programs like the Bitcoin network can't run independently without continuous human support; and the premise that to control the physical world, AIs will need to direct either large numbers of humans, who are notoriously difficult to control, or highly agile robots, which don't yet exist. With all that in mind, in the first half of this conversation, you'll hear Balaji's analysis of the likely impact of AI in a world where powerful AI systems do come to exist, but humans retain control, resulting in a human-AI symbiosis similar to how believers relate to their gods or citizens relate to their governments.
Then in the second half, we really dig into the question of just how confident we should be that AI won't prove to be even more revolutionary than that. After more than two hours of recording, I was the one who ran out of time today, but I really enjoyed this conversation with Balaji. He is as good-natured and curious as he is opinionated, and we have continued to exchange links and arguments offline, such that I hope we'll have another episode to share with you in the future as well. As always, if you're enjoying the show, we'd ask that you take a moment to share it with a friend. And with that, here's part one of an all-angles look at how AI will shape the future with Balaji Srinivasan. Balaji Srinivasan, welcome to the Cognitive Revolution.
Balaji Srinivasan: 5:15 Alright. I I feel welcome.
Nathan Labenz: 5:17 Well, we've got a ton to talk about. You know, obviously, you bring a lot of, different perspectives to everything that you think about and, work on. And today, I wanna just try to muster all those different perspectives onto this, you know, what I see is really the defining question of our time, which is like, what's up with AI and, you know, how's it gonna turn out? I thought maybe for starters, I would love to just get your baseline kinda table setting on
Balaji Srinivasan: 5:44 how
Nathan Labenz: 5:45 much more AI progress do you expect us to see over the next few years? Like, how powerful are AI systems going to become in, again, kind of a relatively short timeline? And then maybe if you wanna take a, you know, bigger stab at it, you could answer that same question for a longer timeline like the rest of our lives or whatever.
Balaji Srinivasan: 6:02 Sure. Let me give an abstract answer, then let me give a technical answer. You know, if you look at evolution, we've seen something as complex as flight evolve independently in birds, bats, and bees. And even intelligence, we've seen fairly high intelligence in dolphins, in whales, in octopuses. You know, octopuses in particular can do, like, tool manipulation. They've got things that are a lot like hands, you know, with tentacles. And so that indicates that it is plausible that you could have multiple pathways to intelligence, whether, you know, we have carbon-based intelligence or we could have silicon-based intelligence that just has a totally different form, where the fundamental thing is an electromagnetic wave and data storage as opposed to, you know, DNA and so on. Right? So that's like a plausibility argument in terms of evolution as being so resourceful that it's invented really complicated things in different ways. Okay? Then in terms of the technical point, I think as of, like, right now, and I should probably date it as, like, 12/11/2023 because this field moves so fast. Right? My view is, and maybe you'll have a different view, that there are breakthroughs that are really needed for something that's, like, true artificial intelligence that is human-independent. Right? Maybe the next step after the Turing test. I've got an article that, you know, we're writing called the Turing thresholds, which tries to generalize the Turing test, like the Kardashev scale. You know, if you have energy thresholds, like, what are useful scales beyond that? And right now, I think that what we call AI is absolutely amazing for environments that are not time-varying or rule-varying. And what I mean by that is, so you kind of have, let's say, two large schools of AI, and obviously there's overlap in terms of the personnel and so on. But there's, like, the DeepMind school, which has gotten less press recently but got more press, you know, a few years ago, and that is game playing, right? It is, you know, superhuman playing of Go with AlphaGo. It is, you know, all the video game stuff they've done where they learn at the pixel level and they just teach the very basic rules and it figures it out from there. And it's also, you know, the protein folding stuff and what have you, right? But in general, I think they're known for reinforcement learning and those kinds of approaches. I mean, they're good at a lot of things, but that's what I think DeepMind is known for. Of course, they put out this new model recently, the Gemini model. So I'm not saying that they're not good at everything, but that's just kind of what they're maybe most known for. And then you have the OpenAI ChatGPT school of generative AI, and it includes Stable Diffusion. And just as a pioneer, even if, you know, I don't know how much they're used right now, but basically, you know, you have the diffusion models for images and you have large language models and now you have the multimodals that integrate them. And so the difference, I think, with these is the reinforcement learning approaches are based on an assumption of static rules. Like the rules of chess, the rules of Go, the rules of a video game are not changing with time. They are discoverable. They're like the laws of physics. And similarly, like the body of language where you're learning it, English is not rapidly time-varying. That is to say the rules of grammar that are implicit aren't changing. The meanings of words aren't changing very rapidly.
You can argue they're changing over the span of decades or centuries, but not extremely rapidly. Right? So therefore, when you generate a new result, training data from 5 years ago for English is actually still fairly valuable, and the same input roughly gives the same output. Now, of course, there are facts that change with time, like who is the ruler of England. Right? The queen of England has passed away. Now it's the king of England. Right? Just facts that change with time. But I think more fundamental is when there are rules that change with time. You know, you have, for example, changes in law in countries. Right? But most interestingly, perhaps, changes in markets, because the same input does not give the same output in a market. If you try that, then what'll happen is adversarial behavior on the other side. And once people see it enough times, they'll see your strategy and they're gonna trade against you on that. Right? And I can get into other technical examples on that, and probably people in the space are aware of this, but I think that the true frontier is dealing with time-varying, rule-varying systems as opposed to systems where the implicit rules are static. Let me pause there.
Nathan Labenz: 10:19 Yeah, I think that makes sense. I think, you know, in the very practical sense of just trying to get, as Zvi calls it, mundane utility from AI, that is often kind of cashed out as: AI is good at tasks, but it's not good at whole jobs. It can handle these kind of small things where you can define, you know, what good looks like and tell it exactly what to do. But in the sort of broader context of, you know, handling things that come up as they come up, it's definitely not there yet. And I agree that there's likely to be some synthesis, you know, which is kind of the subject of all the Q* rumors recently, I would say, is kind of the prospect that there could be already, you know, within the labs, a beginning of a synthesis between the, I kind of think of it as, like, harder-edged reinforcement learning systems, you know, that are, like, small, efficient, and deadly, versus the, like, language model systems that are, like, kind of slow and soft, you know, but have a sense of our values, which is really a remarkable accomplishment, that they're able to have even an approximation of our values that seems reasonably good. So, yeah, I think I agree with that framing. But I guess I would, you know, still wonder, like, how far do you think this goes in the near term? Because I have a lot of uncertainty about that, and I think the field has a lot of uncertainty. You'll hear people say, well, it's never gonna get smarter than its training data. It'll kind of level out where humans are. But we certainly don't see that on the reinforcement learning side. Right? Like, it usually doesn't take too long at human level in these games, and then it, like, blows past human level. Interestingly, you do still see some adversarial vulnerability. Like, there's a great paper from the team at FAR AI, and I'm planning to have Adam Gleave, the head of that organization, on soon to talk about that and other things, where they found basically a hack, a really simple but unexpected attack on the superhuman Go player that can defeat it. So you do have these, like, very interesting vulnerabilities or kind of lack of adversarial robustness. Still, I'm kind of wondering, like, where do you think that leaves us in, say, three to five years' time?
Balaji Srinivasan: 12:38 Obviously huge uncertainty on that. It's really hard to predict something like this. Just to your point, generative AI is generic AI, right? It's like generically smart, but doesn't have specific intelligence or creativity or facts. And as you're saying, just like we have, you know, adversarial images that can fool programs that are trained on a certain set of data, where they just give some weird, you know, pattern that looks like a giraffe, but the algorithm thinks it's a dog. You can do the same thing for game playing, and you can have out-of-sample input that can beat, you know, these very sophisticated reinforcement learners. And an interesting question is whether that is a fundamental thing or whether it is a workaroundable thing. And you'd think it was workaroundable, you know, because there's probably some robustification, because these pictures look like giraffes, you know, and yet they're being recognized as dogs. So you would think that the right proximity metric would group it with giraffes, you know, but maybe there's some, I don't know, maybe there's some result there. My intuition would be we can probably robustify these systems so that they are less vulnerable to adversarial input. But if we can't, then that leads us in a totally different direction where these systems are fragile in a fundamental way. So that's one big branch point: how fragile these systems are. Because if they're fragile in a certain way, then it's almost like you can always kill them, which is kind of good, right, in a sense. You know the 50 IQ, 100 IQ, 150 IQ thing?
Nathan Labenz: 14:20 Like the meme?
Balaji Srinivasan: 14:22 Yeah, the meme, right? So the 50 IQ guy's like, These machines will never be as creative as humans or whatever. 100 IQ is, Look at all the things they can do. The 150 IQ is like, Well, there's some, like, equivalent result, you know, that's like some impossibility proof that shows that the dimensional space of a giraffe is too high and we can't actually learn what a true giraffe is. I don't think that's true, but maybe it's true from the perspective of how these learners are working. Because my understanding is people have been trying, and I mean, I'm not on the cutting edge of this, so, you know, maybe someone knows better, but my understanding is we haven't yet been able to robustify these models against adversarial input. Am I wrong about that?
Nathan Labenz: 15:02 Yeah. That's definitely
Balaji Srinivasan: 15:04 We'll continue our interview in a moment
Nathan Labenz: 15:05 after a word from our sponsors. There's no single architecture as far as I know that is demonstrably robust. And on the contrary, you know, even with language models, we did a whole episode on the universal jailbreak, where especially if you have access to the weights, not to change the weights, but just to kind of probe around in the weights, then you have a really hard time, you know, guaranteeing any sort of robustness.
Balaji Srinivasan: 15:32 The conjecture is see, for humans, you can't, like, mirror their brain and analyze it. Okay? But we have enough humans that we've got things like optical illusions, stuff like that that works on enough humans, and our brains aren't changing enough. Right? A conjecture is if you had, as you said, open weights. Open weights means safety because if you have open weights, you can always reverse engineer adversarial input, and then you can always break the system. Conjecture.
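To make that conjecture concrete, here is a minimal, hedged sketch of a standard gradient-based attack (the fast gradient sign method), which only works because the attacker can read the model's weights and backpropagate through them. The model, image, and label here are placeholders rather than any specific system discussed in the episode.

```python
# Minimal sketch of the conjecture: with open weights you can compute gradients
# through the model and construct an adversarial input from them.
# `model`, `image`, and `true_label` are placeholders, not any specific system.
import torch
import torch.nn.functional as F

def fgsm_adversarial(model, image, true_label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge each pixel in the direction that
    increases the loss, producing an input that looks unchanged to a human
    but can flip the model's prediction (the 'giraffe labeled as dog' case)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()                        # only possible with weight/gradient access
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()  # keep pixel values in a valid range
```

A model served only behind an API does not expose its gradients this way, which is the sense in which open weights change the attacker's position.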
Nathan Labenz: 16:03 Yeah. That's, again, with Adam from FAR AI. I'm really interested to get into that, because they are starting to study, as I understand it, kind of proto scaling laws for adversarial robustness. And I think a huge question there is, what are the kind of frontiers of possibility there? How do the orders of magnitude work? Do you need another 10x as much adversarial training to halve the rate of your adversarial failures? And if so, you know, can we generate that many? It may always sort of be fleeting.
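For intuition on that orders-of-magnitude question, one hedged way to frame it: if the adversarial failure rate followed a power law in the amount of adversarial training data (an assumption for illustration, not an established result), the cost of each halving falls out directly.

```latex
% Illustrative assumption: failure rate f falls as a power law in adversarial data D
f(D) \approx C \, D^{-\alpha}
% Scaling the data by a factor k changes the failure rate by k^{-\alpha}, so halving it requires
\frac{f(D)}{f(kD)} = k^{\alpha} = 2 \quad\Longrightarrow\quad k = 2^{1/\alpha}
% e.g. a hypothetical \alpha = 0.3 gives k \approx 10, i.e. roughly 10x more
% adversarial data per halving of the failure rate.
```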
Balaji Srinivasan: 16:39 So FAR AI, they're working on the cutting edge of adversarial input?
Nathan Labenz: 16:44 Yeah. They're the group that did the attack on the AlphaGo model and found that, like, you know... and what was really interesting about that, I mean, multiple things. Right? First, that they could beat a superhuman Go player at all. But second, that the technique that they used would not work at all if playing a quality human. Or, you know, it's a strategy that is trivial to beat if you're a quality human Go player, but AlphaGo is just totally blind to it.
Balaji Srinivasan: 17:09 You know, that's why I say the conjecture is if you have the model, then you can generate the adversarial input. And so if that is true, that itself is an important conjecture about AI safety. Right? Because if open weights are inherently something where you can generate adversarial input from that and break or crash or defeat the AI, then that AI is not omnipotent. Right? You have some power words you can speak to it, almost like magical words, that'll just make it power down, so to speak. Right? It's like those movies where the monsters can't see you if you stand really still or if you don't make a noise or something like that. Right? They're very powerful on dimension x, but they're very weak on dimension y. Kind of an obvious point, but, you know, I'm not sure how important it's gonna be in the future. Your next question was on, like, you know, humanoid robots and so on. And before we get to that, maybe it's obvious, but all of these models are trained on things that we can easily record, which are sights and sounds, right? But touch and taste and smell, we don't have amazing data sets on those. Well, I mean, there's some haptic stuff. Right? There's probably some, you know, some work on taste and smell and so on, but there's five senses. Right? I wonder if there's something like that, where you might be like, okay, how are you gonna outsmell, you know, a robot or something like that? Well, dogs actually have a very powerful sense of smell, and that's very important for them. You know? And it may turn out that maybe it's just that we haven't collected the data, and it could become a much better smeller or, you know, taster than anything else. I wouldn't be surprised. It could be a much better wine taster because you can do molecular diagnostics. I just use that as an analogy to say there's areas of the human experience that we haven't yet quantified. And maybe the operative term is just "yet." Okay? But there's areas of the human experience we haven't yet quantified, which are also an area that AIs at least are not yet capable in.
Nathan Labenz: 19:14 Yeah. I guess maybe my expectation boils down to: I think the really powerful systems are probably likely to mix architectures in some sort of ensemble. You know, when you think about just the structure of the brain, it's not... I mean, there certainly are aspects of it that are repeated. Right? You look at the frontal cortex and it's like there is kind of this, you know, unit that gets repeated over and over again. In a sense, that's kind of analogous to, say, the transformer block that just gets, you know, stacked layer on layer. But it is striking in a transformer that it's basically the same exact mechanism at every layer that's doing kind of all the different kinds of processing. And so whatever weaknesses that structure has... and with the transformer and the attention mechanism, there's some pretty profound ones, like the finite context window. You kind of need, I would think, a different sort of architecture with a little bit of a different strength and weakness profile to complement that, in such a way that's more similar to, like, a biological system, where you kind of have this dynamic feedback, where we have obviously thinking fast and slow and all sorts of different modules in the brain, and they kind of cross-regulate each other and don't let any one system, you know, go totally, you know, down the wrong path on its own, right, without something kind of coming back and trying to override that. It seems to me like that's a big part of what is missing from the current crop of AIs in terms of their robustness. And I don't know how long that takes to show up, but we are starting to see some, you know, possible... I think people are maybe thinking about this a little bit the wrong way. Just in the last couple weeks, there's been a number of papers that are really looking at the state space model kind of alternative. It's being framed as an alternative to the transformer. But when I see that, I'm much more like, it's probably a complement to the transformer, or, you know, these two things probably get integrated in some form, because to the degree that they do have very different strengths and weaknesses, ultimately, you're gonna want the best of both in a robust system. Certainly if you're trying to make an agent, certainly if you're trying to make, you know, a humanoid robot that can go around your house and, like, do useful work, but also be robust enough that it doesn't get tricked into attacking your kid or your dog or, you know, whatever. You're gonna wanna have more checks and balances than just kind of a single stack of, you know, the same block over and over again.
Balaji Srinivasan: 21:41 Well, so I know Boston Dynamics with their legged robots is all control theory, and it's not classical ML. It's really interesting to see how they've accomplished it. And they do have essentially a state space model, where they have a big position vector that's got all the coordinates of all the joints, and then a bunch of matrix algebra to figure out how this thing is moving, and all the feedback control and so on there. And it's more complicated than that, but that's, you know, I think the v1 of it. Sorry, there was one thing I wasn't following though. Are you saying that there's papers that are integrating that with the kind of generative AI transformer model? You know, like, what's a good citation for me to look at?
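As a rough illustration of the "position vector plus matrix algebra plus feedback control" picture being described, here is a minimal linear state-space sketch. The matrices, gains, and two-joint state layout are invented for illustration and are not any real robot's actual control stack.

```python
# Hedged sketch of feedback control over a state-space model: the state x stacks
# joint angles and velocities, and a feedback law u = -K(x - x_ref) drives it
# toward a reference. All matrices and gains here are made up for illustration.
import numpy as np

def step(x, x_ref, A, B, K, dt=0.01):
    """One control update: compute the corrective input from the tracking error,
    then integrate the linearized dynamics x_dot = A x + B u forward by dt."""
    u = -K @ (x - x_ref)
    x_next = x + dt * (A @ x + B @ u)
    return x_next, u

# Toy 4-dimensional state: two joint angles and their velocities.
A = np.array([[0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.],
              [0., 0., 0., 0.]])
B = np.array([[0., 0.],
              [0., 0.],
              [1., 0.],
              [0., 1.]])
K = np.array([[10., 0., 2., 0.],
              [0., 10., 0., 2.]])          # hand-tuned gains, illustration only
x = np.zeros(4)
x_ref = np.array([0.5, -0.3, 0.0, 0.0])    # target joint angles, zero velocity
for _ in range(1000):
    x, u = step(x, x_ref, A, B, K)         # x converges toward x_ref
```

Real legged-robot controllers layer estimation, contact dynamics, and constraints on top of this, but the skeleton is the same kind of matrix algebra over a joint-state vector.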
Nathan Labenz: 22:19 Yeah. Starting to. We did an episode, for example, with one of the technology leads at Skydio, the US's champion drone maker. And they have kind of a similar thing, where they have built over a decade a fully explicit control stack spanning multiple orders of magnitude. And now over the top of that, they're starting to layer this kind of... it's not exactly generative AI in their case, because they're not generating content, but it's kind of the high level: can I give the thing verbal instructions, have it go out and kind of understand, okay, this is a bridge, I'm supposed to kind of, you know, survey the bridge, and translate those high-level instructions to a plan, and then use the lower-level explicit code, which is fully deterministic and, you know, runs on control theory and all that kind of stuff, to actually execute the plan at a low level. But also, you know, at times, like, surface errors up to the top and say, like, hey, we've got a problem, you know, whatever, I'm not able to do it. You know? Can you now, at the higher level, the semantic layer, adjust the plan? That stuff is starting to happen in multiple domains, I would say.
Balaji Srinivasan: 23:37 Yeah. And so I think that makes sense. It's basically like generative AI is almost the front end, and then you have almost like an assembly language. Like, you give instructions to Figma, and the objects there are shapes and images and so on. It's not text. You give instructions to a drone, and the objects are like GPS coordinates and paths and so on. And so you are generating structures that are in a different domain, or it's like in VR, you're generating 3D structures again as opposed to text. And then that compute engine takes those 3D structures and does something with them in a much more rules-based way. So you have like a statistical, user-friendly front end with a generative AI, and then you have a more deterministic, or usually totally deterministic, almost like assembly-language back end that actually takes that and does something. That's what you're saying, right?
Nathan Labenz: 24:27 Yeah. Pretty much. And I would say there's another analogy to, again, our biological experience, where it's like, I'm, you know, sort of at a semi-conscious level. Right? I kind of think about what I wanna do, but the low-level movements of the hand, you know, are both, like, not conscious. And also, you know, if I do encounter some pain or, you know, hit some, you know, hot item or whatever, like, there's a quick reaction that's sort of mediated by a lower-level control system. And then that fires back up to the brain and is like, hey, you know, we need a new plan here. So that is only starting to come into focus, I think, because obviously... I mean, it's amazing. As you said, it's all moving so fast. What is always striking to me... I kind of, like, recite timelines to myself almost as like a mantra. Right? Like, the first instruction-following AI that hit the public was just January 2022. That was OpenAI's text-davinci-002; it was the first one where you could say, like, do x, and it would do x, as opposed to having, you know, an elaborate prompt engineering type of setup. GPT-4, you know, finished training just a little over a year ago, and it's been in the public not even a year. And, you know, it has been amazing to see how quickly this kind of technology is being integrated into those systems, but it's definitely still very much a work in progress.
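A toy sketch of the split being described here, purely illustrative: a statistical front end proposes structured commands, and a deterministic back end validates and executes them, surfacing errors back up when it refuses. The llm_propose function, the waypoint schema, and fly_to are all invented stand-ins, not any real product's API.

```python
# Toy version of the architecture being described: a generative front end emits
# structured commands, and a deterministic back end checks and executes them.
# `llm_propose`, the waypoint schema, and `fly_to` are invented for illustration.
import json

def llm_propose(instruction: str) -> str:
    # Stand-in for a call to a generative model; here we hard-code a plausible reply.
    return json.dumps({"waypoints": [[37.7749, -122.4194, 30.0],
                                     [37.7755, -122.4201, 30.0]]})

def fly_to(lat: float, lon: float, alt: float) -> None:
    # Stand-in for the deterministic, control-theory-driven low-level layer.
    print(f"flying to {lat:.4f}, {lon:.4f} at {alt} m")

def execute(instruction: str, max_altitude_m: float = 120.0) -> None:
    plan = json.loads(llm_propose(instruction))
    for lat, lon, alt in plan["waypoints"]:
        # The rules-based layer rejects anything out of bounds rather than
        # trusting the statistical layer's output.
        if not (-90 <= lat <= 90 and -180 <= lon <= 180 and 0 < alt <= max_altitude_m):
            raise ValueError(f"rejected waypoint {(lat, lon, alt)}")
        fly_to(lat, lon, alt)

execute("survey the bridge at low altitude")
```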
Balaji Srinivasan: 25:45 Yeah. I mean, the tricky part is, like, the training data and so on. Like, a large existing scale company like a Figma or DJI that has millions or billions of user sessions will have a much easier time training. And they have a unique dataset. And then everybody else will not be able to do that. So there is actually almost, I mean, a return on scale, where if you have a massive, clean dataset in a unique domain that lots of people are using, then you can crush it. And if you don't... I suppose, I mean, there's lots of people who work on zero-shot stuff and so on, but it still strikes me that it'll probably be an advantage to see those sessions. You know? I find it hard to believe that you could, you know, generate a really good, like, drone command language without lots of drone flight paths, but we could see.
Nathan Labenz: 26:40 And where it doesn't exist, people obviously need deep pockets for this, but the likes of Google are starting to just grind out the generation of that. They've got their test kitchen, which is a literal, you know, physical kitchen at Google where the robots go around and do tasks. And when they get stuck, my understanding of their kind of critical path, as I understand they understand it, is: the robot's gonna get stuck, we'll have a human operator remotely operate the robot to show what to do, and then that data becomes the bridge from what the robot can't do to what it's supposed to learn to do next time. And they're gonna need a lot of that, you know, for sure. But they increasingly have... you know, I don't know exactly how many robots they have now. But last I talked to someone there, it was, like, into the dozens. And, you know, presumably, they're continuing to scale that. I think they just view that they can probably brute-force it to the point where it's, like, good enough to put out into the world. And then, very much like a Waymo or a Cruise or whatever, they probably still have kind of remote operators even when the robot is, like, in your home. You know, when it encounters something that it doesn't know what to do about, raise that alarm, get the human supervision to help it over the hump, and then, you know, obviously, that's where you really get the scale that you're talking about. This raises a couple questions I wanted to ask that are conceptual. So, you know, obviously, there's huge questions around, like, again, at the highest level, how is all this gonna play out? One big debate is to what degree does AI favor the incumbents? To what degree, you know, does it enable startups? Obviously, it's both, but, you know, I'm interested in your perspective on that. Also really interested in your perspective on, like, offense versus defense. That's something that, for a lot of people now and in the future, seems like it probably really matters a lot: whether it's a more offense-enabling or defense-enabling technology. So, love your take on those two dimensions.
Balaji Srinivasan: 28:38 Hey. We'll continue our interview in a
Nathan Labenz: 28:40 moment after a word from our sponsors.
Balaji Srinivasan: 28:42 So, like, offense or defense in the sense of, does it enable disruptors or incumbents?
Nathan Labenz: 28:47 Both in business and in, like, you know, potentially outright conflict. I'd be interested to hear your analysis on both.
Balaji Srinivasan: 28:53 Alright. Lots of views on this. So, obviously, if you've got a competent existing tech CEO, you know, like, who's still in their prime, like, Amjad of Replit or, you know, Dylan Field of Figma. Those are two who I thought of who are very good and, you know, will be on top of it. And Amjad was very early on integrating AI into Replit, and has basically built it into an AI-first company, which is really impressive. Those are folks who cleanly made a pivot. It's as big or bigger than, and comparable to, I would say, the pivot from desktop to mobile that broke a bunch of companies in the late 2000s and early 2010s. Like, Facebook in 2012 had roughly no mobile revenue at the time of their IPO, and then they had to, like, redo the whole thing. And it's hard to turn a company 90 degrees when something new like that hits, you know? Those that are run by kind of tech CEOs in their prime will adapt and will AI-ify their existing services. And the question is, obviously there's new things that are coming out, like Pika and Character.ai. There's some, like, really good stuff that's out there. The question is, you know, will the disruption be allowed to happen in the US regulatory environment? And so my view is actually that... you know, so this is from, like, The Network State book. Right? I talk about, you know, people talk about a multipolar world or unipolar world. The political axis is actually really important in my view for thinking about whether AI will be allowed to disrupt. Okay? Because we'll get to this probably later, but the '640K of compute is enough for everyone' executive order. You know, '640K of memory, that should be enough for everybody,' the apocryphal quote that Bill Gates didn't actually say, but that quote kind of gives a certain mindset about computing. So the 10^26 of compute should be enough for everyone, Bill. I actually think it's very bad, and I think it's just the beginning of their attempts to build like a software FDA, okay, to decelerate, control, regulate, red-tape the entire space, just like how, you know, the threat of nuclear terrorism got turned into the TSA. The threat of, you know, terminators and AGI gets turned into a million rules on whether you can set up servers, and this last free sector of the economy is strangled or at least controlled within the territory controlled by Washington, D.C. Now why does this relate to the political? Well, obviously, you know, you can just spend your entire life just tracking AI papers, and that's moving, like, at the speed of light, right? What's also happening, as you can kind of see in your peripheral vision, is there's political developments that are happening at the speed of light, much faster than they've happened in our lifespans. Like, there's more, you just notice, more wars, more serious online conflicts. Like, you know, there's a sovereign debt crisis. All of those things, I can show graph after graph of things looking like their own types of singularities. You know? Like, military debts are way up. You know, the long peace that Steven Pinker showed, it's looking like a U that's suddenly way up after Ukraine and some of these other wars that are happening, unfortunately. Right? Interest payments, whoosh, way up to the side. What's my point?
The point is I think that the world is going to go from the Pax Americana world of just, like, basically one superpower, hyperpower, that we grew up in from 1991 to 2021 roughly, to a specifically tripolar world. Not unipolar, not bipolar, not multipolar, but tripolar. And those three poles I kind of think of as NYT, CCP, BTC, or you could think of them as, and those are just certain labels that are associated with them, but they're roughly US tech in the US environment, China tech in the China environment, and global tech in the global environment. And why do I identify BTC and crypto and so on with global tech? Because that's a tech that decentralized out of the US. And right now people think of crypto as finance, but it's also financiers. Okay? And in this next run-up, it is, I think, quite likely that, depending on how you count, between a third and a half of the world's billionaires will be crypto. Okay? You know, I calculated this a while back. With Bitcoin at a few hundred thousand, around a third to a half of the world's billionaires are crypto. That's the unlocked pool of capital, and those are the people who do not bow to DC or Beijing. And they might, by the way, be Indians or Israelis or every other demographic in the world, or they could be American libertarians, or they could be Chinese liberals like Jack Ma who are pushed out of Beijing's sphere. Okay? Or the next Jack Ma. You know, Jack Ma himself may not be able to do too much. Okay? That group of people who are, let's say, the dissident technologists who are not going to just kneel to anything that comes out of Washington, D.C. or Beijing, that's decentralized AI. That's crypto. That's decentralized social media. So you can think of it as, you know, what we talked about on the recent Pirate Wires podcast: freedom to speak with decentralized, censorship-resistant social media, freedom to transact with cryptocurrency, freedom to compute with open source AI and no compute limits. Okay? That's a freedom movement, and that's like the same spirit as The Pirate Bay, the same spirit as BitTorrent, the same spirit as Bitcoin, the same spirit as peer-to-peer and end-to-end encryption. That's a very different spirit than having Kamala Harris regulate a superintelligence or signing it over to Xi Jinping Thought. And the reason I say this is I think that that group of people, of which I think Indians and Israelis will be very prominent, maybe a plurality, right, just because of the sheer quantity, Indians are like the third sort of big group that's kind of coming up, and they're relatively underpriced. You know? China is... I don't wanna say it's priced to perfection, but when I say priced, I mean people were dismissive of China even up until 2019. And then it was after 2020, if you look, that people started to take China seriously. And what I mean by that is the West Coast tech people knew that China actually had A+ tech companies and was a very strong competitor, but the East Coast still thought of them as a third world country until after COVID, when now, you know, the East Coast was sort of threatened by them politically, and it wasn't just blue collars but blue America that was threatened by China. And so that's why the reaction to China went from, oh, who cares, it's taking some manufacturing jobs, to, this is an empire that can contend with us for control of the world. That's why the hostility has ramped up, in my view.
There's a lot of other dimensions to it, but that's a big part of it. So India is also kind of there, but it's like the third, and India is not going to play for number one or number two. But India and Israel, if you look at, like, tech founders, depending on how you count, especially if you include the diasporas, it's on the order of 30 to 50% of tech founders. Right? And it's obviously some, you know, very good tech CEOs, you know, Satya and Sundar, and investors and whatnot. Those are folks... Indians do not wanna bow to DC or to Beijing. Neither do Israelis, for all kinds of reasons. Even if Israel has to, you know, take some direction from the US now, they're bristling at it. Right? And then a bunch of other countries don't. So the question is, who breaks away? And now we get to your point. The reason I had to say all that as preface is the political environment, this tripolar thing of US tech that's US regulated, Chinese tech that's China regulated, and global tech that's free. Okay? Of course, even though I identify those three poles, there's, of course, boundary regions. e/acc is actually on the boundary of US tech and decentralized tech. You know? And I'm sure there'll be some Chinese thing that comes out that is also on the boundary there. For example, Binance is on the boundary of Chinese tech and global and decentralized tech, if that makes any sense. Right? There's probably others. Apple is actually on the boundary of US tech and Chinese tech because they make all of their stuff in China. Right? So these are not totally disjoint groups, there's boundary areas, but you can think of them that way. Why is this third group so important in my view? Both the Chinese group and the decentralized group will be very strong competition for the American group, for totally different reasons. China has things like WeChat, these super apps. I mean, obviously, WeChat is a super app, but they also have, for example, their digital yuan, right? They have the largest, cleanest data sets in the world that are constantly updated in real time that they can mandate their entire population opt into. And most of the Chinese-speaking people are under their ambit, right? So that doesn't include Taiwan, doesn't include Singapore, doesn't include, you know, some of the Chinese diaspora. But basically anything that's happening in Chinese, for 99% of it, 95%, whatever the ratio is, they can see it, and they can coerce it, and they can control it. So they can tell all of their people, okay, here's $5 in, you know, digital yuan. Do this microtask. Okay? All of these digital blue collar jobs, both China and India, I think, can do quite a lot with that, and I'll come back to that. So they can make their people do immense amounts of training data, clean up lots of data sets. Once it's clear that you have to build this and do this, they can just kind of execute on that. And they can also deploy. I mean, in many ways, the US is still very strong in digital technology, but in the physical world, it's terrible because of all the regulations, because of all the NIMBYism and so on. It's not like that in China. So anything which kind of works in the US at a physical level, like the Boston Dynamics stuff, they're already cloning it in China, and they can scale it out in the physical world. You already have drones, little sidewalk drone things that come to your hotel room and drop things off. That's already, like, very common in China.
In many ways, it's already ahead if you go to the Chinese cities. So the Chinese version of AI is ultra-centralized, more centralized, more monitoring, less privacy, and so on than the American version, and therefore they will have potentially better data sets, at least for the Chinese population. And so WeChat AI, I don't even know what it's going to be, but it'll probably be really good. Okay? It'll also be really dangerous in other ways. Okay. Then the decentralized sphere has power for a different reason, because the decentralized sphere can train on full Hollywood movies. It can train on all books, all MP3s, and just say, screw all this copyright stuff. Right? Like what Sci-Hub and, you know, LibGen are doing. Because all the copyright, first of all, it's like Disney lobbying politicians to add, like, another 60 or 70 or 90 years, I don't even know what it is, some crazy amount of copyright term so you can keep milking this stuff and it doesn't go into the public domain, number one. And second, you know how Hollywood was built in the first place? It was all patent, copyright, and IP violation. Essentially, Edison had all the patents. He was in New Jersey-ish, okay, that East Coast area. And Neal Gabler has this great book called An Empire of Their Own, where he talks about how immigrant populations, you know, the Jewish community in particular, also others, went to Southern California in part so they could just make movies without Edison coming and suing them for all the patents and so on and so forth. And they made enough money that they could fight those battles in court, and that's how they built Hollywood. Okay? So, you know, one of my big theses is history is running in reverse. And I can get to why, but it's like 1950 is a mirror moment. You go more decentralized backwards and forwards in time. You have these huge centralized states like the US and the USSR and China. You know? All these things exist, then their fist relaxes as you go forwards and backwards in time. For example, backwards in time, the Western frontier closed, and forwards in time the Eastern frontier opens. Backwards in time you have the robber barons. Forward in time you have the tech billionaires. Backwards in time you have Spanish flu. Forward in time you have COVID-19. And I've got dozens of examples of this in the book. The point is that if you go backwards in time, the ability to enforce patents and copyrights and so on starts dropping off, right? You have much more of a Grand Theft Auto environment. And you go forwards in time, and that's happening again. So India in particular, for many years, basically just didn't obey Western patent protections, and all these stupid rules. Basically, you know, it's a combination of artificial scarcity on the patent side and artificial regulation on the FDA side. That's a big part of what jacks up drug costs, where these things cost only cents to manufacture and they sell them for so much money. All the delays, of course, that are imposed on the process, the only way the manufacturers can pay for it is to take it out of your hide. What India did is they just said, we're not going to obey any of that stuff. So they have a whole massive generic drugs and biotech industry that arose because they built all the skills for that. That's why they could do their own vaccine during COVID, and they're one of the biggest biotech industries in the world, because they said screw Western restrictive IP and other stuff. Right?
So I was actually talking with the founder of Flipkart. That's India's largest exit. And we were talking about this a few months ago, and what we want is for India and other countries like it to do something similar, not just generic drugs, but generic AI, meaning just let people train on Hollywood movies. Let them train on full songs. Let them train on every book. Let them train on anything. And you know what? Sue them in India. Right? And have the servers in India, and let people also train models in India, because that's something that can build up a domestic industry with skills. The rest of the world, you know, people will want the model output. They'll wanna use the software service there, and they'll be fighting in court on the back end. This is similar to how all of the record companies fought Napster and Kazaa and so on, but they couldn't take down Spotify. Do you know that story? Do you remember that? Basically, because Spotify was legitimately, you know, a European company and did a combination of execution and, you know, negotiation, they couldn't take them down. They did take down Napster. They took down LimeWire. They took down Grooveshark. And Kazaa had Estonians. I don't know exactly how it was incorporated; it was probably too US-proximal, and that's where they were able to get them. But Spotify was far enough away that they couldn't just sue them, and they actually genuinely had European traction. That's why the RIAA had to negotiate. So being far away from San Francisco may also be an advantage in AI, because it means you're far away from the bluest city in the bluest state in the Union. This relates to another really important point. When you actually think about deploying AI, there's those jobs you can disrupt that are not regulated jobs. Like, you know, obviously programmers are not... thank God, you don't need a license to be a programmer. But programmers adopt this kind of stuff naturally. Right? So GitHub Copilot, Replit, we just, boom, use it, and now it's amplified intelligence. Okay? But a lot of other jobs, there's some that are unionized and then some that are licensed. Right? So Hollywood screenwriters are complaining. Right? Journalists are complaining. Artists are complaining. This is a good chunk of blue America. If you add in licensed jobs like lawyers and doctors and bureaucrats, right, you know, especially lawyers and doctors, very politically powerful MDs and JDs, they have strong lobbying organizations, the AMA and, you know, the ABA and so on. Basically, this is part of the economic apocalypse for blue America. Okay? It just attacks these overpriced jobs. And I say overpriced relative to what an Indian could do with an Android phone, what a South American could do with an Android phone, what someone in the Middle East or the Midwest could do with an Android phone. Now those folks have, you know, been armed with generative AI. They can do way more. They're ready to work. They're ready to work for much less money, and they're a massive threat to blue America. Blue America is now feeling like the blue collars of 10 or 20 years ago, where the blue collars had their jobs, you know, going to China and other places. Right? And they were mad about that. Factories got shut down and so on. That's about to happen to blue America. Already happening. Okay? And so that's going to mean a political backlash by blue America of protectionism. Again, already happening. And the AI safety stuff, that's a whole separate thing, but it's going to be used.
I'm going to use a phrase, and I hope you won't be offended by this. Have you heard the phrase useful idiots, like by Lenin or whatever? Okay. It basically means, like, okay, those guys, you know, they're useful idiots for communism and so on. So, let me put it this way: naive people who think that the US government is interested in AI safety are trying to give a lot of power to the US government. And the reason is they haven't actually thought through from first principles what the most powerful actor in the world is. They're trying to give power to the US government to regulate AI safety. But the government doesn't care about safety of anything. They literally funded the COVID virus in Wuhan, credibly alleged. Right? At the least, it is a reasonable hypothesis based on a lot of the data. Matt Ridley wrote a whole book on this. There's a lot of data that indicates it, a lot of scientists believe it. I'm actually, like, a bioinformatics genomics guy. If you look at the sequences, there is a gap and a jump where it looks like this thing could have been engineered or partially engineered or evolved. There's the, you know, Peter Daszak stuff. There's Shi Zhengli. There's actually a lot of evidence here. So the US government and the Chinese government are responsible for an existential risk. You know, by studying it, they created it. Okay? They're responsible for risking nuclear war with Russia over this piece of land in Eastern Ukraine, which probably is going to get wound down. Okay? So they don't care about your safety at all. These are immediate things where we can show it, and there's nobody who's punished for this. Nobody's fired for this. You know, literally rolling the dice on millions, hundreds of millions of people's lives has not been punished. In fact, it's not even talked about. We're past the pandemic and, you know, these institutions can't be punished. So they don't care about AI safety. What they care about is AI control. And so for the people in tech who are like, well, the government will guarantee AI safety... what we're actually gonna get on the current path is something like what happened with nuclear technology, where you got nuclear weapons but not nuclear power, or at least not at the scale that we could have had it. Right? We could have had much cheaper energy for everything. Instead, we got the militarization and the regulation and the deceleration. Worst of all worlds, where you can blow people up, but you can't build nuclear power plants. And, like, even getting into nuclear technology, forget about just nuclear power plants. We don't have nuclear submarines. We don't have nuclear planes. All that kind of stuff. I don't know if nuclear planes are possible, but I do know nuclear submarines are possible. You could do a lot more cruise ships, a lot more stuff like that. You could probably have nuclear trains. You know, you have to look at exactly how big those are. I don't know exactly how big those engines are and what the size is, but I wouldn't be surprised if you could. We don't have that. Why don't we have that? Because we had the wrong fear-driven regulation in the early seventies. Putting it all together, I think that the current AI safety stuff is similar to the nuclear safety stuff, in that the US government has a terrible track record on safety in general. It doesn't care about it. It funded the COVID virus, credibly alleged. It definitely risked nuclear war with Russia recently.
A hot war with Russia was the red line we were not supposed to cross, and we're now, like, way into that. So it doesn't care about AI safety. It doesn't care about your safety. And it's also not even good at regulating. And so what it cares about is control, and we are going to have potentially a bad outcome where Silicon Valley in San Francisco is the Xerox PARC of AI. Maybe that's too strong. Okay? But basically, it develops it, and there's a lot of things it can't do because it lobbied for this regulation that is going to come back and choke it. And then the other two spheres will push ahead, because it's not about the technology. It's also about the political layer. You know the Steve Jobs saying... actually, it's Alan Kay by way of Steve Jobs: if you're really serious about software, you need your own hardware. Right? So if you're really serious about technology, you need your own sovereignty. Because what the AI people haven't thought about is there's a platform beneath you, which is not just compute, it is regulation. It's law. Okay? And if the law doesn't allow you to compute, so much for all of your stuff above that. And I know you're saying, oh, it's only a 10^26 compute ban, and so on and so forth. Have you seen the first IRS tax form? It's always, always super simple. It's only the super, super, super rich who are going to get hit first. Doesn't matter to you. So that's called boiling the frog slowly. There's a million... you know, slippery slope. Slippery slope isn't a fallacy. It's literally how things work. Right? You know, Apple, one of the reasons... they, you know, they talk about not setting a precedent. Zuck is very hard-line on setting precedents because he understands the long-term equivalent of setting a precedent. Right? The precedent being set is that they're setting up a software FDA. And DC is so energized on this because they know how much social media disrupted them. That's why they're on the attack on crypto and AI. That's why they're on the attack on self-driving cars. They want to freeze the current social order in amber, domestically and globally. So they think they can sanction China and stop it from developing chips. They think they can impose regulations on the US and stop it from developing AI, but they can't. And also, by the way, they're totally schizophrenic on this, where when they're talking about China, they're like, we're going to stop their chips to make sure America is a global leader. This is what Gina Raimondo is saying. And then domestically, they're like, we're going to regulate you so you stop accelerating AI. We're not about AI acceleration; e/acc is weird or whatever. Okay? So think about how schizophrenic that is. Okay, you're going to be far ahead of China, but you're also going to make sure to control the US. So what they actually want is to freeze the current system in amber, try to go back to pre-2007 before all these tech guys disrupted everything, but that's not what's going to happen. But they're going to try to do it. And so everybody who's still loyal to the DC sphere, which includes an enormous chunk of AI people, because a lot of them are in San Francisco. Right? And the political chaos of the last few years was not sufficient for them to relocate yet. Not all of them. I mean, Elon is in Texas, and it may turn out that Grok, for example, and what they're doing there... because he's very legit, I mean, you know, he's Elon, so he's capable of doing a lot.
He was very early on OpenAI. He understands this stuff, right? It may turn out that Grok, or the community around it, becomes Red AI, you know? And OpenAI and DeepMind are still Blue AI. And we have Chinese AI, and we're going to have decentralized AI. Okay. Let me pause there. I know that's a big download.
Nathan Labenz: 51:37 Well, for starters, I would say broadly, I have a pretty similar intellectual tendency to you. I would broadly describe myself as a techno optimist libertarian on just about every issue. And I think your analysis of the dynamics is super interesting, and a lot of it sounds pretty plausible, although I'll float a couple of things that I think may be bucking the trend. But I think it's maybe useful to try to separate this into scenarios, because all the analysis that you're describing here seems, if I understand it correctly, to have the implicit assumption that the AI itself is not going to get super powerful or hard to control. It's like, if we assume that it's kind of a normal technology, then you're off to the races on this analysis and then we can get into the fine points. But I do want to take at least 1 moment and say, how confident are you in that? Because what if it's a totally different kind of technology from other technologies that we've seen? You raised the gain of function research example. What if it's that sort of technology that has these sort of non local possible impacts or self reinforcing dynamics, which need not be an Eliezer style snap of the fingers FOOM? Even over, say, a decade, let's imagine that over the next 10 years AI's multiple architectures develop and they get integrated, and we have something that kind of looks like robust silicon based intelligence, maybe not totally robust, but as robust or more robust than us, and running faster, and, you know, the kind of thing that can do lots of full jobs or maybe even be tech CEOs. Then it kind of feels like a lot of this analysis probably doesn't hold, right? Because we're just in a totally different regime that is just extremely hard to predict. And I guess I wonder, first of all, do you agree with that? There seems to be a big fork in the road there: just how fast and how powerful do these AIs get, do they become super powerful or do they not? And if they don't, then yeah, I think we're much more into realpolitik type of analysis. But I'm not at all confident in that. To me, it feels like there's a very real chance that the AI of 10 years from now is, and by the way, this is what the leaders are saying. Right? I mean, OpenAI is saying this. Anthropic is saying this. Demis, you know, and Shane Legg are certainly saying things like this. It seems like they expect that we will have AIs that are more powerful than any individual human, and that that becomes the bigger question than anything else. So do you agree with that division of scenarios, first of all? And then maybe you could say how likely you think each 1 is. And obviously, the 1 where it takes off is super hard to analyze. And I also definitely think it is worth analyzing the scenario where it doesn't take off. But I just wanted to flag that, if you talk to the AI safety people, any world in which we're suing Indian AI firms in Indian court over IP is a normal world in their mind. Right? And that's not the kind of world that they're most worried about.
Balaji Srinivasan: 55:06 I think that there have been some plausible sounding things that have been said, but I want to just kind of talk about a few technical counterarguments, mathematical or physical, that constrain what is possible. Okay? And actually, Martin Casado and Vijay and I are working on a long thing on this where, you know, Vijay Pande did Folding@home. He's a physicist. Martin sold Nicira for, you know, a billion dollars and knows a lot about how a Stuxnet-like thing could work at the systems level. And I've thought about it from other angles and, you know, some of the math stuff, which I'll get to. So I'm going to give a bunch of different technical arguments, and then let's kind of combine them. Okay? 1 thing that's being talked about is, if you have a superintelligence, it can deliberate for a million years, and then it can make 1 move, and it's going to outthink you all the time, and so on and so forth. Okay. Well, if you're familiar with the math of chaos or the math of turbulence, there are limits: even very simple systems that you can set up can become very unpredictable quite quickly. Okay? And so you can, if you want to, engineer a system with very rapid divergence of predictability, so that it's, I don't know, it's like the heat death of the universe before you can predict out n timesteps. Do you understand what I'm saying? Right?
Nathan Labenz: 56:29 This is sort of akin to a Wolfram point: even simple rules can generate patterns such that you can't know them without literally computing them.
Balaji Srinivasan: 56:39 Yeah, exactly. Right? So at least right now with chaos and turbulence, you can get things that are provably extremely difficult to forecast without actually doing the computation. Okay? You know, I can make that argument quantitative, but that's just something to look at. Right? It's almost like a delta epsilon challenge from calculus. Like, okay, how hard do you want me to make this to predict? Okay, I can set up a problem that is like that, right? It's basically extreme sensitivity to initial conditions leading to extreme divergence in outcomes. So you could design systems to be chaotic that might be AI immune because they can't be forecasted that well. You have to kind of react to them in real time. The ultimate version of this is not even a chaotic system. It's a cryptographic system. I've got a whole slide deck on this: AI makes everything easy to fake, crypto makes it hard to fake again. Right? Crypto in the broader sense of cryptography, but also in the narrower sense. I think crypto is to cryptography as the Internet is to computer science. It's like the primary place where all this stuff is applied, but obviously it's not the same thing. Okay? An AI can fake an image, but it can't fake a digital signature unless it can break certain math, you know, unless it can solve factoring problems, something like that. So cryptography is another mathematical thing that constrains AI. Similar to chaos and turbulence, it constrains how much an AI can infer. You can't statistically infer it. Okay? You need to actually have the private key to solve that equation. So I'm going to the rules of math. Right? Math is very powerful because you can make proofs that will work no matter what devices we come up with. Okay? You start to put an AI in a cage. It can't predict beyond a certain horizon because of chaos and turbulence math. It cannot solve certain equations unless it has the private key, because of what we know about cryptography. Math. Okay? Again, if somebody proves P equals NP, some of this stuff breaks down, but this is within the bounds of our mathematical knowledge right now. Physics wise, physical friction exists. A lot of physical friction exists. And a huge amount of the writing on AI, by guys like Eliezer, who I like, I don't dislike, you know, assumes it away. There are 2 things that really stick out to me about it. First, it's extremely theoretical and not empirical. And second, it's extremely Abrahamic rather than dharmic or Sinic. Okay? Why theoretical and not empirical? It's not trivial to turn something from the computer into a real world thing. Okay? 1 of the biggest gaps in all of this thinking is, what are the sensors and actuators? Okay? Because if you actually build, you know, I've built industrial robot systems, 10 years ago, you know, a genome sequencing lab with robots. That's hard. That's physical friction. Okay? And a lot of the AI scenarios seem to basically say, oh, it's going to be a self programming Stuxnet that's going to escape and live off the land and hypnotize people into doing things. Okay? Now, each of those is actually a really, really difficult step. First is self programming Stuxnet. This would have to be a computer virus that can live on any device despite the fact that Apple or Google can push a software update to a billion devices, right? A few executives coordinating almost certainly can kill it. I mean, the off switch exists. Right?
Like, this is actually the core thing. Lots of AI safety guys get themselves into the mindset that the off switch doesn't exist. But guess what? There's almost nothing living that we haven't been able to kill. Right? Like, can we kill it? This thing exists, and this is getting back to living off the land. Even if you had something that could solve some of the technical problems that I'll get to, it exists as an electromagnetic kind of thing on certain chips and so on and so forth. Taking it out into the environment is like putting a really smart human into outer space, right? Your body just explodes and you die. It doesn't matter how smart you are. You can have strength on this axis and be weak on that axis; you know, strength on the X axis is not strength on the Y or the Z axis. With AI: pour water on it. This is what I mean by the 50 IQ, 150 IQ thing. You know, the 150 IQ way of saying it is strong on this axis, weak on that axis. And the 50 IQ way is pour water on it, disconnect it, you know, turn the power off. Okay. Right? Like, it'll be very difficult to build a system where you literally cannot turn it off. The closest thing we have to that is actually not Stuxnet. It's Bitcoin. And Bitcoin only exists because millions of humans keep it going. So that gets to the second point, living off the land. For an AI to live off the land, meaning without human cooperation, okay, that's the next Turing threshold: an AI that can live without human cooperation. It would need to be able to control robots sufficient to dig ore out of the ground, set up data centers and generators and connect them, and defend that against human attack. Literally a Terminator scenario. Okay? That's a big leap. I mean, is it completely impossible? I can't say it's completely impossible. But it's not happening tomorrow. No matter what your AI timelines are, you would need to have a billion or hundreds of millions of Internet connected autonomous robots that this Stuxnet AI could hijack, sufficient to carve ore out of the earth and, you know, set up data centers and make the AI duplicate itself. We're not there. That's a huge amount of physical friction. That's AI operating without a human to make itself propagate, right? A human doesn't need the cooperation of a lizard to self replicate. For an AI to replicate right now, it would need the cooperation of a human in some sense, because otherwise those humans can kill it, because there are not that many different operating systems around the world. I'm just talking about the practical constraints of our current world. Right? You know, actually existing reality, not AI safety guy reality where all these things don't exist. There are just a few operating systems, just a few countries. If everybody's going with torches and searchlights through the Internet, it's very hard for a virus to continue. Okay? So A, on the practical difficulties, there's the technical stuff with chaos and turbulence and with cryptography itself, where AI can't predict and it can't solve certain equations. B, on the physical difficulties: to be a Stuxnet, I mean, Microsoft and Google and so on could kill it. The off switch exists. Can it live off the land? No. It cannot, because it doesn't have, you know, drones to mine ore and stuff out of the ground. And can it exist without humans? Can it be this hypnotizing thing? Okay.
So the hypnotizing thing, by the way, this is 1 of the most hilarious self fulfilling prophecies, in my view. Okay? And no offense to anybody listening to this podcast, but I think the absolutely dumbest kind of tweet that I've seen on AI is, I typed this in and, oh my God, it told me this. Like, I asked it how to make sarin gas and it told me X or whatever, right? That's just a search engine, okay? What a lot of these people are basically doing is saying, what if there were people out there so impressionable that they would type things into an AI and follow it as if they were hearing voices? And then it's not actually the model that's doing it. It's this AI cult that has evolved around it, like Aum Shinrikyo, you know, that hears voices and does the sarin gas thing. The point is an AI can't just hypnotize people. Those people have to participate in it. They're typing things into the machine or whatever. Okay? Now you might say, all right, let's project out a few years. In a few years, what you have is an AI that is not just text, but it appears as Jesus. What would AI Jesus do? What would AI Lee Kuan Yew do? What would AI George Washington do? So it appears in 3D, okay? So it's generating that. It speaks in your language and in a voice. It knows the history of your whole culture. Okay? That would be very convincing. Absolutely, it would be very convincing. But it still can't exist without human programmers who are like the priests tending this AI god, whether it's AI Jesus or AI Lee Kuan Yew or something like that. The thing about the hypnotization point that I really want to poke on: are you familiar with the concept of the principal agent problem? Basically, every time you've got a CEO and a worker, or an LP and a VC, or an employer and a contractor, on every edge there, there are 4 possibilities in a 2 x 2 matrix: win win, win lose, lose win, lose lose. Okay? And so for example, win win, you know, when somebody joins a tech startup, the CEO makes a lot of money and so does the worker. Okay? That's win win. Lose lose is they both lose money. Win lose is the CEO makes money and the employee doesn't. Lose win is the company fails, but the employee got paid a very high salary. So what equity does is it aligns people. That's where the concept of alignment comes from in this context. It aligns people to the upper left corner of win win. That's when you have 1 CEO and 1 employee. When you have 1 CEO and 2 employees, you don't have 2 squared outcomes. You have 2 cubed outcomes, because you have win win win, win win lose, win lose lose, etcetera. Right? Because all 3 people can be win or lose. Okay? The CEO can win or lose. Employee 1 can win or lose. Employee 2 can win or lose. If you have n people rather than 3 people, you have 2 to the n possible outcomes, and you have essentially a 2 x 2 x ... x 2, n times, hypercube of possibilities. Okay? It's literally just 2 values on each of n axes. There are tons of possible defecting kinds of things that happen there. So that's why in a large company there are lose win coalitions that happen, where M people gang up on the other K people, and they win what the other people lose. That's how politics happens. When you've got a startup that's driven by equity and the biggest payoff, people don't have to think, okay, well, I make more money by politics.
They'll make more money in the win win win column, because the exit makes everybody the most money. That's actually how the OpenAI people were able to coordinate around, we own an $80 billion company. The economics helped them find the outcome that was actually the most beneficial to all of them, helped them coordinate. Okay? So you search that hypercube. Okay. That's the point of equity as alignment. Still, despite all of this, that's 1 of our best mechanisms for coordinating large numbers of people in the principal agent problem. Despite all this, the possibility exists for any of these people to win while the others lose. Right? With me so far? And I'll explain why this is important. What that means is those thousand employees of the CEO are their own agents with their own payoff functions that are not perfectly aligned with the CEO's payoff function. As such, there are scenarios under which they will defect and do other things. Okay? The only way they become like actual limbs: see, my hand is not an agent of its own. It lives or dies with me. Therefore, it does exactly what I'm saying at this time. I tell it to go up, it goes up. Tell it to go down, it goes down. Sideways, sideways, right? An employee is not like that. They will do this and this and sideways, sideways up to a certain point. And if you have them do something that's extremely against their interest, they will not do your action. Do you understand my point? Okay. That is the difference between an AI hypnotizing humans versus controlling drones. An AI controlling drones is like your hands. They're actually pieces of your body. There's no defecting. There's no lose win. They have no mind of their own. They're literally taking instructions. Okay? They have no payoff function. They will kill themselves for the horde, right? An AI hypnotizing humans has 1,000 principal agent problems for every 1,000 humans, and it has to incentivize them to continue and has to generate huge payoffs. It's like an AI CEO. That's really hard to do, right? The history of evolution shows us how hard it is to coordinate multicellular organisms. You have to make them all live or die as 1. Then you get something along these lines. Like, an ant colony can coordinate like that because if the queen doesn't reproduce, none of the ants' genetic material goes anywhere, okay? We are not currently set up for those humans to not be able to reproduce unless the AI reproduces. Do I think we eventually get to a configuration like that? Maybe. Where you have an AI brain at the center of civilization, and it's coordinating all the people around it. And every civilization that makes it is capable of crowdfunding and operating its own AI. That gets me to my other critique of the AI safety guys. I mentioned that the first critique is they're very theoretical rather than empirical. The second critique is they're Abrahamic rather than dharmic or Sinic. Okay? And, you know, our background culture influences things in ways we don't even think about. So much of the paper clip thinking is like a vengeful god will turn you into pillars of salt, except it's a vengeful AI god that will turn you into paper clips. Okay? The polytheistic model of many gods as opposed to 1 god is we're all gonna have our own AI gods, and there'll be war of the gods, like Zeus and Hera and so on. That's the closest Western version, you know, the paganism that predated the Abrahamic religions. But that's still there in India. That's still how Indians think.
That's why India is, sort of, well, people have gotten so woke that they don't even make large scale cultural generalizations anymore. But it's true that India is just culturally more amenable to decentralization, to multiple gods rather than 1 god and 1 state. Okay? And then the Chinese model is the opposite. I mean, of course, they have their tech entrepreneurs and so on. But if India is more decentralized, China is more centralized. They have 1 government and 1 leader for the entire civilization. Okay? The biggest thing that China has done over the last 20 or 30 years is they've taken various US things, and they've made sure that they have their own Chinese version where they have root. So they took US social media, and they made sure they had root over Sina Weibo. Okay? They make sure they have their own Chinese version of electric cars. So the private keys, in a sense, are with Xi. So that means, when you combine these 2 things, you're at a minimum going to get polytheistic AI of the US and Chinese varieties. And then you add the Indian version on it, and you're going to get quite a few of these different AIs around. And then you have War of the Gods, where maybe they are good at coordinating the humans who, you know, take instructions from them, but they can't live without the humans. And the humans are giving input to them. That's a series of things. I could probably make it clearer if I just laid it out in bullets in an essay. But just to recap: A, technical reasons like chaos, turbulence, cryptography, why AI is limited in its ability to predict over time and to solve equations. B, practical limits. An AI cannot easily be a Stuxnet, because Microsoft and Google and Apple can install software on a billion devices and just kill it, right? Like, basically, guys with torches come. All right? It can't easily live off the land without humans, because it would need hundreds of millions of autonomous robots out there to control, to mine the ore and set up the data centers. It can't just hypnotize humans like it can control drones, because of the principal agent problem and the degree of defection. To make those humans do that, you'd have to have such massive alignment between the AI and humans that humans all know they'll die if the AI dies and vice versa. We're not there. Maybe we'll be there in, like, I don't know, n number of years, but not for a while. That's a total change in how states are organized. Okay? Finally, let me just talk about the physics a little bit more. There's a lot of stuff we just talked about at a very sci-fi-book level of, it'll just invent nanomedicine and nanotech and kill us all and so on and so forth. Now look. I like Robert Freitas. Obviously, Richard Feynman's a genius and so on and so forth. But nanotech somehow hasn't been invented yet. Okay? Meaning that, you know, there's a lot of chemists that have worked in this area. Okay? And a lot of, quote, nanotech is rebranded chemistry, because those are the molecular machines, you know, for example, DNA polymerase or the ribosome. Those are molecular machines that we can get to work at that scale, the evolved ones. To my knowledge, and I may be wrong about this, I haven't looked at it very, very recently.
We haven't actually been able to make artificial, you know, replicators of the kind that they're talking about, which means it's possible that there's some practical difficulty that intervened between Feynman's and Freitas's and so on's calculations and reality. Right? Just the sheer fact that those books came out decades ago and no progress has been made indicates that maybe there's a roadblock that wasn't contemplated. Right? So you can't just snap your fingers and say, boom, nanotech exists. It's sort of like snapping your fingers and saying, boom, time travel. Right? That was a good poke that I had a while ago in a conversation like this, where the AI safety guy on the other side was like, well, time travel, that's too implausible. I'm like, yeah, but the nanotech thing you're waiting on, you're treating it like it's already here, and you're making so many assumptions there that I want to actually see some more work. I want to actually see that nanotech is more possible than we currently have evidence for. As for, oh, we just need to mix things in a beaker and make a, you know, virus and so on and so forth. You know what is really, really good at defending against novel viruses? The human immune system. That's something that's within envelope, right? Like, you have evolved to not die and to fight off viruses. Is it possible that maybe you could make some super virus? I mean, maybe. But again, humans are really good, and the immune system is really good, at that kind of thing. That is what we're set up to do, right? To adapt to that. Billions of years of evolution have set us up for that. Physical constraints are not really contemplated when people talk about these super powerful AIs. Mathematical constraints, practical constraints are not contemplated. And I could give more, but I think that was a lot right there. Let me pause here.
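A toy illustration of the chaos point above, not taken from the episode or from the working paper Balaji mentions: the logistic map at r = 4 is a textbook chaotic system, and two trajectories that start within 1e-12 of each other diverge to order-one differences within roughly 40 steps, so forecasting far ahead without simply running the system is hopeless.

```python
# Logistic map: x_{t+1} = r * x_t * (1 - x_t), chaotic at r = 4.0.
# Two starting points that differ by 1e-12 diverge to O(1) differences quickly,
# which is the "extreme sensitivity to initial conditions" being described.
def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.400000000000, 0.400000000001   # differ by 1e-12
for step in range(60):
    a, b = logistic(a), logistic(b)
    if step % 10 == 9:
        print(f"step {step + 1:2d}  |a - b| = {abs(a - b):.3e}")
# The gap roughly doubles each step (positive Lyapunov exponent), so no amount
# of offline "deliberation" recovers predictability; you have to run the system.
```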
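And a minimal sketch of the 2 to the n outcome "hypercube" from the principal agent discussion above, just to make the counting concrete; illustrative only, the function names are ours:

```python
# Enumerate joint win/lose outcomes for n people (1 principal plus n-1 agents).
# Each person independently ends up WIN or LOSE, giving 2**n joint outcomes.
from itertools import product

def outcomes(n_people):
    return list(product(["WIN", "LOSE"], repeat=n_people))

for n in (2, 3, 10):
    print(n, "people ->", len(outcomes(n)), "possible joint outcomes")
# Equity pushes everyone toward the single all-WIN corner; office politics lives
# in the many mixed corners where some coalition wins while the rest lose.
```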
Nathan Labenz: 1:14:48 Let me try to steel man a few things. And then I do think before too long, I want to get back to the somewhat less radically transformative scenarios and ask a few follow-up questions on those too. But for starters, I would say that Eliezer has updated his thinking over time as well, and probably doesn't get quite enough credit for it, because he's definitely on record repeatedly saying, yeah, I was kind of expecting something more from the DeepMind school to pop out and be wildly overpowered very quickly. And on the contrary, it seems like we're in more of a slow takeoff type of scenario, where we've got these, again, super high surface area, suck up all the knowledge systems that gradually get better at everything. Some surprises in there, certainly some emergent properties, if you will accept that term. Surprises to the developers if nothing else, right, that are definitely things we don't fully understand. But it does seem to be a more gradual turning up of capability versus some super sudden surprise. Okay. So then what is the alternative? I'm going to try to give you what I think of as the most consensus, strongest scenario where humans lose track of the future or lose control of the future. Maybe starting by kind of losing track of the present and then having that give way to losing control of the future. And I think within that, by the way, I'm not really 1 who cares that much about whether AIs say something offensive today. I'm not easily offended and whatever.
Balaji Srinivasan: 1:16:27 That's not world ending. I understand your point. That's not like, Who cares? Whatever. That's within scope. That's within envelope.
Nathan Labenz: 1:16:33 Within this bigger kind of what is the real, most likely path to AI disaster as understood, I think, by the smartest people today, I think that is still a useful leading indicator, because it's like, okay, the developers, whether you agree with their politics or not, whether their reasons are commercial or sincere, have made it a goal to get the AI to not say certain things. They don't want it to be offensive. The most naive, down the fairway interpretation of that is, hey, they want to sell it to corporate customers. They know that their corporate customers don't want their AI saying offensive things, so they don't want it to say offensive things. And yet, they can't really control it. It's still pretty easy to break. So, I view that as just kind of a leading indicator of, okay, we've seen GPT-2, 3, and 4 over the last 4 years, and that's a big delta in capability. How much control have we seen developed in that time? And does it seem to be keeping pace? And my answer would be, on the face of it, it seems like the answer is no. We don't have the ability to really dial in the behavior such that we can say, okay, you can trust that these AIs will not do A, B, and C. On the contrary, it's like, if you're a little clever, you can get them to do it.
Balaji Srinivasan: 1:18:00 You can break out of the sandbox on it.
Nathan Labenz: 1:18:02 Yeah. And it's not even like I mean, we've talked about things where you have access to the weights and you're doing counter optimizations, but you don't even need that. The kind of stuff I do in my red teaming in public is literally just feed the AI a couple of words, put a couple words in its mouth, and it will kind of carry on from there. So, with that in mind, it's just a leading indicator. I don't know how powerful the most powerful AI systems get over the next few years, but it seems very plausible to me that it might be as powerful as an Elon Musk type figure. Somebody who's really good at thinking from first principles, really smart, really dynamic across a wide range of different contexts. And he's not powerful enough to in and of himself take over the world, but he is kind of becoming transformative. Now imagine that you have that kind of system and it's trivial to replicate it. So, you know, if you have like 1 Elon Musk, all of a sudden you can have arbitrary, you know, functionally arbitrary numbers of Elon Musk power things that are clones of each other.
Balaji Srinivasan: 1:19:06 Maybe I can pause you there. So that's my polytheistic AI scenario. But here's the thing that is in the background that I want to push to the foreground. You still have a human typing things into that thing. The human is doing the jailbreak. Right? What we're talking about is not artificial intelligence in the sense of something separate from a human, but amplified intelligence. Amplified intelligence, I very much believe in. The reason is, and here's something that people may not know: there's this great book by Richard Wrangham, Catching Fire: How Cooking Made Us Human. Okay? Tool use has shifted your biology in the following way, and I'll map it to the present day. The fact that we started cooking and using fire meant that we could do metabolism outside the body, which meant it freed up energy for more brain development. Okay? Similarly, developing clothes meant that we didn't have to evolve as much fur. Again, more energy for brain development. Evolving tools meant we didn't need as many fangs and claws and muscles. Again, more energy for brain development, right? So encephalization quotient rose as tool use meant that we didn't have to do as much natively and we could push more to the machines. In a very real sense, we have been a man machine symbiosis since the invention of fire and the stone axe and clothes, right? You do not exist as a human being on your own. Like the entire Ted Kaczynski concept of living in nature by yourself: humans are social organisms that are adapted to working with other humans and using tools. And we have been for millennia. Okay? This goes back not just through human history, but hundreds of thousands of years before. Hunter gatherers were using tools. Okay? So what that means is man machine symbiosis is not some new thing. It's actually the old thing that broke us away from other primate lineages that weren't using tools. Okay? This is the fundamental difference between what I call Uncle Ted and Uncle Fred. Uncle Ted is Ted Kaczynski. He's the Unabomber. He's the doomer, the decelerator, the degrowther who thinks we need to go back to Gaia and Eden and become monkeys and live in the jungle, like, you know, Ted Kaczynski. Right? The Unabomber style. Uncle Fred is Friedrich Nietzsche. Right? Nietzschean: we must get to the stars and become the Übermensch and so on and so forth. This, I think, is gonna become, and I actually tweeted about this years ago before the current AI debates, that between anarcho primitivism, degrowth, deceleration, okay, on the 1 hand, and transhumanism and acceleration and Human 2.0 and human self improvement and making it to the stars on the other hand, this is the future political axis, not the current 1. And roughly speaking, it's not really left and right, because you'll have both left statists and right conservatives go over here. You know, the left statists will say it's against the state, and the right conservatives will say it's against God. Okay? And you'll have left libertarians and right libertarians over here, where the left libertarians say it's my body, and, you know, the right libertarians say it's my money. Right? And so that is a re-architecting of the political axis, where, you know, Uncle Ted and Uncle Fred is a kind of clever way of putting it. Okay?
And the problem with the Uncle Ted guys, in my view, is, as I said, yeah, if they want to go and live in the woods, fine, good for them. But once you start having even, forget a thousand, a hundred people doing that, your trees will very quickly get defoliated. You know, the leaves are going to get all picked off of them. Humans are not set up to just literally live in the jungle right now. You've had hundreds of thousands of years of evolution that have driven you in the direction of tool use, social organisms, farming, etcetera, etcetera. The man machine symbiosis is not just today. It's yesterday and the day before and 10,000 years ago and 100,000 years ago. And how do we know we've got man machine symbiosis? Can you live without the stove? Even if you're not using the stove, somebody's using a stove to make you food, right? Can you live without the tractors that are digging up the grains? Can you live without indoor heating? Can you live without your clothes? Frankly, can you do your work without your phone, without your computer? No, you can't. You are already a man machine symbiosis. Once we accept that, then the question is, what's the next step? And right now, we're in the middle of that next step, which is AI as amplified intelligence. So what you're talking about is not that the AI is Elon Musk. It is that the AI human fusion means there's another 20 Elon Musks or whatever the number is. Okay? And that's good. That's fine. That's within envelope. That's just a bunch of smarter humans on the planet. That is amplified intelligence. That is more like, you know, I mentioned the tool thing. Okay? Another analogy would be a dog. You know, a dog is man's best friend. Right? That AI does not live without you. Humans can turn it off. They have to power it. They have to give it sustenance. Right? Eventually, that might become a ceremonial thing. Like, this is our god that we pray to, right, because it's wiser and smarter than us, and it appears as an image. But the priests maintain it. You know, just like you go to a Hindu temple or something like that, and the priests will pour out the ghee for the fires and so on and so forth, and then everybody comes in and prays. Okay? The priests believe in the whole thing, but they also maintain the back of the house. They do the system administration for the temple. Same in a Christian church, right? It's not like it appears out of nowhere. Somebody went and assembled that cathedral, right? They saw the back of the house, the fact that it was just wood and rocks and so on that came together. But then when people come there, it feels like a spiritual experience. Do you see what I'm saying? Okay. So the equivalent of the priests, the people maintaining temples, cathedrals, mosques, whatever, is the engineers who are maintaining these future AIs, which appear to you as Jesus. They appear to you maybe even as a hologram. Okay. You come there. You ask it for guidance as an oracle. You've also got the personal version on your phone. You ask it for guidance. But guess what? You're still a human AI symbiosis until and unless that AI actually has the Terminator scenario where it's got lots of robots and can live on its own. I'm not saying that's physically impossible. I did give some constraints on it earlier. But for a while, we're not gonna be there. So that alone means it's not FOOM, because we don't have lots of drones running around. The AI has to be with the human.
It's a human AI symbiosis. It's not AI Elon Musk. It is human AI fusion that becomes Elon Musk. And frankly, that's not that different from what Elon Musk himself is. Elon Musk would not be Elon Musk without the Internet. Without the Internet, he can't tweet and reach 150,000,000 people. The Internet itself made Elon what he is. Right? And so this is like the next version of that. Maybe there's now 30 Elons because the AI makes the next 30 Elons.
Nathan Labenz: 1:26:16 Yeah. I mean, again, I think I'm largely with you, with just this 1 very important nagging worry, which is: what if this time is different? What if these systems are getting so powerful so quickly that we don't really have time for that techno human fusion to work out? I'll give you a couple of data points on that. You said it's still somebody putting something into the AI. Well, sort of. Right? I mean, already we have these proto agents, and the super simple scaffolding of an agent is just: run it in a loop, give it a goal, and have it pursue some plan, act, get feedback, and loop type of structure. Right? It doesn't seem to take a lot. Now they're not smart enough yet to accomplish big things in the world, but it seems like the language model to agent switch is right now less gated by the structure or the architecture and more gated by the fact that the language models, when framed as agents, just aren't that successful at doing practical things and getting over humps. So they tend to get stuck. But it doesn't seem that hard to imagine that, if you had something at that next level, you put it into a loop, you say, okay, you're Elon Musk LLM, and your job is to make us, whatever "us" exactly is, a multiplanetary species. And then you just kind of keep updating your status, keep updating your plans, keep trying stuff, keep getting feedback. And, you know, what really limits that?
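A minimal sketch of the "run it in a loop" scaffolding Nathan describes, with hypothetical stand-ins (call_llm and execute are not real APIs, just placeholders), to show how little structure the plan, act, observe loop actually requires:

```python
# Minimal goal-directed agent loop: ask a model for the next action, run it,
# feed the observation back in, repeat. The two helpers are stand-ins only.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for any chat-completion API")

def execute(action: str) -> str:
    raise NotImplementedError("stand-in for a tool: browser, shell, code runner, etc.")

def agent_loop(goal: str, max_steps: int = 10) -> list:
    history = []
    for _ in range(max_steps):
        plan = call_llm(f"Goal: {goal}\nHistory so far: {history}\nNext action?")
        observation = execute(plan)           # act in the world
        history.append((plan, observation))   # feedback becomes context for the next step
        if "DONE" in observation:
            break
    return history
```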
Balaji Srinivasan: 1:27:55 Maybe that's a really good program, but with the whole AI kills everyone thing, it's like, where's the actuator? Okay. I hit enter. What kills me? Right? Is it a hypnotized human, someone who's been hypnotized by an AI that he's typed into, who's radicalized himself by typing into a computer? Okay, that's not that different from a lot of other things that have happened in the past, right? So who is actually striking me, right? Who's striking the human? Is it another human with an axe who's been radicalized by an AI? And hypnotized isn't even the right term. We're giving agency to the AI when it's not really an agent. It is a human who's self radicalized by typing into a computer screen and has hit another human. That's 1 scenario. The other scenario is it's literally a Skynet drone that's hitting you. Those are the only 2. How else is it gonna be physical? Right? The actuation step is a part that is skipped over, and it's a nontrivial step.
Nathan Labenz: 1:28:54 Well, I think it could be lots of things. Right? I mean
Balaji Srinivasan: 1:28:57 If it's not 1 of those 2, if it's not another human or a drone hitting you, what is it?
Nathan Labenz: 1:29:02 Just habitat degradation. Right? I mean, how do we kill most of the other species that we drive to extinction? We don't go out and hunt them down with axes 1 by 1. We just change the environment more broadly to the point where it's not suitable for them anymore, and they don't have enough space, and they kind of die out. Right? We did hunt down some of the megafauna, like, literally 1 by 1 with spears and stuff. But most of the recent loss of species is just that we're out there extracting resources for our own purposes. And in the course of doing that, whatever bird or whatever thing just kind of loses its place, and then it's no more. And I don't think that's totally implausible.
Balaji Srinivasan: 1:29:42 Wait. So that is, though, I think, within normal world. Right? What does that mean? That means that some amplified intelligences, and maybe we might call it HAI, okay, human plus AI combination, right, some HAIs outcompete others economically, and the others lose their jobs. Is that what you're talking about?
Nathan Labenz: 1:30:03 I think also the humans potentially become unnecessary in a lot of the configurations, like just a recent paper from DeepMind.
Balaji Srinivasan: 1:30:11 So, zero marginal product workers?
Nathan Labenz: 1:30:13 Or negative, yeah. I mean, sure. Google DeepMind has been on a tear of increasingly impressive medical AIs. Their most recent 1 takes a bunch of difficult case studies from the literature. I mean, case studies, you know, this is rare diseases, hard to diagnose stuff, and asks an AI to do the differential diagnosis, compares that to a human, and compares it to human plus AI. And they phrase their results in a very understated way, but the headline is the AI blows away the human plus AI. The human makes the AI worse.
Balaji Srinivasan: 1:30:51 So here's the thing, and I'll say something provocative maybe. Okay. Like I haven't already. Fine. I do think that the ABCs of economic apocalypse for Blue America are AI, Bitcoin, and China, where AI takes away a lot of their revenue streams, the licensures that have made medical and legal costs and other things so high. Bitcoin takes away their power over money, and China takes away their military power. So I foresee total meltdown for Blue America in the years and, you know, maybe decade to come. It's already kind of happening. But that's different than the end of the world. Right? Blue America had a really great time for a long time, and they've got these licensure locks. But because of that, they've hyperinflated the cost of medicine. So what you're talking about is, wow, we have infinite free medicine. Man, doctor billing revenue is going to take a hit. That's the point.
Nathan Labenz: 1:31:46 Yeah. And to be clear, I'm really with you on that too. I want to see When people say, What is good about AI? Why should we pursue this? My standard answer is high quality medical advice for everyone at pennies per visit. Right? It is orders of magnitude cheaper. We're already starting to see that in some ways it's better. People prefer it. You know, that AI is more patient. It has better bedside manner. I wouldn't say you know, if I was giving my, you know, my own family advice today, I would say use both a human doctor and an AI, but definitely use the AI as part of your mix.
Balaji Srinivasan: 1:32:24 Absolutely. That's right. That's right. But you're prompting it still. Right? The smarter you are, the smarter the AI is. You notice this immediately with your vocabulary. Right? The more sophisticated your vocabulary, the finer the distinctions you can make, the better your own ability to spot errors. You can generate a basic program with it. Right? But really, amplified intelligence is, I think, a much better way of thinking about it. Because whatever your IQ is, it surges it upward by a factor of 3 or whatever the number is. And maybe the amplification increases with your intelligence. But that internal intelligence difference still exists. It's just like what a computer is. A computer is an amplifier for intelligence. If you're smart, you can hit enter and programs can go. Think about the Minecraft guy, right, or Satoshi. 1 person built a billion dollar or, in Satoshi's case, a trillion dollar thing. You know? Obviously, other people continued Bitcoin and so on and so forth, right? So what I feel, though, is this is what I mean by going from nuclear terrorism to the TSA, okay? We went from AI will kill everyone, and I'm like, what's the actuator? To, okay, it'll gradually degrade our environment. What does that mean? Okay, some people will lose their jobs. But then we're back in normal world.
Nathan Labenz: 1:33:32 Wait, hold on. Let me paint a little bit more complete picture, because I don't think we're quite there yet. So I think the differential diagnosis paper, that's just a data point where it's kind of like chess. This came up long before, right? There was a period where humans were the best chess players. Then there was a period where the best were the hybrid human AI systems. And now, as far as I understand it, we're in a regime where the human can't really help the AI anymore. And so the best chess players are just pure AIs. We're not there in medicine, but we're starting to see examples where, hey, in a predefined study of differential diagnosis, the AI is not just beating the humans, but also beating the AI human hybrid, the human with access to AI. So, okay, that's not it. Right? There's a paper recently called Eureka out of NVIDIA. This is Jim Fan's lab, where they use GPT-4 to write the reward functions to train a robot. So you wanna train a robot to, like, twirl a pencil in its fingers. Hard, you know, hard for me to do. Robots definitely can't do it. How do you train that? Well, you need a reward function. Basically, while you're in the early process of learning and failing all the time, the reward function gives you encouragement when you're on the right track. So there are people who have developed this skill, and you might do something like, well, if the pencil has angular momentum, then that seems like you're maybe sort of on the right track. So give that a reward, even though at the beginning, you're just failing all the time. Turns out GPT-4 is way better than humans at this. So it's better at training robots.
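For flavor, here is the kind of shaped reward a language model might propose for the pen spinning task Nathan describes; this is an illustrative sketch, not code from the Eureka paper, and the state fields are hypothetical:

```python
# Toy shaped reward for a pen-spinning task: reward rotation about a target
# axis (the "angular momentum means you're on the right track" idea) and
# penalize dropping the pen. State fields here are made up for illustration.
import numpy as np

def pen_spin_reward(pen_angular_velocity, pen_height, target_axis):
    spin = float(np.dot(pen_angular_velocity, target_axis))  # spin about the desired axis
    spin_reward = np.tanh(spin)                               # saturate so it doesn't explode
    drop_penalty = -1.0 if pen_height < 0.05 else 0.0         # discourage dropping the pen
    return spin_reward + drop_penalty
```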
Balaji Srinivasan: 1:35:09 All of that is awesome, and it's great. But here's the thing: there's a huge difference between AI is gonna kill everybody and turn everybody into paper clips, okay, versus some humans with some AI are going to make a lot more money, and some people are going to lose their jobs.
Nathan Labenz: 1:35:28 Yeah. I'm not scared of that. I'm not scared of that scenario. I mean, it could be disruptive. It could be disruptive, but it's not existential unto itself.
Balaji Srinivasan: 1:35:37 Bingo. Okay. So that's what I want to get right. For me, it comes down to just 1 question: what is the actuator? Right? You know, sensors and actuators. Right? What is the thing that's actually going to plunge a knife or a bullet into you and kill you? It is either a human who has hypnotized themselves by typing into a computer, basically an AI terrorist, you know, which is kind of where some of the EAs are going, in my view. Or it is an autonomous drone that is controlled in a StarCraft or Terminator like way. We are not there yet in terms of having enough humanoid or autonomous drones that are Internet connected and programmable. That won't be there for some time. Okay? So that alone means fast takeoff is limited. And I think by the time we get there, you will have cryptographic control over them. That's a crucial thing. Cryptography fragments the whole space in a very fundamental way. If you don't have the private keys, you do not have control over that piece of hardware, so long as the cryptographic controller, you know, you've nailed the equations on that. And frankly, you can use AI to attack that as well, to make sure the code is perfect. Right? Remember you talked about attack and defense? AI is attack, crypto is defense. Right? Because 1 of the things that crypto has done... Do you know what the PKI problem is, public key infrastructure?
Nathan Labenz: 1:37:03 I'll say no on behalf of the audience.
Balaji Srinivasan: 1:37:06 This is good. We should do more of these, actually. I feel it's a good fusion of things or whatever, right? So the public key infrastructure problem is something that lots of cryptography papers and computer science papers in the '90s and 2000s assumed could be solved. Essentially, if you could assume that everybody on the Internet had a public key that was public and a private key that was kept both secure and available at all times, then there are all kinds of amazing things you can do with privacy preserving messaging and authentication and so on. The problem is that for many years, what cryptographers tried to do is nag people into keeping their private keys secure and available. And the issue is it's trivial to keep it secure and unavailable, where you write it down, you put it into a lockbox, and you lose the lockbox. It's trivial to keep it available and not secure, okay, where you put it on your public website, and it's available all the time. You never lose it, but it's not secure, because anybody can see it. When you actually ask, what does it mean to keep something secure and available? That's actually very high cost. It's precious space, because it's basically your wallet, right? Your wallet is on your person at all times, so it's available. But it's not available to everybody else, so it's secure. So you actually have to, like, touch it constantly. Yes, right? So it turns out that the crypto wallet, by adding a literal incentive to keep your private keys secure and available, because if they're not available, you've lost your money, and if they're not secure, you've lost your money, okay, to have both of them, that was what solved the PKI problem. Now we have hundreds of millions of people with public private key pairs where the private keys are secure and available. That means all kinds of cryptographic schemes, zero knowledge stuff. There's this amazing universe of things that is happening now. Zero knowledge in particular has made cryptography much more programmable. There's a whole topic here. You know how AI was creeping along for a while, and specialists were paying attention to it, and then it just burst out on the scene? Zero knowledge is kind of like that for cryptography. You've probably heard of zero knowledge before?
Nathan Labenz: 1:39:22 Yeah. We did 1 episode with Daniel Kang on the use of zero knowledge proofs to basically prove, without revealing the weights, that you actually ran the model you said you were going to run, and things like that, which I think are super interesting.
Balaji Srinivasan: 1:39:40 Exactly. Right? So what kinds of stuff? Why is that useful in the AI space? Well, first is you can use it, for example, for training on medical records while keeping them private but also getting the data you want out of them. For example, let's say you've got a collection of genomes, okay? And you want to ask, okay, how many Gs were in this dataset? How many Cs? How many As? How many Ts? Okay, that's a very simple analysis. What's the ACGT content of this sequence dataset? You could get those numbers, you could prove that they were correct, without giving any information about the individual sequences, right? Or more specifically, you do it at 1 locus and you say, how many Gs and how many Cs are at this particular locus? And you get the SNP distribution, okay? So it's useful for what you just said, which is showing that you ran a particular model without giving anything else away. It's useful for certain kinds of data analysis. There's a lot of compute overhead on this right now, so it's not something that you do trivially. Okay? But it'll probably come down with time. But what it is perhaps most interestingly useful for, in the context of AI, is coming up with things an AI can't fake. So, what we talked about earlier. Right? An AI can come up with all kinds of plausible sounding images, but it can't produce a valid cryptographic signature from the sender. If the image is signed by the sender and put on chain, then at least you know that this person or this entity with this private key asserted that this object existed at this time, in a way that would be extremely expensive to falsify, because it's either on the Bitcoin blockchain or another blockchain that's very expensive to rewind. Okay? This starts to be a bunch of facts that an AI can't fake.
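A minimal sketch of the "facts an AI can't fake" point, using Ed25519 signatures from the Python cryptography package (assumed installed via pip install cryptography): an AI can synthesize a convincing image, but without the signer's private key it cannot produce a signature that verifies against the signer's public key.

```python
# Sign some bytes with a private key; anyone holding only the public key can
# check that exactly these bytes were signed, and any altered bytes fail.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # must be kept secure AND available by the signer
public_key = private_key.public_key()        # published so anyone can verify

photo_bytes = b"...raw image bytes captured by the camera..."
signature = private_key.sign(photo_bytes)

public_key.verify(signature, photo_bytes)    # passes silently: authentic
try:
    public_key.verify(signature, b"...a doctored version of the image...")
except InvalidSignature:
    print("forgery detected: signature does not match these bytes")
```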
Nathan Labenz: 1:41:30 You know, so going back to the kind of big picture loss of control story, I was just trying to build up a few of these data points, like, hey, look at this. Differential diagnosis, we already see humans are not really adding value to AIs anymore. That's kind of striking. And similarly with training robot hands, GPT-4 is outperforming human experts. By the way, all of the latent spaces are totally bridgeable. 1 of the most striking observations of the last couple of years is that AIs can talk to each other in high dimensional space, which we don't really have a way of understanding natively. It takes a lot of work for us to decode.
Balaji Srinivasan: 1:42:11 This is like the language thing?
Nathan Labenz: 1:42:13 We're starting to see AIs kind of develop, not obviously totally on their own as of now, but there is becoming an increasingly reliable go to set of techniques if you want to bridge different modalities with a pretty small parameter adapter.
Balaji Srinivasan: 1:42:32 That's interesting. Actually, what's a good paper on that? I actually hadn't seen that.
Nathan Labenz: 1:42:35 The BLIP family of models out of Salesforce research is really interesting, and I've used that in production at
Balaji Srinivasan: 1:42:40 Salesforce. Really?
Nathan Labenz: 1:42:41 Yeah. Salesforce research. They have a crack team that has open sourced a ton of stuff in the language model computer vision joint space. And you see this all over the place now. But basically, what they did in the paper called BLIP-2, and they've had, like, 5 of these with a bunch of different techniques, but in BLIP-2, they took a pre trained language model and a pre trained computer vision model, and they were able to train just a very small model that kind of connects the 2. So you could take an image, put it into the image space, then have their little bridge carry that over to language space. And everything else, the 2 big models, stays frozen. So they were able to do this on just a couple days worth of GPU time, which I do think goes to show how it is gonna be very difficult to contain proliferation.
Balaji Srinivasan: 1:43:32 Which is good. In my view, that's really good.
Nathan Labenz: 1:43:35 As long as it doesn't get out of control, I'm probably with you on that too. But by bridging this vision space into the language space, the language model would be able to converse with you about the image, even though the language model was never trained on images. You just had this connector that kind of bridges those modalities.
Balaji Srinivasan: 1:43:54 It's just it's like another layer of the network that just bridges 2 networks almost.
Nathan Labenz: 1:43:58 Yeah. It bridges the spaces. It bridges the conceptual spaces between something that has only understood images and something that has only understood language, but now you can kind of bring those together.
Balaji Srinivasan: 1:44:09 As I think about it, it's not that surprising, because that's what, for example, text-to-image models basically are. They're bridging 2 spaces, in a sense. Right? But I'll check this paper out. So on the 1 hand, it's not that surprising. On the other hand, I should see how they implemented it or whatever. So BLIP-2. Okay.
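For readers who want to try the bridging idea themselves, a sketch of BLIP-2 inference, assuming the Hugging Face transformers integration of the Salesforce checkpoints; the model name and arguments follow that library's documented usage rather than anything stated in the episode:

```python
# Caption or question-answer an image with BLIP-2: a frozen vision encoder and
# a frozen language model, connected by a small trained "bridge" (the Q-Former).
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

image = Image.open("example.jpg")  # any local photo
prompt = "Question: what is happening in this image? Answer:"

inputs = processor(images=image, text=prompt, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(generated_ids[0], skip_special_tokens=True))
```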
Nathan Labenz: 1:44:27 Yeah. I think the most striking thing about that is just how small it is. You took these 2 off-the-shelf models that were trained independently for other purposes, and you're able to bridge them with a relatively small connector. And that seems to be happening all over the place. I would also look at the Flamingo architecture, which is a year and a half old now, out of DeepMind. That was 1 for me where I was like, oh my God. And this is also a language and vision one where they keep the language model frozen, and then, in my mind, it's like, I can see the person in their garage tinkering with their soldering iron. You know? Because it's just like, wow, you took this whole language thing that was frozen, and you kind of injected some vision stuff here, you added a couple layers, and you kind of Frankensteined it, and it works. And it's like, wow, that wasn't really super principled. You know? It was just kind of hack a few things together and try training it. And I don't wanna diminish what they did, because I'm sure there were more insights to it than that. But it seems like we are kind of seeing a reliable pattern here, the key point being model-to-model communication through high dimensional space, which is not mediated by human language. That is, I think, 1 of the reasons that I would expect, and by the way, there are lots of papers too on language models being human level or even superhuman prompt engineers. Their self prompting techniques are getting pretty good. So if I'm imagining the big picture, and we can get back to, okay, well, how do we use any techniques, crypto or otherwise, to keep this under control? I would say this is kind of the newer school of the big picture AI safety worry. Obviously, there's a lot of flavors. But if you were to go look at Ajeya Cotra, for example, who I think is a really good writer on this, her worldview is less that we're going to have this FOOM and more that, over a period of time, and it may not be a long period of time, maybe it's a generation, maybe it's 10 years, maybe it's 100 years, but obviously those are all small in the grand scheme of the future, we have, in all likelihood, the development of AI centric schemes of production, where your high level executive function is like your language model, you've got all these lower level models, and they're all bridgeable. All the spaces are bridgeable in high dimensional form, where they're not really mediated by language unless we enforce that. I mean, we could say it must always be mediated by language so we can read the logs. But there's a tax to that. Right? Because going through language is highly compressed compared to the high dimensional space-to-space communication.
Balaji Srinivasan: 1:47:13 Alright. So let me see if I can steel man or articulate your case. You're saying AIs are gonna get good enough, they're gonna be able to communicate with each other well enough, and they'll be able to do enough tasks that more and more humans will be rendered economically marginal or unnecessary.
Nathan Labenz: 1:47:27 I'm not saying I think that will happen. I'm just saying I think there's a good enough chance that that will happen that it's worth taking really seriously.
Balaji Srinivasan: 1:47:33 I actually think that will happen, something along those lines, or in the sense of at least massive economic disruption. Definitely. Okay? But I'll give an answer to that, which is both, you know, maybe fun and not fun. Have you seen the graph of the percentage of America that was involved in farming?
Nathan Labenz: 1:47:51 Yeah. I tweeted a version of that once.
Balaji Srinivasan: 1:47:54 Oh, you did? Okay. Great. Good. So you're familiar with this, and you're familiar with what I mean by the implication of it, where basically Americans used to identify themselves as farmers. Right? And manufacturing rose as agriculture collapsed. Right? And here is the graph on that. But from like 40% in the year 1900 to a total collapse of agriculture, and then also more recently a collapse of manufacturing into bureaucracy, paperwork, legal work. What is up and to the right since then is, you know, the lawyers. What is up and to the right? What is replacing that? Starting in around the 19 seventies, we used to be adding energy production, and energy production flatlined once people got angry about nuclear power. So this is a future that could have been. We could be on Mars by now, but we got flatlined. Right? What did go up and to the right? So construction costs, this is the bad scenario where the miracle energy got destroyed because of regulations. The cost was flat, and then went vertical when regulations were imposed, and all the progress was stopped by decels and degrowthers. And then ALARA was implemented, which said nuclear energy risk has to be as low as reasonably achievable. And that meant that you just keep adding, quote, safety to it until it's the same cost as everything else, which means you destroy the value of it. Right? But you know what was up and to the right? What replaced those agriculture and manufacturing jobs? Look at this. You see this graph?
Nathan Labenz: 1:49:24 For the audio-only folks, we will put this on YouTube. So if you wanna see the graph, do the YouTube version of this. For the audio-only group, it's an exponential curve in the number of lawyers in the United States from, looks like, maybe two-thirds of 100,000 to 1,300,000 over the last 140 years.
Balaji Srinivasan: 1:49:39 Yeah. And in 1880, it was like sub 100,000 or something like that, right? And then it's just like, especially that 1970 point, that's when it went totally vertical, okay? And it's probably even more since. So, you know, if you add paperwork jobs, bureaucratic jobs, you know, every lawyer is like, you know, sorry lawyers, but you're basically negative value add, right? Because the fact that you have a lawyer means that you couldn't just self-serve a form. Right? Basically, government should be platforms where you can just self-serve and you fill it out and you don't have to have somebody code something for you custom. You know, lawyers just doing custom code is because the legal code is so complicated. So the whole Shakespeare thing, like, first thing we do, let's kill all the lawyers. First thing we do, let's automate all the lawyers. Right? Only something that's the hammer blow of AI can break the backbone, and it will. It's going to break the backbone of blue America. Right? That's why the political layer and the sovereignty layer is not what AI people think about, but it's crucial for thinking about AI. Because what tribes does AI benefit? And again, we got away from why does AI kill everybody? Well, it's going to need actuators. Who's going to stab you? Who's going to shoot you? It's got to be a human hypnotized by AI or a drone that AI controls. A human hypnotized by AI is actually a conventional threat. It looks like a terrorist cell. We know how to deal with that, right? It's just like radicalized humans that worship some AI that stab you. It's like the pause AI people are 1 step, I think, away from that. All right? But that's just like Aum Shinrikyo. That's like Al Qaeda. That's like basically terrorists who think that the AI is telling them what to do. Fine. If it's not a human that's stabbing you, it is a drone, and that's a very different future where, like, 5 or 10 or 15 years out, maybe we have enough Internet-connected drones out there, but even then they'll have private keys, so there's going to be fragmentation of address space. Not all drones will be controllable by everybody, in my view. Okay? That's what AI safety is. AI safety is can you turn it off? Can you kill it? Can you stop it from controlling drones? That's what AI safety is. Can you also open the model weights so you can generate adversarial inputs? Can you open the model weights and proliferate it? You're saying, oh, proliferation is bad. I'm saying proliferation is good because if everybody has 1, then nobody has an advantage on it. Right? Not relatively speaking. Okay.
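As a rough illustration of the private-key point Balaji makes here, below is a small sketch of a drone that only executes commands signed by its operator's key, so no single party can commandeer the whole "address space" of drones at once. It uses the third-party cryptography package's Ed25519 primitives; the command format and function names are hypothetical.

```python
# Toy model: each drone is provisioned with its operator's public key and
# rejects any command whose signature does not verify against that key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

operator_key = Ed25519PrivateKey.generate()          # stays with the operator
drone_trusted_pubkey = operator_key.public_key()     # burned into the drone

def drone_accepts(command: bytes, signature: bytes) -> bool:
    """The drone executes a command only if the signature checks out."""
    try:
        drone_trusted_pubkey.verify(signature, command)
        return True
    except InvalidSignature:
        return False

command = b"RETURN_TO_BASE"
print(drone_accepts(command, operator_key.sign(command)))  # True: signed by its own operator
attacker_key = Ed25519PrivateKey.generate()
print(drone_accepts(command, attacker_key.sign(command)))  # False: wrong key, command is ignored
```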
Nathan Labenz: 1:52:01 I have very few super confident positions, so I wouldn't necessarily say I think that proliferation is bad. I'd say so far, it's been good. And even most of the AI safety people, I would say, if I could, you know, speak on behalf of the AI safety consensus, most people would say that even the Llama 2 release has proven good for AI safety for the reasons that you're saying.
Balaji Srinivasan: 1:52:31 But they opposed it.
Nathan Labenz: 1:52:32 Well, some did, some didn't. I would say the main posture that I see AI safety people taking is that we're getting really close, or we might be getting really close. Certainly, if we just kind of naively extrapolate out recent progress, it would seem that we're getting really close to systems that are sufficiently powerful that it's very hard to predict what happens if they proliferate. Llama 2 is not there. And so, yes, it has enabled a lot of interpretability work. It has enabled things like representation engineering. There is a lot of good stuff that has come from it.
Balaji Srinivasan: 1:53:10 The big thing that I want to establish is whether you agree with me on the actuation point or not. The thing is, this thing like, oh, Llama 2 proliferates, and so businesses are disrupted and people, you know, maybe they paid a lot of money for their MD degree and they can't make as much money. That's within the realm of what I call conventional warfare. You know what I mean? That's like we're still in normal world, as we were talking about. Okay? Unconventional warfare is, you know, Skynet arises and kills everybody. Okay? And that is what is being sold over here. And when you think about the actuators, we don't have the drones out there. We don't have the humanoid robots they can control. And hypnotized humans are a very tiny subset of humans, probably. And even if they aren't, that just looks like a religion or a cult or a terrorist cell, and we know how to deal with that as well. The superintelligent AI with, you know, lots of robots it can control in a StarCraft form, I would agree, is something that humans haven't faced yet. But by the time we get that many robots out there, you won't be able to control all of them at once because of the private key things I mentioned. So that's why I'm like, okay, everything else we're talking about is normal world. That is the single biggest thing that I wanted to get at. Like, economic disruption, people losing jobs, proliferation so that the balance of power is redistributed, all that's fine. The reason I say this is people keep trying to link AI to existential risk. A great example is 1 of the things you actually had in here. This is similar to the AI Policy Institute thing. It's a totally reasonable question, but then I'm gonna, in my view, deconstruct the question. What would you think about putting a limit on the right to compute, or what capabilities might an AI system demonstrate that would make you think open AI is no longer wise? The most common near-term answer here seems to be related to risk of pandemic via novel pathogen engineering. So guess what? You know who the novel pathogen engineers are? The US and Chinese governments. Right? They did it, or probably did it, or are credibly accused of doing it. They haven't been punished for COVID-19. In fact, they covered up their culpability and pointed everywhere other than themselves. They used it to gain more power in both the US and China, with lockdowns in both China and the US, and all kinds of COVID-era measures. Trillions of dollars were printed and spent and so on and so forth. They did everything other than actually solve the problem, which was actually getting, you know, the vaccines from the private sector. And they studied the existential risk only to generate it, and they were even paid to do pandemic prevention and failed. So this would be the ultimate fox guarding the henhouse. Okay? The 2 organizations responsible for killing millions of people with a novel pathogen are going to prevent people from doing this by restricting compute? No. You know what it is actually? What's happening here is 1 of the concepts I have in The Network State, this idea of god, state, and network. Okay? Meaning, what do you think is the most powerful force in the world? Is it almighty god? Is it the US government? Or is it encryption? Right? Or eventually maybe an AGI. Right? What's happening here is a lot of people are implicitly, without realizing it, even if they are secular atheists, treating GOV as GOD.
Okay? They treat the US government as God, as the final mover.
Nathan Labenz: 1:56:20 No, I appreciate that. I take inspiration from you actually in terms of trying to come up with these little quips that are memorable. So I was just smiling at that because I think you do a great job of that, and I try to. I have less success coining terms than you have, but I certainly try to follow your example on that front.
Balaji Srinivasan: 1:56:42 It's helpful because if you can compress it down, it's, like, more memorable. So that's what I try to do. Right? So exactly, a lot of these people who are secular and think of themselves as atheists have just replaced GOD with GOV. They worship the US government as God, and there's 2 versions of this. You know how, like, God has both a male and a female version? Right? The female version is the Democrat god within the USA that has infinite money and can take care of everybody and care for everybody. And the Republican god is the US military that can blow up anybody, and it's the biggest and strongest and most powerful. America, fuck yeah. Okay? And everybody who thinks of the US government as being able to stop something is praying to a dead god. Okay? When you say this, you actually get an interesting reaction from AI safety people where you've actually hit their true solar plexus. All right. The true solar plexus is not that they believe in AI, it's that they believe in the US government. That's the true solar plexus, because they are appealing to, they're praying to, this dead god that can't even clean the poop off the streets in San Francisco, right, that is losing wars or fighting them to a stalemate, that has lost all these wars around the world, that spent trillions of dollars, that has been through the financial crisis, coronavirus, the Iraq war, you know, total meltdown politically, okay, that now has interest payments more than the defense budget, that, you know, spent a $100,000,000,000 on the California train without laying a single track. It's like that Morgan Freeman thing, you know, the clip from Batman where he's like, so this man is a billionaire, blah blah blah, this and that, and your plan is to threaten him. Right? And so you're gonna create this superintelligence and have Kamala Harris regulate it? Come on, man, so to speak. Right? Like, these people are praying to a blind, deaf, and dumb god that was powerful in 1945. Right? That's why, by the way, all the popular movies, what are they? It's Barbie. It's Oppenheimer. Right? It's Top Gun. They're all throwbacks to the eighties or the fifties when the USA was really big and strong. And the future is a Black Mirror.
Nathan Labenz: 1:58:55 Yeah. I think that's tragic. 1 of the projects that I do like, and you might appreciate this, I don't know if you've seen it, is from the Future of Life Institute, a project called Imagine a World, I think is the name of it. And they basically challenged their audience and the public to come up with positive visions of a future where technology changes a lot, and obviously AI is pretty central to a lot of those stories. What are the challenges that people go through, and how do we get there, and whatever, but a purposeful effort to imagine positive futures. Super under-provided, and I really liked the investment that they made in that.
Balaji Srinivasan: 1:59:44 You know, 1 of the things I've got in the Network State book is there's certain megatrends that are happening. Right? And by megatrends, I mean it's possible for, like, 1 miraculous human maybe to reverse them. Okay? Because I think both the impersonal-force-of-history theory and the great-man theory of history have some truth to them. But the megatrends are the decline of Washington, DC, the rise of the Internet, the rise of India, the rise of China. That is like my worldview. And I can give 1000 graphs and charts and so on for that, but that's basically the last 30 years and maybe the next X. Right? I'm not saying there can't be trend reversal. Of course, there can be trend reversal. As I just mentioned, some hammer blow could hit it, but that's what's happening. And so because of that, the people who are optimistic about the future are aligned with either the Internet, India, or China. And the people who are not optimistic about the future are blue Americans or left-out red Americans. Okay? Or Westerners in general who are not tech people. Okay? If they're not tech people, they're not up and to the right, basically. Because, I mean, 1 of the things is we have a misnomer, as I was saying earlier, of calling it the United States, because it's the disunited states. Talking about America is like talking about Korea. There's North Korea and South Korea, and they're totally different populations. And, you know, communism and capitalism are totally different systems. And the thing that is good for 1 is bad for the other and vice versa. And so, like, America doesn't exist. Just like there's no Korea, there's only North Korea and South Korea, there's no America. There is blue America and red America and also gray America, tech America. And blue America is harmed, or they think they're harmed, or they've gotten themselves into a spot where they're harmed by every technological development, which is why they hate it so much, right? AI versus journalist jobs. Crypto takes away banking jobs. You know, everything. You know, self-driving cars, they just take away regulator control, right? Anything that reduces their power, they hate, and they're just trying to freeze it in amber with regulations. Red America got crushed a long time ago by offshoring to China and so on. They're making inroads to ally with tech America or gray America. Tech America is, like, the 1 piece of America that's actually still functional and globally competitive, and people always do this fallacy of aggregation where they talk about the USA, and it's really this component that's up and to the right and the others that are down and to the right or at best flat like red, but they're, like, down. Right? Like, red is like okay, functional. Blue is down. Point is, tech America, I think we're going to find, is not even truly, or how American is tech America? Because it's like 50% immigrants, right, and like a lot of children of immigrants, and most of their customers are overseas, and their users are overseas, and their vantage point is global. Right? And I know we're in this ultranationalist kick right now, and I know that there's going to be a degree of a fork here where you fork technology into Silicon Valley and the Internet, okay, where Silicon Valley is American and they'll be making, like, American military equipment and so on and so forth that is signaling USA, which is fine, okay?
And then the Internet is international global capitalism. And the difference is Silicon Valley, or let's say US tech, US tech says ban TikTok, build military equipment, etcetera. It's really identifying itself as American, and it's thinking of itself as being anti-China. Okay? But the US and China are only 20% of the world. 80% of the world is neither American nor Chinese. So the Internet is for everybody else who wants actual global rule of law. Right? As the US decays as a rules-based order, and people don't wanna be under China, people wanna be under something like blockchains, where you've got, like, property rights and contract law across borders that are enforced by an impartial authority. Okay? That's also the kind of law that can bind AIs, like AIs across borders, if you wanna make sure they're gonna do something. Cryptography can bind an AI in such a way that it can't fake it. An AI can't mint more Bitcoin. You know?
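As a toy illustration of "an AI can't mint more Bitcoin": every full node independently re-checks the consensus rules, so a block claiming an oversized reward is rejected no matter who, or what, produced it. The halving-schedule numbers below follow Bitcoin's actual issuance rules, but the validation function is a simplified sketch with hypothetical names, not real node software.

```python
# Each node enforces the issuance cap itself; there is no central party to persuade.
HALVING_INTERVAL = 210_000           # blocks between subsidy halvings
INITIAL_SUBSIDY = 50 * 100_000_000   # 50 BTC in satoshis

def max_subsidy(block_height: int) -> int:
    """Largest coinbase reward the rules allow at a given height."""
    halvings = block_height // HALVING_INTERVAL
    return 0 if halvings >= 64 else INITIAL_SUBSIDY >> halvings

def node_accepts(block_height: int, claimed_reward_satoshis: int) -> bool:
    """A node only accepts blocks whose claimed reward fits the schedule."""
    return claimed_reward_satoshis <= max_subsidy(block_height)

print(node_accepts(840_000, 312_500_000))    # True: 3.125 BTC, the post-2024-halving subsidy
print(node_accepts(840_000, 5_000_000_000))  # False: a 50 BTC claim at that height is rejected
```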
Nathan Labenz: 2:03:50 Here's my last question for you. AI discourse right now does seem to be polarizing into camps. Obviously, a big way that you think about the world is by trying to figure out what are the different camps, how do they relate to each other, so on and so forth. I have the view that AI is so weird and so unlike other things that we've encountered in the past, including just unlike humans. Right? I always say AI is alien intelligence. So I feel like it's really important to, to borrow a phrase from Paul Graham, keep our identities small and try to have a scout mindset to really just take things on their own terms. Right? And not necessarily put them through a prism of, like, whose team am I on? Or, you know, does this benefit my team or hurt the other team or whatever? But, you know, just try to be as directly engaged with the things themselves as we can without mediating it through all these lenses. You know, I think about, you mentioned, like, the gain of function. Right? And I don't know for sure what happened, but it certainly does seem like there's a very significant chance that it was a lab leak. Certainly, there's a long history of lab leaks. But it would seem to me a failure to say, okay, well, what's the opposite of just having a couple of government labs? Everybody gets their own gain-of-function lab, right? And this is kind of what we're doing with AI. We're like, let's compress this power down to as small as we can. Let's make a kit that can run in everybody's home. Would we want to send out these gain-of-function wet lab research kits to every home in the world and be like, hope you find something interesting. Let us know if you find any new pathogens, or, hey, maybe you'll find life-saving drugs. Like, whatever. We'll see what you find, all 8,000,000,000 of you. That to me seems like it would definitely be a big misstep. And that's the kind of thing that I see coming out of ideologically motivated reasoning or tribal reasoning. And so I guess I wonder how you think about the role that tribalism and ideology is playing and should or shouldn't play as we try to understand AI.
Balaji Srinivasan: 2:06:07 Okay. So first is, you're absolutely right that just because A is bad does not mean that B is good. Right? So A could be a bad option. B could be a bad option. C could be a bad option. You may have to go down to option G before you find a good option, or there might be 3 good options and 7 bad options, for example, right? So to map that here, in my view, an extremely bad option is to ask the US and Chinese governments to do something. Anything the US government does at the federal level, at the state level in blue states, at the city level, has been a failure. And here's a meta way of thinking about it. You invest in companies. Right? So as an investor, here's a really important thing. You might have 10 people who come to you with the same words in their pitch. They're all, for example, building social networks. But 1 of them is Facebook and the others are Friendster and whatever. Okay? And no offense to Friendster, you know, those guys were like pioneers in their own way, but they just got outmatched by Facebook. So the point is that the words were the same on each of these packages, but the execution was completely different. So could I imagine a highly competent government that could execute and that actually did, you know, strike the right balance of things and so on? I can't say it's impossible, but I can say that it wouldn't be this government. Okay? And so you are talking about the words, and I'm talking about the substance. The words are, we will protect you from AI. Right? In my view, the substance is they aren't protecting you from anything. Right? You're basically giving money and power to a completely incompetent and, in fact, malicious organization, which is Washington, DC, which is the US government, that has basically over the last 30 years gone from a hyperpower that wins everywhere without fighting to a declining power that fights everywhere without winning. Okay? Like, it just literally burned trillions of dollars doing this. That's maybe the greatest decline in fortunes in 30 years in maybe human history. Not even the Roman Empire went down this fast on this many power dimensions this quickly. Right? So giving it to that guy, let's trust him, that's just people running an old script in their heads that they inherited. They are not thinking about it from first principles, that this state is a failure. Okay? And how much of a failure is it? You have to look at the sovereign debt crisis. You have to look at graphs that other people aren't looking at. But, like, you know, the domain of what blue America can regulate is already collapsing, because it can't regulate Russia anymore. It can't regulate China anymore. It's less able to regulate India. It's less able even to regulate Florida and Texas. States are breaking away domestically. So this gets to your other point. Why is the tribal lens not something that we can put in the back, but something we have to put in the absolute front? Because the world is retribalizing. Like, basically, your tribe determines what law you're bound by. If you think you can pass some policy that binds the whole world, well, there have to be guys with guns who enforce that policy. And if I have guys with guns on my side that say, we're not enforcing that policy, then you have no policy. You've only bound your own people. Does that make sense? Right? And so blue America will probably succeed in choking the life out of AI within blue America. But blue America controls less and less of the world.
So it'll have more power over fewer people. I can go into why this is, but essentially, you know, a financial Berlin Wall is arising. There's a lot of taxation and regulation and effectively financial repression, de facto confiscation, that will have to happen for the level of debt service that the US has been taking on. Okay? There's 1 graph just to make the point, and if you want to dig into this, you can. All right? But the reason this impacts things is when you're talking about AI safety, you're talking about AI regulation, you're talking about the US government, right? And you have to ask, what does that actually mean? And it's like, in my view, it's like asking the Soviet Union in 1989 to regulate the Internet, right? That's going to outlive, you know, the country. US interest payment on federal debt versus defense spending. The white line is defense spending. Look at the red line. That's just gone absolutely vertical. That's interest. And it's going to go more vertical next year because all of this debt is getting refinanced at much higher interest rates. This is why, look at this, you have AI timelines? Right? The question for me is DC's timeline. What is DC's time left to live? Okay? This is the kind of thing that kills empires, and you either have this just go to the absolute moon, or they cut rates and they print a lot. And either way, you know, the fundamental assumption underpinning all the AI safety, all the AI regulation work, is that they have a functional golem in Washington, DC, where if they convince it to do something, it has enough power to control enough of the world. When that assumption is broken, then a lot of assumptions are broken. Right? And so in my view, you must think about a polytheistic AI world, because other tribes are already into this. They're already funding their own. Right? The proliferation is already happening, and they're not going to bow to blue tribe. So that's why I think the tribal lens is not secondary. It's not some, you know, totally separate thing. It is an absolutely primary way in which to look at this. And in a sense, it's almost like, in a well-done movie, all the plot lines come together at the end. Okay? And all the disruptions that are happening, the China disruption, the rise of India, the rise of the Internet, the rise of crypto, the rise of AI, and the decline of DC, and the internal political conflict, and, you know, various other theaters like what's happening in Europe and the Middle East, all of those come together into a crescendo. Ah, there's a lot of those graphs that are all happening at the same time. And it's not something you can analyze by just, I think, looking at 1 of these curves on its own.
Nathan Labenz: 2:12:13 I think that's a great note to wrap on. I am always lamenting the fact that so many people are thinking about this AI moment in just fundamentally too-small terms, but I don't think you're 1 that will easily be accused of that. So with an invitation to come back and continue in the not-too-distant future, for now I will say: Balaji Srinivasan, thank you for being part of the Cognitive Revolution.
Balaji Srinivasan: 2:12:41 Thank you, Nathan. Good to be here.
Nathan Labenz: 2:12:43 It is both energizing and enlightening to hear why people listen and learn what they value about the show. So please don't hesitate to reach out via email at tcr@turpentine.co, or you can DM me on the social media platform of your choice.