In this special episode of The Cognitive Revolution, Nathan shares his thoughts on the upcoming election and its potential impact on AI development. He explores the AI-forward cases for Trump, featuring an interview with Joshua Steinman. Nathan outlines his reasons for not supporting Trump, focusing on US-China relations, leadership approach, and the need for a positive-sum mindset in the AI era. He discusses the importance of stable leadership during pivotal moments and explains why he'll be voting for Kamala Harris, despite some reservations. This thought-provoking episode offers a nuanced perspective on the intersection of politics and AI development.
Be notified early when Turpentine drops a new publication: https://www.turpentine.co/excl...
SPONSORS:
Weights & Biases RAG++: Advanced training for building production-ready RAG applications. Learn from experts to overcome LLM challenges, evaluate systematically, and integrate advanced features. Includes free Cohere credits. Visit https://wandb.me/cr to start the RAG++ course today.
Shopify: Shopify is the world's leading e-commerce platform, offering a market-leading checkout system and exclusive AI apps like Quikly. Nobody does selling better than Shopify. Get a $1 per month trial at https://shopify.com/cognitive
Notion: Notion offers powerful workflow and automation templates, perfect for streamlining processes and laying the groundwork for AI-driven automation. With Notion AI, you can search across thousands of documents from various platforms, generating highly relevant analysis and content tailored just for you - try it for free at https://notion.com/cognitivere...
LMNT: LMNT is a zero-sugar electrolyte drink mix that's redefining hydration and performance. Ideal for those who fast or anyone looking to optimize their electrolyte intake. Support the show and get a free sample pack with any purchase at https://drinklmnt.com/tcr
CHAPTERS:
(00:00:00) About the Show
(00:00:22) Sponsors: Weights & Biases RAG++
(00:01:28) About the Episode
(00:13:13) Reflecting on Trump
(00:15:32) Introducing Josh
(00:16:35) AI Arms Race Concerns
(00:20:20) Arms Race History
(00:22:35) Building Trust
(00:25:19) Aschenbrenner Model
(00:27:17) Global Good vs. Self-Interest
(00:28:20) Sponsors: Shopify | Notion
(00:31:16) Working with Trump
(00:33:54) Media Misrepresentation
(00:40:09) Cabinet Member Leverage
(00:44:41) Sponsors: LMNT
(00:46:23) China's Communist Party
(00:48:36) AI and National Policy
(00:50:14) The Reality of AGI
(00:52:39) Framing the Disagreement
(01:01:41) Slaughterbots and AI Future
(01:04:24) Risks of Engagement
(01:09:29) Sustainability of Military Tech
(01:13:01) Closing Statements
(01:14:55) Outro
SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://www.linkedin.com/in/na...
Youtube: https://www.youtube.com/@Cogni...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...
TRANSCRIPT:
Nathan: Hello, and welcome back to the Cognitive Revolution.
This weekend, we're running two episodes, which originally appeared on the Moment of Zen feed, focusing on the election, in which I attempt to give an honest hearing to, and an honest questioning of, two different AI-forward cases for Trump.
I like this exercise because, in my opinion, this election is ultimately a referendum on Trump.
My interlocutors are, in the first episode, Samuel Hammond, Senior Economist at the Foundation for American Innovation, and a thinker I generally very much respect – we cross-posted his appearance on the Future of Life podcast last year, and I also really appreciated his 95 Theses on AI, most of which I agreed with.
In the second episode, I speak with Joshua Steinman, Trump's National Security Council "Senior Director for Cyber Policy and Deputy Assistant to the President" from 2017-2021 – the entire Trump term!
Before launching into it, I'm going to briefly share where I've landed on Trump, specifically with respect to how a possible Trump presidency might relate to AI development.
If you see AGI as a real possibility in the ~2027 timeframe, it seems totally reasonable to consider the election's impact on AI as a major decision-making factor.
Of course I understand people have other priorities, but this isn't a politics channel, so I'm not going to share my opinion on every issue - just AI and AI-adjacent issues.
Interestingly, as you'll hear, I find that on a number of AI-adjacent issues, I agree with the Trump supporters I talk to.
To name a few:
- Nuclear energy is good – we should build more nuclear plants!
- Population decline does merit real concern
- Freedom of speech is valuable and should be protected
- On today's margin, we should have fewer rules and a stronger right to build on one's own property
- We should cultivate a culture of achievement
- We should aim for an age of abundance – degrowth is nonsense
- It makes sense to prioritize high-skilled immigration, at least to some degree
- And… American companies like Amazon, Google, and Tesla should not be allowed to abuse their market power at the expense of consumers, but neither should they be subject to government harassment just because they are outcompeting many legacy businesses.
Fortunately, it does seem the Democratic establishment broadly is coming around on at least a number of these, but in any case, there are still three main reasons that I cannot ultimately support Trump, despite these points of agreement.
Those are:
- He's far too inclined to escalate tension with China, accelerate decoupling, and prioritize his own narrow domestic political interests over the national & global interest.
- The lack of rigor in his thinking & discipline in his communications seems like a recipe for unnecessary risk-taking in an increasingly volatile environment.
- I believe we are far better off approaching the future with a positive-sum and inclusive mindset – not just within the US but globally – if we're to have a healthy conversation about a new social contract that befits the AI era.
On the question of US-China relations, I think we have a general failure of leadership and vision, on both sides, unfolding slowly but gathering more momentum all the time. People now see adversarial relations with China as a foregone conclusion – Joshua Steinman calls it "the physics of the environment."
To put it plainly, I don't accept this.
Conflict with China would be a disaster, and an arms race would take us ever closer to that disaster, but I do not see this as an inevitability, because I don't see China as a meaningful threat to America, Americans, or the American way of life. That's not to say the Chinese government hasn't wronged us at times – their coverup of early COVID, whatever its origins, was shameful, and obviously Chinese agencies and companies have stolen a lot of intellectual property from American companies. I don't think we should ignore that – and of course we should take steps to make ourselves less vulnerable to cybercrime – but I think we should stay level-headed about it. The possibility that our grandkids could be speaking Chinese one day seems far more remote to me than the possibility that AI destroys the world.
I would say the Biden admin has done OK on AI policy domestically - the 10^26 threshold for reporting has aged pretty well for a 2023 rule, and I do believe in some strategic industrial policy – subsidizing the building of new chip fabs in the US, so that we are not so easily disrupted by e.g. a Chinese attack on Taiwan, seems a prudent step for a great power to take.
That said, the chip ban still feels wrong to me, and I have to admit that Kamala's rhetoric on China also depresses me. There's no way for China to understand recent US statements and actions other than as an attempt to keep them down, and I'd say this escalation was premature at best – given the shape of an exponential, we could have retained the option of cutting them off later, since the bulk of the total hypothetical chip sales would still have been in the future.
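To make the "shape of an exponential" point concrete, here is a quick illustrative calculation – the growth rate and horizon below are hypothetical assumptions, not figures from the episode. With sales compounding annually, most of the cumulative total lies in the later years, so waiting to impose a ban forgoes relatively little of the total leverage:

```python
# Illustrative sketch: under exponentially growing chip sales, what fraction of
# cumulative sales still lies in the future after some years have elapsed?
# The 40% growth rate and 10-year horizon are assumptions for illustration only.

def share_of_sales_remaining(g: float, years_total: int, years_elapsed: int) -> float:
    """Fraction of cumulative sales that occurs after `years_elapsed`."""
    sales = [(1 + g) ** t for t in range(years_total)]
    return sum(sales[years_elapsed:]) / sum(sales)

print(share_of_sales_remaining(0.40, 10, 3))  # ~0.94: ~94% of sales still ahead after year 3
```

On these assumed numbers, a ban imposed three years later would still have covered roughly 94% of the hypothetical total, which is the "option value" being described.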
Relatedly, I have been really interested to see Miles Brundage, OpenAI’s recently departed Head of Policy Research, saying that we need to make a much more proactive effort to demonstrate that western AI development is benign - which of course is much easier to do if it actually is benign, and to some degree open or otherwise shared.
If we have to frame our relationship with China as a competition, I would love to see us race them on metrics like life expectancy improvements, number of diseases eradicated, or perhaps tonnage to Mars. Of course, I do understand that it's a complicated situation, that naive solutions aren't viable, and that real solutions will be both nuanced and hard to find, and I do intend to invite more China experts onto the show going forward in an effort to more substantively contribute to a positive vision for the future.
For now, I do wish Kamala and Democratic leadership in general were less hawkish and more visionary, but considering how much trust has already broken down and the extreme difficulty of making credible commitments between the two countries, I'd rather have a steady hand, who is more predictably going to follow a sane consensus path, and might actually make some arms-control-type deals, even if politically costly, to help us better navigate a tricky period.
Trump, it's well established, will do anything to avoid looking weak. He has credibility with rival countries when it comes to threats, perhaps, but not positive follow-through, and his withdrawal from the Iran nuclear deal, general taste for inflammatory rhetoric, and stated plans for blanket tariffs – which will hurt American consumers in order to generally stick it to the Chinese – all suggest to me that he is overwhelmingly likely to make things worse.
Zooming out from US-China relations, while I agree with Sam Hammond when he says that the more fucked you think we are, the more willing you should be to roll the dice with Trump … I don’t think we are actually that likely to be fucked.
I've been a bit ambiguous about this over time, often saying that my "p(doom)" is 5-95%, and I've meant that to reflect the fact that while nobody has convinced me that we don't need to worry about AI X-risk, neither has anyone convinced me that it's inevitable.
After all, while we don't know how they work, and we see all sorts of surprising and sometimes alarming capabilities, today's best AIs do understand human ethics quite well, and seem to be getting at least a bit "more aligned" with each generation. This may not continue, and we should absolutely be vigilant about it, but this is a much better position for 2024 than most AI safety people expected 5 or 10 years ago.
Today, facing a decision like this referendum on Trump, I recall the words of a wise friend who told me that we should think less about what the probabilities are, and more about what we can shift them to.
And here… I have to say that, with competent, stable leadership, I believe we can steer toward scenarios on the lower end of that range, where the nature of the risk is more intrinsic to the technology itself and less the result of letting domestic political incentives lead us toward imprudent escalations, AI arms races, or catastrophic impulsive decisions.
I often think of the role that Kennedy played in the Cuban missile crisis, where my understanding is that he overrode the recommendations of his military advisors to insist that the US would not escalate to nuclear war first.
That was heroic, but scenarios in which executive authority matters most can cut both ways. When I imagine Trump vs Kamala in moments of intense crisis, where single decisions could alter the course of history, I have to say that I find it much more likely that Trump would impact things substantially for the worse than substantially for the better. After all, we saw how he handled COVID.
To be clear, Kamala hasn't impressed me on the topic of AI, and her track record generally doesn't show the foresight of great leadership so much as a tendency to follow local trends and incentives. We could certainly hope for better. But still, if I have to choose a leader for a potentially highly volatile period of time, I'll take the stable, sane person who will listen to expert consensus, even acknowledging that the experts could be wrong, rather than betting that Trump will somehow manage to override experts in a positive way.
You'll hear my conversation partners make the case, which I won't attempt to summarize here for fear of doing it poorly, that Trump represents our best case to break out of a broken consensus and revitalize the American state for the AI era, but in the end, I just don't see it. It sounds like chaos, when we need capabilities.
Finally… when it comes to the future of American society, and the world at large, I think we have a never-before-seen opportunity to adopt a positive-sum mindset, create a world of abundance, and ultimately update our social contract.
I think OpenAI and the other leading AI companies do have roughly the right vision when they talk about benefitting all humanity. And I think Sam Altman, for all the other criticisms I've made of him, should be praised for his experiments in Universal Basic Income.
While neither candidate has shown this kind of vision, Kamala at least aims to speak to, and extend opportunity to, all Americans. I thought her best moment of the recent debate was when she said that when Americans look at one another, "we see a friend" – this is at least something of a foundation on which to start building a shared positive vision for the future.
Trump, of course, is far more zero-sum in his thinking and negative in his outlook, and that has real consequences.
I grew up in Macomb County, Michigan - one of those bellwether counties that swung hard from Obama to Trump. And I also have family in Ohio - my beloved Mama and Papa belong to the same cohort as JD Vance’s grandparents - they moved from rural Kentucky to southern Ohio for jobs, the whole bit.
And to be totally honest, one thing I have seen personally, is that Trump has brought out the worst in a lot of people.
While JD Vance, Elon Musk, and others in Trump's orbit are no doubt more sophisticated thinkers about technology than Trump himself, I can't imagine that his brand of cynical populist politics could possibly lead to a healthy national conversation about adapting to AI that is – let's face it – going to disrupt a lot of people's jobs, let alone re-imagining what it means to contribute as a citizen or to live a good life.
It would be shameful if we ended up hoarding the benefits of AI or restricting access for non-Americans due to a widespread sense of scarcity that isn't even justified by the fundamentals, but that's unfortunately the direction I'd expect Trump to take us.
Ultimately, putting Trump in the presidency as AGI is first developed strikes me as an imprudent move, with downside that is both larger and more likely than any upside.
By all means, listen to these conversations with an open mind and form your own judgment, but for my part, I can’t support putting a loose cannon in power as we head into such a potentially pivotal period, and so I will be voting for Kamala, mostly as a rejection of Trump.
Eric: Hello, sir.
Nathan: Yo, what's up?
Eric: All good, man. Good morning. Good to see you. Thanks for doing this. Still waiting a minute for Josh, but I thought we'd get started. Any quick reactions to the last episode that we did on this same topic? Josh worked for Trump, so it brings more of a personal insight or connection than an abstract think-tank view. But before getting into it with him, I was just curious, any reflections or reactions from talking to Sam, or how you've been thinking about the topic since?
Nathan: I did go back and listen to the whole thing. And it was a little weird. I don't know. I felt like I kind of kept getting lulled into these scenarios of like all the great things that the, you know, highly competent Trump administration of our dreams might do. And then I look at the actual... you know, election as it's unfolding. And it's like, I just don't see the evidence in the actual candidate or like the way that they're executing a campaign to believe it, you know? And I also feel like there's this weird, I mean, politics is of course full of like contradictory messages, but I feel like there's a weird one happening where the criticism, obviously, and I don't even care about this too much, but the talking point on the Republican side from the sort of popular surrogates is like, who's the president? We have no president. The president's incompetent, whatever. Meanwhile, we've got Trump. He's a strong, singular figure, and his whole appeal is about what a strong, irreplaceable figure he is, and only he can fix it and so on. And that seems to be what the large majority of his voters believe about him. But then when we get on with Sam, it's like, oh, but the president doesn't really do that much. You know, it's like, it's actually all the people that he's going to appoint that are really going to matter. And so I'm like, well, which is it? You know, is this sort of a, if that's the real story, are we just kind of lying to the voters? Which I guess, you know, again, maybe all the candidates are sort of lying to the voters in some ways. But I actually tend to think that the person probably matters. That seems to be my default position. That's certainly like what the Constitution says. So I don't know.
Eric: Let me segue and introduce Josh. Josh, thank you for joining. I'm lucky to be a collaborator with Josh in that I'm on the Galvanick cap table, but Josh is also a friend and someone who helps me make sense of what's happening in politics. Josh previously served in the Trump administration, and so I thought it'd be great to bring him on and have this conversation as well. I briefed him that we previously had a conversation, and I think this is a good one because I think, Nathan, you represent a lot of people in this country who are first-principles thinkers, not tribal, and really just trying to sort of call balls and strikes as you see it. And while you don't love everything that's happening on the Democratic side or left side, of course, there's something about Trump that just makes you uncomfortable. And I don't mean to dismiss that; I'm just saying that it is deeply unsettling in terms of the risk that he presents.
Josh: Sorry, what risk and what is it that unsettles you?
Eric: Let's get into it.
Nathan: Well, I focus all my time and attention pretty much on AI. And I think we may well be headed for a short-term situation in which AI systems become extremely powerful and pose all sorts of unprecedented challenges. On what time horizon? Potentially as soon as the next two to three years. So, you know, very much in... You don't think the grid... I mean, I think the window of possibility is very, very wide open.
Josh: I'm just saying like a bunch of folks that I really like have said that essentially the U.S. energy grid can sustain current rates of growth of AI power consumption until about 2026 and then essentially run out of power. So, I mean, are you talking about in that window?
Nathan: Possibly. I mean, that would be the near end of the window. You know, if you listen to somebody like John Schulman, who was the head of post-training and one of the co-founders at OpenAI, he was recently on the Dwarkesh podcast and said, you know, yeah, this could happen as soon as next year. This being like AGI, probably an early, not, you know, superintelligent AGI, but nevertheless something that I think could be profoundly important, you know, altering all sorts of dynamics and power structures, you know, within and across countries. And Dwarkesh was like, you mean next year? And he's like, well, that would be kind of a surprise, more like two to three, probably. And Dwarkesh was like, that's still really soon. You know, three is only 2027. So yeah, I mean, I don't know. The energy question is really interesting. I see huge efficiency gains happening all the time. And I tend to think a lot of these analyses don't take that into proper full account, but it's hard to say, you know. I mean, you can only see so many like 10x efficiency improvements before you're like, geez, unless these are like fake or they somehow don't work, you know, when it really matters, then it seems like we probably will have enough energy. I've done a lot of energy analysis just in terms of like offsetting as well. You know, how many chats do you have to have with a model before it takes as much energy as like one crosstown car trip?
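For a rough sense of the offset math Nathan alludes to here, a back-of-envelope sketch – every number below is an illustrative ballpark assumption, not a figure from the episode:

```python
# Rough comparison of LLM chat energy vs. a short gasoline car trip.
# All constants are illustrative assumptions.

WH_PER_CHAT = 3.0          # assumed ~3 Wh per chat query (commonly cited rough estimate)
KWH_PER_GALLON_GAS = 33.7  # energy content of a gallon of gasoline
MPG = 25.0                 # assumed typical fuel economy
TRIP_MILES = 5.0           # assumed "crosstown" trip length

trip_kwh = (TRIP_MILES / MPG) * KWH_PER_GALLON_GAS  # ~6.7 kWh per trip
chats_per_trip = trip_kwh * 1000 / WH_PER_CHAT      # ~2,250 chats
print(f"One {TRIP_MILES:.0f}-mile trip ≈ {chats_per_trip:.0f} chats")
```

On these assumptions, one short car trip costs about as much energy as a couple of thousand chats; the comparison stands or falls with the per-chat estimate, which varies widely across models and analyses.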
Josh: I feel like we're sort of quiet. So just to be super clear. So the thing that concerns you is what about Trump with regards to AI? Tail risk.
Nathan: Tail risk. I think being- Like what tail risk? Creating an arms race with China. Creating an AI arms race with China.
Josh: Aren't we already in it?
Nathan: I mean, China's gonna three- I think we're gonna figure that out over the near term. I mean, not necessarily. I think that is probably, or has a very good chance at least of being, the key question that political leadership on both sides is gonna decide. If you can believe the reporting, which is like hard to say, of course, we have recent comments from Xi suggesting that he might not be inclined for an arms race, and he does seem at least open to taking things like existential risk from AI seriously. You've got Chinese Turing Award winners also coming out recently and joining American Turing Award winners with statements about, geez, we might really need to slow this technology down, like maybe we can have an international treaty to not create slaughterbots. I don't think any of those things are inevitable. I think if we say, oh, we're definitely in an AI arms race with China, then we're probably fucked. And then who cares who's president, arguably. But I think I'm like a one-issue voter. If any candidate will say, I'm going to do everything we can to not have an international AI arms race and to try to make AI a peaceful technology.
Josh: Are you familiar with previous arms races with other competitive, aspiring global hegemons?
Nathan: I mean, somewhat. I don't know how many arms races you have in mind, but I... Do you think that countries tell the truth to each other when thinking about national security? I think it's very hard, but I don't... I mean, again, if you're going to just bake in an AI arms race, then I think, you know, from my perspective, that's kind of the end of the story.
Josh: I don't think, you know, to take one earlier... Have you seen the Chinese Communist Party's plans to 3x its total power output to 7 terawatts a year in the next 15 years?
Nathan: Yeah, that's great. I mean, they have a lot of people still living in rural poverty. So, you know, they've got plenty of uses for that. And I wish them well on their power expansion. I also would support, you know, at least some amount of power expansion here. I would love to see us build nuclear reactors. I'm not somebody who is, you know, anti-growth or, you know, anti-progress. I call myself an adoption accelerationist when it comes to AI.
Josh: Would you privilege Xi's words or actions when judging whether or not they're already engaged in radically expanding their capacity to compute?
Nathan: Well, we have cut them off, and this is a Biden policy, so I don't blame Trump for this, but we have set the tone in this dynamic most recently with a dramatic escalation in the AI domain specifically by saying, we are not going to sell you leading chips. And so, of course, they're responding to that by saying, well, shit, if you're going to cut us off, and now we're hearing all these comments from every which angle about arms race and decisive strategic advantage that's going to be achieved by AI. Of course, they're going to be trying to figure out what they can do to avoid that. But again, I challenge that dynamic. I don't want to see us in this AI arms race. I think we can begin. We should try to build trust.
Josh: So you want to see a candidate who's going to allow... people to buy whatever chips they want. I mean, do I get this correct? Like that's what you want. You wanna take off the sanctions. You wanna allow them to buy advanced compute. Why?
Nathan: I want to build trust. I think that if we end up in an AI arms race and we end up, you know, seeking strategic advantage over each other, we are going to all lose.
Josh: I'm asking you a very specific, very specific question. What task do you want someone to accomplish?
Nathan: Avoid AI arms race with China. Begin by building trust. Yes, share benefits now.
Josh: No, but you just said what you want is to take off the sanctions and let the Chinese buy advanced chips, which are necessary.
Nathan: I don't even think that's necessarily true. I think that, in fact, what we're seeing in the research, even from this week with a recent, potentially game-changing breakthrough, for better or worse and probably both, is that distributed training is now starting to work. So the whole paradigm... this is why I also don't fully believe the energy story. I've got a buddy who thinks he can train at one-tenth the cost using FPGAs.
Josh: I'm under no illusions that we need advanced chips to train crazy models. Okay, so you want rhetoric. You're looking for rhetorical change from a political candidate. Is that your request? Yeah.
Eric: Josh, do you think that basically that we're in an arms race no matter what and sort of this idea of trust?
Josh: Yeah, that's the Physics of the Environment.
Eric: Yeah, so sort of a trust building.
Nathan: No, that is not. The physics of the environment does not dictate. I mean, unless you're a total, unless this is some sort of total universal determinism argument where like we don't have free will in this situation, then again, what are we even talking about? But if we have some sense of agency.
Josh: Look at the actions of the Chinese Communist Party. Like the Chinese Communist Party. Look at our actions.
Nathan: We are both currently escalating with each other at every turn. That is a choice that both political leadership regimes are making. And I think it's a terrible one. I mean, the last arms race, you know, you kind of raise.
Josh: I reject your premise, but I appreciate that you're trying to inject it. What premise are you rejecting there? That everything is completely escalatory. Like this is just great power politics. This is welcome to the history of the world.
Nathan: The history of the world is not on a good trajectory. I mean, how are we going to get to a good trajectory where we have peace between great powers and AI that serves us, as opposed to AI that hangs over us all like a sword of Damocles, as the nuclear arms race still does?
Josh: So you're interested in a candidate that will appease commercial powers inside China. And I'm just trying to understand what you want.
Nathan: Yeah, I would go for benefit sharing sooner rather than later, I think. I mean, I don't know what the... Let's take the Aschenbrenner model as sort of the contrasting point of view, right? Stylized story. You have to talk to me like I'm five.
Josh: I don't know what that means. I don't know what that means. Sorry. I'm a... I'm a simple man.
Nathan: Explain it to me like I'm five. In his Situational Awareness manifesto, in more or less his words, he said, here's what I think we should do: we should take the lead that we have on China, jam as hard as we can, stay ahead, use all kinds of available mechanisms to stay ahead, use the window of time that we have in the lead to solve alignment, make safe AI, achieve decisive strategic advantage. Then we can go to China and have a conversation about benefit sharing. I would say... I don't like that plan at all. I would much rather see a plan that involves earlier benefit sharing and a collaborative approach to trying to solve the fundamental challenges. So you want to give more technologies to the Chinese.
Josh: Is that right? You want to give things?
Nathan: I mean, I would engage in trade with China. Yes. I... I don't think the case has... They're our largest trading partner. What are you talking about? Yeah, well, we've just cut them off from perhaps the most fundamental resource in the world at the moment. So we're not... we are in a period of decoupling. I would like to see us stay more coupled rather than continue to decouple from China.
Josh: Okay, so you're interested in closer alignment with the Chinese Communist Party. You're interested in giving them the tools to build the things that you fear the most. I'm trying to understand this here.
Nathan: I'm interested in working together as a global community to develop AI in a positive way, not racing each other to achieve strategic advantage with AI over one another, because I don't think that ends well for anyone. And it might not end well in any case. Do you think a president of the United States should represent a global community or the citizens of the country that they're leading? I think it's definitely a mix of both. I mean, you know, when you have global issues that affect everyone and that- Should there be a priority? Should one take priority? I think it depends on the issue. I mean, there's, when it comes to a pandemic, we're all in it together.
Josh: Give me an issue where there should be parity between a president's evaluation of options and judging the benefit to humanity, vice the citizens.
Nathan: Yeah, right now there's a monkeypox outbreak happening in Africa. If you're the president and you're sitting on a bunch of vaccines, you could say, well, we could send a bunch of vaccines to Africa and try to get that outbreak under control. That would be good for everyone in the world. Or you could say, let's just hoard those for ourselves. [Expletive] everyone else. We'll wait till it gets here. We'll all be vaccinated. Everybody else can deal with it on their merits.
Josh: I would vote for the former because I know of no one who's statistically likely to get monkeypox.
Nathan: You don't know anyone in the Democratic Republic of the Congo right now, perhaps. But those people are out there. And I believe that we should prioritize the global good over a narrow self-interest in cases where the global good is at risk.
Josh: I think you've got a candidate that you're going to want to support.
Eric: Let me zoom out really quick. This is a good debate because we don't hear this debate too often. But I want to get out from the weeds of this specific issue, which is obviously very important. And Josh, I want to hear from you a little bit about your experience working with Trump because there is a representation of who Trump is, what it's like to work with Trump. And from our private conversations, you said that that is different from your experience. So I would like you to articulate what is your perception of how other people have perceived sort of the previous Trump administration and Trump as a person. And then I'd like to hear from you where there's overlap and where there's difference.
Josh: Yeah, he's a really sharp guy. So, you know, I worked for four years. I was the senior official on the National Security Council coordinating all of our cyber, telecom, supply chain, and cryptocurrency policy. That meant that essentially when the president said, this is what I want our policies to look like, it was up to me and my team to structure national strategies and then ensure that all of the departments and agencies, DOD, CIA, Department of Energy, etc., conformed and executed those strategies. So my office was at the White House. I had a small team that worked for me, and it was our job to coordinate how the U.S. government functioned and what priorities it pursued. Yeah, I just found the president to always be thinking more steps ahead than I was. And it was a very humbling experience. Not that I'm, you know, some genius or anything like that. But, you know, often in meetings with foreign leaders, you look at the talking points that have been assembled by the sort of bureaucratic entities, such as they submit them, and President Trump would talk about things very differently. And it was only after a day or two of like poring through a bunch of research that you realized he was talking about the political and economic priorities of the counterparty at the table. So I just think he's a really sharp guy, probably one of the smartest people I've ever met. I think that the challenge that he faces is that a lot of people aren't that smart. And so you have to find a way to communicate and find common ground with folks. And I think he's a great communicator. He's shown that over 20 years of being one of the leading TV stars of an entire generation, of having a huge real estate company and a bunch of other successful and some unsuccessful companies, just like every entrepreneur has hits and misses. I was always really impressed and enjoyed working for him.
Nathan: You buying his latest digital trading cards? Going long on the Trump token? Oh, he's currently hawking, unless this is like an AI fake, he's currently hawking digital trading cards for 99 bucks apiece. Buy 15 and they'll send you one in the mail, a physical one. I mean, I don't really care. It's just absurd. I predict that that will be a miss on the entrepreneurial ledger.
Josh: AI Crypto friction. I just love seeing it. That's cool. I got it.
Eric: Josh, why do you think other people don't see that? Like, what is it about Trump that some people think he's very sharp and other people think, you know, he's not a stable genius, you know, to quote the quip. Like, what is it about him that, you know, some people see the intelligence and some people don't?
Josh: Yeah, I mean, it was really eye-opening where... you realize that most of the world gets their information through a medium, right? A media, one might say. And, you know, those mechanisms are under significant control. Not all of them are under control. And so what I usually find is you run this loop with people who think that they know what he's like or even what the policies are, which is that they read articles that don't represent reality. They make assumptions. And so when you confront them with facts, they go back to this set of media narratives, articles, press operatives, et cetera. And they say, well, no, that's not true because I read the following words on a website. And, you know, when you work in one of these places for a long, even for a short time, what you essentially see is on a day-to-day basis, people actively, either through ignorance or malice, misrepresent reality. And you just learn a sort of, it's a feature, unfortunately, of the system. So, you know, on a... On a weekly basis, I would see articles in the mainstream media. People would send me breathlessly like, oh, my God, what's going on with X, Y or Z? Read the article and, you know, it'd be a total fabrication or a misunderstanding of what was actually happening. Furthermore, and this is the most interesting part. I'll give you an example. This is amazing. So I was a military officer for many years. Then I left and I went to Silicon Valley. When I was in the military, I started a luxury American-made CPG company. Not worth talking about. Anyway, I had a whole bunch of things that I did in the military. I got out. I went to a startup. I was running ops at this startup. And then just through a strange turn of events, ended up at the White House. So two and a half, three years in, one of the senior national security correspondents, a guy whose name you know, whose articles you've read, has been begging White House comms to sit down with me for over a year. I wasn't one of these guys who leaked to the media. I didn't really care. I've got a long list of things that I got done because I just stayed focused on doing the thing that he asked me to do. There were like five or six major things that he asked me to do. And I just went about and did them. But finally, in like year three, three and a half, something like that, we're like, okay, we'll sit down with this guy. Literally, like, best-selling author, writes for one of the top three newspapers in the world, whole thing, on TV all the time. He comes in and pulls out his latest book, signs the thing to Josh, hands it over. I'm hearing all these amazing things about you. Like you've done this, you've done that. And like, it's clear that he's talked to people and he knows what I've actually done. And we went on to have a very in-depth, very direct conversation for about an hour, because he's writing this news story, asking what I would consider to be relatively straightforward but strategically deep questions. Like, why are you doing X? Why are you doing Y? And me giving him like very specific answers. He has rejoinders to those, and I'm like, but X. And he's like, huh, okay. Hadn't thought about that. So, you know, I found him to be a competent interlocutor. The story comes out, none of that in there. The only line of description: Steinman, a former sock entrepreneur, in over his head. So you have these engagements with these people and you realize that essentially, like, a significant portion of the time, they're acting in... not good faith. I wouldn't want to say bad faith.
And you just have to extrapolate that out to the news cycle. So when people are like, oh, geez, Trump's this orange-man-bad character, I'm just like, at this point, you know, I can't help but laugh, because it's like you're talking to someone in the cave. Like, I just can't, I can't, there's nothing I can do, man. Like, you're in the cave, that's cool. Like, listen to the speeches, like, go direct. That is how I've always tried to, you know, pull myself out of those types of, you know, situations of knowing. But I mean, friends, family, at this point, it's sort of like, I can't help someone that wants to stay inside that cocoon.
Eric: What do you say to someone who says, hey, you had a great experience. There are some people who had a great experience. But listen, a lot of people, or a few dozen people or something, who worked for Trump in the last admin don't endorse him anymore.
Josh: Kamala obviously- I'm going to go name by name and I'll tell you all the dirty laundry.
Eric: Less Dirty Laundry.
Josh: You want to find out who's paying them? Should we talk about the donors? Should we talk about the private equity firms? I'm happy to do it. Like every single one of those people has skin in the game and there's a very specific reason why they've done what they've done.
Eric: So enlighten us a little bit, not on a name by name basis, but more on a macro, you know, and obviously Kamala doesn't have people saying great things about her either who worked for her. So, you know, this is, this is bipartisan, but. Maybe the bartenders. So you mentioned some examples of maybe some corruption. Give some of the macro reasons why people in the last administration or who worked for Trump don't endorse or don't have good things to say. What was the situation there? And why would it be different in the future?
Josh: Why would it be different in the future? I mean, there's a bunch of questions in there, so I can sort of answer the one that I want. Look, it's really powerful. It's one of the reasons why I think it's hard to be a member of the cabinet, because when you walk into the situation room and you've got genuine disagreements, and like Nathan has... you know, some interesting kernels of disagreement that I think, you know, if we had a different type of conversation, we could sort of pull on. I don't think we're going to have that type of conversation, but nothing against you, Nathan. I'm just saying, like, that's not where this is going. But like, imagine that you're sitting in a room, and I'll sort of pull out what I would call the best version of your arguments that actually carry weight in that room. So you say something along the lines of, this is how much money US companies make selling these types of products to these types of customers in China per year. And you say, okay, we're going to take step X, we're going to cut off this, and they're going to build the capacity to build Y. And the long-term negative consequences for US GDP are going to be Z. That's sort of a standard formulation of a debate that happens a lot. And it doesn't have to be China. It can be other countries as well. You could imagine that this debate likely happened around the de-dollarization of the Russian Federation around the war in Ukraine, something which, you know, one could imagine had been discussed for many years, cutting them off from SWIFT, et cetera, but only happened for the first time then. So essentially weaponizing the dollar, which the U.S. did to the Russians about, whatever it was, like two, two and a half years ago. So imagine that debate, right? Nathan comes into the room and is like, hey, like, name your... you know, name your analog chip manufacturer, you know, those guys down in San Diego, I forgot the name, you know, Qualcomm, Broadcom, whatever. Like, they're making these chips and we're afraid that there's going to be a significant hit. The Chinese are going to spin up a competitor, et cetera. Okay. So if you believe your position firmly, you need to have the ability to walk, right? You need to tell the president, like, if you're the Secretary of Commerce, like, I think we need to make this deal or I think we need to not make this deal. If you don't have the ability to walk, you're essentially at the whims of the people that do have the ability to walk inside that room. What do I mean by that? I mean that people will be calling you if you're a secretary or a deputy secretary. People called me. People in the news that you read right now about many China-related things, some of them got my desk number, called me up, like big, big, powerful people. And they're like, do you know who I am? Because I, you know, just to be clear, Nathan, so I architected our policies against Huawei. I architected Executive Order 13873, which is what you would have heard referred to as the ICT supply chain executive order. It's now a counterpart to CFIUS. It's the ability of the U.S. government to shut off a company from doing business in the United States if it has deep ties to the military or intelligence complex of a foreign adversary, including Chinese companies, Russian companies too, many other countries as well. So if you're the Secretary of Commerce and you hold that power, you walk into that room, and you've had powerful people calling, and they're essentially threatening you.
They're saying, do you know who I am? Do you know what I can do? I know you need to work after this. You're going to do this thing for me. I've got leverage over you. And I think that what you're seeing right now with many of these people is, you know, people have leverage over them. They're not independently wealthy. They've got modest pensions. They want to sell books. They want to get on TV. They want to get board seats. You know, 75K a year, or sorry, 75K a month from a, you know, top-20 technology company is nothing to sneeze at. It's sort of the going rate for these folks when they play ball. Like, you know, a million dollars a year here, a million dollars a year there, pretty soon you're talking about real money. And so, you know, it's the basics of what motivates human behavior: money, ego, compromise, et cetera.
Eric: I appreciate that articulation. Let's actually focus on China for a bit. Could you give a broad overview? We got in the weeds of AI in the beginning, and we'll get back there in a bit. But maybe you could start with just a broad overview of how you think we should be responding to China or engaging with China. You mentioned previous great power conflict.
Josh: The Chinese Communist Party is a technology-enabled totalitarian fascist dictatorship. That's what the Communist Party is. They're a Communist Party. They kill their own people. They harvest their organs. They create strange rape environments for members of minority groups that they don't like, Muslim minority groups that they don't like. They're threatening to take over a country with a long history of democracy, Taiwan. They bully their neighbors and they steal things that Americans build, that smart Americans build. There's over a trillion dollars of stolen intellectual property over the past 20 years. Go look up Advanced Persistent Threat 10, APT10. It's one of the leading government-sponsored hacker groups that the Chinese operate. And folks like that have been given huge shopping lists of, like, go and steal us this, go steal us that. And then, because the Chinese Communist Party controls China, you hand that material over to companies, to individuals that the government supports and likes. So I don't think that this is like dealing with your neighbor, right? It's not like the nice guy next door. Maybe you guys, you know, he goes to a different church. You know, this is an entity that is a revolutionary Marxist entity that wants to make the world safe for Maoist communism. And I just think that you have to sort of come with that approach when thinking about what you want to do with the Chinese Communist Party. You can be scared.
That's fine. Being scared of what a world dominated by the Chinese Communist Party would look like, totally reasonable. You can be scared that maybe things are going to spiral out of control, but you've got to remember, this is who you're dealing with.
Eric: And so what are your thoughts on how we should be handling AI, then? Or sort of respond to the concerns that Nathan shared earlier around the arms race potentially heating up?
Josh: I think they're a synthetic straw man for a bunch of other things that maybe a lot of people advocating for these positions don't even understand, like changes in trade policy. And so I don't really feel the need to engage with the sort of like technical details. I can tell you that... and I do have friends that run some of these big AI companies. They're much more concerned about energy than they are about these strange policy angles with regards to international trade. And so I just don't think it's really that serious of an issue. And I don't mean serious as in important. I mean serious as in worth prioritization at the current moment. If energy is 35 cents a kilowatt-hour in the United States, you're not getting AGI. If it's 20 cents a kilowatt-hour, you're probably not getting AGI. Like, I'm not even sure that AGI exists, nor are you, in your heart of hearts. Like, you can believe that there's a synthetic, you know, entity out there that, you know, can represent itself as something that, to our minds, you know, one could ascribe intelligence to, but I'm just not sure that that's the case. Yeah. You know, it's almost a theological question. So we can have a theological debate if you want. But to me, this is about processing power. This is about corporate power. It's about trade. It's about money. Like, those are real things in the world. All this other stuff is fantasy land.
Nathan: Yeah, I mean, I think you should study AI more. AGI definitely exists. We are a form of AGI. I would call us a weak AGI. There is no reason to doubt that something more capable than humans can be created. We are not the end of history. The timeline on that is very unclear. The energy requirements for that are also not entirely clear. But the idea that, I mean, so many of these debates ultimately come down to, do you actually take the tail risk from AI seriously or not? It sounds like you don't. And if you don't, then, sure, the whole debate is kind of moot, or at least the perspective I'm bringing to the debate is kind of moot, because if you don't really think it's a problem, you don't really think that tail risk is out there, then you can say, sure, like, who cares? Why would we prioritize that? But I don't think that is going to age very well. And it may age poorly even on the timescale of the next president. I would ask anyone watching to go watch the short Slaughterbots film and then go watch some videos coming out of Ukraine, and then watch the human versus drone, human versus AI drone races. And, you know, just extrapolate a little bit and say, like, are we not on the Slaughterbots trajectory?
Josh: You and I are talking about different things. Okay. You're talking about technology futures. I'm talking about political power. So if you want to have a conversation of, like, what's in the art of the possible... like, I literally cut my teeth in the military, when I wasn't deployed to Iraq, looking at technology futures. I was on the team that put, you know, some of the first unmanned systems in the hands of people that were using them, 3D printers, augmented reality, literally a decade ago, in some cases, depending on the date we're talking about here, like, more than a decade ago. So I have no doubt that you're going to have... in fact, we already have, like, autonomous... by we, I don't mean the US government, but like, I have friends that have started companies that do autonomous targeting, all these things. Like, I get all that. I'm talking about political power. And I'm talking about what moves the needle politically. And I'm talking about how these things actually play out inside the rooms where decisions get made. Like, if you want to have a conversation about what might happen with technology in the next 10 years, that's a different conversation. I'm talking about politics, because we're here ostensibly talking about President Trump.
Eric: And I want to just frame what I think is the difference of opinion here, which is, Nathan believes that we are headed, you know, under either administration at this point, because Biden has advanced policies that Nathan's not excited about, toward this arms race with China. And Nathan believes that this arms race with China is not inevitable, that we are accelerating it in trying to compete, and by identifying the arms race, we're accelerating it. And we need an alternative path that would remove some of the sanctions, would hopefully build trust, and maybe stop the arms race. Nathan, I'm sure, would concede that there are some risks with that approach, of course. And Josh has a much more realpolitik perspective, which is, hey, we are in an arms race. To, you know, remove sanctions would be to be aiding our enemy in the arms race. And thus we would be, you know, losing that arms race to a dictatorial, you know, communist regime.
Josh: In the fullness of time, the people who, in great power competition, have advocated for, you know, this type of thing, de-escalation, et cetera, are often one, two, three steps intellectually or financially removed from... I'm not accusing you of this. I'm just telling you, like, read Venona, like, you know, read Cold War history. You know, people who get wrapped up in these memetic things, you know, manias, usually end up having sponsorship in the counterparty. So, you know, I understand that we're afraid of this potential future. I don't disagree with you. I've made investments in this space personally with, you know, startups building military technology that's going to be able to do all this stuff, because I'm terrified of it.
Nathan: Talk about your financial conflict of interest. You're throwing out that everybody else is sponsored, and you've got direct investments in military technology. Yeah, 100 percent. That is somehow not a conflict? Why is that supposed to undermine me in the abstract, where I don't have any?
Josh: No, because I'm saying I'm afraid of the same reality that you're afraid of. But I'm investing in these companies because I want us to have it as opposed to the other side to have it. And what you're saying is that you're afraid of it and you want to get a bunch of words on a sheet of paper as a mechanism to try and prevent that eventuality from coming to pass. And I'm telling you, words mean nothing.
Nathan: Well, they're a start. I mean, were you, do you think Reagan was making a huge mistake when he engaged in arms reduction treaties with the Soviet Union? Was that a terrible idea because words mean nothing? As far as I know, the actual number of deployed nukes came down dramatically. And while not nearly enough, I would say that's a very good thing. Are you like... Why is that not possible to execute something similar in the AI era?
Josh: Because so many other things were happening at the time that caused those things to happen, like Soviet economic collapse, overmatch, extension of their military-industrial complex into expeditionary, you know, conflicts in Central Asia, et cetera. Like, I don't think that the story that you've told yourself about why things happened is a reflection of reality.
Nathan: I'm not telling myself any story. I'm just saying there is precedent for arms reduction treaties. There is precedent for arms control. Still very few nations in the world have nuclear weapons. Many could develop them if they chose- I think we're doing a great job of arms control.
Josh: Limiting the chipsets that go to the Chinese Communist Party. I think that's a great start. Let's keep doing it.
Nathan: What's your plan? How are we not all going to be living under the threat and possible actual reality of a militarized AI arms race? How do we not end up there? Because if we end up there, it's bad for everyone. I mean, we could all die in a nuclear war anytime, right?
Josh: Oh my God, the Sweet Meteor of Death is coming.
Nathan: Do you deny that? I mean, if you're going to say, like, oh, the nuclear sword of Damocles that we have is no big deal, then I think that's just ridiculous. Like, there's some finite probability every year that we could have a nuclear Armageddon. That is going to, in the fullness of time, end our species if we don't deal with it in some other way. The probabilities are going to accumulate and there's just no escaping that. The only thing we can do is decommission the weapons. So you want to decommission all nuclear weapons? I think we should decommission a very... I think there is maybe a small amount that could be useful for deterrence. We're way beyond that. We do not need to have the capacity of nuclear weapons to destroy the world fully. And we do. And it's a huge strategic blunder. The public... everybody who doesn't overthink it knows that the world is not in a great spot for having 20,000 deployed nuclear weapons. So what is the right number? It could be zero. It could be like a small number that's just, like, enough to make sure nobody fucks with you. Fine. But we're not in a healthy place, right? We've had many close calls. We've had many sort of false alarms. You know, we've got Petrov Day that we celebrate because one random dude had the backbone to override what his signals were telling him at a critical moment. Who knows how close we came in the Cuban Missile Crisis, but we just can't keep living with this persistent threat of annihilation forever. It will one day catch up with us. So if we're going to add another one with AI, that seems to me very bad. And, you know, my question to you is, what's your plan?
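The "probabilities accumulate" argument is just compound probability. A minimal sketch, with a purely illustrative annual risk figure, not a number anyone in the episode proposes:

```python
# With an independent annual probability p of catastrophe, the chance of
# surviving n years is (1 - p)^n, which decays toward zero as n grows.
# p = 0.5% is an arbitrary illustrative assumption.

def survival_probability(p_annual: float, years: int) -> float:
    return (1 - p_annual) ** years

for years in (50, 100, 500):
    print(years, round(survival_probability(0.005, years), 3))
# -> roughly 0.78 at 50 years, 0.61 at 100, 0.08 at 500
```

Whatever the true annual risk, as long as it stays fixed and nonzero, the cumulative odds of avoiding catastrophe head toward zero, which is the point Nathan is pressing.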
Josh: Yeah, I think you have no way, no frame of reference for how reality actually works. No, I mean, like, I mean, I genuinely mean that. Like, I think you're well at it.
Nathan: What's your plan? You can insult me, but what is your plan? How do we get to an AI prosperous future that is not a mutually assured destruction? We're going from MAD to MADE.
Josh: Yeah, the Mayan plan was very interesting for how to stave off these types of cataclysms. When you defeated adversary tribes, you brought their warriors back, put them on the throne, split open their chest, removed their hearts, and allowed their blood to, you know, fall out on the temple.
Nathan: Great analogy. What is your plan? What is the plan? What is the Trump plan? What is your plan? Give me a plan. I'm at least giving you a plan. You're just insulting me by comparing it to the Mayans. I have not heard any plan.
Josh: I have no connection to the President. I'm not a part of the campaign. I run a private company right now, so I can't speak for the President.
Nathan: Make it your plan. What is your plan? If you were president, if you were advising, whatever hypothetical, just tell me a plan. What is the plan for a good outcome that doesn't, I'm not asking for another round of insults against me. I'm asking for a plan. Give me an outline of one.
Josh: You think that the way in which policy gets made is that people screaming loudly get to elicit some type of formulated structure that responds to their queries. That's not how things work in the world. I'm telling you...

Nathan: Okay, but what's your plan? You're just doing it again.

Josh: Continue the pressure on the Chinese Communist Party and shift semiconductor production to the United States. That is a plan on one issue, an actual issue that people talk about. Not this thing that you're talking about, not this "I'm afraid, please comfort me."
Nathan: That's just a step on the path. I mean, I do think we should have some domestic chip manufacturing capability. It's not a good situation that it's all offshore, for many reasons beyond the AI arms race; we should be able to make our own chips. I think we can agree on that. However, as presented, that is not a plan to reach some sort of stable AI future. As it's been framed, it is one move on the path to an AI arms race.
Josh: Your structure for approaching this problem of a stable AI future comes, again, from a place that I don't accept. It comes from a set of experiences that I have no exposure to. Like, that's not how people think.
Erik: Just to sort of add to that, Nathan: earlier you said that if someone isn't afraid of AGI, then this conversation is a little bit moot, because they're not super worried about an AI arms race in terms of things getting out of control with the technology and its threat to humanity. They're mostly just concerned about beating China. And so Josh's proposal is consistent with that, which is sort of to shore up our domestic capabilities. Is that how you perceive the situation too, Nathan? Or, when you flesh it out, what exactly is the concern on the AI arms race that you have?
Nathan: Well, honestly, I just watched this Slaughterbots thing again recently. I think it's a very good short visualization of what the future might be like.

Josh: It is.

Nathan: And, you know, we have many good pieces of fiction that can inspire good thinking. Notably, the movie Her is inspiring AI developers right now in their product development. Slaughterbots depicts a future of out-of-control, highly weaponized AI, one where everything is destabilized because everybody is under constant threat of assassination. You've got tiny little autonomous things that can take out any target, and they swarm to overwhelm defenses. And this is the trajectory that we're on, right?

Josh: 100%.

Nathan: Right, so this is bad. So we can sort of dismiss people as naive who want to avoid that future, or...

Josh: Or do you want it controlled by the United States?

Nathan: I don't think either party can control that sort of technology or dynamic. And so I think the only way that we're going to have a good future...

Josh: So you're in favor of continued sanctions?

Nathan: No, I'm in favor of working together to avoid that branch of the technology tree.

Josh: What does that mean practically? Is it mutual sanctions?

Nathan: Well, I think it's a many-step process. I mean, you've got to start by building some trust, right? The two powers right now don't trust each other. We are locked into a period of mutual escalation, mutual decoupling. And the first thing is to extend some olive branches to try to reverse that process, so that there can be some form of trust building, so that the world's two great powers...

Josh: What olive branches do you propose we send to our largest trading partner?

Nathan: In any complex thing like this, I don't think there's a single one-sentence answer. I would first change our attitude, and I would start...

Josh: You want to send them the advanced chips? So that instead of having one party that can do this, you want two?

Nathan: I think that we should try to work much more together as one party than as two.

Josh: Is that going to be easy to achieve?

Nathan: I don't think so.

Josh: With a communist party?

Nathan: The communist party has been many things over time, right? I mean, we currently have a leader there who we don't like. We also had previously Deng Xiaoping. Before that, we had Mao. Their leadership can change just like our leadership can change. I don't think we should cast ourselves as their permanent enemy, or vice versa, because who knows what openings there may be in the future. Nixon went to China, right? There is the possibility of much better relations between the countries to come, and we foreclose that possibility to our own detriment, I think.
Erik: Nathan, do you concede the risk of doing that, given that they may take advantage of us like they have for many years? There have been lots of efforts at trying to liberalize or create good relations, and they haven't always been responded to well or met reciprocally, to put it mildly. And that hasn't been great for us, right? A big critique of our policy over the last 20 years is that we didn't take the threat seriously enough or soon enough, and thus we enabled them to build and gain a lot of power. Now they're a great power alongside us. It's a great 5,000-year-old civilization that had a down century, but 30 years ago the economy was very different. And so some people might be listening and asking whether this is just repeating the same mistakes we've made before: not treating China like the sort of threat that it is, and letting them...
Josh: Go listen to the private speeches of the Chinese Communist Party, the ones they give inside their private party conferences. Listen to how they talk about the United States and ask yourself if that's a country that you think we ought to be doing favors for. I don't mind trading. I don't mind even swaps. I don't mind exchanging money for goods and services. But ask if you think we should be doing them favors after reading what they say about the United States in private.
Erik: What would we learn, Josh? I haven't heard the speeches. What would we learn from them?
Josh: They do not think that we are their friends. They think that we are their enemies. Here, how about this? Don't believe me on it. The RAND Corporation put out an amazing report in 2015, 2016 called Systems Confrontation, Systems Destruction. It is an accounting of the current thinking of the leading theoreticians inside the Chinese Communist Party's military apparatus, the People's Liberation Army: basically, how they think about competition with the United States. It's the most terrifying thing you'll ever read. It's written by students of the two colonels who now run the think tank of the People's Liberation Army, the ones who wrote Unrestricted Warfare in, I believe, 1999 or 2000. So current thinking inside the Chinese Communist Party is to disassemble the United States along hundreds of vectors, from our ability to use language, to our ability to govern ourselves with laws, to explicit military capabilities and tasks. I mean, just read the RAND report. Read Unrestricted Warfare. I don't know how else to try and tell the audience that these are adversaries. We can collaborate, we can de-escalate, but this is how they think about America: as something to be destroyed, something that's in the way of global Marxist revolution with Maoist tendencies. They buy off politicians. They will change national laws. It's not good. And it's not even a matter of conceding. I'm convinced that many of the technologies that you're concerned about, Nathan, are coming. By telling you about companies that I've invested in, what I'm explaining to you is that on that line we think very similarly: these things are happening. I don't know that they're inevitable; startups can always fail, and military technologies have a long history of just not working or not getting adopted. But the point is, yes, these are things to be concerned about. This formulation of, let's get some type of alignment with the Communist Party, though, is to me bizarro land. It's not how national policy, at least in my experience, gets made, and I don't think it's how good national policy gets made. I think that you look at things very minimally, right? You look at a trade issue, you look at a diplomatic issue. You try to do something big and you run the risk of taking things off the rails in a very strange way, right? There's billions of dollars floating around the economy for AI influencers to be talking about these types of things. Some of that money comes from places that we know. Some of that money comes from places...

Nathan: If you could point me to any pockets of it, that would be much appreciated.

Josh: Like $600 million into this thing. Seriously, I'm sure you could get some non-resident fellowship at one of these institutes or something like that. The trick is to start writing articles and then see if you can get on the conference circuit. Like, that's where the money starts.
Erik: Let me rephrase one thing, and I want to be mindful of time, so we'll get you both out in a few minutes. But putting aside the AGI stuff, you're in agreement that these sorts of military technologies are getting stronger and stronger, and that more countries are going to have more capability to do damage. And I'm curious: do you think that world is sustainable? Do you think there needs to be some global sort of decommissioning, or...?
Josh: 10 years ago in graduate school, I wrote a paper about the declining barriers to entry and exit in the marketplace of violence. I firmly believe this, right? I don't like it, but it's happening. Like, I literally wrote a short story about an assassination by drone in, like, 2015. Published it. So I completely agree: these things are very dangerous. The world is getting more dangerous, and there are very few things that you can do as a nation state to affect the security of your own country and of the world. Okay? I understand that there is this desire to try to bring people together, like, can't we all just figure this out? But orthodox Marxist doctrine does not work like that. They will lie up until the moment that they slit your throat. This is what they've done in every country they've taken over. This is what they do on the international stage. It is just how these systems are structured. Okay. The United States is this gem of a political system that accords more freedom to individuals than any other country in the past 1,500, 2,000 years, however you want to measure these things. You could say ever; people would debate that; fine. And so from my perspective and my experience, what I'm telling you is: those types of grandiose desires have no attachment point for national policy. You can ask for it, you can want it, but I've never seen it. You can get national proclamations or whatever, but when the rubber hits the road, that's just not how things happen. I'm telling you that my recommendation for how we run the country is to take very narrow approaches to technical competition and economic competition. And the best way to do that is to act in the best interest of the American people. Okay? And so when you have a counterparty that has stolen trillions of dollars of intellectual property, that actively dumps subsidized raw materials and finished goods... look at what Huawei has done around the world. It's essentially a SIGINT system for the Chinese Communist Party, by the way; they've all but admitted this. You just have to deal with things in very specific cases. Even though we understand that there is some possibility, out in the future, of some terrible world, I get it. I'm mindful of it as well. But to say, hey, kumbaya, we're going to give them all the chips, we're going to work together? I'm just telling you, you can't trust Marxists. Yeah.
Erik: I want to be mindful of people's time. So I'll give Nathan a closing statement. And Josh, if you want another one, you're welcome to.
Josh: I've said enough. I've said enough. Nathan, please.
Erik: Yeah.
Nathan: I mean, you may say that I'm a dreamer. I'm reminded of the perhaps apocryphal Einstein quote: we don't know what weapons World War III will be fought with, but we know that after it, we'll be back to sticks and stones. The argument that avoiding this just isn't possible doesn't cut it against the magnitude of the threat, a threat that it sounds like you also recognize. I would love to hear a plan from either candidate for how we are going to avoid sliding into an AI arms race, another technological sword of Damocles hanging over the entire global population, and perhaps, at some point, a World War III that we can't recover from.
Erik: So some people would call what's happened with nuclear weapons a success, in that we haven't yet had a major nuclear conflict since World War II. But you just think conflict is inevitable? You think it's a failure?
Nathan: I think it's a terrible failure. Yeah. I mean, we have way too many. It hasn't been that long, and we've had a number of close calls. In the same way that we had this pandemic and don't seem to have learned much, and aren't taking the appropriate next steps to be ready for the next one, we've had a bunch of close calls with nuclear weapons and have basically just said: guess there's nothing we can do about it, that's just life. I just don't accept that frame on any of these big questions. I think we can and should strive to do better. And if we want to be here in thousands of years, let alone millions of years, we'd better.
Erik: Thank you both for engaging in this very important conversation. Nathan, Josh, thanks so much.
Full Transcript
Nathan Labenz: (0:00) Hello and welcome to the Cognitive Revolution, where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week, we'll explore their revolutionary ideas and together we'll build a picture of how AI technology will transform work, life, and society in the coming years. I'm Nathan Labenz, joined by my cohost, Erik Torenberg. Hello and welcome back to the Cognitive Revolution. This weekend, we're running 2 episodes which originally appeared on the Moment of Zen feed focusing on the election in which I attempt to give an honest hearing and questioning to 2 different AI forward cases for Trump. I like this exercise because in my honest opinion, this election is ultimately a referendum on Trump. My interlocutors are, in the first episode, Samuel Hammond, senior economist at the Foundation for American Innovation and a thinker that I generally very much respect. We cross-posted his appearance on the Future of Life podcast last year, and I also really appreciated his 95 theses on AI, most of which I agreed with. In the second episode, I speak with Joshua Steinman, Trump's National Security Council senior director for cyber policy and deputy assistant to the president from 2017 to 2021, the entire Trump term. Before launching into it, I'm gonna briefly share where I've landed on Trump, with respect to how I expect a possible Trump presidency might relate to AI development. If you see AGI as a real possibility in the 2027 time frame, it seems totally reasonable to consider the election's impact on AI as a major decision making factor. Of course, I understand that people have other priorities, but this is not a politics channel. So I'm not gonna offer my opinion on every issue, just AI and AI adjacent issues. Interestingly, as you'll hear, I find that on a number of these AI adjacent issues, I agree with the Trump supporters that I talked to. To name a number of them, nuclear energy is good. We should build more nuclear plants. Population decline is a problem and does merit real concern. Freedom of speech is valuable and should be protected. On the current margin, we should have fewer rules and more right to build on one's own property. We should cultivate a culture of achievement. We should aim for an age of abundance and dismiss degrowth. And it makes sense to prioritize high skilled immigration, at least to some degree. Finally, American companies like Amazon, Google, and Tesla should not be allowed to abuse their market power at the expense of consumers, but neither should they be subject to government harassment just because they're out competing many legacy businesses. Fortunately, it does seem that the Democratic establishment is coming around on at least a number of these. But regardless, there are 3 big reasons that I cannot ultimately support Trump despite these important points of agreement. Those are, 1, he is far too inclined to escalate tension with China, accelerate decoupling, and prioritize his own narrow domestic political interests over the national and global interest. 2, the lack of rigor in his thinking and discipline in his communications seems like a recipe for unnecessary risk taking in an increasingly volatile environment. And 3, I believe we are far better off approaching the future with a positive sum and inclusive mindset, not just within the US, but globally, if we're to have a healthy conversation about a new social contract that befits the AI era. 
On the question of US China relations, I think we have a general failure of leadership and vision on both sides, unfolding slowly but gathering more momentum all the time. People now see adversarial relations with China as a foregone conclusion. Joshua Steinman calls it the physics of the environment. To put it plainly, I do not accept this. Conflict with China would be a disaster, and an arms race would take us ever closer to that disaster. But I don't see this as an inevitability because I don't see China as a meaningful threat to America, Americans, or the American way of life. That's not to say that the Chinese government hasn't wronged us at times. Their cover up of early COVID, whatever its origins, was definitely shameful. And, obviously, Chinese agencies and companies have stolen a lot of intellectual property from American companies. I don't think we should ignore that. And, of course, we should take steps to make ourselves less vulnerable, but I think we should stay level headed about it. The possibility that our grandkids could be speaking Chinese 1 day seems far more remote to me than that AI destroys the world. I would say that the Biden administration has done okay on AI policy domestically. The 10 to the 26th threshold for reporting requirements has aged pretty well for a 2023 rule, and I do believe in some strategic industrial policy as well. Subsidizing the building of new chip fabs in the US so that we're not so easily disrupted by a Chinese attack on Taiwan seems under the current circumstances a prudent step for a great power to take. At the same time, the chip ban still feels wrong to me, and I have to admit that Kamala's rhetoric on China also depresses me. There's simply no way for China to understand recent US statements and actions other than as an attempt to keep them down. And for me, this escalation seems premature at best. Given the shape of an exponential curve, we could have retained the option value of cutting them off from chip sales later and still the bulk of the total hypothetical chip sales would have remained in the future. Relatedly, I've been really interested to see Miles Brundage, OpenAI's recently departed head of policy research, saying that we need to make a much more proactive effort to demonstrate that Western AI development is benign, which of course is much easier to do if it actually is benign and to some degree open or otherwise shared. If we must frame our relationship with China as a competition, I would love to see us raise them on metrics like life expectancy improvements, the number of diseases we can eradicate, or perhaps the tonnage that we can send to Mars. Of course, I do understand that it's a complicated situation, that naive solutions won't work, and that real solutions will be both nuanced and difficult to find. And I do intend to invite more China experts onto the show going forward in an effort to more substantively contribute to a positive vision for the future. For now, I wish that Kamala and Democratic leadership in general were both less hawkish and more visionary. But considering just how much trust has already broken down and how difficult it's going to be for the 2 countries to make credible commitments to 1 another, I'd rather have a steady hand who is more predictably going to follow a sane consensus path and might actually make some arms control type deals even if politically costly for them to help us navigate a tricky period as well as we possibly can. 
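To put that 10 to the 26th reporting threshold in perspective, here is a rough, purely illustrative calculation; the per-chip throughput, utilization rate, and cluster size below are assumptions, not figures from this episode:

```python
# Rough scale of a 1e26-FLOP training run; all constants are illustrative assumptions.
THRESHOLD_FLOP = 1e26          # the reporting threshold discussed above
FLOP_PER_SEC_PER_CHIP = 1e15   # ~1 petaFLOP/s, roughly a modern accelerator at low precision
UTILIZATION = 0.4              # assumed fraction of peak sustained in large training runs
N_CHIPS = 10_000               # assumed cluster size

seconds = THRESHOLD_FLOP / (FLOP_PER_SEC_PER_CHIP * UTILIZATION * N_CHIPS)
print(f"~{seconds / 86_400:.0f} days of continuous training on {N_CHIPS:,} chips")  # ~289 days
```

On those assumptions, the threshold corresponds to most of a year on a ten-thousand-accelerator cluster, which is why it functions as a frontier-scale tripwire rather than a constraint on ordinary development.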
Trump, I think it's well established, will do anything to avoid looking weak, has credibility with rival countries perhaps when it comes to threats, but not when it comes to positive follow through. And for me, his withdrawal from the Iran nuclear deal, general taste for inflammatory rhetoric, and stated plans for blanket tariffs, which will hurt the American consumer in order to vaguely somehow stick it to China, all suggest to me that he is overwhelmingly likely to make things worse. Zooming out from US China relations, while I agree with Sam Hammond when he says that the more fucked you think we are, the more willing you should be to roll the dice with Trump, I don't actually think that we are all that likely to be fucked. I've been a bit ambiguous about this over time, often saying that my P(doom) is 5 to 95. And I've meant that to reflect the fact that while nobody has convinced me that we don't need to worry about AI risk, neither has anyone convinced me that it's inevitable. After all, while we don't really know how they work and we see all sorts of surprising and sometimes alarming capabilities, today's best AIs do understand human ethics and values quite well and seem to be getting at least a bit more aligned with each passing generation. This may not continue and we should absolutely be vigilant about it, but this is a much better position for 2024 than most AI safety people expected 5 or 10 years ago. Today, facing a decision like this referendum on Trump, I recall the words of a wise friend who told me that we should think less about what the probabilities currently are and more about what we can shift them to. Here, I have to say, with competent, stable leadership, I believe we can steer towards scenarios on the lower end of that range, where the nature of the risk is more intrinsic to the technology itself and less the result of letting domestic political incentives lead us toward imprudent escalations, AI arms races, or catastrophic impulsive decisions. I often think of the role that Kennedy played in the Cuban Missile Crisis. My understanding is that he overrode the recommendations of his military advisers to insist that the United States would not escalate to nuclear war first. That was heroic, but scenarios in which executive authority really matters can cut both ways. When I imagine Trump versus Kamala in moments of intense crisis where a single decision could alter the course of history, I have to say, I find it much more likely that Trump would impact things substantially for the worse than substantially for the better. After all, we all saw how he handled COVID. To be clear, Kamala has not impressed me on the topic of AI either. And in general, her track record suggests not the great foresight of excellent leadership so much as the tendency to follow local trends and incentives. We absolutely could hope for better. But still, if I have to choose a leader for a potentially highly volatile period of time, I'll take the stable, sane person who will listen to the expert consensus, even acknowledging that the experts could be wrong, rather than betting that Trump will somehow manage to override the experts in a positive way. You'll hear my conversation partners make the case, which I won't attempt to summarize here for fear of doing it too poorly, that Trump represents our best case to break out of a broken consensus and revitalize the American state for the AI era. In the end, I just don't see it. It sounds to me like chaos when we need capability. 
Finally, when it comes to the future of American society and the world at large, I think we have a never before seen opportunity to adopt a positive sum mindset, create a world of abundance, and ultimately update our social contract. I think OpenAI and other leading companies do have roughly the right vision when they talk about benefiting all humanity. And I think Sam Altman, for all the other criticisms that I've made of him, absolutely should be praised for his experiments in universal basic income. While neither candidate has shown this kind of vision, Kamala at least aims to speak and extend opportunity to all Americans. I thought her best moment of the recent debate was when she said that when Americans look at 1 another, quote, we see a friend. This is at least something of a foundation on which to start building a shared positive vision for the future. Trump, of course, is far more zero sum in his thinking and negative in his outlook, and that does have real consequences. I grew up in Macomb County, Michigan, 1 of those bellwether counties that swung hard from Obama to Trump. And I also have family in Ohio. My beloved mama and papa belong to the same cohort as JD Vance's grandparents. They moved from rural Kentucky to Southern Ohio for jobs, the whole thing. To be totally honest with you, 1 thing that I have seen for myself is that Trump has brought out the worst in a lot of people. Sure. JD Vance, Elon Musk, and others in Trump's orbit are no doubt more sophisticated thinkers about technology than Trump himself. But still, I cannot imagine this brand of cynical populist politics could possibly lead to a healthy national conversation about adapting to AI as it, let's face it, is going to be disrupting a lot of people's jobs, let alone reimagining what it means to contribute as a citizen or to live a good life. It would be shameful if we ended up hoarding the benefits of AI or restricting access for non-Americans due to some sense of scarcity that isn't even justified by the fundamentals. But unfortunately, that's the direction that I would expect Trump to take us. Ultimately, the idea that Trump could be president as AGI is first developed strikes me as an imprudent move to say the least, with far more, and more likely, downside than upside. By all means, listen to these conversations with an open mind and form your own judgment. But for my part, I can't support putting a loose cannon in power as we head into such a potentially pivotal period. And so I will be voting for Kamala, primarily as a rejection of Trump. Hello, sir. Yo. What's up?
Erik Torenberg: (12:10) All good, man. Good morning. Good to see you. Thanks for doing this. Still waiting a minute for Josh, but I thought we'd, we'd get started. Any any quick reactions to the to the last episode that we did on this same topic? Josh, you know, worked for for Trump, so it brings more of a personal, you know, insight or connection than more of a, you know, sort of abstract sort of think tank view. But before getting into it with him, I was just curious any any reflections or reactions from from talking to Sam or how you've been thinking about the topic since.
Nathan Labenz: (12:41) I did go back and listen to the whole thing, and it was a little weird. I don't know. I felt like I kind of kept getting lulled into these scenarios of, like, all the great things that the, you know, highly competent Trump administration of our dreams might do. And then I look at the actual, you know, elect election as it's unfolding, and it's like, I just don't see the evidence in the actual candidate or, like, the way that they're executing a campaign to believe it. You know? And I also feel like there's this weird I mean, politics is, of course, full of, like, contradictory messages, but I feel like there's a weird 1 happening where, you know, the the criticism obviously and I don't even care about this too much, but, you know, the talking point on the Republican side from, like, the sort of popular surrogates is like, who's the president? We have no president. The president's incompetent. Whatever. Meanwhile, we've got Trump. He's a strong singular figure, you know, and his whole appeal is about what a strong, you know, irreplaceable figure he is, and only he can fix it and so on. And that seems to be, like, what the large majority of his voters believe about him. But then when we get on with Sam, it's like, oh, well, the president doesn't really do that much. You know? It's like, it's actually all the people that he's gonna appoint that are really gonna matter. And then so I'm like, well, which is it? You know? And is this sort of a if that's the real story, are we just kind of lying to the voters, which I guess, you know, again, maybe we're maybe all the candidates are sort of lying to the voters in some ways. But I actually tend to think that the person probably matters. That that seems to be my default position. That's certainly, like, what the constitution says. So I don't know.
Erik Torenberg: (14:27) Let me let me segue and introduce Josh. Josh, thank you for thank you for joining.
Nathan Labenz: (14:32) Hello, sir.
Erik Torenberg: (14:35) Josh is a...

Joshua Steinman: Nice to meet you, Nathan. Hey, Erik.

Erik Torenberg: Yeah. I'm I'm lucky to to be a collaborator with Josh in that I'm on the Galvanick cap table, but Josh is also a friend and someone who helps me make sense of what's happening in politics. Josh previously served in the Trump administration, and so I thought it'd be great to bring him on and have this conversation as well. I briefed him that we previously had a conversation, and I think this is a good 1 because I think, Nathan, you represent a lot of people in this country who are first principled, you know, not tribal, and really just trying to call balls and strikes as you see it. And while you don't love, you know, everything that's happening on the Democratic side or left side, of course, there's something about Trump that just makes you uncomfortable. And I don't mean to dismiss that. I'm just saying that that is deeply unsettling in terms of the risk that he presents. And
Joshua Steinman: (15:31) Sorry. What what what risk and what is it that unsettles you?
Erik Torenberg: (15:35) Let's get into it.
Nathan Labenz: (15:38) Well, I focus all my time and attention pretty much on AI, and I think we may well be headed for a short term situation in which AI systems become extremely powerful and pose all sorts of unprecedented challenges.
Joshua Steinman: (15:56) On what time horizon?
Nathan Labenz: (15:57) Potentially as soon as the next 2 to 3 years. So, you know, very much in
Joshua Steinman: (16:02) You don't think the grid at all?
Nathan Labenz: (16:04) I mean, anything's I think the the window of possibility is very, very wide open.
Joshua Steinman: (16:09) I'm just saying, like, a bunch of folks that I really like have said that essentially The US energy grid can sustain current rates of growth of AI power consumption until about 2026, and then essentially run out of power. So, I mean, are you talking about in that window?
Nathan Labenz: (16:24) Possibly. I mean, that would be the near end of the window. You if you listen to somebody like John Schulman, who was the head of post training and 1 of the cofounders at OpenAI, he was recently on the Dwarkesh podcast and said, you know, yeah, this could happen as soon as next year, this being, like, AGI, probably an early, not, you know, superintelligent AGI. But, nevertheless, something that I think could be profoundly, you know, altering of all sorts of dynamics and power structures, you know, within and across countries. And Dwarkesh was like, you mean next year? And he's like, well, that would be kind of a surprise. More like 2 to 3 probably. And I was like, that's still really soon. You know, 3 is only 2027. So,
Erik Torenberg: (17:04) yeah, I mean,
Nathan Labenz: (17:05) I don't know. The energy question is really interesting. I see huge efficiency gains happening all the time, and I tend to think a lot of these analyses don't take that into proper full account. But it's hard to say. You know? I mean, you do you can only see so many, like, 10 x efficiency improvements before you're like, jeez. Unless these are, like, fake or they somehow don't work, you know, when it really matters, then it seems like we probably will have enough energy. I've done a lot of an energy analysis just in terms of, like, offsetting as well. You know, how many chats do you have to have with with a model before it takes as much energy as, like, 1 crosstown car trip?
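Nathan's chats-versus-car-trip comparison is easy to sanity-check with a back-of-the-envelope calculation; every figure below is a rough ballpark assumed for illustration (per-query energy in particular is debated and varies widely by model):

```python
# Back-of-envelope: how many chatbot queries equal one short crosstown drive?
# All constants are rough illustrative assumptions, not measurements.
WH_PER_QUERY = 3.0       # often-cited ballpark for one LLM chat query, in watt-hours
KWH_PER_GALLON = 33.7    # energy content of a gallon of gasoline
MILES_PER_GALLON = 30    # an ordinary car
TRIP_MILES = 5           # a short crosstown trip

trip_wh = TRIP_MILES / MILES_PER_GALLON * KWH_PER_GALLON * 1000  # ~5,617 Wh
queries_per_trip = trip_wh / WH_PER_QUERY                        # ~1,872 queries

print(f"One {TRIP_MILES}-mile drive ~= {queries_per_trip:,.0f} chat queries")
```

On those assumptions, a single short drive uses as much energy as a couple thousand chats, which is the flavor of offset Nathan is gesturing at.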
Erik Torenberg: (17:46) I feel like we're sort of going
Joshua Steinman: (17:48) so just just to be super clear. So the thing that concerns you is what about Trump with regards to AI?
Nathan Labenz: (17:54) Tail risk. I think being...

Joshua Steinman: What tail risk? Like, what tail what tail risk?

Nathan Labenz: Creating an arms race with China. Creating an AI arms race with China.
Joshua Steinman: (18:03) Aren't we already in it? I mean, China's gonna 3x theirs.
Nathan Labenz: (18:06) I think we I think we're gonna figure that out over the near term. I mean, not necessarily. I think that is probably, you know or has a very good chance at least of being the key question that political leadership, like, on both sides is gonna decide. If you can believe the reporting, which is, like, hard to say, of course, we have recent comments from Xi suggesting that he might not be inclined for an arms race, and he he does seem at least open to taking things like existential risk from AI seriously. You've got Chinese Turing Award winners also coming out recently and joining American Turing Award winners with statements about, jeez, we might really need to slow this technology down. Like, maybe we can have a international treaty to not create slaughterbots. I don't think any of those things are inevitable. I think if we say, oh, we're in an AI arms race with China, then we're probably fucked. And then who cares who's president arguably? But I think I'm, like, a 1 issue voter. If if any candidate will say, I'm gonna do everything we can to not have an international AI arms race and to try to make AI a peaceful technology.
Joshua Steinman: (19:14) Are you familiar with previous are you familiar are you familiar with previous arms races with other competitive aspiring global hegemons?

Nathan Labenz: (19:25) I mean, somewhat. You know? I don't know how many arms races you have in mind, but I
Joshua Steinman: (19:29) Do do you think that do you think that countries tell the truth to each other when thinking about national security?
Nathan Labenz: (19:35) I think it's very hard, but I don't if I mean, again, if you're gonna just bake in an AI arms race, then I think, you know, from my perspective, it's that's kind of the end of the story. I don't think you know, to take 1 earlier...

Joshua Steinman: (19:48) Have you seen have you seen the Chinese Communist Party's plans to 3x its total power output to 7 terawatts a year in the next 15 years?
Nathan Labenz: (19:57) Yeah. That's great. I mean, they have a lot of people still living in rural poverty. So, you know, they've they got plenty of uses for that, and I wish them well on their, you know, power expansion. I also would would support, you know, at least some amount of power expansion here. I would love to see us build nuclear reactors. I'm not somebody who is, you know, anti growth or, you know, anti progress. I call myself an adoption accelerationist when it comes to AI.
Joshua Steinman: (20:21) Would you privilege Xi's words or actions when judging whether or not they're already engaged in radically expanding their capacity to to compute?
Nathan Labenz: (20:30) Well, we have cut them off, and this is a Biden policy, so I'm I don't blame Trump for this. But we have set the tone in this dynamic most recently with a dramatic escalation in the AI domain specifically by saying, we are not gonna sell you leading chips. And so, of course, they're responding to that by saying, well, shit. If you're gonna cut us off, and now we're hearing all these, you know, comments from every which angle about arms race and, you know, decisive strategic advantage that's gonna be achieved by AI. Of course, they're gonna be trying to figure out what they can do to avoid that. But, I challenge that dynamic. I don't want to see us in this AI arms race. I think we can begin. We should try to build trust.
Joshua Steinman: (21:12) So you wanna see a candidate who's gonna allow people to buy whatever chips they want. I mean, is that do I do I get this correct? Like, that's what you want? You wanna take off the sanctions. You wanna allow them to buy advanced compute. Why?
Nathan Labenz: (21:30) I want to build trust. I think that if we end up in a an AI arms race and we end up, seeking strategic advantage over each other, we are going to all lose.
Joshua Steinman: (21:40) I'm asking a very specific question. What task do you want someone to accomplish?
Nathan Labenz: (21:46) Avoid AI arms race with China. Do want them to check? Share benefits now.
Joshua Steinman: (21:51) No. But you you just you just said you just said what you want is to take off the take off the sanctions and let the Chinese buy advanced chips, as necessary.
Nathan Labenz: (22:00) I don't even think that's necessarily true. I I think that, in fact, what we're seeing in the research even from this week with a a recent potentially game changing breakthrough for better or worse and probably both is that distributed training is now starting to work. So the whole paradigm this is why I also don't fully believe the energy story.
Joshua Steinman: (22:21) I've got a buddy who thinks he can train at 1 tenth the cost using FPGAs. I have I'm under no illusions that we need advanced advanced chips to train, you know, crazy models. So okay. So so you want rhetoric. You want you're looking for you're looking for rhetorical change from a political candidate. Is that is that your request?
Erik Torenberg: (22:45) Josh, do you think that basically that we're in an arms race no no matter what and sort of this idea of of trust
Joshua Steinman: (22:51) Yeah. That's the that's the physics of the environment.
Erik Torenberg: (22:53) So it's sort of a trust building act.
Nathan Labenz: (22:55) No. It's not that is not the physics of the environment does not dictate I mean, unless you're a total unless this is some sort of total universal determinism argument where, like, we don't have free will in this situation, then, again, what are we even talking about? But if we if we have some sense of agency
Joshua Steinman: (23:12) Look at the actions of the Chinese Communist Party. Like, the Chinese Communist Party abuse.
Nathan Labenz: (23:18) Look at our actions. We are both currently escalating with each other at every turn. That is a choice that both political leadership regimes are making, and I think it's a terrible 1. I mean, the last arms race, you know, you you kind of raised
Joshua Steinman: (23:31) I I reject your I reject your premise, but I appreciate that you're trying to inject it.
Nathan Labenz: (23:36) What premise are you rejecting there?
Joshua Steinman: (23:38) That everything is completely escalatory. Like, this is just great power politics. This is welcome to the history of the world.
Nathan Labenz: (23:45) The history of the world is not on a good trajectory. I mean, how are we going to get to a good trajectory where we have peace between great powers and AI that serves us as opposed to AI that hangs over us all like a sword of Damocles as the nuclear arms race still does?
Joshua Steinman: (24:00) So you're interested in a candidate that will appease commercial powers inside China, and I'm just trying to understand what you want.
Nathan Labenz: (24:10) Yeah. I would I would go for benefit sharing sooner rather than later, I think. I mean, I don't know what the I'm you know, let's take the Aschenbrenner model as sort of the contrasting point of view. Right? Stylized story.
Joshua Steinman: (24:25) You have to talk to me like I'm 5. I I don't know what that means. I don't know what that means. Sorry. I'm a I'm a I'm a I'm a simple man. Explain it to me like I'm 5.
Nathan Labenz: (24:35) In his Situational Awareness manifesto, in more or less his words, he said: here's what I think we should do. We should take the lead that we have on China, jam as hard as we can, stay ahead, use all, you know, kind of available mechanisms to stay ahead, use the window of time that we have in the lead to solve alignment, make safe AI, achieve decisive strategic advantage, then we can go to China and have a conversation about benefit sharing. I would say, I don't like that plan at all. I would much rather see a plan that involves earlier benefit sharing and a collaborative approach to trying to solve the fundamental challenges.
Joshua Steinman: (25:15) So you wanna give more technologies to the Chinese. Is that right? You wanna give things to...
Nathan Labenz: (25:20) I mean, I would I would engage in trade with China. Yes. I I'm I'm I don't think the case has
Erik Torenberg: (25:26) They're our largest trading partner. What are you talking about?
Nathan Labenz: (25:28) Yeah. Well, we've just cut them off from perhaps the most fundamental resource in the world at the moment. So we're not we are in a period of decoupling. I would like to see us stay more coupled rather than continue to decouple from China.
Joshua Steinman: (25:41) Okay. So you're interested in closer alignment with the Chinese Communist Party. You're interested in giving them the tools to build the things that you fear the most. I'm just trying to understand
Nathan Labenz: (25:51) I'm interested in working together as a global community to develop AI in a in a positive way, not racing each other to achieve strategic advantage with AI over 1 another because I don't think that ends well for anyone. And it might not end well in any case.
Joshua Steinman: (26:04) Do you think a president of the United States should represent a global community or the citizens of the country that they're leading?
Nathan Labenz: (26:10) I think it's definitely a mix of both. I mean, you know, when you have global issues that affect everyone and that
Joshua Steinman: (26:15) Should there be a priority? Should should 1 take priority?
Nathan Labenz: (26:18) I think it depends on the issue. I mean, there's when it comes to a pandemic, we're all in it together.
Joshua Steinman: (26:24) Give me an issue where there should be parity in between president's evaluation of options and judging the benefit to humanity vice the citizens.
Nathan Labenz: (26:33) Sure. Yeah. Right now, there's a monkeypox outbreak happening in Africa. If you're the president and you're sitting on a bunch of vaccines, you could say, well, we could send a bunch of vaccines to Africa and try to get that outbreak under control. That would be good for everyone in the world. Or you could say, let's just hoard those for ourselves. Fuck everyone else. We'll wait till it gets here. We'll all be vaccinated. Everybody else can can deal with it on their merits. I would vote for the former because
Joshua Steinman: (26:56) I I know of no 1 who's statistically likely to get monkeypox.
Nathan Labenz: (26:59) You don't know anyone in the Democratic Republic Of Congo right now, perhaps, but those people are out there. And I believe that we should prioritize the global good over a narrow self interest in cases where the global good is at risk.
Joshua Steinman: (27:13) I think you've got a candidate that you're gonna wanna support.
Erik Torenberg: (27:15) Hey. We'll continue our interview in a moment after a word from our sponsors.
Erik Torenberg: Let let me let me me zoom out really quick. This is a good debate because people we don't we don't hear this this this this debate, too too often. But I but I wanna get out from the weeds of this specific issue, which is obviously very important. And and, Josh, I wanna hear from you a little bit about your experience working with with Trump because there is a representation of of who Trump is, what it's like to work with Trump. And from our private conversations, you said that that is different from from your experience. So I I would like you to articulate what what is your perception of how other people perceived sort of the the previous Trump administration and and and Trump as a person, and then I'd like to hear from you where there's overlap and and where there's difference.
Joshua Steinman: (27:59) Yeah. He's a really sharp guy. So, you know, I worked for for 4 years. I was the senior official on the National Security Council coordinating all of our cyber telecom supply chain and cryptocurrency policy. That meant that, essentially, when the president said this is what I want our policies to look like, it was up to me and my team to structure national strategies and then ensure that all of the departments and agencies, DOD, CIA, Department of Energy, etcetera, conformed and executed those strategies. So my office was at the White House. I had a small team that worked for me, and it was our job to coordinate how the US government functioned and what priorities it pursued. Yeah. I just found the president to always be thinking more steps ahead than I was, and it was a very humbling experience. Not that I'm, you know, some genius or anything like that, but, you know, often in meetings with foreign leaders, you know, you you look at the talking points that have been assembled by, you know, the sort of bureaucratic entities such as they submit them, and and President Trump would talk about things very different. And it was only after a day or 2 of, like, poring through a bunch of research that you realized he was talking about political economic priorities of the counterparty at the table. So I just think he's a really sharp guy, probably 1 of the smartest people I've ever met. I think that, you know, the challenge that he faces is that a lot of people aren't aren't that smart. And so you have to find a way to communicate and find common ground with folks, and I think he's a great communicator. Like, he's shown that over 20 years of being 1 of the leading, you know, TV stars of an entire generation, of having a huge real estate company and a bunch of other successful and some unsuccessful companies just like every entrepreneur has hits and misses. I was always really impressed and enjoyed working for him.
Nathan Labenz: (29:59) You're buying his latest digital trading cards going long on the, the Trump token. Oh, he's currently hawking. Unless this is like an AI fake, he's currently hawking digital trading cards for $99 a piece. Buy 15 and they'll send you 1 in the mail, physical 1?
Joshua Steinman: (30:15) You you don't like that.
Nathan Labenz: (30:17) I mean, I don't really care. It's just it's just absurd. I predict that that will be a miss on the entrepreneurial ledger.
Joshua Steinman: (30:22) AI crypto friction. I just love seeing it. That's cool. I I got it. Sorry for the back.
Erik Torenberg: (30:26) Josh, why do you think other people don't see that? Like, what is it about Trump that some people think he's he's very sharp and other people think, you know, he's a not a stable genius, you know, to to to quote the clip. Like, what is it about him that, you know, some people see the intelligence and some people don't?
Joshua Steinman: (30:44) Yeah. I mean, it's it was really eye opening where you realize that most of the world gets their information via a medium. Right? A media, 1 might say. And, you know, those mechanisms are under significant control. Not all of them are under control. And so what I usually find is you run this loop with people who think that they know what he's like or even what the policies are, which is that they read articles that don't represent reality. They make assumptions. And and so when you confront them with facts, they go back to this set of, media narratives, articles, press operatives, etcetera. And they say, well, no, that's not true because I read the following words on a website. And, you know, when you work in 1 of these places for a long even for a short time, what you essentially see is on a day to day basis, people actively either through ignorance or malice misrepresent reality, And you just learn to sort of it's a feature, unfortunately, of the system. So, you know, on on a on a weekly basis, I would see articles in the mainstream media. People would send me breathlessly like, oh my god, what's going on with x, y, or z? Read the article and, you know, it'd be a total fabrication or a misunderstanding of what was actually happening. Furthermore, and this is the most interesting part. I'll give you an example. This is amazing. So I was a military officer for many years, then I left and went to Silicon Valley. When I was in the military, I started a luxury American made CPG company. Not worth talking about. Anyway, I had a whole bunch of things that I did in the military. I got out. I went to a startup. I was, you know, running ops at this startup. And then just through a strange turn of events, end up at the White House. So 2 and a half, 3 years in, 1 of the senior national security correspondents, a guy whose name you know, whose articles you've read, has been begging White House comms to sit down with me for over a year. I wasn't 1 of these guys who, like, leaked to the media. I didn't really care. I've got a long list of things that I got done because I just stayed focused on doing the thing that he asked me to do. There were, like, 5 or 6 major things that he asked me to do, and I just went about and did them. But finally, in, like, year 3, 3 and a half, something like that, we're like, okay. We'll sit down with this guy. Literally, like, best selling author, writes for 1 of the top 3 newspapers in the world, whole thing, on TV all the time. He comes in and pulls out his latest book, signs the thing to Josh, hands it over. I'm hearing all these amazing things about you. Like, you've done this. You've done that. And, like, it's clear that he's talked to people and he knows what I've actually done. And we went on to have a very in-depth, very direct conversation for about an hour because he's writing this news story. You know, going going what I would consider to be relatively strategically deep. Like, why are you doing x? Why are you doing y? And me giving him, like, very specific answers. He has rejoinders to those, then I'm like, but x. And he's like, okay. Hadn't thought about that. So, you know, I found him to be a competent interlocutor. The story comes out, none of that in there. The only line of description, Steinman, a former sock entrepreneur in over his head. So you have these engagements with these people and you realize that essentially, like, a significant period of time, they're acting in not good faith. I wouldn't wanna say bad faith. 
And you just have to extrapolate that out to the news cycle. So when people are like, Oh geez, Trump's this, orange man bad. I'm just like, at this point, I can't help but laugh because these are like we it's like you're talking to someone in the cave. Like, I I just can't I can't there's nothing I can do, man. Like, you're in the cave. That's cool. Like, listen to the speeches. Like, go direct. That is how I've always tried to, you know, pull myself out of those types of, you know, situations of knowing. But I mean, friends, family, at this point, it's sort of like, I I can't help someone that wants to stay inside that cocoon.
Erik Torenberg: (35:12) What do you say to someone who says, hey. You know, you you had a great experience. There's some people who had who had a great experience. But this this you know, let's say we decide a lot of people or, you know, a few dozen people or something worked for Trump in the last admin who don't endorse anymore. You know, was you know, obviously
Joshua Steinman: (35:29) You wanna go name by name, and I'll tell you all the dirty laundry?
Erik Torenberg: (35:32) Let let let's do the dirty laundry.
Joshua Steinman: (35:33) You wanna find out who's paying them? Should we talk about the donors? Should we talk about the private equity firms? I'm happy to do it. Like, every single 1 of those people has skin in the game, and there's a very specific reason why they've done what they've done.
Erik Torenberg: (35:45) So so enlighten us a little bit, not on a name by name basis, but more on a macro you know? And, obviously, Kamala doesn't have people saying great things about her either who worked for her. So, you know, this is this is bipartisan.
Joshua Steinman: (35:56) Maybe the bartenders.
Erik Torenberg: (35:58) What what what so why well, you've mentioned some examples of of maybe some corruption. Give some of the macro reasons why people in the last administration or or who who worked for Trump don't don't don't endorse or don't have good things to say. What what was the situation there? And and why why would it be different in the future?
Joshua Steinman: (36:16) Why would it be different in the few I mean, there's a bunch of questions in there, I can sort of answer the 1 that I want. Look. It's real it's really powerful. It's 1 of the reasons why it's I think it's hard to be a member of the cabinet because when you walk into the situation room and you've got genuine disagreements and, like, Nathan has, you know, some some interesting kernels of disagreement that I think, you know, if we if if we had a different type of conversation, we could sort of pull on. I don't think we're gonna have that type of conversation, but nothing against you, Nathan. I'm just saying, like, that's not where this is going. But, like, imagine that you're sitting in a room and I'll sort of pull out what I would think what I would call, like, the best version of your arguments that actually carry weight in that room. So you say something along the lines of this is how much money US companies make selling these types of products to these types of customers in China per year. And you say, okay, we're gonna take step x, we're gonna cut off this, and they're gonna build the capacity to build y. And the long term negative consequences for US GDP are gonna be z. That that's sort of a a standard formulation of a debate that happens a lot, and it doesn't have to be China. It can be other countries as well. You could imagine that this likely this debate likely happened around the dedollarization of the Russian Federation around the war in Ukraine, something which, you know, 1 could imagine had been discussed for many years, cutting them off from SWIFT, etcetera, but only happened for the first time. So essentially weaponizing the dollar, which the US did to the Russians about whatever it was, like, 2 2 and half years ago. So imagine that debate. Right? Nathan comes into the room and is like, hey. Like, name your, you know, name your analog chip manufacturer, you know, those guys down in San Diego. I forgot the name. You know, Qualcomm, Broadcom, whatever. Like, they're making these chips, and we're afraid that there's gonna be significant hit. The Chinese are gonna spin up a competitor, etcetera. Okay. So if you believe your position firmly, you need to have the ability to walk. Right? You need to tell the president, like, if you're the secretary of commerce, like, I think we need to make this deal, or I think we need to not make this deal. If you don't have the ability to walk, you're essentially at the whims of the people that do have the ability to walk inside that room. What do I mean by that? I mean that people will be calling you. If you're a secretary or a deputy sec people called me. People in the news that you read right now about many China related things, some of them got my desk number, called me up, like big, big, powerful people, and they're like, do you know who I am? Because I you know, just to be clear, Nathan, so I architected our policies against Huawei. I architected Executive Order 13873, which is what you would have heard referred to as the ICT supply chain executive order. It's now it's a counterpart to CFIUS. It's the ability of the US government to shut off a company from doing business in the United States if it has deep ties to the military or intelligence complex of a of a foreign adversary, including Chinese companies, Russian companies too, many other countries as well. 
So if you're the secretary of commerce and you hold that power, you walk into that room having had powerful people calling and essentially threatening you. They're saying: do you know who I am? Do you know what I can do? I know you'll need work after this. You're going to do this thing for me; I've got leverage over you. And I think that's what you're seeing right now with many of these people: others have leverage over them. They're not independently wealthy. They've got modest pensions. They want to sell books, get on TV, get board seats. You know, $75k a year, sorry, $75k a month, from a top-20 technology company is nothing to sneeze at; it's the going rate for these folks when they play ball. A million dollars a year here, a million dollars a year there, and pretty soon you're talking about real money. So it's the basics of what motivates human behavior: money, ego, compromise, etcetera.
Erik Torenberg: (40:46) Hey. We'll continue our interview in a moment after a word from our sponsors.
Erik Torenberg: I appreciate that articulation. Let's actually focus on China for a bit. Could you give a broad overview? We got into the weeds of AI in the beginning, and we'll get back there in a bit. But maybe you could start with just a broad overview of how you think we should be responding to China, or engaging with China. You mentioned previous great power conflict.
Joshua Steinman: (41:13) The Chinese Communist Party is a technology-enabled totalitarian fascist dictatorship. That's what the Communist Party is. They kill their own people. They harvest their organs. They create rape environments for members of minority groups they don't like, Muslim minority groups. They're threatening to take over a country with a long history of democracy, Taiwan. They bully their neighbors, and they steal things that smart Americans build: over a trillion dollars of stolen intellectual property over the past 20 years. Go look up Advanced Persistent Threat 10, APT10. It's one of the leading government-sponsored hacker groups that the Chinese operate, and groups like that have been given huge shopping lists: go steal us this, go steal us that. And because the Chinese Communist Party controls China, that material gets handed over to companies and individuals that the government supports and likes. So I don't think this is like dealing with your neighbor. It's not the nice guy next door who maybe goes to a different church. This is a revolutionary Marxist entity that wants to make the world safe for Maoist communism, and I think you have to come with that approach when thinking about what you want to do with the Chinese Communist Party. You can be scared; that's fine. Being scared of what a world dominated by the Chinese Communist Party would look like is totally reasonable. You can be scared that maybe things are going to spiral out of control. But you've got to remember: this is who you're dealing with.
Erik Torenberg: (43:06) And so what are your thoughts on how we should be handling AI, then? Or address the concerns that Nathan shared earlier around the arms race being potentially...
Joshua Steinman: (43:17) I think they're a synthetic straw man for a bunch of other things that maybe a lot of the people advocating these positions don't even understand, like changes in trade policy. So I don't really feel the need to engage with the technical details. I can tell you, and I do have friends who run some of these big AI companies, that they're much more concerned about energy than about these strange policy angles on international trade. So I just don't think it's really that serious an issue. And I don't mean serious as in important; I mean serious as in worth prioritization at the current moment. If energy is 35 cents a kilowatt-hour in the United States, you're not getting AGI. If it's 20 cents a kilowatt-hour, you're probably not getting AGI. I'm not even sure that AGI exists, nor are you, in your heart of hearts. You can believe that there's a synthetic entity out there that can represent itself as something our minds would ascribe intelligence to, but I'm just not sure that's the case. It's almost a theological question, so we can have a theological debate if you want. But to me, this is about processing power. This is about corporate power. It's about trade. It's about money. Those are real things in the world. All this other stuff is fantasy land.
Nathan Labenz: (44:44) Yeah. I mean, I think you should study AI more. AGI definitely exists: we are a form of AGI. I would call us a weak AGI. There is no reason to doubt that something more capable than humans can be created; we are not the end of history. The timeline on that is very unclear, and the energy requirements are also not entirely clear. But so many of these debates ultimately come down to: do you actually take the tail risk from AI seriously or not? It sounds like you don't. And if you don't, then sure, the whole debate is kind of moot, or at least the perspective I'm bringing to it is moot, because you don't really think it's a problem. If you don't think that tail risk is out there, you can say, who cares? Why would we prioritize that? But I don't think that view is going to age very well, and it may age poorly even on the time scale of the next presidency. I would ask anyone watching to go watch the short film Slaughterbots, then go watch some videos coming out of Ukraine, then watch human-versus-AI drone races. Extrapolate a little bit and ask: are we not on the slaughterbot trajectory?
Joshua Steinman: (46:03) You and I are talking about different things. Okay? You're talking about technology futures; I'm talking about political power. If you want to have a conversation about the art of the possible: I literally cut my teeth in the military, when I wasn't deployed to Iraq, looking at technology futures. I was on the team that put some of the first unmanned systems in the hands of the people using them, 3D printers, augmented reality, literally a decade ago, in some cases more than a decade, depending on the date we're talking about. So I have no doubt that you're going to have, in fact we already have, autonomous systems. By we, I don't mean the US government, but I have friends who have started companies that do autonomous targeting, all these things. I get all that. I'm talking about political power, about what moves the needle politically, and about how these things actually play out inside the rooms where decisions get made. If you want to have a conversation about what might happen with technology in the next 10 years, that's a different conversation. I'm talking about politics, because we're here ostensibly talking about President Trump.
Erik Torenberg: (47:08) And, Nathan, feel free to jump in. I want to frame what I think is the difference of opinion here. Nathan believes that under either administration at this point (Biden has advanced policies Nathan's not excited about) we are headed into an arms race with China. Nathan believes this arms race is not inevitable, that we are accelerating it by trying to compete, that by naming it an arms race we accelerate it, and that we need an alternative path: one that would remove some of the sanctions, hopefully build trust, and maybe stop the arms race. Nathan, I'm sure, would concede there are some risks with that approach. Josh has a much more realpolitik perspective: hey, we are in an arms race; to remove sanctions would be to aid our enemy in that arms race, and thus we would be losing it to a dictatorial communist regime.
Joshua Steinman: (48:13) In the fullness of time, the people in great power competition who have advocated for this type of thing, de-escalation and so on, are often one, two, three steps removed, intellectually or financially... I'm not accusing you of this. I'm just telling you: read Venona, read Cold War history. People who get wrapped up in these memetic manias usually end up having sponsorship in the counterparty. So, you know, I understand that we're afraid of this potential future. I don't disagree with you. I've personally made investments in this space, in startups building military technology that's going to be able to do all this stuff, because I'm terrified of it.
Nathan Labenz: (49:05) it. Talk about your financial conflict of interest. You're throwing out everybody else's sponsor. You've got direct investment in military technology.
Erik Torenberg: (49:13) Yeah. 100%.
Nathan Labenz: (49:14) And that is somehow not supposed to undermine you, while it's supposed to undermine me in the abstract, where I don't have any?
Joshua Steinman: (49:19) No. Because I'm saying I'm afraid of the same reality that you're afraid of, but I'm investing in these companies because I want us to have this technology, as opposed to the other side having it. What you're saying is that you're afraid of it, and you want to get a bunch of words on a sheet of paper as a mechanism to try to prevent that eventuality from coming to pass. And I'm telling you, words mean nothing.
Nathan Labenz: (49:41) Well, they're a start. Do you think Reagan was making a huge mistake when he engaged in arms reduction treaties with the Soviet Union? Was that a terrible idea because words mean nothing? As far as I know, the actual number of deployed nukes came down dramatically, and while not nearly enough, I would say that's a very good thing. Why is it not possible to execute something similar in the AI era?
Joshua Steinman: (50:10) Because so many other things were happening at the time that caused those outcomes: Soviet economic collapse, overmatch, the overextension of their military-industrial complex into expeditionary conflicts in Central Asia, etcetera. I don't think the story you've told yourself about why things happened is a reflection of reality.
Nathan Labenz: (50:34) I'm not telling myself any story. I'm just saying there is precedent for arms reduction treaties; there is precedent for arms control. Still very few nations in the world have nuclear weapons, and many could develop them if they...
Joshua Steinman: (50:46) I think we're doing a great job of arms control, limiting the chipsets that go to the Chinese Communist Party. I think that's a great start. Let's keep doing it.
Nathan Labenz: (50:54) What's your plan? How are we not all going to be living under the threat, and possibly the actual reality, of a militarized AI arms race? How do we not end up there? Because if we end up there, it's bad for everyone. I mean, we could all die in a nuclear war anytime. Right?
Joshua Steinman: (51:13) Oh my god. The sweet meteor of death is coming.
Nathan Labenz: (51:17) Do you deny that? If you're going to say the nuclear sword of Damocles hanging over us is no big deal, then I think that's just ridiculous. There's some finite probability every year that we could have a nuclear Armageddon, and in the fullness of time that will end our species if we don't deal with it some other way. The probabilities are going to accumulate, and there's just no escaping that. The only thing we can do is decommission the weapons.
Joshua Steinman: (51:48) You wanna decommission all nuclear weapons?
Nathan Labenz: (51:50) I think we should decommission the vast majority. There is maybe a small number that could be useful for deterrence, but we're way beyond that. We do not need the capacity to destroy the world fully with nuclear weapons, and we have it. It's a huge strategic blunder. Everybody who doesn't overthink it knows the world is not in a great spot with something like 20,000 deployed nuclear weapons. So what is the right number? It could be zero. It could be a small number that's just enough to make sure nobody fucks with you. Fine. But we're not in a healthy place. We've had many close calls, many false alarms. We celebrate Petrov Day because one random dude had the backbone to override what his signals were telling him at a critical moment. Who knows how close we came in the Cuban Missile Crisis? We just can't keep living with this persistent threat of annihilation forever; it will one day catch up with us. So if we're going to add another one with AI, that seems to me very bad. My question to you is: what's your plan?
Joshua Steinman: (52:59) Yeah, I think you have no frame of reference for how reality actually works. No, I genuinely mean that. Like, I think you're...
Nathan Labenz: (53:07) What's your plan? Dude, you can insult me. I love it. What is your plan?
Nathan Labenz: (53:11) How do we get to an AI-prosperous future that is not mutually assured destruction? We're going from MAD to MADE.
Joshua Steinman: (53:20) Yeah. The Mayan plan was very interesting for how to stave off these types of cataclysms. When you defeated adversary tribes, you brought their warriors back, put them on the throne, split open their chests, removed their hearts, and let their blood run out on the temple.
Nathan Labenz: (53:39) Great analogy. What is your plan? What is the Trump plan? What is your plan? Give me a plan. I'm at least giving you a plan; you're just insulting me by comparing it to the Mayans. I have not heard any plan.
Joshua Steinman: (53:51) I have no connection to the president. I'm not a part of the campaign. I run a private company right now, so I can't speak for the president.
Nathan Labenz: (53:58) Make it your plan, then. What is your plan? If you were president, if you were advising, whatever hypothetical, just tell me a plan for a good outcome. I'm not asking for another round of insults; I'm asking for a plan. Just give me an outline of one.
Joshua Steinman: (54:12) You think that the way policy gets made is that people screaming loudly get to elicit some formulated structure that responds to their queries. That's not how things work in the world. I'm telling you, like...
Nathan Labenz: (54:30) Okay, but what's your plan?
Joshua Steinman: (54:31) About semiconductors? You're doing it again. Continue the pressure on the Chinese Communist Party and shift semiconductor production to the United States. That's the plan. That is a plan on one issue, an actual issue that people talk about, not this thing you're talking about. Not this "I'm afraid, please comfort me."
Nathan Labenz: (54:51) That's just a step on the path. I mean, I do think we should have some domestic chip manufacturing capability; the current situation isn't good, for many reasons, not just the AI arms race. We should be able to make our own chips. I think we can agree on that. However, as presented, that is not a plan to reach some sort of stable AI future. As it's been framed, it's currently one move on the path to the AI arms race.
Joshua Steinman: (55:19) Your way of approaching this problem of a stable AI future, again, comes from a place that I don't accept. It comes from a set of experiences that I have no exposure to. That's not how people think.
Erik Torenberg: (55:35) Maybe just to add to that: Nathan, earlier you said that if someone isn't afraid of AGI, then this conversation is a little bit moot, because they're not worried about the AI arms race in terms of the technology getting out of control and threatening humanity; they're mostly just concerned about beating China. And Josh, your proposal is consistent with that, which is to shore up our domestic capabilities. Is that how you perceive the situation too, Nathan? Or what would you flesh out exactly? What is the concern on the AI arms race that you have?
Nathan Labenz: (56:17) Honestly, I just watched this Slaughterbots film again recently, and I think it's a very good short visualization of what the future might be like. We have many good pieces of fiction that can inspire good thinking; notably, the movie Her is inspiring AI developers right now in their product development. Basically, Slaughterbots depicts a future of out-of-control, highly weaponized AI where everything is destabilized because everybody is under constant threat of assassination. You've got tiny autonomous things that can take out any target; they swarm and overwhelm defenses. And this is the trajectory we're on, 100%. So this is bad. We can't just dismiss as naive the people who want to avoid that future.
Joshua Steinman: (57:06) Owned by, you know, techno-fascists, or do you want it controlled by the United States?
Nathan Labenz: (57:11) I don't think either party can control that sort of technology or dynamic. And so I think the only way that we're gonna have a good future
Joshua Steinman: (57:18) So you're in favor of continued sanctions?
Nathan Labenz: (57:20) I'm in favor of working together to avoid that branch of the technology tree.
Erik Torenberg: (57:26) So what does that mean practically? Is it mutual sanctions? Or like
Nathan Labenz: (57:31) It's a many-step process. You've got to start by building some trust. The two powers right now don't trust each other; we are locked into a period of mutual escalation and mutual decoupling. The first thing is to extend some olive branches to try to reverse that process, so that there can be some form of trust-building, so that the world's two great powers...
Joshua Steinman: (57:56) What olive branches do you propose we send to our largest trading partner?
Nathan Labenz: (58:00) You know, in any complex thing like this, I don't think there's a single one-sentence answer. I would first change our attitude, and I would start...
Joshua Steinman: (58:11) You want to send them the advanced chips, so that instead of one party that can do this, we have two.
Nathan Labenz: (58:16) I think we should try to work much more together, as one party rather than two. Is that going to be easy to achieve? I don't think so.
Joshua Steinman: (58:24) A communist party?
Nathan Labenz: (58:25) The Communist Party has been many things over time. We currently have a leader there whom we don't like. We also previously had Deng Xiaoping, and before that, Mao. Their leadership can change, just as our leadership can change. I don't think we should cast ourselves as their permanent enemy, or vice versa, because who knows what openings there may be in the future. Nixon went to China, right? There is the possibility of much better relations between the countries to come, and we foreclose that possibility to our own detriment, I think.
Erik Torenberg: (58:59) Nathan, do you concede the risk of doing that, given that they might take advantage of us like they have for many years? There have been lots of efforts to liberalize or create good relations, and they haven't always been responded to well or met reciprocally, to put it mildly. And that hasn't been great for us. A big critique of our policy over the last 20 years is that we didn't take the threat seriously enough or soon enough, and thus we enabled them to build and gain a lot of power. Now they're a great power alongside...
Nathan Labenz: (59:34) It's just been a great power. It's a 5,000-year-old civilization that had a down century.
Erik Torenberg: (59:36) Yeah, but 30 years ago the economy was very different. So there's a question as to whether this is just repeating the same mistakes we've made: not treating China like the threat that it is, and letting them...
Joshua Steinman: (59:51) Go listen to the private speeches of the Chinese Communist Party, the ones they give inside their private party conferences. Listen to how they talk about the United States, and ask yourself if that's a country you think we ought to be doing favors for. I don't mind trading. I don't even mind even swaps, exchanging money for goods and services. But ask whether you think we should be doing them favors after reading what they say about the United States in private.
Erik Torenberg: (1:00:21) What would we learn, Josh? I haven't heard the speeches. What would we learn from them?
Joshua Steinman: (1:00:26) They do not think that we are their friends; they think that we are their enemies. How about this, don't believe me on it: the RAND Corporation put out an amazing report in, like, 2015 or 2016 called Systems Confrontation, Systems Destruction. It is an accounting of the current thinking of the leading theoreticians inside the Chinese Communist Party's military apparatus, the People's Liberation Army: basically, how they think about competition with the United States. It's the most terrifying thing you'll ever read. It's written by the students of the two colonels, who now run the think tank of the People's Liberation Army, who wrote the Unrestricted Warfare article in, I believe, 1999 or 2000. So the current thinking inside the Chinese Communist Party is to disassemble the United States along hundreds of vectors, from our ability to use language, to our ability to govern ourselves with laws, to explicit military capabilities and tasks. Just read the RAND Corporation report. Read Unrestricted Warfare. I don't know how else to tell the audience that these are adversaries. We can collaborate, we can de-escalate, but this is how they think about America: something to be destroyed, something in the way of global Marxist revolution with Maoist tendencies. They buy off politicians. They will change national laws. It's not good. And it's not even a matter of conceding: I'm convinced that many of the technologies you're concerned about, Nathan, are coming. By telling you about companies I've invested in, I'm explaining that on that line we think very similarly: these things are happening. I don't know that they're inevitable; startups can always fail, and military technologies have a long history of just not working or not getting adopted. But yes, these are things to be concerned about. This formulation of "let's get some type of alignment with the Communist Party," though, is just not how national policy gets made, at least in my experience, and I don't think it's how good national policy gets made. You look at things very minimally. You look at a trade issue; you look at a diplomatic issue. You try to do something big, and you run the risk of taking things off the rails in a very strange way. Right? There are billions of dollars floating around the economy for AI influencers to be talking about these types of things. Some of that money comes from places that we know. Some of that money comes from places that we...
Nathan Labenz: (1:03:33) we- If you could point me to any pockets of it, that would be much appreciated.
Joshua Steinman: (1:03:36) Oh, like, put $600 million into this thing. Seriously, I'm sure you could get some nonresident fellowship at one of these institutes or something like that. The trick is to start writing articles and then see if you can get on the conference circuit. That's where the money starts.
Erik Torenberg: (1:03:54) Let me rephrase one thing. I want to be mindful of time; we'll get you both out in a few minutes. Putting aside the AGI stuff, you're in agreement that these military technologies are getting stronger and stronger, and more countries are going to have more capability to do damage. I'm curious: do you think that world is sustainable? Or does there need to be some sort of global decommissioning?
Joshua Steinman: (1:04:21) Ten years ago in graduate school, I wrote a paper about the declining barriers to entry and exit in the marketplace of violence. I firmly believe this. I don't like it, but it's happening. I literally wrote a short story about an assassination by drone in, like, 2015. Published. I completely agree: these things are very dangerous, and the world is getting more dangerous. There are very few things you can do as a nation-state to affect the security of your own country and of the world. Okay? And I understand there's this desire to bring people together, like, can't we all just figure this out? Man, orthodox Marxist doctrine does not work like that. They will lie up until the moment they slit your throat. This is what they've done in every country they've taken over; this is what they do on the international stage. It is just how these systems are structured. The United States is this gem of a political system that accords more freedom to individuals than any other country in the past 1,500 or 2,000 years, however you want to measure these things. You could say ever; people would debate that. Fine. So from my perspective and my experience, what I'm telling you is that those types of grandiose desires have no attachment point for national policy. You can ask for it. You can want it. But I've never seen it. You can get national proclamations or whatever, but when the rubber hits the road, that's just not how things happen. My recommendation for how we run the country is to take very narrow approaches to technical competition and economic competition, and the best way to do that is to act in the best interest of the American people. So when you have a counterparty that has stolen trillions of dollars of intellectual property, that actively dumps raw materials and finished goods and subsidizes them... look at what Huawei has done around the world; it's essentially a SIGINT system for the Chinese Communist Party, by the way, and they've all but admitted this. You just have to deal with things in very specific cases, even though we understand there is some possibility of some terrible world out in the future. I get it; I'm mindful of it as well. But to say, hey, kumbaya, we're going to give them all the chips and work together? I'm just telling you, you can't trust Marxists.
Erik Torenberg: (1:07:20) I want to be mindful of people's time, so I'll give Nathan a closing statement. And Josh, if you want one as well, you're welcome to it.
Joshua Steinman: (1:07:27) No, I've said enough. Nathan, please.
Nathan Labenz: (1:07:30) Yeah. I mean, you may say that I'm a dreamer. I'm reminded of the perhaps apocryphal Einstein quote (I don't know if he actually said this or not): we don't know what weapons World War III will be fought with, but we know that after that, we'll be back to sticks and stones. The argument that a better path is simply not possible just doesn't cut it against the magnitude of the threat, which it sounds like you also recognize. I would love to hear a plan. I would love to hear a plan from either candidate for how we are going to avoid sliding into an AI arms race, another technological sword of Damocles hanging over the entire global population, a World War III, perhaps, at some point, that we can't recover from.
Erik Torenberg: (1:08:24) So some people would call what's happened with nuclear weapons a success, in that we haven't had a major nuclear conflict since World War II. But you don't... you think it's a failure?
Nathan Labenz: (1:08:37) I think it's a terrible failure. Yeah. We have way too many, it hasn't been that long, and we've had a number of close calls. In the same way that we had this pandemic and don't seem to have learned much, and aren't taking the appropriate next steps to be ready for the next one, we've had a bunch of close calls with nuclear weapons and basically just said, I guess there's nothing we can do about it. That's just life. I don't accept that frame on any of these big questions. I think we can and should strive to do better. And if we want to be here in thousands of years, let alone millions of years, we'd better.
Erik Torenberg: (1:09:18) Thank you both for engaging in this very important conversation. Nathan, Josh, thanks so much.
Nathan Labenz: (1:09:24) Thank you, guys. It is both energizing and enlightening to hear why people listen and learn what they value about the show. So please don't hesitate to reach out via email at tcr@turpentine.co, or you can DM me on the social media platform of your choice.