Jake Sullivan on Navigating AI Uncertainty and Managing Competition with China

Today Jake Sullivan, the U.S. National Security Advisor from 2021 to 2025, joins The Cognitive Revolution to discuss AI as a critical national security issue.


Check out our sponsors: Labelbox, Oracle Cloud Infrastructure, Shopify.

Shownotes below brought to you by Notion AI Meeting Notes - try one month for free at https://notion.com/lp/nathan
- Four-Category Framework: Sullivan organizes his thinking about AI into security, economics, society, and existential categories, focusing concretely on the first three rather than racing to existential risks
- "Managed Competition" with China: He advocates for a model where the US and China "compete like hell" while maintaining sufficient guardrails to prevent conflict
- Skepticism of Grand Bargains: Competition is a "chronic condition" of the US-China relationship that cannot be solved with a strategic condominium
- AI in Modern Warfare: Current conflicts are providing "glimpses of the future" with lessons about scale, attritability, and the potential for autonomous weapons
- US Military Adoption Concerns: Sullivan worries that Pentagon bureaucracy prevents rapid AI adoption while China's PLA may be better positioned to integrate AI capabilities
- Private Sector Leadership: AI is "the first technology with such profound national security applications that the government really had very little to do with"

Sponsors:
Labelbox: Labelbox pairs automation, expert judgment, and reinforcement learning to deliver high-quality training data for cutting-edge AI. Put its data factory to work for you, visit https://labelbox.com

Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive

Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive


PRODUCED BY:
https://aipodcast.ing

CHAPTERS:
(00:00) About the Episode
(04:52) Introduction and AI Worldview
(07:15) Washington's AI Understanding
(14:28) Concrete AI Opportunities
(22:09) Trump AI Action Plan (Part 1)
(22:19) Sponsors: Labelbox | Oracle Cloud Infrastructure
(24:55) Trump AI Action Plan (Part 2)
(29:09) Middle East AI Deals (Part 1)
(34:09) Sponsor: Shopify
(36:06) Middle East AI Deals (Part 2)
(36:54) Understanding China Threat
(48:26) Export Controls Strategy
(59:39) Global AI Competition
(01:02:26) Managing Great Power Competition
(01:08:46) AI in Modern Warfare
(01:10:41) Final Thoughts: Economic Impact
(01:12:25) Outro

SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathan...
Youtube: https://youtube.com/@Cognitive...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...


Full Transcript

Nathan Labenz (0:00)

Hello, and welcome back to The Cognitive Revolution. Today, my guest is Jake Sullivan, former national security adviser to President Biden and currently the inaugural Kissinger Professor of the Practice of Statecraft and World Order at the Harvard Kennedy School. This conversation was an exciting opportunity to unpack American AI policy and broader geopolitical strategy with someone who has shouldered intense decision-making responsibilities at the highest level of the US government, and who answers critical questions, including about the nature of the threat that China poses to the United States and the nature of a stable equilibrium that the two countries might achieve, in the most clear and concrete terms I've heard anywhere, which I appreciated tremendously.

After listening to and reflecting on this conversation, I would say that Jake embodies what I might call the very best of the establishment. While AI was a relatively small part of his overall portfolio for most of the four years he spent as national security adviser, he nevertheless took the topic seriously enough to meet with leaders from all of the frontier AI labs and to seek a wide range of expert opinions. Agree or disagree with his conclusions, there's no question that he is deeply informed and that his approach combines rigorous pragmatism with refreshing restraint.

While he does take for granted that the United States' global leadership position is good and ought to be upheld, he is not pushing a broader ideology, not angling for regime change, and not making complicated second-order or bank-shot arguments. On the contrary, at every turn, he seems to really recognize the profound uncertainties associated with AI technology. And, very simply, as the cliché goes, he tries to figure out how to maximize the benefits while also minimizing the risks, both from the technology itself and from the possibility that government action could end up making things worse.

As national security adviser, this meant, on the one hand, assessing the national security establishment's readiness with respect to AI and publishing a memorandum that called for greater urgency in adopting AI technology. On the other hand, it also meant initiating an AI risk dialogue with China. At one point in this conversation, Jake describes himself as boring for focusing so much on the base case and for thinking so linearly. But I would suggest that this style simply reflects four years spent in one of the least boring positions in the entire world, where his job was more often than not to identify the least bad of a set of unappealing options.

And in any case, when it comes to international relations, boring is probably in most cases a good thing. Managed strategic competition with China, however messy and unsatisfying, may indeed be our best path forward. And if nothing else, it is really good to hear an explicit argument that this strategy will serve us better than either a Cold War style AI arms race or an attempt at a grand bargain.

At the same time, I do think it's important to be mindful of the limitations of this generally sensible approach. While Jake is not at all dismissive of short timelines or existential risks and does take scenarios like AI 2027 seriously, the standard policy toolkits and international relations playbooks don't have much on offer to address the most extreme AI possibilities. And this does create a real gap exactly in the cases where the quality of executive leadership might matter most. Either because we need leaders to show wisdom and restraint under crisis conditions, which I do think Jake would do well, or to inspire the public solidarity needed to navigate what could be shockingly disruptive transitions.

After we stopped recording, Jake was kind enough to say that he's a fan of the show, which given his schedule and responsibilities, I found both extremely flattering and honestly a bit hard to believe. But still, I took the opportunity to ask him how I could make the feed more valuable. His answer is something that the entire AI safety community should hear. Go deep and get as concrete as possible on the specific details of the downside scenarios that you are most worried about so that the people in power can have the clearest possible threat models and also the most concrete possible plans to address them.

I know that many in the AI safety community are trying to do exactly this right now, but it seems to me that we can't have too much of that kind of work, and I definitely plan to feature more of it on the podcast going forward. For my part, if I could be so bold as to offer Jake a suggestion in return (and it does seem quite realistic to me that he might return to a position of power just as transformative AI is hitting hard in, say, 2029), it would be to spend more time developing a positive vision and story for the American people. Boring may be best in many scenarios, but as I so often say these days, the scarcest resource is a positive vision for the future. And there's a good chance that the current or the next administration will be called on to do more than muddle through.

With that, I hope you enjoy this window into the worldview, mindset, and thinking behind the United States' approach to AI and national security with former national security adviser Jake Sullivan. Jake Sullivan, former national security adviser to President Biden and currently the inaugural Kissinger Professor of the Practice of Statecraft and World Order: welcome to The Cognitive Revolution.

Jake Sullivan (5:03)

Thank you very much for having me.

Nathan Labenz (5:04)

I'm excited for this. We have a ton to cover, so I'm gonna try to be brief in my questions, which is not normally my style, and give you the bulk of the airtime. I thought we could kind of structure this by just setting the stage. I think so many AI conversations go a little bit awry because people don't necessarily have the same expectations of, or even understanding of, where we are in the technology today, let alone where it's gonna go in the future. So just to give you a chance to share with the audience where you're at: how AGI-pilled are you? And, you know, I guess that means, like, how soon do you expect powerful AI, and how powerful do you expect it to be? Where do you put that in relation to other national and global crises that we have? And maybe for bonus points, how would you compare yourself to other parts of the federal government?

Jake Sullivan (5:49)

It's a great question. And like most great questions, it doesn't have a totally straightforward answer. How AGI-pilled am I? I believe that the possibility of transformative, powerful AI coming in the very near future, that is, in the next couple of years, is something we have to take very seriously. I took it very seriously when I was national security adviser. And you have to build policy and make strategic decisions on the basis that that is a distinct possibility. But do I believe it's an inevitability, or that we certainly are going to see it by '27, '28, or even 2030? No. That's because super smart people make the case that it's coming very fast, and other super smart people make the case that actually there are a number of things that have to happen, and those things may take quite a long time before we get to something like AGI or powerful AI or even ASI. So I build a planning assumption that this is distinctly possible. And if it comes and when it comes, it will be transformative across every dimension: economic, societal, and, yes, national security. And it is on par with the most urgent and important national security issues there are, because it touches everything, from the future of warfare to the future of nation-state competitiveness to the future of non-state actors and the threats they pose, and everything in between.

Nathan Labenz (7:16)

Yeah. It is a tough epistemic environment, with the divergence of top thinkers being just so radical. That's a frequent theme of this show. What has most informed your thinking? And maybe you could weave in a little bit of, like, where do you think Congress is at on this? Where do you think the military is? There's obviously the agencies, which are probably all over the place. Has it been, like, private conversations with leading developer CEOs? Or, you know, how much is just kind of getting hands-on with the technology itself? How much are these sort of manifesto-type documents impacting the Washington discourse? Like, where are people taking their cues from in Washington?

Jake Sullivan (7:54)

So as national security adviser, I really tried to cast a very wide net. I met with the leaders of all the frontier AI labs. I met with technologists who are AI-skeptical. I met with investors. I met with academics. And of course, then I convened and engaged with the entire national security enterprise of the US government and said, what are we doing to integrate and adopt AI capabilities to advance America's national security? In fact, towards the end of my time as national security adviser, we put out a national security memorandum, which sounds kind of boring but was a lengthy document, and which, in fact, the Trump administration has not discarded, that sent clear directives to all of the national security agencies of the US government, including the Pentagon, including the intelligence community, but also Treasury and Commerce, the State Department, etcetera.

And what it said was: right now, America is in the lead at the frontier. We could debate by how much. But we are behind the curve when it comes to integrating and adopting these capabilities and applying them for national security purposes. And we've got to move faster, with greater urgency, with greater dexterity. And I think that remains the case today, more than six months after I left office. So are there people inside the Pentagon who see the promise and the opportunity here and also see the threats and risks that come with advances in AI? Of course there are. Brilliant people. But culturally, bureaucratically, institutionally, are there obstacles to taking advantage of the full suite of capabilities that is there today and increasingly will be there in the future? Hell yeah, there are. And that's something that I think takes leadership from the White House to try to break through. We did that particularly in the later years of the Biden administration. I see evidence that there is some push along those lines from the Trump White House as well.

Oh, and you asked about Congress. I should say a word about Congress. With Congress at the moment, it's hard to talk about it monolithically, because there are members who get both the opportunity and the challenge, who recognize how transformational this is going to be, and who are thinking hard about what role they should be playing. But I would say, in the main, when you look at the pieces of draft legislation, or you look at the hearings, or you look at the public statements of most members, I think there is still a gap between where Congress is and where it needs to be in terms of a sense of urgency and priority for this issue.

Nathan Labenz (10:28)

Yeah. That sounds safe to say. For people that want to change that, there's this notion in the AI safety community that if we can just get some time with members of Congress and we can show them our scary demos, you know, we can show a chatbot that will help you make a bioweapon, or at least attempt to, or whatever, that that will sort of snap people to attention and get them to engage more seriously. I wonder how much those sorts of awareness-building drives have changed the overall landscape in your mind? Has any of that reached you, or have you seen the impact of that?

Jake Sullivan (11:05)

Yeah, look, demos, going back years now to early in my time as national security advisor, definitely made an impression on me, because you can describe what an LLM can do, and I can coherently understand that. But when you actually see the capability live and in color, it does have a different kind of impact. That probably is the case on the Hill. But I would submit to you that, generally speaking, with complex areas of regulation and legislation, and this is among the most complex there is, Congress really only tends to react when there are real-world impacts. That could be a catastrophe, like a 9/11, which gave us the Patriot Act, in part for good and in part for ill. But that also could be a more cascading set of developments in the world, like job displacement in key industries, or like announcements by other countries of military capabilities that give them advantages over the United States. And it will be a collection of those moments, in my view, that is really what's going to get Congress into action, more than just demos of the tech. That's not great, because it means that we're going to be playing catch-up as opposed to being ahead of the curve, but I think that is the world we're living in right now.

I would also say that the Trump administration has sent a very clear signal they don't want to see legislation. They don't want to see an effort to impose something from the Hill or even from the executive branch. And even setting aside that question, it seems to me this is a very difficult area in which to be confident about what is right from the point of view of rules, standards, trust, safety. And coming back to your first question, we don't know where the tech is going and when. So it's very hard, sitting here today, to be too dictating of things in light of the profound uncertainty that we're contending with. That's why I think preparation and flexibility, being ready, developing a set of rules and standards and tools that Congress or the executive branch could use as necessary as we go forward, is the right thing. And then choices about when to deploy those, based on how we see the tech evolve, should be made in careful dialogue between Washington and Silicon Valley and the rest of the country.

Nathan Labenz (13:37)

I think you're absolutely right that sitting down today and trying to write a long list of rules or standards is an ill-fated undertaking. We've done a couple episodes recently on more creative attempts to bring dynamism into the regulatory environment, with things like the SB 813 proposal in California, where there could be some sort of marketplace for regulation. Just going a little bit deeper into your worldview, though: I know it's been your job for the last four years to anticipate, ideally prevent, and then respond to, if necessary, a lot of bad things. Have you had time to develop any sort of positive vision for the AI future? Are you imagining a new social contract? Are you signing on to the Bernie Sanders call for maybe a four-day work week? In your imagination, what does the upside of all this look like?

Jake Sullivan (14:29)

One of the things about the way that I think is I tend to think in quite linear and concrete terms. I have a hard time with these kinds of big abstractions about social compacts or about recursive self-improvement leading to a nation of geniuses in a data center. These kinds of concepts I just have a hard time wrapping my head around. So I tend to think of things in a pretty straightforward, linear way. And I see four big buckets: security, economics, society, and then these existential questions, where there are both risks and opportunities.

And our job should be to identify the risks very concretely. So in the security space, there are non state actor risks like cyber and bio. In the state actor domain, there are risks around wonder weapons and military advantage by one of our adversaries or competitors. But there are also opportunities in the security realm that AI could actually enhance America's national security if we remain in the lead on it and we deploy and adapt it effectively.

Similarly, in the economic domain, there are genuine risks of job displacement. There are genuine risks of concentration of wealth and power that come from essentially the benefits flowing to a very few if we're not careful. That is a big risk. Then there are massive opportunities on productivity, on problem solving on issues around climate and energy or public health or what have you.

What I try to do is think, all right, let's get very concrete about the risks, identify them, make a typology of them. Let's think very concretely about the opportunities. Then are there steps the government can take to increase or expand the opportunities and to manage and minimize the risks? If the answer to that is yes, act. If it's not, then don't act. That's really how I think of it.

Then there's this fourth bucket of existential. Here, I think it's just too imponderable at the moment, these kinds of things like the robot apocalypse, or superintelligence leading to nirvana and to space. You can tell the very positive story and you can tell the very negative story. For me, I'm really focused on the first three buckets more than the existential questions right now. What can we do in the security domain and the economic domain and the society domain? On society, the risks of misinformation and of alienation are real. But the opportunities of giving every person access to teachers and doctors and nurses and personalized medical care, these are exciting opportunities.

So that's how I think of it, basically, in a kind of boring way, to be honest, but in a way that I think the debate about AI is not entirely contending with. I've asked research assistants of mine here at Harvard: get me a set of articles that actually walks through what are the real risks and what are the real opportunities, beyond the headline or the whiz-bang or the doomerism. Honestly, there's not that much out there that really walks through this in a deep and coherent way. And I think more of that kind of analysis could help us bring a greater degree of rigor to the conversation about where AI is going and where government action should go, to try to make sure that it works more for us than against us.

Nathan Labenz (17:59)

Yeah. That's interesting. It sort of leaves me... I think I take the existential concerns more seriously; certainly, like, I would bet my p(doom) is higher than your p(doom).

Jake Sullivan (18:10)

Just to be clear, I take them seriously. It's just, right now, I feel like I am personally less equipped to give definitive answers to the existential questions than I am to immediate issues, like the cyber risk that AI could pose in 2025, or how to think about dealing with the fact that, whatever happens overall with growth, productivity, and job gain versus job loss, a lot of people are going to be disrupted and will need a better form of social insurance than we've currently got on offer.

To me, I want to put more energy and emphasis into working through those questions, because I actually believe that if we answer those questions effectively and design a set of approaches that allow us to manage risks and opportunities in those buckets, it will set us up better to deal with those bigger questions of misalignment, AI risk, p(doom), what have you, which I am not trying to shrug off or say isn't significant. I'm just suggesting that racing right to that question, which I think so many people do, takes you past things that are very live challenges right now, that are far less imponderable, far less contestable, and have far less of a spectrum of where people fall on them. And we've got to do the hard work of building consensus around those issues. I think in doing so, we'll build muscles that will put us in a better position to handle the existential.

Nathan Labenz (19:46)

Yeah. That does seem to depend on a certain timeline assumption. I guess part of how I've operationalized my thinking is, like, I do take pretty seriously the idea that Daniel Kokotajlo and the AI 2027 folks could be right, and, like, one of these frontier developers just might set off an intelligence explosion in the next two to three years. But then I'm like, there's probably not much I can do about it if that is gonna happen. And so maybe I just have to kind of try to stay sane and, like, play for kind of longer timelines and more mundane scenarios. It sounds like you're kinda saying the same thing, but with a little bit less weight on the short timeline.

Jake Sullivan (20:25)

I guess what I'm saying is we have to take seriously the possibility of AI 2027. And I not only read the piece, I shared it extremely broadly. I called people and said, how right is this? I talked to a lot of technologists who said, this is a totally credible scenario. I talked to others who said, it's less credible. So I, of course, take that very seriously.

I guess my point is, sitting here in 2025, let's assume that that is the direction we're heading. I still believe that a more concrete and application-specific conversation about what we are going to do to manage particular risks and to enhance particular opportunities will actually set us up better, both in the development of policy tools and in the depth and specificity of the conversation that can be had between the technology community and the policy community across this country. And I actually think that will help put us in better shape to handle that kind of extreme scenario if and when it comes.

And yeah, sitting here on your podcast today, I'm not denying that that could come in 2027. And I don't throw my hands up and say, well, there's nothing we can do about it. What I'm saying is let's work through the things we know are coming, are already here. Let's get good answers to those questions. And in so doing, I think it tees us up better for these deeper existential questions about intelligence explosion and takeoff and the like, all of which I spend part of my time both worried about and studying and talking to people about because I do take it very seriously.

Nathan Labenz (22:10)

Perfect. Well, thank you for bearing with me through the high level stuff.

Nathan Labenz (24:55)

That's a perfect transition to getting a lot more concrete. I think we're talking about a week after the AI Action Plan was released, and there's a ton of stuff in there, broadly pretty well received, including by people that I think expected not to like it. What was your take? What would you say are the highlights and lowlights from your point of view on the AI Action Plan?

Jake Sullivan (25:16)

My overall reaction is that I thought it reflected a bipartisan consensus on a number of critical points, including the kinds of investment and support the US needs to be providing to ensure we stay ahead at the frontier, and even in areas related to trying to build a global AI alliance, working with other like-minded countries as well. And the way that they handled some of the risk and security issues, including on bio, was laudable. They re-upped a proposal that the Biden administration had put forward in the executive order on DNA synthesis screening, for example. So I thought all of that was to the good. I thought there were some politicized elements to it, but, you know, that kind of goes with the territory of the way this current administration operates. My main concern with it actually was the gap between what it says about needing to stay ahead of China in this race and deny them access to high-end compute, and then the reality of what the Trump administration is actually doing with this H20 decision and other things. So my biggest concern is that I think there's a dissonance between the AI Action Plan and what the Trump administration is actually doing, and that's a concern to me.

Nathan Labenz (26:32)

Yeah. I definitely wanna unpack the whole export controls and China thing in some depth in just a second. Another big question I have around all these action plans in general, with a sort of "are we actually going to achieve abundance or not" background context in mind, is: how likely should we think it is that the government is actually going to adopt these technologies? Last I heard, and you may tell me differently, but from what I heard from folks in the Biden White House and, you know, some folks in the Trump administration, there's, like, still no ChatGPT or anything similar at the White House. Right? I think people are, like, going home and running deep research queries on their own computers and then, like, trying to, you know, memorize those results and bring them back into the office. And that just seems like not a great baseline from which to, not dictate, of course, but lead the national charge toward global AI leadership. So how do you think about that disconnect?

Jake Sullivan (27:30)

It's a disconnect. I acknowledge it. You know, I am now a user of these tools; avid would be an understatement. I used them much less frequently when I was actually national security adviser and yet working on the issue every day. There's a few reasons for this. The culture of government in terms of new technology adoption is historically slow, and that's true in this case as well. There are legal issues, particularly in the national security enterprise, especially if you're talking about things on what we call the high side, and classified compute and the like. But in general, I think it's imperative, whether it's the White House or the agencies or beyond, that the adoption and incorporation of these tools get accelerated, because you're right: I think it's axiomatic that you cannot effectively lead and govern on this set of issues if you aren't an effective user of them for purposes in advancement of America's interests. So I think we should see more of it. I think it's going to be piecemeal to a certain extent, and there'll be some earlier adopters than others. But I would be an advocate for seeing this get incorporated as rapidly as possible.

Nathan Labenz (28:42)

How about the recent deals that President Trump announced when he went to the Middle East, with the UAE and Saudi Arabia? My sense is that you would not have supported those deals, or, you know, we wouldn't have seen similar deals under a Harris administration. It seems like sort of a complicated business; I think there's a lot of arguments going different ways. What do you think are the primary considerations that people should have in mind, and where does that lead you to come down on those deals?

Jake Sullivan (29:11)

Look, for me, this actually isn't too complicated. Number one, I would like to see the world running on American AI, not Chinese AI, and American chips, not Chinese chips. And that means diffusion. It means that we should be selling our chips and seeing data centers built around the world, including in the Gulf, where countries like the UAE and Saudi Arabia have expressed a deep interest in having AI be a central part of their national development enterprise. So, one, I believe in diffusion.

Two, we need security and high standards to ensure that there's not diversion of chips to China that would vitiate the export controls, and that you deal with things like insider threat challenges, cyber challenges, physical security challenges, etcetera. And we should hold everyone to a high security standard.

And then third, I believe the United States should not trade one form of dependence on any part of the world, including the Middle East, for another. We were long dependent on the Middle East for oil. We should not become dependent on the Middle East, or anywhere else, for compute. So I believe that the lion's share of the compute build-out by American AI should be in the United States of America.

And so for me, in assessing these announcements coming out of the Middle East (and by the way, that's all they really were, announcements; it's hard to know exactly what's going to happen), the devil's in the details. I support selling chips to these countries as long as they meet the necessary security standards. But I support a limit on how many chips we sell, because I think we should not end up in a circumstance where all of the future build-outs and all of the future training runs are being offshored and outsourced. I think that would be repeating a mistake we've made too many times with past industries, and we shouldn't repeat it here.

I'll be the first to say I engaged deeply with the leadership of the UAE on their AI ambitions. I very much supported deepening partnerships between US technology companies and UAE technology companies. I continue to support that. But the critical point for me is: let's not end up dependent on others for access to compute in the future. Let's make sure that we are building the lion's share of the compute here in the United States.

Nathan Labenz (31:41)

So that dependence thing seems very common sense to me. I guess other dimensions of the deals that I would highlight would be values and control. There's all this talk (and we're gonna get to China next) about making sure that AI reflects American values, and I think it takes some squinting, to put it mildly, to say that the governments of these countries really uphold or adhere to American values. And then on the control side, this seems like it might actually be one of those things that you might do a little differently depending on how seriously you took the AI 2027, 2028 superintelligence scenario, because you might say, geez, a 5-gigawatt data center in Saudi Arabia might be just the kind of place where somebody might set off a recursive self-improvement loop that would be outside of US jurisdiction and control. And, like, maybe we just shouldn't set up the preconditions for something like that to even be possible, and yet it seems like here we go.

Jake Sullivan (32:41)

I agree very strongly that we do not want leading-edge frontier training runs done in the Gulf, or in a lot of other countries too. I want them done here in the United States of America, and I think it's incumbent on us to ensure that we break the bottleneck on power so that it's not a hard constraint on being able to conduct those training runs, and we should do them here in the United States. I very much agree with that.

Look, when it comes to values, my view on this is that there are a number of countries around the world, including in the Gulf, who have different values when it comes to democracy and human rights than the United States does. We deal with them economically. We deal with them in technology. I don't think that that is wrong. I just think we have to be clear-eyed about what we stand for and try not to directly aid and abet repression and things along those lines. But you could take that argument to its logical extreme and end up basically severing relations with a lot of countries around the world, and I wouldn't be prepared to do that. So I think it is a factor, and it's something that should be part of the discussion, part of the conversation, and regularly is, between our country and countries in the Gulf. But I have a hard time saying that I would put those countries, or others similarly situated, on a blacklist; I certainly wouldn't.

Nathan Labenz (36:07)

So why doesn't the same analysis apply to China? I mean, a lot of what you said there, in terms of different attitudes on democracy and human rights, but still gotta deal with them, we can still trade, etcetera, etcetera, could be applied to China. But it seems like there's a sense that there is some unique threat that comes from China today. And I always ask this question, and I feel like I never get a great answer: what is that threat? How should I understand the threat as, you know, an American who grew up in the Midwest and has certainly lived through some of the deindustrialization problems that we've had, but considers those to be in the past? Sitting here in 2025, looking ahead, what should I be concerned that China is gonna do, such that we, like, can't deal with them in the same way we can deal with these Gulf countries?

Jake Sullivan (36:55)

Well, first, let me just say that I believe we should not decouple from China. I believe we should have an economic relationship with them. I've been very much on record in saying that we're competing intensively with this country, but we're also going to have to learn how to live alongside one another as major powers for the indefinite future. So you're not going to hear from me that you can't deal with China at all. And, you know, I am focused on what I have called the small yard and high fence, where these high-end capabilities with national security applications can be controlled and restricted.

But to directly answer your question, which is what should we be concerned about with respect to the PRC? I told you earlier I like to be concrete and not just speak in abstraction, so let me just give you a few examples.

First, you said the China shock is in the rearview mirror. The author of the China shock research, David Autor, and one of his co-researchers just wrote a quite compelling piece in The New York Times saying, get ready for China shock 2, and it's going to be worse. And that is because China is pursuing a strategy of massive subsidization of strategic industries and then trying to flood the market with cheap goods that undercut workers and businesses who play according to a different set of rules in the United States and elsewhere. And that, if left unchecked, could easily have the effect of a further hollowing out of our basic industrial base, including in the industries of the future. Economists call this overcapacity. I call it basically flooding the world with cheap manufactured goods based on a state-directed capitalism where China is not playing by the same rules as everybody else. I think that's a challenge. Now, we don't have to go to war to deal with that challenge, but I do think we need a set of measures, coordinated with other countries in the G7 and beyond, to push back against that. So that's one.

The second type of challenge, I think, is best exemplified in an episode that happened back in 2019, when the GM of the Houston Rockets made a comment about freedom fighters in Hong Kong. And the Chinese government went nuts and basically told the NBA, you've got to shut this guy up. The NBA essentially had to bend to the will of China on that particular point. Now, is that a threat to every American? No. But if the entire world is run on Chinese AI and China is effectively saying there's going to be a price for your speech, then that is a threat to the American way of life. And China has shown a willingness and a proclivity to do that. So that's a second challenge.

Third, while I was in the job, I had to deal with something called Volt Typhoon, which has been publicly reported upon. And I can't say too much about it, but I can say this: we have seen the prepositioning of malware in critical infrastructure in the United States by the PRC, and that's a direct threat to water systems, electricity, pipelines, whatever it may be, and something for us to be very concerned about. Why are they doing that?

Fourth, China has engaged in the world's largest peacetime military buildup, probably in all of human history. Why? Part of it's about Taiwan, which is its own risk and threat, and maintaining peace and stability across the Taiwan Strait, I think, has to be a paramount priority for US policy, because it'd be totally catastrophic if conflict broke out there. But they're also looking to project power globally in a different way.

So what makes China different from other countries? One of the things is they are the only country that actually has the attributes and capacities to compete with the United States, matched with an ambition to surpass the US as the world's leading power economically, technologically, militarily, diplomatically. And I think a world in which China is writing the rules, China is running the tech stack, everyone is dependent on China, is a world of greater coercion, less freedom, less good economic opportunity for Americans, and a greater possibility of China flexing its military muscles to subjugate others. So that is my set of concerns.

Now, can this be managed in a way where we have a stable relationship with China? I engaged in intensive and deep diplomacy with my counterpart, Wang Yi. And I believe that we paired intense competition with intense diplomacy to good effect. And I also am determined that competition not turn into conflict and that we maintain space for cooperation on key issues, including, by the way, on issues associated with AI risk, where we inaugurated an AI risk dialogue with China to talk to them directly about issues that challenge both of our countries.

I'm not someone who sits here and just says it's Cold War time. Not at all. I do not seek a Cold War. What I seek is for the United States to maintain the capacity to ensure its way of life, and not have that way of life undermined, pressured, or put at risk because of too much capacity and power accruing to a country that does not share our values and has shown a willingness to use its coercive power, frankly, recently against the United States with the rare earth magnets issue, but also against other countries. So that's a long answer to your question, but that's how I see the set of challenges that we're confronting when it comes to the PRC.

Nathan Labenz (42:39)

Yeah. I mean, most of that sounds very reasonable to me. You look at a giant military buildup, and one does have to wonder, you know, what exactly they are planning to do with it. I think you could offer benign answers, but, you know, in your job, it's not right for you to just accept the first explanation. I totally get that. How do you think they understand what we are trying to do? Because, you know, I'm not a big analogy guy, but a sort of short story of the run-up to World War II, at least in the Pacific, was, like: we cut Japan off from oil, and they were like, well, we have a limited window here where we've got to go for everything or we're screwed. So here we go, we'll roll the dice and do this Pearl Harbor thing and try to push out and establish our perimeter, and hopefully we can hold it. And obviously, that was terrible for them, and terrible for a lot of American guys that had to go fight that war.

And it seems like it would be kind of hard for the Chinese government, the leadership class, whatever, to look at being cut off from chips at the same time that everybody's saying this is the new oil, you know, the new big thing, the thing that's gonna drive the next transformation, and not feel like they're kind of in a similar spot, where they say the US is trying to hold them down, and this does seem like it is what we would be doing if we were trying to hold them down. How do you think they understand it? And do you think they're getting it wrong, or are they maybe, in that fundamental sense, getting it right?

Jake Sullivan (44:09)

It's interesting, because the people who make that argument also tend to make the argument that the export controls don't work. It's an interesting overlap, and I see a pretty deep tension between those two propositions. Now, I think China, to a considerable extent, buys its own hype that the export controls are futile, because it's going to make more and better chips, and it will be just fine. And that, to me, disrupts the Japan analogy.

Do they complain at great length about the export controls? Absolutely. Do they say that this is containment and suppression and so forth? They do. Do I believe that they have decided they must go to war to deal with the export controls? No. I have not seen that. And I think the analogy has a certain elegance to it, but it requires someone to produce evidence. What this actually is, is a targeted policy of saying the highest-end chips and semiconductor manufacturing equipment that the United States designs, we're not going to put in the hands of our competitor. That, to me, is not the same thing as a full-on oil embargo, and it is a sensible national security measure.

And frankly, if you look at China, they're engaged in a whole series of their own national security measures, including various forms of export control, on things where they think they have some advantage. And I don't really see anybody saying, for that list of things, which are pretty elemental, hey, that's a reverse Japan-US thing that's going to lead the United States to feel it has no choice but to attack China. So this is what competition looks like when power is increasingly measured and exercised in economic and technological terms. And China has cards to play, and the US has cards to play. And we have to look at them and make determinations about how to play those cards. But I am a skeptic of the argument that we are inexorably pushing China to a position where it feels it has to lash out at us militarily because we have high-end chip controls.

Nathan Labenz (46:14)

Yeah. I mean, again, it seems like a timeline thing, maybe. I would put myself in the camp of people who do believe that the export controls will be effective, at least in the sense of, like, limiting what China can do in AI on some, you know, important dimensions, at least in the short term, at least as long as the sort of AI 2027 or Dario Amodei timelines go. If we find that there's no AGI in 2030 and we're kind of headed into a longer-term scenario, then I'm not so sure, because then, you know, I always say, like, I've seen them put up a hospital in eight days or whatever. So, like, I don't doubt that they could build a chip industry in five years. Maybe not, but who could say?

I guess, how do you think about... it seems to me that there has been this evolution, in terms of, like, are we trying to keep them down, and how should they understand it? You mentioned this term, small yard, high fence, but it seems like this has really evolved. Right? The story that people are telling, maybe not you, but you could tell me, the story that I'm hearing in defense of the export controls, has moved from, we wanna deny them these, like, super cutting-edge military applications, to, well, maybe we can't do that, but we can at least deny them the ability to build frontier AI models. And now it seems like we're like, well, we can't really do that either. And this is not an exaggeration: the latest stuff that I've seen coming from Lennart Heim, who's been almost synonymous with this export control analysis at the sort of think tank level, is that, hopefully, we can field 10 times as many AI workers as they can, at least. You know, we can't deny them these other capabilities, but we'll just have, like, way more AI agents running in our economy than they will.

And this does seem to me like it has shifted from a sort of small yard, high fence to a, like, general denial of the broad-based economic benefits of AI. Which, again, to me feels like, if I'm them, even at the rank-and-file, like, consumer level, I'm like, is the US trying to deny me my AI doctor? Like, that doesn't feel good. At what point do we actually become, like, the bad guy in some sense? Or, you know, at what point are they right in understanding us to be, like, trying to hold them down?

Jake Sullivan (48:28)

Look. The way I approach this may be overly simplistic, reflecting the fact that I'm not a technical expert on chips or AI. It's, to me, common sense. You have a high end technology that has significant national security applications. Some of those applications are exquisite, like in a weapon system. Some of those applications are more general, like building super powerful AI that gives you a massive military and intelligence advantage over your adversaries. But either way, it is an input for building national security advantage. We have in China a serious competitor. Why are we giving them that input to be used against us or our allies? We shouldn't do that. That's how I look at it.

Now, what impact does that end up having in the AI race? The way that I would put it is, I think it gives America a distinct advantage at the frontier. And we should try to sustain that advantage because being ahead at the frontier, from my perspective, has a significant number of national security and strategic benefits. And so that was, in my view, the original impetus behind the export controls. Don't provide your deep competitor an input for its national security advantage to be used against you. But be discriminating in what you're denying so that it's not things for gaming applications, broad based kind of stuff.

And it is definitely the case that generative AI and large language models had an impact then on how to think about that basic calculus of national security advantage, no doubt about it. And it made us think differently about what exactly is inside the yard. But the goal here fundamentally is still very much focused on trying to sustain and secure a national security advantage for the United States and not cede it over to China. That leaves this kind of dual-use issue that you flag, which is a real one, right? Because these chips can be used for national security purposes, and they can be used for scientific and benign economic and other purposes, too.

The challenge we have here is that in China's system, they have a doctrine of civil-military fusion. Their technology companies have deep ties to their military. And so it's not easy to say, well, we'll give the chips for this purpose, but not for that purpose. There's no way to actually enforce that in a meaningful way. And that is how we end up with the policy that we end up at. But are we trying to deny access to AI doctors in China? Absolutely not. And do I think the functional outcome of an export control policy will be the denial of AI doctors to Chinese people? I do not believe that it will.

Nathan Labenz (51:30)

So do you think, like, Lennart is wrong when he says we can run 10 times as many agents? Because, like, that seems to be the upshot of that. Right?

Jake Sullivan (51:38)

Well, let's think about it this way. What's absent from that statement, and I don't have the context of that exact statement, so it's hard for me to directly comment on it, but what's absent from that statement is what we started this conversation with: AGI, ASI, powerful AI, whatever you want to call it. Is that coming? Who's going to develop that capability first? To what end? I would like to see the United States win that race. Do I think denial of access to high-end compute to China is an advantage to the United States in that race? I do. Where do AI workers in data centers come into that? I don't know.

What I will tell you is, my logic on this is that the profound national security implications of being ahead at the frontier, for the development of military capabilities, the development of intelligence capabilities, and the development of economic statecraft tools, not just broad-based economic growth, but coercive economic statecraft tools, among many other things, mean I want to see the US there. And if I've got an input that we have some control over, I'd prefer not to hand it over to my competitors so that they can get there first and use it against us. That's kind of the baseline logic behind the controls.

And in this respect, I think the Japan analogy is really not that strong, because what we're talking about ultimately with super powerful artificial intelligence is a kind of transformative national security technology. Let's say we had it. Leave aside the chips. Let's just say we had that technology, to be used for particular national security purposes. Would we hand it over to a competitor of ours? No. We keep it for ourselves, the way we do with every other type of technological military advantage. Now, China has caught up to us in a number of areas in the military, but we didn't go hand them the jet engine and hypersonic missiles and so forth. We said, that's on you. You've got to figure that out for yourself. We're not going to help you in that regard. And I see a similar dynamic at play for the very high-end compute, given how central to the future of national security the question of frontier AI would appear to be.

Nathan Labenz (54:03)

A statement that has really been a burr in my saddle for a while is Dario's proposal from his Machines of Loving Grace essay, where he basically says: we should use these export controls to keep our lead and build an international alliance, which you kind of alluded to. But then I think he takes a step that you're potentially not ready to endorse, which is that eventually we'll get around to basically making China an offer they can't refuse, or, in his words, convince them to, quote, "give up competing with democracies in order to receive all the benefits and not fight a superior foe," which to me basically reads like regime change. Would you say that's going too far? Like, we shouldn't adopt that as our policy? How do you think about the fact that one of the, you know, three or four top AI company leaders is, like, pushing for that?

Jake Sullivan (54:55)

So I've been explicit that I don't believe that regime change in China should be an object of US policy. In general, whatever direction we take our own policy, our job should not be to shape China's government. We have to deal with the government as it is and try to design a strategy to ensure that America's security and economic interests are protected and our values are protected. So I am not a regime change guy when it comes to China. There are others who are. I mean, if we went down the list of prominent technology leaders and their views on various geopolitical issues, we'd find out a lot of different, quite interesting things, many of which I agree with, many of which I disagree with. So I don't take a lot from that. I have huge respect and admiration for Dario as a person. I think he's an ethical person. I think he's passionate about democratic values. There's a way in which, to me, that is admirable. I just have a different point of view than he does when it comes, it sounds like, to the policy question of regime change.

Nathan Labenz (56:06)

I guess one other sort of very big-picture question on China is: how much hope or room do you have in your scenario or world-building for the possibility that the regime there could change? And that could be, like, through an actual regime change, or it could be sort of through evolution, perhaps, you know, fairly sudden, in terms of, like, the way that their government is acting. Like, we've seen these sort of dramatic reversals on COVID lockdowns, for example. And I was really struck recently by the release of the Kimi K2 model, which is both leading some creative writing benchmarks in English and also comes from this company that has this, like, very Western-loving aesthetic. You know, the company's called Moonshot; it's named after Dark Side of the Moon. And it just has had me thinking, like, jeez, you know, maybe all this engagement that we've been doing for the last few decades actually has worked on a much deeper level, where, like, there's potential for real friendship, you know, in a serious way, between the two countries. And maybe that just hasn't quite hit a certain level of leadership yet, but maybe it could. Like, am I too fanciful for hoping that something like that could surprise us on the upside?

Jake Sullivan (57:26)

That is not my base case. In fact, what you've seen is a deeper consolidation of power under Xi Jinping, the centrality of national security in all decision-making, with their definition of national security being social control at home, and the deepening and strengthening of the Chinese Communist Party under the grip of Xi Jinping. That is what I think you have actually seen over the last few years. So my base case is that that's likely to endure. But, of course, I'm absolutely not ruling out that your optimistic scenario could come to pass. It could.

My main point is we should make policy that is not directed explicitly at a particular institutional or systemic outcome inside China. We should say, we'll watch and see what happens with the Chinese government and its relationship with the Chinese people. What we can do is set policy that is not about changing China from within, but rather about protecting and defending our interests and creating an environment that is more conducive and leans in the direction of American interests and values and does not end up allowing Chinese interests and values to dominate.

So could what you're saying be right? I can't rule it out. I don't think that the evidence really points in that direction. I'll say one last thing, because you said, could we be friends? Look, I believe that at the people-to-people level, we should have a lot more exchange than we have today. American students in China, cultural exchanges going back and forth, all of it. And I believe there's no reason why, even if we're competing intensively, we cannot deepen the connections at every level of our societies between our two countries. Look, we're two big, ambitious, dynamic nations with very proud peoples who share a lot in common and can benefit from engaging with one another. And I am all for that, while remaining clear-eyed about the nature of the government in Beijing.

Nathan Labenz (59:40)

How do you think the rest of the world is looking at this competition? We are trying to export chips, obviously, and have, you know, some newly announced deals to do that with at least some countries with deep balance sheets to pay for them. China's open sourcing their models, and at the AI event in Shanghai last week, no less than the premier of the country showed up and said, you know, we're committed to making AI an open source public good for the entire world to benefit from. It seems to me like if I were in countries 3 through 90, or 3 through 193, like, I would find that to be a pretty compelling vision. How do you think the soft power competition is going for us right now?

Jake Sullivan (1:00:24)

Well, I think of it in 2 respects. One, I do think that this is a soft power move by the Chinese that is meaningful. We wanna give you really cheap or free high quality AI. Like, people are gonna like that. On the other hand, I think most countries are pretty unsentimental, and another element of soft power is innovation and technological prowess. They're looking at who's actually got better stuff. And if it turns out it's basically equal, then yeah, they're going to want the easier, cheaper, open one. But if they're looking and saying, hey, the US is pretty impressive, then a lot of countries are betting on the US to stay in the lead and even extend the lead in the AI race over time. And that's its own form of soft power. I saw that when I was national security advisor, and I continue to see it: countries basically saying, we like the American tech stack because it's damn good, and we think the United States is going to stay at the cutting edge. And I think it's our job to make sure that that remains the case. And if it does, I think the soft power piece of this will take care of itself.

Nathan Labenz (1:01:32)

As you think into the future broadly, can you envision any sort of stable equilibrium? I struggle to get there in my imagination. We have, obviously, the sort of MAD equilibrium for the nuclear threat, and I sort of feel like we're on this track to create a new AI sword of Damocles that will hang over all of the future as we, like, race to defend ourselves against one another. And yet I still can't really imagine a version of that that's super stable. You probably, I'm sure, have seen the superintelligence strategy document from Dan Hendrycks and coauthors that talks about MAIM. I applaud them for looking for some sort of stable equilibrium, but it didn't ring true to me as, like, something that would be super stable. Do you have any sketch of, like, a stable equilibrium in the presence of powerful AI between these great powers?

Jake Sullivan (1:02:27)

I do. I think it is a version of managed competition, where we presume there's not going to be an end state where one side just wins and the other side loses, where we have to come to terms with the fact that we're gonna live alongside one another as major powers, and where there is going to be intensive competing, jostling, a little bit of elbow throwing, but where, because we have sufficient guardrails, that intensive competition doesn't spill over into conflict. I think that is manageable. That is basically the blueprint that we pursued over the back half of the Biden administration: compete like hell, but intensively manage that competition so it didn't tip over into conflict. And I think that can be sustained even in the face of increasingly powerful AI.

I also believe that powerful AI presents opportunities for direct engagement between the US and China to manage risks that afflict both of us. And that is kind of a modern version of arms control. On the one hand, we're building our capabilities. On the other hand, we're talking to each other about both arms control and nonproliferation. We did that quite effectively with the Soviet Union over decades.

And so would I use the phrase stable equilibrium? Maybe not. Do I believe that we can avoid great power war without giving up on competing vigorously? I think we can do that. I think we can keep competing vigorously and avoid great power war. And it requires stewardship and deep, intensive diplomatic engagement between the 2 sides. I think we've shown in this iteration that that is possible. And if we continue to build the muscles for it, I think it can withstand even the advent of very powerful artificial intelligence. How's that for optimistic?

Nathan Labenz (1:04:16)

I love some optimism. Does it require us to mutually swear off the pursuit of strategic dominance, though? Because it seems like if there's a mutually held expectation that each side wants to get into a position of strategic dominance, then we're in an arms race scenario. We can't really trust each other. This stuff is, like, much harder to verify, it would seem, than anything in the nuclear domain. You know, you can see the silos from space, but, like, who knows what's going on in any given data center? Right? So as long as there's this notion that we're gonna try to get strategic dominance, and we might, and they also might, I have a hard time imagining how that doesn't create some really serious risks, which maybe we can manage, but boy, we're creating a lot of problems to manage if that's the shape of the competition we're going to be trying to manage.

Jake Sullivan (1:05:16)

My answer to you is going to sound a little bit evasive because, frankly, all good diplomacy is, at the end of the day, a little bit based on evasion and ambiguity. But I think some degree of uncertainty in this phase is manageable and is better than explicit declarations one way or the other on concepts like strategic dominance. I think it's just too soon to tell exactly what this world is going to look like, and the band of possibilities is too wide.

So I think there's this expression about getting across the river by feeling your way across the stones. And that's basically what I would say we have to do here, rather than try to impose a framework today borrowed from deterrence theories related to nuclear weapons, or from something else. We should have a few tentpole concepts. We would like to stay in the lead when it comes to the frontier of AI. We would like to ensure that no other country has capabilities so far advanced beyond what we have that they can just put us at risk and there's nothing we can do about it. We need to keep the lines of communication open and make sure that there is no mistake, miscalculation, or inadvertent escalation. Let's do all that. And then I think these questions around enduring advantage and strategic dominance will sort themselves out as we come to understand exactly what it is we're dealing with here, which I don't think we understand right now. And so I don't think we should front-run all of that with decisive declarations about our policy and strategy.

Nathan Labenz (1:07:00)

Maybe the same answer, but is there any version of a grand bargain that you could imagine between the 2 countries now or in the near future that would sort of take us off the trajectory that we're on of decoupling and intensifying competition and release some of that pressure?

Jake Sullivan (1:07:17)

I think President Trump is giving some thought to his version of what he will call a grand bargain. I'm not sure what exactly it will contain. I'm pretty skeptical of the grand bargain concept because I think, at a fundamental level, both countries are going to keep competing. And as long as that competition is a feature of the relationship, it will need to be managed; it can't just be solved and put away. It is a chronic condition of the US-China relationship, one that I don't think is susceptible to the cure of some strategic condominium or G2 or grand bargain. And so I think we are better off in a world of managed competition than in one where either we go for defeating them, or we do a grand bargain and everything will be good. And I think that makes me a little boring, I confess, or concede, but I think it's the right recipe.

Nathan Labenz (1:08:19)

All else equal, I think I prefer boring people in power. What have we learned about AI and the future of conflict from recent conflicts? My sense has been, like, maybe not that much yet, because it still seems like we've got wires to the drones and, like, people are, you know, controlling them with PlayStation controllers, and not yet, you know, handing that sort of stuff over to truly autonomous systems. But what would you say are the big lessons?

Jake Sullivan (1:08:47)

We're catching glimpses of the future without being able to fully see it in the present. One element of it is just scale and attritability, a different form of fighting wars than we've seen in the past, and we're beginning to see how that plays out on the battlefield in Ukraine. This is where abundance, the notion of abundance as a core construct of modern war, comes in.

Another is that you are beginning to see the incorporation of various AI capabilities into some of these drone platforms. There are still humans in the loop, but you can now glimpse a world in which you could have fully autonomous weapons and see how that could actually play out. And we're very much going to have to grapple with the lessons of that in American military doctrine.

And then you've got to think about this well beyond the war fighting phase. It's intelligence. It's logistics. It's command and control. And here, I think we're beginning to see the incorporation of AI capabilities in that conflict. And that's something that I think we can draw some lessons from as well.

My biggest concern, fundamentally, is that the United States military, the Pentagon, because of the bureaucracy, the Congress, and the defense primes, is not necessarily poised to adopt AI capabilities at scale and with speed. I think the PLA is better situated to do that. That makes me nervous, and it should be a kick in the pants for all of us on a bipartisan basis.

Nathan Labenz (1:10:26)

Cool. This has been great. I really appreciate the time and thoughtful discussion, and I do genuinely mean it when I say I think all else equal, I'd rather have boring people in power. Anything else you wanna leave people with? It could be what are the biggest questions you're watching? Anything else we didn't touch on that you want people to understand or just leave it there?

Jake Sullivan (1:10:43)

No. You know, one thing we didn't cover as much, though we touched on the economic and job displacement stuff, is how this impacts the lives of everyday people: how they make a living, how they provide for their families, how they find meaning and purpose professionally. These are really profound questions that I'm wrestling with and trying to get a better handle on, in addition to everything else that we've talked about today. And as usual, I find there are a lot of different answers from a lot of different people about where this is all headed in terms of impact on our economy.

This is the first technology I can think of with such profound national security applications that the government really had very little to do with. It's essentially a private sector led and driven technology. And that's uncomfortable for a guy who thinks about government policy. And it's made more uncomfortable by this thing we've been talking about: how wide the band is of predictions about the timing and scope of AI capabilities. So that's just a world we're gonna have to inhabit, and we should do our best and have a can-do spirit about this rather than just stick our heads in the sand.

Nathan Labenz (1:11:58)

Yeah. Totally. For what it's worth, I think if the political class can get the international relations right, I'm personally very optimistic about the working class's ability to figure out what to do with the peace dividend and, you know, all the extra free time that they might have.

Jake Sullivan (1:12:14)

Alright. Fair enough. Fair enough.

Nathan Labenz (1:12:17)

Thank you again. This has been great. Really enjoyed the conversation. Jake Sullivan, thank you for being part of the Cognitive Revolution.

Jake Sullivan (1:12:23)

Take care. Thanks a lot.
