Implications of AI on the Global Balance of Power with Alex Wang, Andrew Ng, Jack Clark & Cory Booker
Tune in to today's special episode airing a recent panel with the founders of Scale AI, Anthropic, and AI Fund, who gathered in Washington DC to discuss China as an adversary. They argue that the papers coming out of Tsinghua University are just as impressive as those coming out of American universities. China is just as creative, and perhaps even more motivated. While regulatory discussions have centered on restraints, Alex Wang, Andrew Ng, and Jack Clark argue that we're not moving fast enough (moderated by US Senator Cory Booker).
This session was recorded live at The Hill & Valley Forum in 2024, a private bipartisan community of lawmakers and innovators committed to harnessing the power of technology to address America's most pressing national security challenges. The Hill & Valley podcast is part of the Turpentine podcast network. Learn more: www.turpentine.co
--
SPONSORS:
Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds, offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive
The Brave Search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference, all while remaining affordable with developer-first pricing. Integrating the Brave Search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave Search API for free for up to 2000 queries per month at https://bit.ly/BraveTCR
Head to Squad to access global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention “Turpentine” to skip the waitlist.
Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with the click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off https://www.omneky.com/
--
TIMESTAMPS:
(00:00) Intro
(03:29) Assembling the founders of Anthropic, Scale AI, and AI Fund
(04:45) Predictions for AGI
(08:21) Navigating AI innovation amidst regulation
(16:15) Global AI competition and the urgency of innovation
(24:34) Empowering future generations
Full Transcript
Nathan Labenz: (0:00)
Hello, and welcome to the Cognitive Revolution, where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week, we'll explore their revolutionary ideas, and together, we'll build a picture of how AI technology will transform work, life, and society in the coming years. I'm Nathan Labenz, joined by my co-host, Erik Torenberg.
Hello, and welcome back to the Cognitive Revolution. Today, we are sharing a thought-provoking panel discussion from the recent Hill and Valley Forum, a private bipartisan community of lawmakers and innovators convened to consider how the United States can best use technology to address its national security challenges. Moderated by Senator Cory Booker, who it's worth noting I supported in the 2020 Democratic primary, the panel featured three AI A-listers: Alex Wang, the visionary wunderkind founder and CEO of Scale AI; Jack Clark, co-founder of Anthropic, a company for which my fandom is well documented; and Andrew Ng, managing partner of the AI Fund and co-author of more than 200 AI research papers over the years.
As you'll hear, there was a lot of agreement amongst the panelists, including on the key point that when it comes to AI, China is already producing lots of outstanding research advances, will not easily be held back by policies such as chip or intellectual property export controls, and generally should not be underestimated. I share this outlook, and I also strongly agree that the US government should embrace the day-to-day utility that AI can provide, whether that comes in the form of AI doctors, streamlined government processes, or Jetsons-style self-driving cars and domestic servant robots.
And yet, candidly, I will say that there was something about this conversation that left me a bit uneasy. Too much of the analysis, in my view, is framed through the lens of competition between the United States and other nations. While I certainly would not want to live in Xi's China, and I deeply appreciate the fact that I can freely disagree with a sitting US Senator, I think we should make our AI decisions based on the practical value that AI systems can provide and the most grounded risk analyses we can manage. And I really worry that we will end up making bad decisions if we allow ourselves to make a habit of us-versus-them thinking.
Personally, I would love to see the United States lead the world not by striving to maintain a technology edge through any means necessary, but by working to transcend the paradigm of nation-state rivalry and seeking collaboration with all nations in pursuit of a positive AI future. You may say that I'm a dreamer for taking this position, and I certainly wouldn't want the United States to be too naive when it comes to China's capabilities or intentions. But technology revolutions do represent a rare opportunity to rethink how we live, work, play, and relate to one another. And given the overwhelming momentum toward continued rivalry and escalation of tensions with China, I think it's important that at least some of us step back and try to imagine a more positive dynamic. After all, the true others in the AI era are not the Chinese, but the AIs themselves.
If you're finding value in the show, please do take a second to share it with friends. And as always, we value your feedback. You can reach us via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. Now here's Senator Booker, Andrew Ng, Alex Wang, and Jack Clark for a discussion on the future of American leadership in artificial intelligence.
Cory Booker: (3:33)
I think you all know the folks that are up here. I'm going to introduce them in a second. But for the purposes of this panel, AI stands for Awesome Individuals with Amazing Insight. That's the best I got. But it truly is an excellent panel that I'm really excited about. We have Jack Clark, the co-founder of Anthropic. Please, more than a smattering of applause. Thank you very much. Alex Wang, the CEO of Scale AI. Don't applaud him. He's appallingly young, for crying out loud, and has a great head of hair too that makes me very jealous. And then, of course, Andrew Ng, who I love because he's from my alma mater, the managing general partner of the AI Fund.
So I want to dive right in if I can to get to the conversation. People hear enough from politicians, but you all are on the front lines of what I really believe is the greatest frontier in human innovation that we may ever see. It's an exciting dawn of a whole new era for humanity as a whole. But there are a lot of people who bring up concerns, and one of them that I'm almost tired of is the fear predictions for AGI, for artificial general intelligence. And I'm wondering if you all think that the talk is overblown, or if it's something that those who have sci-fi movie-like dystopic futures should have a right to be somewhat concerned about. And this is a jump ball, gentlemen.
Andrew Ng: (5:09)
I'm happy to start. I think the key to this conversation is, you know, there's some truth to the fear that we'll develop very powerful AI systems, but there's also a lot of truth that the AI systems we have today are extremely beneficial and can have a lot of positive impacts on humanity and on society. And I think the key is for us to develop a very grounded and scientific conversation around what are the capabilities of these algorithms, how are those capabilities developing, and when do we need to start having the conversations around, do we need to start putting controls around this?
And I think there have been a number of places that have really led the policy conversation around the development of this kind of scientific framework and the development of the right testing procedures to be able to do so. So in the United States, we have our AI Safety Institute as a part of NIST, which I think is a key part of this conversation. The UK has an AI Safety Institute that they started as well. Japan is starting an AI Safety Institute. Korea is starting an AI Safety Institute. And so I think we're seeing a global effort around developing a much more precise and grounded sort of scientific research into exactly how to adjudicate whether or not the models are going to be worthy of concern or not worthy of concern. And I think that's the right way to have the conversation, not to make it so emotional.
Jack Clark: (6:37)
Just to tag on to what Andrew said, it's kind of like if AI is the new electricity, we're sitting here wondering about how we regulate everything that gets plugged into an electric outlet, which would drive you completely crazy. So what we need to do is have standards like product safety testing measures, figure out a shared global framework, and figure out if there's anything different about this technology. And we can only do that if we build a testing regime and figure out stuff that seems sensible and avoid doing things that would restrict competition or crush an ecosystem that's just getting going.
Andrew Ng: (7:11)
Over the last year, I think many of you have seen a number of parties that invested billions of dollars in training foundation models try to trash open source and shut down innovation, often on safety concerns that keep on shifting. I think a year ago, AI was supposed to take over and kill us all, then there was bioweapons risk, then there was cyber defense risk. AI does have risk, but to take the electricity analogy further, today it feels like we're trying to make electric home appliances, the applications, safe by regulating the solar panel makers. We're going to the solar panel makers and saying, guarantee to me that the electron you're generating is safe for the application, and that's just impossible.
So I think the mistake that a lot of regulatory agencies in the US and abroad and in the EU, for example, are making is trying to regulate the technology. That's like saying solar panels, make sure your electricity is safe. Can't do that. Versus the applications. I want safe self-driving cars, reasonable underwriting software, safe medical devices. Go regulate the application layer, not the technology layer.
Cory Booker: (8:12)
I'm one of these folks that has been frustrated with unnecessary regulations since the time I was mayor. But when I got to be a United States senator, I was blown away that the federal government was not moving at the speed of innovation. They were really undermining that innovation, and I had a lot of worries. I used to be on the Commerce Committee. I remember the head of the FAA came in, and there was this incredible industry that was booming for drones. Europe was using them to survey mines, to fix power lines. We weren't issuing any licenses except for the movie industry, of course. And I said to the head of the FAA, if you were around during the time of Orville and Wilbur Wright, they would have never gotten the airplane off the ground.
And so I'm wondering for the three of you, there's got to be some fears about the regulators here and things they might do, specifically, that you're really worried that they might do to choke innovation. But there's got to be on the flip side of that some areas where you say, wait a minute, we need regulation. And I'll give one more example just for the sake of setting you all up. Social media was something I was an early innovator in when I was mayor of the city of Newark, one of the first politicians to really use it as a management tool, transparency tool, engagement tool for my city. But now I look at social media and I'm like, dear God, we need to regulate there. Really bad things are happening. So I'm wondering if you can give me, for this next era of human innovation, where you think good regulation is and where you're like, dear God, Booker, you and your posse in the Senate, please don't get this wrong.
Jack Clark: (9:49)
So I'll give one example, which is I think when you start an AI company, you may not be interested in national security, but national security is interested in you. And you end up building these very powerful systems that may have national security uses or misuses. And for that, I think we do need to come up with tests that make sure that we don't put technologies into a market which could unwittingly, to us, advantage someone or allow some non-state actor to commit something harmful. Beyond that, I think we can mostly rely on existing regulations and law and existing testing procedures that exist like the FDA or the FAA or other things, and we don't need to create some entirely new infrastructure.
Andrew Ng: (10:31)
I think even in earlier sessions, people used the term dual-use to refer to AI. I think that's the wrong framework to think about it. There are some technologies like nuclear or maybe rockets with some civilian use cases and some military use cases, and we can contain the military use cases without trashing civilian innovation. AI is a general-purpose technology, like electricity. Electricity can be used for warfare, but can be used for tons of other beneficial things. So that term dual-use is not like there are two uses, one military, one civilian. Yes, there are military use cases, but there are so many civilian use cases that we don't want to crush. So I think having a carefully scoped way to contain the military applications without limiting the—I think of AI as multi-use, not dual-use, with way more civilian than military use cases.
Nathan Labenz: (11:17)
Hey, we'll continue our interview in a moment after a word from our sponsors.
Cory Booker: (11:21)
They didn't really cover, though, not to be critical of my new friends, the regulation that we desperately need. Are there things that you're standing up here and saying, I want to get back to you guys and say, please do this? You both went to defense and national security. So I wonder if you can give maybe another layer of insights there.
Alex Wang: (11:42)
Yeah. I mean, I think what we actually have in defense and national security, if anything, is the opposite problem, which is not enough use of AI to modernize our national security apparatus. And this is somewhat the purpose of this forum. But in general, the DOD is hamstrung by a lot of process, a lot of bureaucracy, and a lot of stuff that just doesn't serve an era when technology is moving extremely quickly and you have near-peer adversaries that are moving super quickly as well.
If you look at China or Russia, their willingness and speed in integrating new frontier technologies into every component of how they operate vastly outpaces what we're able to do if we continue operating in the same confines and bureaucracy that we've been operating in to date. I think this is a known problem within the DOD. A lot of people are working very hard to try to solve this, both from legislation, appropriations, as well as just how the DOD itself operates. But my greatest fear is not that we don't build the right regulation to prevent AI from being used in national security. It's that we, as a national security apparatus, the DOD, and the US intelligence community, don't use it enough to ensure that we maintain our competitiveness in this next era.
Cory Booker: (13:11)
Senator Clark, you've got to put a piece of legislation on the floor to regulate this space, and I'm happy to change places with you as long as our net worths go together with us.
Jack Clark: (13:24)
Senator Clark. I think you'd want to find ways to aggressively field experimental technology into, to Alex's point, get departments to actually use it. Because at Anthropic, we discover that the more we find ways to use this technology, the more ways we find it could help us. And you also need a testing and measurement regime that closely looks at whether the technology is working, and if it's not, how you fix it from a technological level. And if it continues to not work, whether you need some additional regulation. But to Alex's point, I think the greatest risk is us not using it. Private industry is making itself faster and smarter by experimenting with this technology. We're doing ourselves a disservice, and I think if we fail to do that at the level of the nation, some other entrepreneurial nation will succeed here.
Cory Booker: (14:10)
Isn't that great? I really expected you to give me something that would create parameters, but both of you seem to be expressing the fear that if we aren't on the cutting edge of this, we open up a whole world of vulnerabilities. So if I were a legislator, I'd be telling them to start trying to create oversight over the administration to push them into doing it more. But I'm wondering, and you're on the left here, I'm wondering about the right side of the divide. See, politicians, always trying to create partisanship and tribalism. Do you agree? What legislation are you putting on the floor of the Senate now?
Andrew Ng: (14:48)
So if you're in a foot race with one or more competitors, I think there may be two ways to win. One is you can try to trip up the other guy. Maybe we should do a little bit of that in some cases. And the other is you could run faster and harder. I think here in the US, there's a lot we could do to run faster and harder ourselves: invest much more in training, education, upskilling, invest in compute infrastructure, as Alex mentioned, improve the appropriations process. I see lots of wins we could do there rather than constraining ourselves in the hope that by constraining ourselves, we could trip up the other guy.
And there's actually one other risk. Today, we see many nations buy surveillance software from other nations, from some of our adversaries, and that limits our ability to influence other nations' respect for human rights and privacy because it's not our software they're using anymore. I worry that a lot of the stifling regulations being proposed in DC will shut American companies out of the supply chain and cause other countries to use alternatives. And then someday, when a large language model is asked the question, what do you think of democracy, I would quite like that AI to give an answer that reflects our values rather than someone else's values. Shutting ourselves out of the supply chain with stifling rules would be an own goal.
Nathan Labenz: (16:03)
Hey, we'll continue our interview in a moment after a word from our sponsor.
Cory Booker: (16:06)
Well, let's be more direct and hit it on the head. We're competing globally with China, and there is a lot of concern. Some people though write off that concern. The RAND Corporation recently suggested that China is unlikely to produce major new advances in AI because of the US's superiority in empowering private sector innovation. Is that true? I'm just not sure.
Alex Wang: (16:27)
Strong disagree. Some of the best open-source models in the world have been built by Chinese companies. So that's one indication of their very fast ability to catch up to our technology. And I think one point just on this entire topic that is very important to make is the urgency of the current moment. We kind of have a triple whammy situation right now, which is that, one, the technology is moving faster than it ever has before. I mean, the increase in investment as well as the raw scientific progress in AI is faster than any technology we've seen in recent decades, for sure.
Then number two, we are in a geopolitical environment and geopolitical atmosphere where there's continually increasing levels of conflict. We're just seeing, over the past few years, the amount of global conflict has increased dramatically, and it's not clear what the off-ramps for this conflict to decrease are. It doesn't make me feel great.
And then the third whammy is that you have dictatorial leaders—you have President Xi Jinping, you have Putin—who are nearing the late stages of their times of rule, in which case they're going to be more aggressive, more risk-seeking, and make bolder moves. And so when you look at this powder keg of ingredients, we're just at a moment where we have to act really quickly.
Jack Clark: (17:54)
And I'd like to make just one point I think is really important. At this event, there's been lots of talk of how China can do fast following and China copies. In AI, it's actually different. China has great inventors and great researchers. In response to things like the export controls, it's doing really fantastic work on distributed training to get around that, on all kinds of low-level things to make training more efficient. And it would be chronically stupid to underestimate their capacity for inventiveness here. And I think it's important we keep that in mind that we're not dealing with someone that's not creative. We're dealing with a competitor that's going to be just as creative as us and is even more motivated.
Cory Booker: (18:33)
So here, one might argue, we're putting the brakes on. The United States is talking about putting up parameters. You mentioned, very rightly, that from Europe to some of our Asian allies, they're all putting in these kinds of lanes in which this can operate. China doesn't have those rules. Is this sort of creating a competitive advantage for them, or maybe a race to the bottom when it comes to human rights? I'm wondering if this is something we should really be concerned with, that somehow we're advantaging them.
Andrew Ng: (19:05)
I think I would love to see technology spread around the world that reflects kind of democratic values, and I think we're hampering our own ability to do that. Just to give you an example, AI now is advancing on multiple fronts. There's scaling up very large models; Jack and team have been doing that well. That's one well-known front, but there are multiple fronts. One example: I think a major front in AI technology now is what we call AI agents or agentic AI. And what that means is if you use ChatGPT or Claude, you might prompt it and tell it to write an essay. It's as if we're asking AI, type an essay for me, going from start to finish without ever using backspace. And AI could do that, but it's a difficult task.
With AI agents, we tell the AI, write an outline, then write your first draft, then critique your own first draft, and then do some web search and improve it. And it's a much more iterative process. The results are much better. This is one example of cutting-edge AI called AI agents. And candidly, when I look at the research literature on what's cutting edge, I see probably as many innovations coming out of, like, Tsinghua University, for example, as many of the American universities. So the cat's out of the bag. But I think there is an open question of when some other country is going to download some open-source package to implement their own agentic writing journalism thing. Do we want the source to be primarily other nations' suppliers, or do we want to make sure that America has a huge voice?
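The iterative agent workflow Ng describes (outline, then draft, then self-critique, then revise) can be sketched as a simple loop. This is an illustrative sketch only, not any specific product's implementation; `call_llm` here is a hypothetical placeholder standing in for whatever model API you would actually use.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: a real implementation would call a
    # model API (e.g., a chat-completions endpoint) with this prompt.
    return f"[model response to: {prompt[:40]}...]"

def agentic_essay(topic: str, revisions: int = 2) -> str:
    """Iteratively outline, draft, critique, and revise an essay."""
    outline = call_llm(f"Write an outline for an essay on {topic}.")
    draft = call_llm(f"Write a first draft following this outline:\n{outline}")
    for _ in range(revisions):
        # The agent critiques its own output, then revises against
        # that critique, rather than writing start-to-finish once.
        critique = call_llm(f"Critique this draft:\n{draft}")
        draft = call_llm(
            f"Revise the draft to address this critique.\n"
            f"Draft:\n{draft}\nCritique:\n{critique}"
        )
    return draft

print(agentic_essay("AI policy"))
```

In practice such loops also interleave tool calls (web search, code execution) between revision steps, which is what makes the results so much better than a single start-to-finish generation.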
Cory Booker: (20:32)
Which is the area that we should be focusing on building out our capacity in, so that America stays competitive in that way. Yes?
Andrew Ng: (20:40)
And that, by helping the world, we earn the right to continue to influence it.
Cory Booker: (20:43)
Okay. Well, speaking—I'm sorry. Go ahead, Senator.
Jack Clark: (20:47)
Just one quick idea. There's lots of talk about funding compute in different countries, and this idea does not favor Anthropic because we fund our own. But we want to create more competition for these large foundation companies. We want to help startups and researchers from academia cross this chasm from research into production. And to do that, we should find ways to fund or use the US supercomputing base to give startups and researchers a leg up, because that's one way to take advantage of our inventiveness and competitiveness and use the ecosystem around us.
Cory Booker: (21:22)
And Jack, you've been great. One of my cards here was all about this idea of a national AI research effort. Strong supporter. I know you are, and more than you know, I'm grateful for that because I think it will create more investments that could produce more innovation and expand more opportunity.
Andrew Ng: (21:38)
Hey, can I say, I don't think that's a selfish Anthropic interest to promote. Really, you know, applaud Jack.
Cory Booker: (21:44)
I'm wondering though, and maybe Jack, you can pick up on this and then we can go through, because it is stunning to me, the incredible moves and plays that Saudi Arabia and the UAE are making in this space. I mean, at numbers that are mind-boggling, and I have good relationships with some extraordinary leaders in both countries. The enthusiasm even for being big players in this space. How do you view that for us as a nation? As a hopeful sign, an exciting sign, or does that worry you in the sense of shared values or anything that might be of concern?
Jack Clark: (22:21)
If you want leverage here, you need infrastructure. Infrastructure lets you build the systems that define this kind of reality and define the norms. So we need to invest money to do well here and to pick up on what another panelist said earlier. Who knew that the path to AGI lay through US permit reform? But that's probably one of the areas we need to work on so we can build data centers, build power, and build the infrastructure here at home that lets us lead abroad.
Cory Booker: (22:45)
That's a really wise input. Do you want to add to that?
Alex Wang: (22:47)
Yeah. I mean, I think regardless of what the United States does, the Middle East and other countries around the world are going to build gigantic data centers and are going to build huge amounts of AI computation power. So I think the key for us as a country is to figure out how do we do what we've done with many core technologies in the past—the Internet, telecom, and many others—to ensure that as much of the world's AI capacity that comes online is in accordance with our values and our system.
Cory Booker: (23:15)
Are you concerned at all about the big investments that the Saudis and Emiratis are making?
Andrew Ng: (23:20)
After Thomas Edison modernized the electric infrastructure, lots of other countries got electricity too, but it was great. It led to global economic growth, globally increased prosperity. I think AI will be like that too. I think it's great if lots of countries have a good electric grid. So I think it'll also be good if lots of countries have more intelligence, even including artificial intelligence.
Cory Booker: (23:41)
So let's end this way. I love humanity and the endless potential of possibilities for innovation in everything from the arts to the sciences, the capacity of humanity to reimagine futures that were never even seen as possible a generation before. And I love that when I gave you guys legislative abilities, which makes me think I might run your next campaign, you all were like me, wanting to say that technology is not something to be afraid of. We need to create some basic security, but when it comes to everything from our actual defense all the way to expanding democratic ideals of a level playing field and more opportunity to learn and to grow, this should be one of the most exciting moments in the evolution of humanity. And maybe just an acute way to end: when you look at the year 2055, what is one thing that you think most people aren't even getting their minds around yet that we might be experiencing because of this hopeful, promising area called artificial intelligence?
Jack Clark: (24:45)
I think anyone will be able to learn everything. They will get a customized education plan and tutor that works perfectly for them and lets anyone pick something they're curious about and become an expert on that. I think that's within reach of this technology.
Alex Wang: (24:59)
I think the scientific breakthroughs that we're going to have over the coming decades through use of this technology will make all the sci-fi futures that we've been thinking about slowly become a reality. So we'll live in the Jetsons.
Andrew Ng: (25:12)
I think today, intelligence is one of the most expensive things in the world, which is why only the wealthy among us can hire a specialist doctor to carefully look at your condition and give you advice, or hire a patient tutor to coach your child. I think AI, artificial intelligence, is making intelligence cheap, and this means that a few decades hence, I think any of us would be able to hire an army of specialists, well-trained staff to advise us and help us with our things in a way that only the wealthiest in our society can today.
Cory Booker: (25:40)
Well, I pray and hope that this doesn't create more concentrations of wealth and power, but a more democratized spread, where genius, which is equally distributed (there are as many geniuses being born per capita in Burkina Faso as in the most affluent places of New Jersey), can flourish. What we do inefficiently right now is cultivate that genius. And if your vision of the future is really one where human genius anywhere can have access to the tools and resources, that unlocks a potential for humanity that is simply astonishing.
And I know from our history books, when I was in high school, history textbooks had an overemphasis on the George Washingtons, the Jeffersons, the Reagans. But the real people I think that are often unsung heroes are those that are the innovators and the scientists and those who are creating systems and opportunities that we now in our generation take for granted. You three are frontline players in ways that have me humbled and in awe. And the fact that I got a chance to sit on the same level as you for 20 minutes really excites me. And I pray that all your hope and all your aspirations for this technology come true and that generations from now benefit from your genius.
Andrew Ng: (26:58)
Don't give us too much credit. We're relying on you politicians to set the framework and do the hard work.
Cory Booker: (27:02)
I'm going to let that be the last word. I do. Everybody, thank you. You see, I think it's a professional kind of pride.