Superintelligence: To Ban or Not to Ban? Max Tegmark & Dean Ball join Liron Shapira on Doom Debates

Max Tegmark and Dean Ball debate banning superintelligent AI with host Liron Shapira, covering a moratorium, regulation and liability options, p(doom) estimates, unilateral ban risks, and what safe, beneficial advanced AI could look like.

Superintelligence: To Ban or Not to Ban? Max Tegmark & Dean Ball join Liron Shapira on Doom Debates

Watch Episode Here


Listen to Episode Here


Show Notes

Max Tegmark and Dean Ball debate whether we should ban the development of superintelligence in a crossover episode from Doom Debates hosted by Liron Shapira. They unpack the Future of Life Institute's call for a moratorium until there is broad scientific consensus and public buy-in, contrasting Tegmark’s precautionary stance with Dean’s emphasis on experimentation, competition, and practical policy hurdles. Listeners will get clear takes on p(doom), the limits of FDA-style regulation, unilateral ban risks, and what safe, beneficial advanced AI might realistically look like. PSA for AI builders: Interested in alignment, governance, or AI safety? Learn more about the MATS Summer 2026 Fellowship and submit your name to be notified when applications open: https://matsprogram.org/s26-tcr.

LINKS:

Sponsors:

Framer:

Framer is the all-in-one tool to design, iterate, and publish stunning websites with powerful AI features. Start creating for free and use code COGNITIVE to get one free month of Framer Pro at https://framer.com/design

Agents of Scale:

Agents of Scale is a podcast from Zapier CEO Wade Foster, featuring conversations with C-suite leaders who are leading AI transformation. Subscribe to the show wherever you get your podcasts

Tasklet:

Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai

Shopify:

Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive

PRODUCED BY:

https://aipodcast.ing

CHAPTERS:

(00:00) About the Episode

(05:43) Cold open and intro

(09:21) Opening statements: ban debate (Part 1)

(14:49) Sponsors: Framer | Agents of Scale

(17:11) Opening statements: ban debate (Part 2)

(17:11) Licensing-style AI regulation

(26:52) Liability, tail risks (Part 1)

(33:24) Sponsors: Tasklet | Shopify

(36:32) Liability, tail risks (Part 2)

(39:23) Timelines and precautionary regulation

(47:03) Defining superintelligence and risk

(52:26) Risk-based safety standards

(56:28) Current regulations and definitions

(01:05:23) Max's doom scenario

(01:19:46) P-doom gap and adaptation

(01:34:40) National security and China

(01:43:57) Closing statements and reflections

(01:55:22) Host debrief and outro

(02:02:10) Outro

SOCIAL LINKS:

Website: https://www.cognitiverevolution.ai

Twitter (Podcast): https://x.com/cogrev_podcast

Twitter (Nathan): https://x.com/labenz

LinkedIn: https://linkedin.com/in/nathanlabenz/

Youtube: https://youtube.com/@CognitiveRevolutionPodcast

Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431

Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk


Transcript

Introduction

Superintelligence: To Ban or Not to Ban? Max Tegmark & Dean Ball join Liron Shapira on Doom Debates

Hello, and welcome back to the Cognitive Revolution!

A couple quick notes before getting started today:

First: if you're interested in a career in AI alignment & security, you should know that MATS will soon be opening applications for their Summer 2026 program. MATS is a 12-week research program, focused on AI safety, featuring world-class mentors from Anthropic, DeepMind, OpenAI, the UK's AI Safety Institute, and more.  80% of MATS alumni now work in AI safety, and I've heard so many great reviews of the program that I personally donated to MATS as part of my year-end donations last year.  Applications open December 16 and close January 18, 2026, so you've got some time, but don't delay.  Visit matsprogram.org/s26-tcr for more information. That's M-A-T-S program dot org – or see our link in the show notes.

Second: to end 2025, or perhaps to begin 2026, I'm planning another AMA episode.  Last year, we got a ton of great listener questions.  This year, I again invite your questions, and will also be asking ChatGPT and Claude to mine their memories of our interactions to come up with some questions of their own.  Submit questions at the link in the show notes, or feel free to DM me, and let's see if humans can ask better questions than AIs for a little while longer, at least.  

With that, today, I’m excited to share a special crossover episode from the Doom Debates podcast, hosted by Liron Shapira.

The occasion for this debate is a recent statement organized by the Future of Life Institute, which calls for a ban on the development of superintelligence, described as an AI system that "can significantly outperform all humans on essentially all cognitive tasks", unless and until there is broad scientific consensus that it can be done safely, and there is strong public buy-in for doing so.

On one side, we have Max Tegmark, President of the Future of Life Institute, which organized the statement, and MIT Professor who's pivoted his research group to focus on AI, with some outstanding results – including a paper on training models for mechanistic interpretability called "Seeing is Believing" that we featured in 2023.

On the other side, we have Dean Ball, frequent guest on the show, previously AI Advisor at the White House and famously the primary author of America's AI Action Plan.  

To put my own cards on the table: I did sign this statement.

Because I do oppose a rush to superintelligence, especially via recursive self-improvement, as seems to be roughly the current plan at multiple top AI labs.

Of course, at the same time I am more passionate than ever about AI doctors, and you'll hear a similar form of techno-enthusiasm from Max, who's excited about AlphaFold, self-driving cars, and all sorts of controllable AI tools – just not a fully general and autonomous digital species quite literally designed to replace us, and which might lead to a potentially irrecoverable disaster.

It's that sense of impending disaster that motivates Liron to make Doom Debates, and in my opinion one of the most interesting parts of this debate was when he asked Dean for his p(doom).  Dean said 0.01%, an answer he later wrote on Twitter was made up on the spot, because he considers the concept unserious.  On this point, I have to say that, much like the concept of pornography that Max briefly invokes in this conversation, I feel strongly that the concept of doom is meaningful, and given what we are hearing from the Turing Award winning fathers of deep learning who also signed this petition, I can't see any argument getting me below a 1% p(doom).

That said, Dean makes many strong points in this conversation as well.  As someone who values bottom-up experimentation and innovation, and who believes that patients should have a "Right to Try" potentially life-saving medicines, I definitely don't love the idea of an FDA-like regulator for AI.  If that were the best we could do, I might hold my nose and go for it, but the fact that this is the go-to analogy does, from my perspective, support Dean's point that it won't be easy to convert the high-level petition statement to actual effective policy.

And similarly, while I of course reject the idea that basic AI regulation means conceding the AI race to China – and it should be noted that the CEO of Chinese AI Lab Zhipu AI, the subject of our last cross-post episode, signed the statement as well! – I have to admit that a real, unilateral ban does pose competitive risks that aren't easily hand-waved away. 

The bottom line for me is that, while no one has the clarity they'd need, on what superintelligence will look like or how it will be created, to craft perfect policy language today, I do believe that a race to superintelligence is a bad idea, and I'm glad to have done my part to help create common knowledge that many well-informed people do feel this way.

Finally, credit to Liron on Doom Debates.  I generally don't like debates, but by getting such outstanding guests as Max and Dean to focus in on what is very plausibly the most important question of our time – how likely is advanced AI to go catastrophically wrong? – he is making the format work, and I definitely encourage everyone to subscribe.  

For now, I hope you enjoy this debate on the wisdom of building, or banning, superintelligence, with Max Tegmark and Dean Ball, from Doom Debates, hosted by Liron Shapira.


Main Episode

Max Tegmark: I would argue that artificial superintelligence is vastly more powerful in terms of the downside than hydrogen bombs would ever be. So if you think of it as actually a new species, which is in every way more capable than us, there's absolutely no guarantee that it's going to work out great for us. If we treat AI like we treat any other industry, we would then have safety standards. Here are the things you have to demonstrate to the FDA for AI or whatever.

Dean Ball: I think the fundamental thing to think about here is really assumptions. There are many worlds in which humans can thrive amid things that are better than them at various kinds of intellectual tasks. And I just have very serious issues with the idea that we're just going to be able to pass a new regulatory regime and everything's going to go fine and there will be no side effects. And these analogies of the FDA to AI are not really very good. It's not to say that I don't think we need something like an FDA.

Max Tegmark: But then I'm confused by why you don't think we should have the same for AI. So what's the difference between AI?

Dean Ball: So let me just make an uninterrupted point for a few minutes, if you don't mind. Okay. I think that there will be tons of side effects, and I think that we will stave off a lot of wonderful possibilities for the future.

Liron Shapira: Maybe the real crux of disagreement is your mainline scenario. And so let me ask both of you this question. What is your P doom?

Max Tegmark: If we go ahead and continue having nothing like the FDA for AI, yeah, I would think it's definitely all.

Dean Ball: I just kind of have this sneaking suspicion that, like, if the models seemed like they were going to pose the risk of overthrowing the US government or anything in that vicinity, that, like, I don't think OpenAI would release that model, or Anthropic or Meta or XAI or Google. Like, I just don't think they would.

Liron Shapira: Welcome to Doom Debates. Today, I'm excited to bring you a debate between two of the world's leading voices on AI policy.

Liron Shapira: The question at hand: should we ban the development of artificial superintelligence? The stakes are high. Advances in AI have become the key driver of our economic engine. Artificial intelligence is increasingly facilitating breakthroughs in manufacturing, healthcare, education, even basic science. The prediction market Metaculus estimates that the first true AGI, the first fully general human-level AI system, will be achieved by 2033, less than 10 years from now. Many experts believe that milestone will soon be followed by the creation of artificial superintelligence, a system that surpasses the capabilities of the entire human species. That brings us to our debaters, two of the clearest voices who disagree about how society should approach these developments. On one side of the debate, we have Max Tegmark, an MIT professor who believes we should ban superintelligence development until there's a consensus that it'll be done safely and controllably, and strong public buy-in. His research has focused on artificial intelligence for the past eight years. He is also the co-founder of the Future of Life Institute, a leading organization dedicated to addressing existential risks from AI and other transformative technologies. Max, welcome to Doom Debates.

Max Tegmark: Thank you.

Liron Shapira: On the other side of the debate, we have Dean Ball, who completely disagrees with banning superintelligence. Dean is a senior fellow at the Foundation for American Innovation, has served as a senior policy advisor at the White House Office of Science and Technology Policy under President Trump, where he helped craft America's AI Action Plan, the central document for US federal AI strategy. Dean, welcome to Doom Debates.

Dean Ball: Thank you so much for having me.

Liron Shapira: Okay, let's do opening statements. Max, the starting point of our debate today was a dispute between you and Dean over your statement on superintelligence, the Future of Life Institute's statement on superintelligence that was published on October 23rd. And the statement says, we call for a prohibition on the development of superintelligence not lifted before there is one, broad scientific consensus that it will be done safely and controllably, and two, strong public buy-in. So why should we ban superintelligence?

Max Tegmark: If you negate that statement, then you're saying that we should be allowed to go ahead and build artificial superintelligence even if there's no real consensus at all that it can be kept under control or that people even want it, right? And if we were to say that, then we would be basically doing the most spectacular corporate welfare, because we don't do that in any other industries. Yet, right now, there are more regulations on sandwiches than superintelligence in the US. If you want to sell drugs, medicines, cars, airplanes, you always have to demonstrate to the satisfaction of some independent scientists who don't have a conflict of interest that this is safe enough, that the benefits outweigh the harms. I'm just saying we should treat superintelligence the same way. And right now, 95% of all Americans in a new poll don't actually want this race to superintelligence. And most scientists who work on this agree that we have no clue at the moment how to keep something which is so vastly smarter than us under control.

Liron Shapira: Okay. And Dean, you oppose the public statement and you don't share Max's views on prohibiting superintelligence. Give us your opening statement. Why do you think we shouldn't ban superintelligence?

Dean Ball: So I think that the concept of a ban, and of superintelligence in general, is just quite nebulous. And that is the fundamental issue that I have. AI systems that could pose substantial danger to humans are, you know, they're not disallowed by the laws of physics, at the very least. I think there are really serious questions about how close those things are and how likely we are to build those things in the near future. My guess is, five years ago, if you were to try to describe general superintelligence in a law that a lot of people could agree to, you know, which would be the way that you would effect something like a ban (all the things Max referenced are requirements that we impose through laws, right, on airplanes and drugs and whatnot). So if you're going to have a law, you're going to have to define superintelligence in a statute. And I think that the problem you will run into there is that you will define it in such a way that you actually end up banning many things that we would want. There are many ways that you could plausibly define superintelligence that would negate technologies that I think would be quite beneficial to humanity. I mean, imagine an AI system that has largely solved mathematics, right? It's solved all the outstanding problems that we have in mathematics. It has advanced certain domains of science, maybe many domains of science, by, you know, the famous century compressed into a decade, right? Or compressed into five years, let's say. It's accelerating AI research itself. It's doing that in meaningful ways, because one of the areas of science that it knows how to do experiments in is computer science and AI research. It's a better legal reasoner than you or me or anybody else. It's better at coding than you or me or anybody else. I can imagine such a system existing. In fact, my guess is that such a system will exist by roughly 2030 without posing the kinds of risks that Max is worried about, which again, I don't think are impossible. I just place a lower probability on them. And so I worry that what you end up with in practice, if you tried to effect such a ban, would be: we're going to ban GPT N plus 2, right? That's in practice what it would mean. So there's GPT-5, that's N, and there's GPT-6, which would be allowed, and then GPT-7 would be the thing where we say no. That's just, we've decided that's too scary. And so we're going to basically ban that. And then, you know, what happens after that? Well, in order to figure out anything about whether superintelligence is safe or not, you can't just do that research speculatively, right? You have to actually build the thing to some extent and put it in a constrained setting to figure out if it's safe. You have to build at least big parts of it. And once you've done that, it's like, well, okay, but there's a ban. So only the specially sanctioned group is allowed to conduct this research. And at that point, you have a monopoly, perhaps a global governmental cartel of some kind, that is developing this. And this, I also think, could potentially be dangerous. And that is, of course, assuming that you were able to get the international cooperation you would need to effect such a ban, which I also doubt.
So that would be my comprehensive statement.

Liron Shapira: Okay, Max, Dean raised a few points about maybe the practical difficulties of doing this kind of superintelligence regulation, even going as far back as defining what superintelligence is for the purpose of this ban. How would you respond to that?

Max Tegmark: So I'm afraid that we might disappoint you, Liron, here by agreeing more than you want, because you want us to sort of clobber each other. I think it's actually quite easy to write this law. And I don't think it requires defining superintelligence at all. Let me explain a little bit what I mean by that. You know, if we treat AI like we treat any other industry that makes powerful tech, we would then have safety standards, right? There are safety standards for restaurants: before they can open, they have to have someone check the kitchen. So if we had safety standards for AI, they wouldn't need to define superintelligence. They would just say that, you know, if there's a system that some plausible experts think could maybe cause harm, here are the things you have to demonstrate to the FDA for AI, or whatever, that this is not going to do. You might want to demonstrate that it's not going to teach terrorists how to make bioweapons. If it's a very powerful system, you'd probably make one of the safety standards that you have to demonstrate that you can keep this under control. If the company selling it can't convincingly make the case that this thing is not going to cause the overthrow of the US government, then reject, come back when you can, right? So I didn't mention superintelligence here at all. It's the company's obligation to demonstrate that they meet the standards. And to take an analogy that might help clarify what I'm talking about here, let's talk about thalidomide for a little bit. This was a medicine that was given to women in the US to reduce morning sickness, nausea during pregnancy. And it caused over 100,000 American babies to be born without arms or legs, right? So the dumb way to prevent such harm would have been if the FDA had a special rule that we have a ban on medicines that cause babies to be born without arms or legs. What if someone comes out with a new medicine now and the arms and legs are fine, but the baby has no kidneys or no brain, you know? That's not the way to go about it. The way you instead go about it is, you ask the companies to do a clinical trial and provide quantitative evidence of what all the different side effects are that people might not want, quantify them, how many percent get each, and then quantify the benefits. You give this to some independent experts who don't have money on the line, so they can't work for the companies, for example, who look at the benefits and the harms, and they decide: is this a net positive for the American people? And then they approve it. This is how we do regulations in all other areas, and this is how I think it's quite easy to do it also for AI. In summary, you don't define superintelligence, you just define the harms that society is not okay with. Very broadly, it boils down to demonstrating that the harms are small enough to be acceptable. And then it's the company's job to make all the definitions they want, quantify things, and persuade these independent experts. Does that make sense?

Liron Shapira: Yeah, I'm happy to let you guys cross-examine each other pretty freely, and I'll just step in once in a while.

Dean Ball: Okay, cool. So, yeah, I mean, basically then, instead of saying we should ban superintelligence, you know, what you're saying instead is we should have a kind of licensing regime, a regulatory regime of some kind, with respect to frontier AI systems.

Max Tegmark: Yeah, very much inspired by how we do it for other tech.

Dean Ball: Yeah, so I'd say a couple of things about that. First of all, most preemptive sort of regulatory regimes that I'm aware of don't generally require you to prove... I mean, you can't prove a negative, right? So the FAA, the Federal Aviation Administration, doesn't require you to prove that your plane won't crash. It requires you to make affirmative statements about, really, not the plane itself, but many subsystems of the plane, right? So like the turbines of this jet engine have XYZ chemistry, which conforms to XYZ technical standard, which, you know, blah, blah, blah, right? And in fact, the way that a lot of times that ends up working is that there are layers and layers and layers of regulation. So the plane maker has to buy jet engines only from people that conform to certain standards. And those standards oftentimes have to do not just with the object-level properties of the component in question, but also with things like how information flows through the business, through this company, right? There's all sorts of things like that, right? In other words, if you make turbine blades for jet engines, you are probably subject to implicit and explicit regulations that have to do with risk management inside of your company, and who is the designated risk officer, and all these sorts of things, right? But the point is that you have to make...

Max Tegmark: If I can just jump in, I agree with everything you said here. What the companies need to demonstrate in the safety case is the high-level thing. The government wants to know how many flight hours on average you have until a failure, and so on. So the companies can solve that whatever way they want, right? It's in the interest of the companies to not use flaky manufacturers, to have good procedures, and to have people study crack formation, the physics of it, and so on, and then still switch to another alloy if that works better. It's the same for medicines. The government doesn't come in and micromanage: this chemical is allowed, this ingredient is not allowed. Rather, suppose the company has some medicine that seems pretty good, a new antibiotic that seems really good against bronchitis, you know, but it contains lead and aluminum and cyanide in some small doses. People in the company will be like, we're having a hard time demonstrating the safety of lead. Maybe this works even without the lead. Maybe we can swap out this thing. So all the innovation is driven by capitalism, by market forces, to meet the quantitative risk bounds that they need to meet. Nuclear reactors are a great example, because what the law actually says there is the company has to make a real quantitative calculation and demonstrate that the risk of a meltdown is less than one in 10,000 years to even get permission to start building it, right? So the company has free rein to come up with whatever reactor design they want, and then they will innovate, and whoever first meets those standards gets the big bucks.

Dean Ball: I think it's considerably more complicated. I mean, in principle, that's true, but in practice it is considerably more complicated than that, because, you know, there's what's called soft law, right? Which is like guidelines and all these other things that push people in certain technological directions and away from others. But that's actually not even my point. My point is that, at the end of the day, in order to have a regulatory system like this, you have to be able to make affirmative statements about safety. And the problem, I think, would be: what are the affirmative statements about safety when you consider that the systems we are talking about are by their very nature extremely general? So, just as an example, obviously AI systems today are being used in areas that already have regulatory structures like the kind you're describing that affect them. So this regulator would either have to be so general and have such a broad projection of authority, or it would have to be really, really narrow. And I kind of doubt that it would end up being really narrow in the context of democratic politics, because the issue that you'll have is that there's going to be more than just, you know, X-risk type issues. So even if you could formulate some statement about existential risk, like, okay, you have to prove that the model will not do XYZ that demonstrates catastrophic misalignment, okay, fine. But I would say in practice you're likely to end up with a situation where, for example, "the model cannot result in job loss" would be a really good example of this. And this gets back to an article that I wrote more than a year ago called The Political Economy of AI Regulation, which is to say: because the technology is so general, its positive adoption, not existential risk, not anything like that, its positive adoption is going to end up challenging many entrenched economic actors and aspects of the status quo. And if a regulatory regime of the kind you are describing exists, then those people are going to be able to use it as a cudgel to prevent technological change that, well, not all people, but probably the three people in this discussion would agree is good for the world.

Max Tegmark: So I will push back in a bit on this idea that it's so hard to get started on this, but first I'd love to give you a chance to answer just a very simple question. Do you think it's reasonable to have zero safety standards on AI right now? Do you think it feels reasonable that there should be fewer regulations on superintelligence than sandwiches now in 2025?

Dean Ball: Well, I mean, I certainly think we over-regulate sandwiches, and just for the listener who doesn't have context, I think what Max is probably referring to is sandwiches served in restaurants: public health regulations, local regulations, all sorts of things like this, right? And it's true, there are probably ways in which we do over-regulate those things and probably many other ways in which we don't. I would say that, generally speaking, in America, we succeed when we regulate at the level of... Like, the restaurant that serves the sandwich has many computers in it, probably. It probably uses computers in many different ways, including to get the ham and the bread that brought the sandwich to us. And we don't regulate those computers with respect to their conveyance of ham to the restaurant, right? Like, we just treat them as general purpose technologies that can do lots of different things.

Max Tegmark: Right. But if you go to that sandwich shop and you notice that across the street from it is OpenAI or Anthropic or Google DeepMind or XAI, if they developed superintelligence this year, which I think is highly unlikely, but suppose they did, then they would be legally allowed to just release it into the world without breaking any law, because there are no safety standards they have to meet. Do you feel that that's at all reasonable?

Dean Ball: So, I mean, again, I wouldn't quite say... First of all, fundamentally, I think you should be able to develop new technology and release it, so long as you're not behaving with reckless disregard, you know, reckless conduct or gross negligence.

Max Tegmark: According to whom?

Dean Ball: Well, according... so this is the thing. That actually already is the case: to say it's illegal would imply it's a violation of criminal law, which may or may not be true, but certainly it could be a violation of civil law, right? So if release of that system were to result in physical harm, loss of property, death of any person... Yeah, well, human extinction, sure. But, again, I'm sort of skeptical that that's what we're going to have on day one. That company is subject to common law liability. So, yes, in the tail risk case that we all die, then yes, common law liability does not help you. And in general, it's true that common law liability is not a great solution for most tail risks, to the extent that the damages incurred sort of dwarf the balance sheet of even the largest companies, right? So it's like, you killed, you created, let's not even say killed all people, let's say you created a pandemic. Someone made a pandemic with your model, and we've decided that was reckless misconduct for which OpenAI, the creator of the AI model, bears some form of liability. Well, that's a lawsuit you can bring against them, but if the damages are, you know, $100 trillion or something, then it's very unlikely that you're going to be able to recoup that amount of money from OpenAI, even with all the money that they have. You'll bankrupt OpenAI and still be, you know, not fully compensated for the harms that you suffered. So it is true that, as a general matter, tail risks are one of the classic examples of where government, where public policy outside of reactive liability, makes sense. So I don't dispute that. Now, I think when it comes to the sort of foreseeable tail risks that AI models might pose, the current ones that people talk about are things like catastrophic cyber and bio. I think there's a lot that you can do downstream of that which avoids creating this large-scale regulatory regime.

Max Tegmark: A lot of people talked about extinction too, wouldn't you say?

Dean Ball: I mean, a lot of people do, but there still is not the kind of persuasive evidence for extinction, not just theoretically but mechanically: how would that work? I just don't think we've seen that to nearly the same extent.

Max Tegmark: We haven't seen any extinction yet, of course, by definition, otherwise we wouldn't be talking here today, but I mean, I can put it on the risk list. So I'm just thinking, it's interesting what you mentioned there about the pandemic example, because I think it's quite relevant. You know, as you know, it's very controversial right now whether COVID-19 was actually the result of gain-of-function research funded by the US government and Peter Daszak or not. But if you just consider that there's some probability P that Peter Daszak and his research group did create it with help from others, then if someone were to sue them for these millions of deaths that it caused, it would be pretty meaningless, because Peter Daszak doesn't have that kind of money. The university where he worked doesn't have that kind of money. And for that reason, the US government has now kind of clamped down on gain-of-function research again and said no more of this gain-of-function research until we better understand what you're doing. And we also have biosafety labs: level 1, level 2, level 3, level 4. So if you're doing something that even seems less scary than what they do or did, you know, you have to do it in a special facility, you have to get some pre-approvals. And then you contrast that now with digital gain-of-function research. We had Sam Altman in a press conference the other week being so excited about building automated AI researchers, right? And ultimately, a lot of people are excited about recursive self-improvement, which is very analogous to biological gain-of-function research. Why should we have regulations on biological gain-of-function research and still be content with having no binding regulations at all on digital gain-of-function research? That makes no sense.

Dean Ball: I'm trying to actually go back to the first question you asked me, which has to do with safety standards. But first of all, let me just say, yeah, what you said is completely consonant with my assertion that tail risks are not typically contemplated very well by the common law liability system. But with that being said, you know, consider, for example, that you can do essentially automated gain-of-function research with a nucleic acid language model today, right? You can basically simulate the evolutionary process that allows for more virulent viruses or whatever else. And we've seen, you know, the early stages of this from people like the Arc Institute in California. Those are not like ChatGPT-style models, but it's the same architecture trained on nucleic acid sequences, right? So we know that's a thing. I think as a practical matter, though, the issue that you have is that those things are bits. And it's very, very hard to just purely regulate bits. So what do we do? Well, instead of imposing regulations at the layer of the model, which is a really difficult layer of abstraction on which to do it, in the same way that we don't tend to place regulations at the layer of computers or of software or of transistors, because these things are really important general purpose technologies, and undoubtedly all three of those things have killed lots of people at this point. We don't regulate at that layer of abstraction because it's not very practical. It's not a good unit of abstraction. It's not a good conceptual unit of account for regulation. So what do we do? Well, there are all these choke points in the physical world. Some of them are labs of certain biosafety level categories, BSL 3 and 4, as you've said. Some of it is at the layer of nucleic acid synthesis screening, which basically says, if you're going to order the creation of a certain kind of nucleic acid, we're going to, as a matter of policy, require that you screen that against some sort of methodology that allows us to test for whether or not you're trying to make a pathogen. And again, I worked on some of those policies when I was in the Trump administration. So these are all things that we do. Again, those are safety standards that exist, that are emerging, that are kind of downstream in many ways of advancements in AI. I think the urgency of policies like nucleic acid synthesis screening goes up because of AI. So I think when it comes to safety standards for something at the very general level of a sort of generalist artificial intelligence model, in the long run, we will build those standards, right? No, I don't think anybody on the spectrum is saying that there won't be standards for safety, security, et cetera, of large language models.

Max Tegmark: What do you mean by long run? Because Sam Altman talked about 1000 days to superintelligence, and he might be wrong, but I'm curious if you're thinking less than three years or more than three years.

Dean Ball: I'm thinking that it will happen gradually over the course of the next decade, or maybe even more.

Max Tegmark: After superintelligence, maybe?

Dean Ball: After superintelligence, maybe, yes. But I think the broader point here is that this is traditionally the way that we kind of do things in the United States: you build a technology, right? You gain experience in practice with its utility, and you sort of diffuse it throughout the economy in this very complicated way. Sometimes there are demonstrated harms, and when there are demonstrated harms, the first thing we do is we deal with that through the liability system. And again, I would point out that OpenAI, Google, and other companies have, not copyright cases, but common law liability, you-caused-physical-harm-to-me type of liability cases against them for chatbots, right? Yeah. And I think that at least some of those, the companies are likely to lose. I mean, they'll be determined by courts. And then gradually, over time, we codify around a set of standards that are shaped by experience, that are broadly agreed to by many different actors. And then eventually we codify those in the form of government standards. And eventually that becomes part of an international standards body, right? Nobody is disagreeing that that is a process in which we need to invest substantial time and money and energy. And in fact, I would say the Trump administration should get points in your book, because part of the reason the administration renamed what the Biden administration had called the AI Safety Institute, renaming it the Center for AI Standards and Innovation, was to reflect this reality that the ultimate goal of an organization like that is to produce technical standards. So you have to produce these; it takes time to do, but when you actually have these things, they are coherent, because they're formulated through experience. I think the problem is when you try to change the sequencing of that and try to come up with standards without any experience, sitting in the ivory tower or the regulator's conference room, you, I think, have a tendency to create standards that are unrealistic and burdensome.

Max Tegmark: I completely agree with you, Dean. It's great that the current administration is taking biosecurity more seriously, and I get a sense that they're also taking AI-assisted hacking more seriously. I completely agree with you also that this is how things have been done in the past: technologies come up, people invent the car, a bunch of people die, and then gradually you mandate the seat belt and traffic lights and speed limits and other things to make the product safer. But I think it's important to remember that science has been getting progressively more powerful from ancient times until now. And as a result, technology also keeps growing exponentially in its power, right? So at some point, the technology gets powerful enough that this old traditional strategy of just learning from mistakes goes from being a good strategy to being a bad one. I think it worked; it served us well for cars. It served us well for things like fire. We invented fire first, and we didn't regulate it to death; it was later we decided to put fire codes in and have fire extinguishers and fire trucks and stuff like that. I would argue that nuclear weapons are already above this threshold. We don't want to just let everybody who wants to buy hydrogen bombs in supermarkets, and then, oopsie, you know, that didn't go so well, we had a nuclear winter and 99% of Americans starved to death, let's regulate. For those things, it was very obvious to people that one mistake was one too many. And we already have a bunch of proactive laws about how to deal with hydrogen bombs. In fact, even despite all your work in the government, you are not allowed to buy your own hydrogen bomb, even though I would trust you with it. You know, I know you're a nice guy. I'm not allowed to start doing plutonium research in my lab at MIT, even if I pinky promise that I'm going to be careful, you know, just because one mistake there is just viewed by society as one too many, and they know I don't have enough cash to pay the liability if I get sued afterwards. And I would argue that artificial superintelligence is vastly more powerful, in terms of the downside, than hydrogen bombs would ever be. There have been some pretty careful calculations recently showing that, in the worst case scenario of a global nuclear war with Russia, about 99% of all Americans would die and starve to death, so there would still be, you know, 3 million who survived. Whereas if we lose control of artificial superintelligence because somebody sloppily built the new robot species that just kind of took over, you know, it really is game over in a way that nuclear war wouldn't be. So the way I see this is not that there's anything wrong with the traditional wisdom for how to regulate things. I think that's very appropriate for all tech below a certain risk threshold. And we're very lucky with AI that so many of the great benefits we have are not particularly risky: AlphaFold, an absolutely superhuman tool for folding proteins, great for drug discovery, you know, autonomous vehicles that can soon save, I believe, over a million lives every year from these pointless road deaths, and so much productivity that can be gained from building controllable AI tools. And for those, it's, I think, very feasible to continue having the sort of liability system you're describing, the traditional way: learn from mistakes and then fix them.
It's only fringe stuff, in particular artificial superintelligence, which is on the wrong side, I think, of that threshold, where right now there's not much upside, frankly, in my opinion, to sprinting to build superintelligence in three years. If we could do it safely in 20 years instead, we would be much better off just doing controllable tools until then. And that's why it irks me so much that I think people conflate these two things a lot. I'm not saying you do, but a lot of folks I've spoken to on the Hill do, I think, and think that the only choice we have is more AI or less AI, go forward or stop. Whereas I instead see the development as branching into two paths. You know, either we continue going very aggressively forward to build all these great tools, but just insisting that companies demonstrate to us that they are controllable tools, versus going all in on building superintelligence. I have to give you a compliment, Dean, also. I was so pleasantly surprised when I read the Action Plan that it didn't mention the word superintelligence a single time, and not even AGI. I don't know if you get 100% of the credit for that or 50% or whatever, but I think that was really wise, because it highlights that there's so much great stuff we can do with AI tools without having to even get into the whole question of superintelligence. I don't know if there's anything you're allowed to share with us about that.

Dean Ball: A lot of people contributed to the action plan, but thank you very much. I appreciate, but actually I'd say the reason we didn't use terms like AGI and superintelligence in the action plan, at least from my perspective, is like, because it's really hard to know that we're talking about the same thing.

Max Tegmark: Yeah.

Dean Ball: You know, and so this is where I think maybe we ought to spend some time, on this question of what exactly we are talking about. Because I'll give you an example of an area where I've had an evolution in my thinking. About a year and a half, two years ago, you know, I was opposed to this bill called SB 1047, which was a California state bill that had to do with regulating models with respect to potential risks relating to extreme cyber events and bioterrorism and other sorts of bioweapon events that could cause more than $500 million in damage. Yeah. And at the time, you know, the frontier models were things like GPT-4o, things like Gemini 1.5, Claude 3. And it wasn't obvious to me, sitting at that time, say the spring of 2024, that, as everyone was saying, the next time we crank the pre-training wheel, the next time we go up another order of magnitude in terms of pre-training compute, we'll get to models that do pose these very serious bio and cyber capabilities. And that wasn't quite clear to me because, you know, I was sort of thinking, well, I don't know, you're talking about minimizing cross-entropy loss here on the broad, you know, internet corpus. Is that really going to create something that can cause, like, a bioweapon? But I said something, though. I said, you know, if you showed me a model that had demonstrable system 2 reasoning, system 2 being deliberative, reflective reasoning, and that sort of led to the performance that I think it would, then I would change my mind about some of this stuff. And then, right around the time SB 1047 was vetoed, OpenAI released a model called o1, which did exactly this. It had this system 2 reasoning. And the performance on cyber, on mathematics, and on a lot of different areas of science, including biology, went way up. And at that point, shortly after that model, I said, this changes my risk calculus with respect to catastrophic events like bio and cyber, because it's clear that this reinforcement learning and inference-time-compute-based paradigm is going to rapidly lead to capability increases in some specific areas that we're worried about. And I can paint a really clear picture, because I can go from "this model can reason about biology, and it can also, by the way, use tools like AlphaFold, and other biology machine learning tools itself" to, you know, a virus being synthesized in a lab somewhere. The virus self-replicates, it infects one human host. Now, there's a lot of complexity there, right? It doesn't mean that kind of thing is guaranteed to happen, but it means that if you're doing the expected value calculation and you're thinking about, okay, plausibly the chances of this maybe causing a pandemic, even if the chances are still low, the chances just went up a big fraction. And so we're going to need some degree of targeted regulation to deal with this topic. And that is why I was supportive of SB 53, which in many ways was a somewhat more tailored version of SB 1047 that came a year later. That wasn't just because the bill changed to become a little bit more favorable to things I care about. It was also because the facts on the ground changed. So why does this matter?
Because there is a clear link between emergent model capabilities and an actual harm that is cognizable to me. I think the issue with the sort of human extinction thing is that it's very hard to demonstrate in concrete ways what this looks like. So to this point, if you could formulate the things you wanted, the things that would make you feel better, as affirmative, technical, empirical things we could say about models, right? Like, okay, we've stress-tested the model in this way, we've run this eval, or we've done this thing, and we have shown that the model passes what we view as an acceptable threshold, then I would totally be willing to say, yeah, let's make that an eval; you don't even need to pass a law. I kind of promise you, if you made that an eval, and you got enough of the credible people around to support it, I kind of think that labs would probably just run it, without a law having to pass. So why not just do that?

Max Tegmark: Yeah. So you're raising a number of successful policy approaches from the past here. So let me just summarize some good things I think you said there and add a little bit to it. You know, more broadly with regulation of things, let's take drugs again. To not drown companies in red tape and smother innovation, one tends to look at just rough plausibility and then divide all the products into classes. We have class 1 drugs, class 2, class 3, class 4, right? So there are much higher safety standards for fentanyl or other new opioid drugs than there are for new cough medicines for adults, or for new vitamins. So if you take the same approach to AI, then what you would say is: if there's some new software that translates English into Chinese or Japanese, the most embarrassing thing that could probably happen is a sort of repeat of the Monty Python skit with the fake Hungarian dictionary, you've seen it, and some people get a red face. Whereas if you have an AI that is really state-of-the-art at protein synthesis or DNA synthesis, it's pretty obvious that should be subject to higher levels of scrutiny. And then, what we do in industry now is we let companies make the safety case rather than the government. So you said there that it's hard to foresee exactly how superintelligence, if it's just smarter than all humans combined, would kill us all. But you know, being a scientist for so many years has made me really humble about these things. It would have been really hard also for the makers of thalidomide to predict that it would cause babies to be born without arms or legs, when all it ostensibly did was reduce nausea in their moms, right? We didn't actually understand how that would happen, but it did happen. And because of that, it would have been pretty reasonable to just say, okay, we've noticed that there are a lot of things that can cause birth defects. We don't understand exactly how it works. So before we try it on all American mothers with no prescription, you know, let's try it on a small number of mothers and then see what happens to their babies and then kind of go from there. So you shift the burden of proof away from politicians having to articulate why this is going to be dangerous, to the companies, who just have to do some basic research to make the safety case, right? And I think, again, if we did this with AI today, if I had a magic wand and we created an FDA for AI, you would have class 1, class 2, class 3, class 4, or AI safety level 1, 2, 3, 4 systems, a little bit along the lines of what many AI companies already have in their voluntary commitments, right? And the requirements would be very, very easy for ASL-1 and so on. But for the higher-level systems, the companies would have to do a lot more to quantify the safety case. And I think what would happen then is we would end up in a golden age of AI progress, where we would soon get flooded with all sorts of new medical treatments and amazing autonomous vehicles, great increases in productivity. The one area that would get slowed down noticeably is precisely the race to build actual superintelligence, where I think nobody would be able to make a safety case yet. And I think that would be just fine. You know, if we have to wait 20 years to get that done properly, it's way better than racing to it and bungling it and squandering everything.

Dean Ball: Liron, looked like you wanted to jump in.

Liron Shapira: Honestly, you guys are doing such a great job, so I don't know how much value I could add, but just I'll give it a shot to orient the viewers. So yeah, you guys are talking about how to regulate these new AIs. Dean, in your case, like Max pointed out, when I read your AI policy document, you know, the America's AI Action Plan, it doesn't really mention super intelligence. Do you think that is a wise way to go to basically just not look at the possibility of super intelligence currently when making policy? Or do you think we should do anything to prepare for the possibility of super intelligence?

Dean Ball: Well, so I mean, I should say, you know, the action plan, in the sort of public AI policy world, is very heavily associated with me. But, you know, of course, the action plan was written by many people within the government. I played a big role in it for sure, but I was by no means the only one. It was not my unilateral product, for sure. And I think one thing I would say there is, I think part of the reason the Action Plan doesn't talk about superintelligence, whereas my Substack does, you know, from time to time, is because it's very hard to build consensus, in a document that has so many authors, as to what we really mean. And this maybe gets into, again, it goes back to where my concerns are with laws and drafting and exactly what you mean and what you don't mean. I think about a model like GPT-7, this sort of ostensible GPT-7. And I think to myself, man, if this is a model that advances the frontiers of science in many different domains, and solves a lot of different math problems that have flummoxed humans in some cases for centuries, and is better at sort of legal reasoning and coding and all these other things than any human, that doesn't seem inherently dangerous to me. And it also seems like, how is that not superintelligent, right? It's not like Bostromian superintelligence, right? It's not that specific definition. But I guess my view is that the concept of superintelligence was created quite a long time ago in the grand scheme of things, you know, with respect to how fast AI advances. It's not obvious to me. I think that concept of superintelligence was a really useful way of thinking about advanced AI systems. You know, Bostrom wrote that book, Superintelligence, in 2014, I want to say, like, 11 years ago. Yeah. So, you know, Dario talks about this sometimes with respect to AGI, where he, Dario Amodei, the CEO of Anthropic, for the listener, is like, AGI 10 years ago was like, we're driving to Chicago. But then once you actually get closer to Chicago, it's like, okay, well, what neighborhood are we going to? What street? What's the house number, et cetera, et cetera. And I think that as we get closer, we actually need to develop new and more specific abstractions for what we are talking about, because there are, you know, all sorts of things that I think we will probably, in the fullness of time, have really specific kinds of technical standards, and maybe even statutory requirements, for what you can and can't build with AI. So one thing I want to be very clear about is, I'm not saying this needs to be unregulated for all time. In fact, I would say, you know, you made the point earlier about how we regulate different medicines with different levels of rigor, based on their potential risks. I think we already do that with the frontier language models, right? Because...

Max Tegmark: There are no binding regulations right now in America for anything.

Dean Ball: There's tons of binding regulations on Frontier AI in America.

Max Tegmark: There's no binding regulation preventing people from launching things. There is liability afterwards, right? As opposed to drugs, right? Where, and I think that's an interesting distinction for the listener, you can't release any drug in the US until you've talked to the FDA about it.

Dean Ball: Well, not quite, not quite, but, actually, this gets into technical definitions and things where these things matter. You can release, for example, CRISPR-engineered bacteria without consulting the FDA, because those are probiotics according to the statute. So a company called Lumina released a CRISPR-engineered bacteria that you're supposed to brush your teeth with, that will ostensibly eliminate tooth decay. You're infecting yourself with a bacteria that you'll be infected with for the rest of your life, and every person you ever kiss will also be infected with it. I'm just saying, though, that, like, you know...

Max Tegmark: You know, there's a lab in Wisconsin that's been taking this bird flu strain that kills 95% of the humans it infects, but is pretty harmless because it's not airborne, and they've been working on trying to make it airborne. So there's room for improvement there. But I think we agree on the basic situation: you can't open your restaurant or release a new type of opioid before you've been FDA approved. Obviously we can have differences of opinion about some things, but there are other things which I think are more in the confusion category and are really helpful to clear up. One of them is around definitions. Whenever any term historically starts to catch on, every hypester is going to try to latch on to it and have it mean something else, right? Alan Turing said in 1951 that if we build machines that are way smarter than us, the default outcome is that they take control. And Irving J. Good talked about superintelligent machines and recursive self-improvement in the '60s. The definition of superintelligence implicit in that was obviously that they could do everything way better than us, which meant they could also do better AI research than we could, build their own robot factories, make more robots that didn't need us anymore, and therein lay the risk. After that, I agree with you: right now there's just so much hype and BS about this. Mark Zuckerberg talked about superintelligence in a way that almost made it sound like it has something to do with Meta's glasses. And people have redefined AGI away from the original definition in so many different ways that, well, I don't know if you saw the paper I was involved in with Dan Hendrycks and Yoshua Bengio and many others on defining these terms. I welcome people to come up with other empirically useful definitions, but we found with this definition that we're absolutely not even at AGI: GPT-4 was 27% of the way to AGI, and GPT-5 was 57% of the way there. So there are still a lot of areas where today's best AI systems really suck, long-term memory, for example. But we're getting closer. And I think that if we're only planning to put the first FDA-style safety standards on AI in three or four or five years, there's some reasonable chance that will happen only after AGI and maybe even superintelligence have been created, and that, I think, would be a pretty big oopsie for humanity. So there are very useful, clear definitions of what we mean. And as I said in the beginning, the way to write a law is not to define superintelligence and ban it; instead, ban the outcomes you don't want, something overthrowing the US government, something making bioweapons for terrorists, which are very easy to define. As soon as that law is in place, it's going to spur massive innovation in the companies. I love comparing pharma companies' budgets with AI companies' budgets. The leading AI companies now spend maybe 1%, give or take, on safety, whereas if you go to Novartis or Pfizer or Moderna, they spend way more than that on their clinical trials and safety, because that's the financial incentive, right? They're in a race to the top: whoever can be the first to come out with a new drug that meets the safety standards makes a ton of money. People really respect the safety researchers in those companies. They don't think of them as whiners who slow down progress. They think of them as people who help them win the race against the other companies to make the big bucks, right? So I think as soon as we start treating AI companies like we treat companies in other industries, we will incentivize amazing innovation.

Liron Shapira: Can I also throw out a question? I want to clarify Max's nightmare scenario here, because I think that's important for framing the discussion. Max, I think you're not even just concerned about something like thalidomide, where...

Dean Ball: A bunch of people die, you know, hundreds of thousands or whatever it was.

Liron Shapira: You're concerned about this runaway process where it just becomes too late to regulate forever. Is that fair to say?

Max Tegmark: Yeah, let me take a minute to clarify for listeners who haven't thought so much about this. Many, many times, humanity has been thinking a little too small. People thought nuclear weapons were science fiction until they suddenly existed. People thought going to the moon was science fiction until we did it. And from my perspective as a scientist, as an AI researcher, if you think of the brain as a biological computer, then there's no law of physics saying you can't make computers better than us at all tasks. A lot of people used to say, yeah, but that sounds so hard, it's probably decades away. In fact, most professors I know thought, even six years ago, that we were probably decades away from making AI that could master language and basic knowledge at human level. And they were all wrong, it turned out, because we already have that now in systems like ChatGPT and Claude 4.5. So consider what would happen if we actually built huge numbers of humanoid robots that were better than us at all jobs, including research, including mining, including building robot factories and so on. We would have built something that is not just a new technology like the printing press; we would really have built a new species, because these robots can build new robots in robot factories, and they don't need us anymore. It could be great. It could mean we don't have to do the dishes anymore and we get to live in abundance with them taking care of us. But it's not guaranteed. Alan Turing, as I mentioned, really the godfather of our field, said in 1951 that the default outcome he expected was them taking control. We have the two most cited AI professors on the planet, Geoffrey Hinton and Yoshua Bengio, saying similar things today. So if you let go of the idea that AI is like the new internet or whatever, and instead think of it as actually a new species, one that is in every way more capable than us, there's absolutely no guarantee that it's going to work out great for us. And I'm not just talking about how we obviously couldn't get a job or get paid for doing work, because they could do it all cheaper. I'm talking about the fact that we don't necessarily have any say after that on what happens on this planet. A lot of people are working on this; it's called the control problem or the alignment problem, and there's broad scientific consensus that it's not solved yet. So this is the scenario I think we'll end up in if we just race as fast as possible to build these superintelligent machines, rather than focusing on the controllable tools that can cure cancer and do all the other great stuff, and taking it nice and slow with the things we don't yet know how to control.

Dean Ball: So, okay, there's a lot there I can respond to. First of all, you correctly observed that lots of people will take terms like superintelligence and redeploy them to mean completely different things. I would submit to you that maybe Sam Altman, when he talks about this existing in three years, is doing a bit of the same thing.

Max Tegmark: Maybe. There's plenty of hype out there.

Dean Ball: You can't talk out of both sides of your mouth. You can't say that's just hype, but also point to these people saying they're going to build it tomorrow. You have to pick one. But the other, more serious thing I would say is, yes, you were talking...

Max Tegmark: I'm concerned about this regardless of whether it happens in three years or ten years. The key thing is that right now we're closer to figuring out how to build superintelligence than we are to figuring out how to control it. So I think the only way we fix that is simply by making sure no one is allowed to build it before it can be controlled.

Dean Ball: Okay, so let me respond now. The fundamental difference here is that I am saying the technology will be regulated in a wide variety of different ways, which are fundamentally and mostly reactive. That doesn't mean we won't pass laws; there are already laws I'm supporting that have to do with AI regulation and that I don't think impose substantial burdens on development. I would also say the development of this technology is a national security priority, and it seems like a really big cost to self-consciously slow ourselves down when others are not doing that. But that's not even where I want to go, even though I think it's a valid point. Instead, I feel like our crux is that you want a precautionary-principle-based, preemptive regulatory regime that would require some group of people to affirmatively say yes before you're allowed to do something.

Max Tegmark: Like we do with pharma and every other industry.

Dean Ball: Yeah, which is what those are. I think there are huge costs associated with a regulatory regime of that kind. I don't think the government could do it very well. And as someone who has observed lots and lots of these regimes play out, I think it's very possible that by doing that in practice, you would actually end up being not just worse for innovation, but worse for safety.

Max Tegmark: Are you arguing we should close the FDA? Or did I misunderstand you?

Dean Ball: Well, I would say the FDA is an organization in need of deep and profound reform, because one of the things that happens when you impose a top-down regulatory regime like this is that you lock in all sorts of assumptions you have about the world. So let's take the FDA as an example. The FDA...

Max Tegmark: Before you give this answer, which I'm super interested in: saying the FDA needs to be reformed, with less regulatory capture, is very different from saying it should be shut down. So are you saying that we're worse off having it than we would be if we didn't have it at all? Or are you arguing simply for a better FDA?

Dean Ball: Well, let me explain where I'm coming from here, because I think these analogies from the FDA to AI are not really very good. That's not to say I don't think we need something like an FDA, or that I don't think we need to test drugs before they go into people.

Max Tegmark: But then I'm confused about why you don't think we should have the same for AI. So what's the difference between...

Dean Ball: Let me make an uninterrupted point for a few minutes, if you don't mind. Okay. First of all, when it comes to the FDA, we have this huge problem right now with a lot of drugs, which is that what we have realized after several decades of modern science, as opposed to when the FDA was created a hundred years ago, is that diseases are way more complicated than we thought they were. They're not really discrete things. There's kind of no such thing as cancer, and there's kind of no such thing as Alzheimer's; they're much more complicated, broader failures of very complex biological systems and circuits. And the issue you run into with that is that you need highly personalized treatments in order to solve things, because your cancer is different from my cancer, which is different from someone else's cancer. The FDA's regulatory regime turns out to be entirely unsuited to deal with that, because it was based on an industrial-era assumption that diseases manifest the same way across large populations of people. So you test a drug over a big population and get average statistical results, as opposed to safety results for one person. And what that means is that we have locked into place an entire economic structure for the way we treat disease that is wrong for modern science. It's really non-obvious how we change it, because there's a lot of entrenched interest in the current system, including the people who run the clinical trials, which we operate at great expense, hugely more expensive than they should be. That would be one of a huge number of examples of problems that can manifest with top-down regulatory regimes of this kind. So on the idea that we need such a thing for AI: I'm saying that doing that carries an enormous cost with it, and I don't think we really have the evidence that the cost is worth paying with respect to AI, compared to the many benefits we get from not doing that and instead regulating it much more like we have regulated the computer and the internet and software and many of the other general purpose technologies, which have actually worked and grown and made our lives so much better. Not to say that medicines haven't, but those have been the real growth areas. And really, a lot of why medicine has done so well has to do with software and computation and the internet, as opposed to pure object-level advances in biology, which is why we're going to cure cancer on chips that were originally designed to play video games. So I guess I'm saying there's a high burden of proof. Not to say it will never be met, but it hasn't been met, and every top-down regulatory system we have carried a similarly high burden of proof. And if, to go back to the ban-superintelligence statement, you made a statement that said we need to investigate such and such, or we need to figure out what guarantees we want AI labs to be able to make in terms of empirical evaluations about their models, I don't know if I'd sign on to it, it would depend on the specifics, but I certainly would not have had the kind of visceral negative reaction that I did to the ban-superintelligence statement.

Max Tegmark: Cool. A lot of good stuff in there. Let me pick out three things I'd like to respond to: one about regulation, one about perceived vagueness, and one about national security, which you brought up. On the first one, regulation, it sounds like we're actually in agreement that even though you'd like to see reforms of the FDA, you would not want restrictions on biotech to be completely eliminated. You would not want people to be able to do biosafety-level-4 research to make that 95%-lethal bird flu airborne, for example, just because it's cool and people can sue them later. You would like there to be something. Whereas for AI, you feel there should be nothing, for now, to prevent companies from deploying things; you're open to it maybe in three or four years, you just want us to think more about it. And my position there is that if someone releases actual true superintelligence that takes over the world, it's going to be too late to regulate it then. Then, on the second point, vagueness, this is really important. Many people have said to me about this statement we put out on superintelligence: why isn't it written much more concretely, so you can make a law out of it? And that was very deliberate, because if you look historically at when we've passed new laws in the US, like, for example, the law against child ***********, someone could have pushed back and said: if you say you're against child ***********, that's too vague. How do you define child, is it under 16 or under 18? How do you define ***********? That's pretty complicated, right? And in the law, you can't just say, oh, you know it when you see it. But you totally can do this in the law: it started with a broad consensus that we need some kind of ban on child ***********, and that created the political will for experts to sit down and hash out all these details. And that's something you are very, very good at, right? Looking at how you would actually draft the laws. The idea with our statement was very analogous. We see in polls that 95% of Americans don't want a race to superintelligence. A lot of people are super excited about AI tools but view the idea of losing control of Earth to a new robot species as kind of dystopian, including David Sacks, no less, right? So if we can establish as public knowledge that most people actually don't want an uncontrolled race to superintelligence, just like most people want some kind of ban on child ***********, then that can create the political will where brilliant policymakers like you sit down, talk to all the stakeholders, and come up with carefully crafted language for how this would actually work, and whether there should be a new agency. So in summary, the vagueness is not something I view as a bug; I think it was a feature. What we're going for here is some moral leadership, basically: we would like there to be some kind of restrictions on a race to superintelligence.

Liron Shapira: Let me jump in for a sec, because I think you guys may actually dovetail more on policy itself than it's sounding like. Maybe the real crux of disagreement is your mainline scenario of what things would look like if we just went on cruise control and didn't do much more than we've already done in the way of policy. So, Dean, let me ask you this question. What is your p(doom), defined as the probability that if we just let AI play out, not layering on additional regulation, waiting 10 years, it goes wrong and we get this runaway superintelligence that's now too late to control or regulate? What's your p(doom)?

Dean Ball: Doom being defined as human extinction.

Liron Shapira: Yeah, like it's like a catastrophe, you know, of extinction scale. So maybe like half the human population dies and then we go back to being cavemen or just something extremely catastrophic or even extinction.

Max Tegmark: With a permanent 1984.

Dean Ball: I mean, if what we're talking about is AI systems taking control over the world and killing large numbers of people, yeah, my p(doom) is very low. It's sub 1%; it's 0.01% or something like that. It's very low. There are all sorts of other outcomes from AI that seem very bad and way more plausible to me, and those are things I work on a lot. But the specific doom scenario just doesn't seem all that likely to me when you think it through. One way to see where my concern actually lies: I think that if you passed a law that said a group of people has to take a straight up-or-down vote, with the results made public, a group of nine, let's just say, the Supreme Court of the United States, and the Supreme Court has to look at every frontier language model release and vote on whether they think this model is likely to take over the world, I would be unconcerned about that. I wouldn't support that law, for a lot of different reasons, but I would not be particularly concerned if that were literally the law. The problem is that that's not what you'd get. And I think this gets back to why the superintelligence ban statement was written the way that it was: a lot of the people who signed it, I would predict, have a much more nebulous set of concerns about AI than the very specific ones, Max, that you have. And by the way, I'm not saying you don't have other concerns; I'm not saying you're not worried about misinformation or deepfakes or job loss or whatever else. But I think we would both agree that the job loss thing is really complicated. Matt Walsh, the conservative influencer, had a tweet a couple of days ago about how AI is going to eliminate, I forget what it was, 10 million, 25 million, maybe he said 5 million jobs over the next 10 years or something like that. And I thought, that's actually an extremely optimistic scenario in the grand scheme of things, because if you only focus on what gets eliminated, remember the economy creates millions of jobs and destroys millions of jobs every year anyway. If AI only destroyed 5 million jobs over the next 10 years, that would be a very slow rate of change, low compared to just the normal churning of the economy. So the job thing is certainly complicated, but you can imagine a regulatory regime that was much more like: we have to check what the socioeconomic impact of this is going to be, we have to make sure it doesn't harm such-and-such, we have to make sure it doesn't do all these other things. Those aren't unserious issues, but can we agree that there's a plausible version of GPT-7 that's really good and pro-social and that might also displace a lot of jobs? And that if we had a regulatory regime that said we need a vote on whether this will take over the world, the vote would be 9 to 0 in favor of "this is not going to take over the world." But if we had a regulatory regime staffed with, say, union representatives and various other stakeholders, and their task was: do you think this will be good for the economy, do you think it could cause job loss, do you think it could be dangerous in some more nebulous way, then that group of people might vote against the release of that thing, and that might actually end up being a bad outcome. Do you see that failure mode? Do you believe that failure mode is a real one?

Max Tegmark: I totally see things like this getting very political, yeah. It was very interesting for me, because I spent a lot of time talking to many of the initial signatories of this statement, and of course I was very interested to hear why they signed. There were indeed, as you say, many different reasons. For the NatSec people, like former Chairman of the Joint Chiefs of Staff Mike Mullen, for example, I think loss of control was very central, because he views that as a national security threat: whether the US government gets overthrown by a foreign power or by a superintelligence, it's a NatSec threat. On the other hand, there were people from Steve Bannon to Bernie Sanders who felt that if we end up in a system where we actually have superintelligence, which by definition makes all humans economically obsolete, then American workers would be dependent on handouts, either from the government, in the form of UBI, which the conservatives who signed view as socialism and don't like, or from the companies, Sam Altman's Worldcoin or whatever, which would be viewed by people like Bernie Sanders as incredibly dystopian: the most massive power concentration in human history, in some tiny clique of people from San Francisco who don't necessarily even share their moral values. So that, I think you're right, is something that bothered a lot of people. And then we had a lot of faith leaders who signed for fairly different reasons; they just felt this is really going to be harmful for human dignity. A lot of people we both know in San Francisco like to joke about superintelligence being the sand god and so on, and a lot of these faith leaders are like, wait a minute, I already believe in a God. Why should I support some atheists in San Francisco building a new one that somehow gets to run the show? That sounds very undignified. So people have many different reasons, but in short, there are two separate questions. One is: should there be any kind of safety standards, like we have for biotech or restaurants? And the second question, which is much harder, is: what exactly should be on the list? I would be very happy if we could start with just one very light requirement, which is that companies have to make a good quantitative case that a system is not going to overthrow the US government before they launch it. And then we can have a broader political discussion.

Liron Shapira: Before Dean responds, Max, speaking of reasons to sign the statement and these nightmare scenarios, what is your p(doom)?

Max Tegmark: Oh yeah. We actually wrote a paper, me and three grad students from MIT, where we took the most popular approach for how humans can control superintelligence, known as recursive scalable oversight, got very nerdy about it, and tried to calculate the probability that the control fails. We found that in our most optimistic scenario, it fails 92% of the time. I would love it if people who think they have a better idea for controlling superintelligence would actually publish it openly, so it can be subject to scrutiny from others. But until then, if we go ahead and continue having nothing like the FDA for AI, so that people can legally just launch superintelligence and only worry about getting sued afterwards, yeah, I would put it at definitely over 90% that we lose control over this.

Liron Shapira: Wow, 0.1% versus 90%. It's a pretty big crux.

Dean Ball: I just have this sneaking suspicion that if a model seemed like it was going to pose the risk of overthrowing the US government, or anything in that vicinity, OpenAI would not release that model, nor would Anthropic or Meta or xAI or Google. I just don't think they would. They would be quite concerned and would probably call the US government and tell them. It just doesn't seem like a realistic scenario to me. Let me say another thing.

Max Tegmark: I agree there. The makers of thalidomide, and I don't want to shame any particular company, would not have released it either if they had known it was going to cause thousands of babies to be born without arms or legs. But it was complicated, and they just didn't realize it. And it's similarly very complicated for the people at these AI companies to know. Dario Amodei has himself talked about a 15 to 25% risk, and Sam Altman has talked about how it could be lights out for everybody. So they're clearly comfortable with 5% or 10%.

Dean Ball: You can't deny the possibility of something you can't disprove, right? I can't prove a negative; that's why, if you're being intellectually honest, you can't say it's zero. Here's the thing. I'd say there are two observations I would make about this. The first is that some of the negative effects you are describing, including, by the way, a lot of the labor market stuff, are going to be emergent outcomes of a general purpose technology interacting with society, which is very hard to model in advance, not impossible, just very hard, in the way that you can with drug testing. And these emergent phenomena are easy to get wrong. When you're thinking about stuff like labor, and this is why a preemptive regulatory regime scares me, if a group of people sit around and think about the potential risks of a thing, A, they tend to overstate them, and B, the reason they tend to overstate them, and I hear this all the time when people talk about the impact of AI on society, is that, as an economist would say, they don't endogenize the impact. They model AI as an exogenous shock, like a meteor coming toward society that's going to hit us, and assume we will just stay in place and go, oh my God, there's a meteor, and not do anything. The reality is that if, five years ago, I had shown you all the generative AI tools that exist today, just the tools, nothing about the society, and said this exists in 2025, you would have said a bunch of stuff, and I would too, that would probably be wrong about the downsides of the technology as it actually manifested. You would guess: my God, their elections must be completely overwhelmed, their media environment, their this and that. There must be huge labor market dislocation; there must be no software engineers left. In reality, society is an adaptive, complex system itself that, just like the human body, has the ability to internalize many different things. It's incredibly adaptive, and humans are quite ingenious. That's an easy thing to discount. The other point I want to make is about this recursive self-improvement thing, because it's another thing that gets to me sometimes. Every general purpose technology in human history exhibits what, in the context of AI, you would call recursive self-improvement.

Max Tegmark: But with a human in the loop.

Dean Ball: I mean, kind of, I don't know. All I mean by that is: you come into the Iron Age and you use iron to make more iron and better iron; you turn a mill with an iron hammer that you're using to manufacture more iron. You use computers to make better computers, you use oil to get more oil, electricity to get more electricity, right? Every general purpose technology exhibits these kinds of recursive loops precisely because it's a general purpose technology: one of the things a general purpose technology does is make the general purpose technology itself better. It's very common throughout the history of technology. So you can't just cite the fact that AI is likely to have recursive loops of self-improvement, and that AI will be useful for AI, because every general purpose technology is like that.

Max Tegmark: And so we don't disagree on anything here.

Dean Ball: Yeah, but there's a reason that, in the case of every other general purpose technology, the recursive feedback loop tends to be autocatalytic, oftentimes produces non-linear improvement, and yet never results in these runaway processes, right? We've never seen that. It's not like we used oil to get...

Max Tegmark: Yeah.

Dean Ball: We use energy to get more energy, and it's not like all of a sudden the energy blew up the entire universe, right? That didn't happen.

Liron Shapira: Okay, Max, maybe you can explain why you still think there's a doom scenario, despite Dean's point. And then I've got one more question for Dean.

Max Tegmark: Yeah, and I'd also love to comment on the NatSec point, which I think is super important, because it's the main reason always given in Washington for why we should not regulate. So, I completely agree, of course, that technological progress itself has always involved these self-improvement loops. I have written about that extensively; it's fundamentally why GDP has grown exponentially over time, because we use today's technology to build tomorrow's technology and so on. But there have always been humans in the loop. And when there are no humans in the loop, things can go quite fast. If you watch a slow-motion film of a nuclear bomb exploding, there is no human in the loop: you get one uranium atom splitting, then two, then four, then eight, and so on. The reason we've never seen our technology blow up wholesale like that is that we've always had humans in the loop as a moderator. And unless you think there's some secret sauce in human brains that you can't build into machines, it's pretty obvious that it is possible to build machines that really don't need us. Now, if those machines think 100 times faster than us, and if they can instantly copy into themselves all the knowledge that other robots have learned, et cetera, then we could see more progress in a month than we saw in 1,000 years with humans in the loop. This is not my idea at all; Irving J. Good articulated it very nicely in the '60s. I think it's just a pretty basic, simple argument. We can't say, oh, it never happened before, so it won't happen in the future, because we've never built superintelligence before. All the other tech, like the Industrial Revolution, just replaced some aspects of human work, like our muscles: we made machines stronger and faster. We've never had machines that could entirely replace our cognitive abilities. Now, on that NatSec point: when I talk to politicians on the Hill, and especially when I listen to AI lobbyists, of whom there are now more in Washington, DC than pharma lobbyists and fossil fuel lobbyists combined, the main talking point they use to explain why we should not have any binding regulations is, in one word, China. If we don't race to superintelligence, they say, China is going to do it first. And I think that is just complete baloney. There is not one race against China; there are two separate races, which people really need to stop conflating. One is a race for dominance, which was very eloquently articulated, Dean, to your credit, in the AI Action Plan: a race for dominance economically, technologically, militarily. And the way to win that kind of race is by building controllable tools that win on all those fronts, right? And yes, you need big data centers and all the other stuff to build these powerful tools we can control. Then there's the second race, which is: who can be the first to release superintelligence that they don't yet know how to control? That's the one I've been arguing is a suicide race, because we're closer to figuring out how to build it than we are to figuring out how to control it. And the Chinese Communist Party, and Xi Jinping as well, clearly really like control. I think it's quite obvious that they would never permit a Chinese company to build the technology if there were a significant risk of a superintelligence just overthrowing them and taking over China. I even got a firsthand anecdote on that from Elon Musk. He told me that in the spring of 2023 he had a meeting with some quite high-up people in the CCP, where he said to them: look, if someone in China builds superintelligence, after that, China is not going to be run by the CCP; it's going to be run by the superintelligence. And the reaction in the room was hilarious, Elon said: a lot of long faces, like they really hadn't thought that through. Within a month or so after that, China rolled out its first-ever AI regulations. I'm also quite confident that the Chinese have much more surveillance on DeepSeek and their other AI companies than the US government has on our companies, and that they have both the ability and the willingness to stop something they think could cause them to lose control. So here's the way I see this going. When I said a p(doom) of over 90%, Liron, that was, to be clear, if we just don't do any regulation. I'm actually quite optimistic that things are going to go well instead, because I think there's no way China is going to allow a race to superintelligence while we don't know how to control it. I think China is going to continue steaming ahead, trying to build all the powerful AI tools for the race that Dean was describing in the AI Action Plan, but absolutely not let anyone build superintelligence. And I think that's what's going to happen in the US as well. I already know a growing number of people in US NatSec who are beginning to view this as a NatSec threat. Maybe they listen to Dario Amodei talk about a country of geniuses in a data center in 2027 and think: wait a minute, I keep a list of countries I track as NatSec threats. Did Dario just say the word country? Maybe I should add that country to my watch list too. And then suddenly we end up in this really great situation where the US will also prevent anyone from building stuff they don't know how to control, and we'll have a race over who can build the best, most powerful, and most helpful tools. That's the future I'm really excited about living in.

Liron Shapira: Okay, Dean, let me ask you one last question, and then you guys can make your closing statements. Max has laid out his nightmare scenario: we don't regulate AI, it becomes uncontrollable, whether through recursive self-improvement or just having more power than the human species, and it just doesn't go well. He's even said the probability could be up to 90% if we let this happen.

Max Tegmark: At least 90%.

Liron Shapira: At least 90%. Okay. And then Dean, you see that as a very low probability scenario, but you do have your own nightmare scenarios, which are on the side of regulating AI too much, right? If I understand correctly, your two nightmare scenarios are losing the AI race because your AI action plan focuses so much on winning the AI race. And you have another nightmare scenario, which is that some kind of overregulation could lead to like a tyranny situation, right? Just undermine democratic governance. So explain your nightmare scenarios.

Dean Ball: Well, certainly I think all manner of tyranny is possible with AI, and I worry quite a bit about that. I'll start by saying that AI is going to challenge the structure of the nation state kind of no matter what; in the good scenario and the bad scenario, it challenges that in various ways, and it requires institutional evolution, conceivably revolution, in certain places in the world. So buckle up, because that's coming no matter what. But there's a version of that institutional evolution that basically looks like a rentier state. Think of the Middle Ages: we get a state run by a small number of people who control something, and that something is certainly a tool of violence, but they're not quite legitimate in the way we think of democratic legitimacy. There's some middle class of rent-seeking humans who have legal protections from that upper class. And then there is a large underclass of people who have very low practical agency, very low ability to really meaningfully contribute, and who are kind of stuck in some sort of dystopia. That seems very likely to me. And I think there are many regulatory regimes, including a licensing one, that make that outcome substantially likelier. So we face these kinds of trade-offs no matter what.

Liron Shapira: Should we hit on losing the AI race or should we hit on like a tyranny scenario? Or do you think what you described is kind of the main nightmare scenario?

Dean Ball: Honestly, I don't really even know what losing the AI race to China means; it's hard to know. Certainly there's a world in which China becomes the dominant technological power and sets the standards and all that, and that's a really bad world too, in many ways. But I'd say that's a bad scenario, not my nightmare scenario. It's not the worst possible thing that could happen, but it's definitely not a good thing.

Liron Shapira: Okay. All right. We've covered a lot. So let's go to closing statements, starting with Max.

Max Tegmark: All right. We've talked a lot about doom here, and you've kept nudging us in that direction, since your brand here is Doom Debates. But I'd like to end on an optimistic note. The real reason I'm so engaged with this topic is that I'm a fundamentally quite optimistic person. I've spent so much time playing with this guy, and I'm very excited about the potential for him to have an amazing future where he doesn't have to worry about dying of cancer, where we can prosper like never before on Earth, and not just for an election cycle: life could in principle prosper for billions of years and even spread out into the cosmos. As a species, we've completely underestimated how much opportunity we have. And that's why I think it's so important that we don't squander all this greatness by making some hasty chess moves and blowing it all. There are two very clearly different paths we're choosing between right now, and I think we have to make up our minds within the next year or two, probably. One of them is the pro-human future. America was founded to be a country run by the people, for the people. There was a real emphasis that America was supposed to be good for the people living in America; it was not founded to be good for the machines of America. That was not the idea. The way to get there is to stop the corporate welfare toward AI companies, which is hard, because they have so many lobbyists, but hey, so did the other industries we eventually put safety standards on, and then to steer the technology to really be pro-human: to make life better for humans, cure diseases, make us more productive, et cetera, but make sure it's always us in charge. So that first path is a very pro-tech scenario; notice that we go full steam ahead with ever better AI tools. The other scenario is that we race to build superintelligence, which by its very definition is super dystopian, right? By the very definition of the thing, none of you can earn any money doing anything after it's been built. So you're going to be dependent on handouts from the government or from some tech CEO, or you're going to have no money, and life will be very bad. That, to me, is an incredibly unambitious vision for the future. Why should we, after hundreds of thousands of years as a species on this planet, working so hard to build ever better technology so we can finally become the captains of our own ship, so we don't have to worry about getting eaten by a tiger or starving to death, why should we throw away all the empowerment we've gained through all that hard work by deliberately building something that's going to take over from us? Ridiculously unambitious. I want us to take charge of this, and I think the 95% of Americans in these polls clearly agree, and to deliberately say: okay, we're in charge now, let's keep it that way. A journalist asked me: what on earth do these different people who signed the statement have in common? I don't even understand, she said. I thought about it for a while. What does Steve Bannon have in common with faith leaders and Susan Rice and Chinese researchers, et cetera? Then it hit me: they're all human. So of course they want the pro-human future where humans are in charge. If we found out there was an alien invasion fleet heading toward Earth, obviously we would all work together to fight the aliens and make sure it's us in charge. And now you have a quite small fringe group from Silicon Valley, with very good lobbyists, basically saying, yeah, we should build all these aliens, and they're probably going to take over. Elon even said that openly the other week, right? That is the most unambitious ending to this beautiful journey of empowerment I can imagine. The inspiring future that I'm excited about, and that I think we will actually have once people understand more about what this is all about, is one where we remain in charge, keep AI a tool, and create a future that's even cooler than the sci-fi authors could imagine.

Liron Shapira: All right, great. Let's go to Dean.

Dean Ball: I think the fundamental thing to think about here is really assumptions. Most AI doom debates, to use the name of the podcast, revolve around at least one of the interlocutors, usually the one who believes in doom, assuming their conclusions. And here, we've had a lot of conversation about how superintelligence has many different definitions; it could mean many different things, and we don't quite know what it means. There's one version of it that you can articulate that means all sorts of bad things. That doesn't mean that thing is likely to be built, doesn't mean that thing is going to be built, doesn't mean that thing is possible to build in quite the way we imagine. It just means it's a thing you can say, a valid sentence in the English language. And it takes a big leap to assume that that's what we're actually going to build. I think we shouldn't assume that. As Max said at the end of his statement, the future is often profoundly stranger than we can possibly imagine. The future we live in today would have been unbelievably alien to someone 50 or 100, and certainly 200, years ago. Many of the jobs we do would have seemed quite odd, and the relationships we have with one another, our institutions, all of it, would be deeply alien. My guess is that continues, and my guess is that the things we assume today about the technology of the future are probably wrong. We don't want to embed too many of those assumptions into the law, into regulation. Right now more than at any other time, given the speed with which AI evolves, we want to maintain adaptability and flexibility. So I just wouldn't assume that superintelligence means the bad thing. I would instead at least consider that there are many worlds in which humans can thrive amid things that are better than them at various kinds of intellectual tasks, that humans can still have a role, because there are certain things that are not inherently replaceable by machines, and that we can gain a tremendous amount of wealth, live much better lives, and find all sorts of new things to do that are economically and practically useful. That's been true so far throughout human history, and it wouldn't have seemed that way to people at the time; since we have a written record, we know what people's reactions to new technology have been, and it's always been like this. You can say this time is different, and that's fine, but I think we should demand a higher standard of evidence. Max talked about how America is by the people and for the people. True. But we also have a system that makes it quite hard to pass new laws, and there is a reason we made it quite hard, which is that our founding fathers, and this is not a statement of opinion, this is a statement of fact about American history, were deeply distrustful of raw democratic impulse. The word democracy was a pejorative to the people who wrote our Constitution; it was an insult to say that a proposal seemed too democratic, because they believed you had to balance raw democratic will, people's raw intuitions about things, with deliberative bodies that make it hard to pass laws, because laws ultimately are rules passed by the people who hold the monopoly on legitimate violence. That's a very sacred and important thing, and we don't want to just give them all these new powers willy-nilly; we've done that a lot. And I have very serious issues, logically, and at a more philosophical and even moral level, with the idea that we're just going to pass a new regulatory regime and everything will go fine and there will be no side effects. I think there will be tons of side effects, and I think we would ban tons of technological progress and stave off a lot of wonderful possibilities for the future. There are many ways to investigate and interrogate the concept of superintelligence and to advance the safety and controllability of that thing, many ways that don't involve banning it, which was the original topic of this debate. I note that Max did not spend that much time defending the thing that was actually the subject of the statement that FLI put out. But I think you can build something much better than just a regulatory regime. You can build a society that is capable of grappling with this technology, and institutions that are capable of evolving with it. And I think that's ultimately going to be a much healthier, better outcome for the world. That's the one I work on every single day. It involves taking the risk seriously. It involves taking the technology very seriously. You shouldn't be a radical in either direction when it comes to this technology, and you should be willing to update your beliefs frequently. But at the same time, details matter. Getting this right is not going to be a matter of taking regulatory concepts we developed for other things off the shelf and applying them to this; it's going to be much more difficult than that. And so I guess I'll close with that.

Liron Shapira: Thank you. I'm thankful to both of you for stepping up to debate the difficult policy questions around super intelligent AI.

Dean Ball: It's such a complex issue, and so there's so many different positions. It's not black and white. It won't work to do it in an echo chamber.

Liron Shapira: It won't work to reduce AI policy to left versus right politics. Respectful debate between smart people with different views is what we need right now as a country and as a species. That's how we can stress-test different ideas and bring out important nuance. I'd go so far as to say debate is a key piece of social infrastructure. So thank you again, Max and Dean.

Max Tegmark: Thank you, Dean, for a really great conversation.

Dean Ball: Thanks to you, Max. Thanks to you, Liron. This was great.

Liron Shapira: Wow, what an illuminating debate from two people who are actually in the room for these kinds of policy discussions. Regarding America's AI Action Plan, the document Dean Ball helped draft, both Max and Dean were happy that it doesn't mention superintelligence, but for very different reasons. Max was saying we need a whole other statement about superintelligence, and he even proposed one: that it should be banned until there's broad scientific consensus that it will be done safely and controllably, and until there's strong public buy-in. That's what Max thinks we should do regarding a superintelligence statement. Dean is saying, yeah, it's good that the Action Plan didn't mention superintelligence, because the concept is too vague right now. From our perspective today, Dean says, we don't know what superintelligence will look like; maybe it'll just work out really great and won't need that much regulation. So they're diametrically opposed: push to ban it until there's consensus, versus, well, we'll deal with it later, it's fine for now. The crux of the disagreement between Max and Dean, as we uncovered during the debate, really does come down to their p(doom). If you remember, Max was saying his p(doom) is greater than 90%, assuming we don't get tough regulations on AI, while Dean's p(doom) is only about 0.1%, much, much lower. Dean is basically not worried about plowing forward and dealing with issues as we get to them, whereas Max says we'd better be preemptive, because it might be too late to regulate if we don't start right now. The reason I say p(doom) is the crux of their disagreement is that it's the thing that I think would really change their minds about everything else. Their policy recommendations are totally downstream of what they see as the probability of doom. For example, if they were to meet halfway, if Dean went up from 0.1% to 25% and Max came down from 90% to 25%, or anywhere in that middle range, 40%, whatever, they'd start coming up with very similar policy ideas. In that case, I think Dean would naturally say, okay, we need very tight security on this development; you can't just handle it retroactively, because there's a high risk of total destruction from runaway AI. They would naturally be thinking along the same lines; it's just downstream of how much doom they expect. They went on to talk about the FDA analogy, asking what good regulation looks like. Max was saying, isn't the FDA a success story? Don't you like the FDA? Dean's response wasn't full libertarian; he didn't say the FDA is evil and we shouldn't regulate things like that. He said, yeah, the FDA is helpful, but you can see it has baggage. It has this legacy idea that one medicine treats one disease, when the science is really a lot more complex than that. So even the FDA puts on a kind of straitjacket, where you have to jump through all these hoops but you're not necessarily getting a lot of productivity; you're paying a high cost in terms of drag. And Dean's worry is that when we get to artificial general intelligence or artificial superintelligence, this kind of straitjacket regulation could be an extreme version of the failure mode that makes him dislike the FDA, even though he admits some amount of regulation is good. So that's where they landed with the FDA analogy: the idea makes sense, but there's a failure mode that's going to bite. Once again, from my perspective, the reason they're not on the same page about the FDA analogy is that Max sees this runaway risk, this doom risk, while Dean sees AI as really just another technology. This comes up a lot in other debates: is AI just another technology? My take is that Dean leans very much toward yes and Max toward no. In conclusion, what a stark divide. We have a scientist saying there's a high risk of a catastrophic outcome from superintelligent AI, potentially in less than 10 years, and a policymaker saying: you haven't made a strong enough case for why the risk is high, so we shouldn't ban a hugely valuable line of research, and regulation is burdensome by default, so we should be constantly worried about overregulation. It's quite a difference of opinion to reconcile, and what's crazy is that the stakes are so high and the timelines are so short. My prediction is that we're going to keep seeing policy that's downstream of the policymaker's p(doom). One of the benefits of having Dean Ball in this conversation is that we heard his perspective on p(doom) explicitly, because it wasn't explicitly addressed in America's AI Action Plan. More generally, I think debates about p(doom), or doom debates, if you will, are extremely productive for discourse. It's not just about this one disagreement between America's AI Action Plan and the Future of Life Institute's statement to conditionally ban superintelligence. It's a bigger picture: it's about building the social infrastructure for high-quality debate. The world is complex, and debate is one of the most powerful tools we have as a society to navigate our way to appropriate policy decisions. But it has to be high-quality debate. The people have to be informed. It has to be respectful. It has to stay focused on the issues, not on finger-pointing or character assassination or scoring political points. It has to be nuanced. It has to involve conceding points, with the two sides actually trying to find common ground, if at all possible. And it has to be productive for policymaking. If you think this debate was productive, you can support me and my team at Doom Debates in building the social infrastructure for having more of these kinds of debates at the highest levels by donating to the show. Go to doomdebates.com/donate to learn more. Thanks to viewer donations, we're currently in the process of building out a professional recording studio to elevate the show. Production is an ongoing cost, funded by donations from viewers like you. And I'm happy to say that this will always be independent media, managed solely by me, based on my personal perspective, not doing anyone else's bidding. You can make a 501(c)(3) charitable donation, and every dollar you donate goes directly to production and marketing of the show. Again, go to doomdebates.com/donate for more details. If you're new to the show, check out the Doom Debates YouTube channel. I've been having debates with some of the top thinkers in the space, like Gary Marcus, whose p(doom) is apparently 1 to 2%, and Vitalik Buterin, who's more in the 8 to 12% range. And in each case, I ask them: why isn't it 50% or 90%, like Max Tegmark's? What is going on? How do we reconcile this huge gap? And how do we do it fast, so that we can make productive policy decisions?
While you're on that YouTube channel, smack the subscribe button so you'll conveniently get new episodes in your feed. And I look forward to bringing you the next episode of Doom Debates.

