Today Dean W. Ball, former White House AI policy advisor, joins The Cognitive Revolution to discuss his role in crafting the Trump administration's AI Action Plan, his reasons for leaving government, and his perspectives on AI policy, US-China competition, and the future of AI regulation and adoption.
Check out our sponsors: Fin, Labelbox, Oracle Cloud Infrastructure, Shopify.
Show notes below brought to you by Notion AI Meeting Notes - try one month for free at https://notion.com/lp/nathan
- White House Experience & Government Role: Dean Ball served as senior policy advisor for AI and emerging technology at the White House Office of Science and Technology Policy (OSTP) for four months.
- AI Regulation & Government Approach: Dean notes that less information asymmetry exists between government and AI labs than many assume: "Having worked at the White House, I don't know tremendously more about what goes on inside the Frontier Labs than you do."
- Private Sector Innovation: Dean emphasizes the importance of private sector-led initiatives in AI safety and standards.
- Future AI Developments: Dean believes agentic commerce is "right around the corner" but sees little discussion about it from regulatory or conceptual perspectives.
- AI Action Plan Development: The plan emphasized concrete actions for AI implementation across government agencies rather than just theoretical frameworks.
- Personal Updates: Dean is reviving his weekly Hyperdimensional Substack, joining the Foundation for American Innovation as a senior fellow, and plans to share his long-held insights on recent AI developments.
Sponsors:
Fin: Fin is the #1 AI Agent for customer service, trusted by over 5000 customer service leaders and top AI companies including Anthropic and Synthesia. Fin is the highest performing agent on the market and resolves even the most complex customer queries. Try Fin today with our 90-day money-back guarantee - if you’re not 100% satisfied, get up to $1 million back. Learn more at https://fin.ai/cognitive
Labelbox: Labelbox pairs automation, expert judgment, and reinforcement learning to deliver high-quality training data for cutting-edge AI. Put its data factory to work for you, visit https://labelbox.com
Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive
Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive
PRODUCED BY:
https://aipodcast.ing
CHAPTERS:
(00:00) About the Episode
(05:36) Dean W Ball returns to the Cognitive Revolution
(06:56) How Dean got his role at the White House
(10:20) Navigating bureaucracy and the realities of working in government
(15:26) What to know before taking a policy role
(20:05) Sponsors: Fin | Labelbox
(23:19) How AI is (and isn’t) used in government
(30:19) The impact of government processes on innovation
(37:22) Sponsors: Oracle Cloud Infrastructure | Shopify
(40:27) Personal and professional reasons for moving on
(49:45) The process and philosophy behind the plan
(01:00:54) How different groups on the right view AI
(01:12:05) How the public and politicians are influencing each other
(01:45:47) The three pillars of the action plan and their significance
(02:12:49) Building the infrastructure for AI’s future
(02:26:13) The global race for AI and semiconductor leadership
(03:05:49) How the government views and works with leading AI companies
(03:06:35) Dean’s future plans and advice for the next wave of innovators
(03:15:54) Outro
SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathan...
Youtube: https://youtube.com/@Cognitive...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...
Full Transcript
Nathan Labenz: (0:00) Hello, and welcome back to the Cognitive Revolution. Today, I am thrilled to have Dean W. Ball back for his fifth appearance on the podcast. Fresh off a brief but historic tenure as senior policy adviser for artificial intelligence and emerging technology at the White House Office of Science and Technology Policy, Dean is living proof that a talented person with a passion for understanding AI can go from a newcomer to the field to the highest levels of influence in as little as a year. After spending the first 10 years of his career at policy think tanks, Dean took a leap of faith and quit his job to start thinking and writing about AI full time, launching his Substack, Hyperdimensional, in January 2024. Around that time, he sent me a Twitter DM about apparent contradictions between OpenAI's preparedness framework and its superalignment plan: how, he wondered, could OpenAI plan to rely on automated alignment researchers for safety while at the same time flagging model autonomy as a leading source of catastrophic risk? It was a good question, and I became an early subscriber. The first of Dean's posts that really stuck with me was one he called Software's Romantic Era, about his first interactions with Claude 3 Opus, in which Claude expressed the desire to understand what embodiment feels like and to hear music. Claude's first request would be Beethoven's Ninth Symphony, a fascinating choice, since Beethoven was deaf when he wrote it and never actually heard it himself. I still remember one of the concluding lines: "Here, finally, is an AI system whose thoughts I want to hear." What a brilliant way to capture the spine-tingling difference between Claude 3 and earlier models, and more broadly between AI and all other technology. Dean's profile began to rise in a serious way thanks to his early criticism of California's SB 1047, which inspired open source and progress advocates to take an interest and ultimately led to multiple rounds of revisions.
While we never ended up fully agreeing on the bill, I was impressed that, unlike so many of the shrill voices in that debate, Dean moderated his criticism as the bill itself evolved. From there, he started showing up everywhere: at The Curve, as an FAI fellow, as an affiliate of Fathom, and with a podcast of his own. And then he got the call to join the White House. Now regular listeners will know that I did not vote for President Trump, but I was nevertheless very excited for him and, more importantly, for the country. There are not many people on the political right, or frankly anywhere, who combine the impulse to get hands-on and use the technology to the fullest with a healthy respect for how powerful it could soon become, plus a deep appreciation for the nuances and challenges of integrating it into society and government effectively. I thought then, and I continue to believe now, that Dean was perhaps the single best person the Trump administration could plausibly have picked for the role. At the White House, Dean led the effort to develop America's AI Action Plan, which very well might be the most well received policy the Trump administration has put out to date on any topic, with positive reviews from many people who expected to hate it, including an endorsement from President Biden's former national security adviser, Jake Sullivan, on the most recent episode of this very podcast. Yes, it is a broadly accelerationist document framed in terms of beating China, but it's also extremely sophisticated, calling for what I believe have proved to be very prudent investments in evaluations and risk assessment, IO security, AI-enabled science, mechanistic interpretability, compute governance, and much more. In this conversation, we discuss why Dean is now leaving the White House, including his candid self-assessment that while he's great at conceptual work, implementation can probably be better done by somebody else.
We explore how the American political right is grappling with AI, from deregulatory impulses to culture war concerns and worries about AI's impact on children. We also dig into the mechanics of unlocking spare gigawatts for data centers, hear Dean's perspective on which frontier developers should be considered live players and how they're perceived in Washington, and get his perspective on what the AI safety and governance communities can best do to complement the government's work going forward. Along the way, we also hear how Dean used LLMs to simulate interagency feedback meetings, how the four-year presidential term, which ends in January 2029, just happens to put the administration in sync with many AGI forecasts, and why Dean believes that the Republican Party might become the better home of the AI safety world in the long run. Between his rapid rise and his brief White House tenure, you might say that Dean embodies short timelines. And while I am sad to see him leave such a high-impact role, I hope that his story inspires others who may have similar potential to take their own leap of faith and get into the AI game. We need all the Deans we can get, and we need them as soon as possible. Finally, I wanna say congratulations to Dean and his wife on their upcoming first child, another reason he's leaving the White House, and to thank Dean for doing this. He could easily have booked a much bigger platform for his first post-White House interview, and I'm sure those will still come. But I'm grateful he chose to have this conversation here. It was super fun for me, and I always appreciate the chance to dig into so many important details. With that, I hope you enjoy this super deep dive into four months in the Trump White House and the crafting of America's AI Action Plan with Dean W. Ball.
Nathan Labenz: (5:36) Dean W. Ball, fresh off a short but historic tenure as senior policy adviser for artificial intelligence and emerging technology at the White House Office of Science and Technology Policy. Welcome back to the Cognitive Revolution.
Dean W. Ball: (5:49) Thank you for having me, Nathan. It's great to be here.
Nathan Labenz: (5:52) Yeah. I'm excited to catch up on your recent tour of duty. It's been an influential one, to say the least. We've come a long way. This is your fifth time here on the podcast. The first time, just a little over a year ago, or maybe a year and a half, we were talking about frontiers in neurotechnology, a little side hobby of yours that has probably withered a little bit in the meantime. But if ever there was a Cognitive Revolution bump, and I say this in jest, it was to go from just starting to write about AI all the way through to this White House role and playing a key part in the writing and the release of the AI Action Plan. It has been quite a run for you. So I think there's a ton, obviously, to get into. Maybe for starters, do you wanna tell us just about your experience? First of all, how did you even get this job at the White House? Is it something where somebody tapped you on the shoulder and said, hey, we want you for this? Did you have to interview? Did you have to throw your hat in the ring? Tell us the whole thing.
Dean W. Ball: (6:57) Yeah. It's a great question, the way these things work. So I never actively solicited a role in the administration. What I would say I did is, after the president won in November, I published a piece, maybe a week or two weeks after, that was called, here's what I think we should do. And really, the goal there was very much like, well, we have a new administration coming in. I think I would have done that if Kamala had won. Obviously, I would've felt less excited about the prospect of those policies being realized, but that's basically what I would've put forward either way. And so I wrote that, and it did reasonably well as a piece. And some people who I knew were going into the administration reached out to me. And my view was always, I would happily do it, but it's not something I was desperate for. Just through a lot of different social interactions, I ended up coming into conversation with Michael Kratsios, the director of OSTP. This was before the administration, so he had been appointed by the president but not yet confirmed by the Senate and all that. So we talked a little bit, and he was like, do you think you'd wanna come into government? And I said, sure. And then it was several months of figuring it out. The hard part was not actually getting the initial offer from Michael. The hard part was figuring out administratively how to achieve the job, because, and this is a little bit of inside baseball, OSTP, the Office of Science and Technology Policy, is unlike a lot of components of the White House. OSTP was created by Congress in statute, and so it's technically an agency. And what that means is that it has a budget set by Congress that doesn't go up very much. It has a pretty tiny budget, OSTP. The whole White House, actually, generally has a pretty small budget and a relatively inflexible one.
And so many, if not most, OSTP staffers are either affiliated with other parts of the government or affiliated with nonprofits or universities, and through various bureaucratic means get transitioned into the OSTP role. That took a while to figure out, and I was uncertain it actually would be worked out. I knew it was being worked on, and I knew it was a possibility, but I didn't bank on it. And then it all came together really quickly. Like, in the last week or two, it all happened really quick. I remember finding out that my last day at Mercatus would be, basically, the same business day that I found out. It was like, today's my last day, and I had to go tell everyone, do a quick rush and say goodbye to everybody, get all my things in a box, go, and then start in government on the following Monday.
Nathan Labenz: (10:21) Sounds like a little of that red tape that might need to be cut to get the right people in the right seats faster in the future.
Dean W. Ball: (10:27) It's hard. Hiring for government is hard. It's really true.
Nathan Labenz: (10:33) you've been, writing. People have been exposed to your ideas. You've got a lot of good feedback on your ideas. But I have to imagine it's a pretty big difference to wake up 1 day and be like, alright. Today, I'm going to the White House. And now I'm not just, writing to kind of socialize ideas and, hopefully kind of gently steer the public, discourse and thinking, but I'm actually gonna be, in the seat, playing a significant role in making the rules, making the policy, deciding what we should do, with real consequences. Yeah. Did you get
Nathan Labenz: (11:08) training with your show up?
Dean W. Ball: (11:10) You get training on some stuff. They make sure you know that people can't pay for your bar tab or your lunch, things like that. There are various procedural ethics rules that you have to comply with, and they're very serious about that. There's an IT training. And then beyond that, no. The thing that's interesting about joining government in general is that I have truly worked at universities and in think tanks that have more internal bureaucracy than the White House does. There are certain things in the White House that you run into, and it's like, wow, that is a hard bureaucratic wall, and there is just nothing you can do about that. That is just there. But then a surprising amount of stuff is way more flexible than you would think, and you can actually move quite quickly, which was one of the things that I really liked about being there. So I would say no one really prepares you for it, and nobody can prepare you for it. It doesn't ever stop being surreal. Well, I was only there for four months, but I have a feeling that if I had been there for eight or twelve or sixteen, it would have never stopped being surreal. And I think there are a few aspects to it. One thing I would say is that I think it's very hard to know if you're the kind of person for whom power is an addictive drug that you just want more and more and more of, like a drug addict wants more of their drug of choice all the time, that you kind of need to sustain yourself, versus the sort of person that doesn't. I think it's just really hard to know. And, without disparaging anyone, a lot of people that are in Washington, DC are the former kind of person. What I think I learned about myself is that I'm probably more the latter than anything else. I found myself relatively unmoved by it.
And if anything, I was in some ways burdened by it, because the amount of incoming you're gonna get from other people is insane. At any given point, I probably had between 30 and 40 unread communications of various kinds. People reach out to you on Signal and WhatsApp and email and all sorts of places, asking me to meet or do various things, or read this, or think about that, or whatever else. There's just a ton of requests for your time, and even with people that are your friends or colleagues, the relationship changes in some really important ways. And so it's very hard to anticipate how to deal with it. It's a really weird situation. I don't know if it ever stops being weird, to be totally honest, because these are pretty extraordinary jobs. It's a tremendous honor, though. You just walk around, and when you have a good day, you really feel like, wow, I actually really drove this thing forward. And it's real. It's actually a real thing that we're gonna do. It will be public policy of The United States of America. And, yeah, it's crazy. You have to take it very seriously, and it's a huge responsibility.
Nathan Labenz: (15:03) You told me you wrote a letter to yourself before going in, just to try to ground and calibrate yourself to some of those possible issues. How would you advise people who might be thinking about taking on such a role? To write that letter, and what else? Since there's no training, what can you offer in terms of wisdom to people that might step into this in the future?
Dean W. Ball: (15:27) It's a really good question. I would say a couple of things. First of all, if you're going into a policy role, and not every role is, there are operations roles, all kinds of different roles in government, but if you're going into a policy role, you should have substantively all of your policy ideas pretty well baked by the time you go in, because you will not have any time to develop policy. You know what I mean? You're not gonna be sitting around thinking that much. You don't have time to think. The volume of things that happen to you when you're in government is just wild. Just as an example: XYZ major developed country is coming through the door; they'll be at the White House in three days. And so when you're dealing with things at that velocity, you have to have policies very well developed already in your mind, or at least reasonably far along, I would say. But also, I think you need to have a very clear sense of what your principles are. Ultimately, your job as a staffer in government, whether it's the White House or elsewhere, is to execute the vision of the president. Your job is to achieve the president's objectives. And there are a lot of staffers who I think convince themselves that, oh, well, the president is operating at such a high level, he doesn't know exactly, and I know the right thing to do here, even if it somewhat disagrees with something that he said recently. That will not get you far. I mean, yeah, people do it all the time, but I would say, particularly in this administration, that will not get you far. But you do need to have a sense of what's important to you personally. It might not be that the president's objectives conflict with your principles. That's certainly possible, but it's not so much that. It's more about drift.
It's about drift. Being inside of government is very weird. It's like you're inside of a pretty self-contained cube with glass walls, and everyone in the world is shouting into the cube all at the same time. You can hear what people are saying, but you can't exactly interact with people in quite the same way. Just as one practical example: let's say you have a question for companies in an industry about some aspect of their business that might be nonpublic, and you feel like you need to know it to make some policy determination. There are a lot of rules that govern how that interaction can go, that structure that interaction. A lot of your interactions will be governed in ways that will not be intuitive to you if you don't have prior experience in government. One of the things that I felt quite acutely is that I felt myself drifting a little bit on public scrutiny of my ideas, which is weird because, in a certain sense, the action plan is the most publicly scrutinized document, by many orders of magnitude, that I have ever contributed to. But, nonetheless, that was a hard thing for me, and it turns out that that's really important to me. It's really important to me to get public scrutiny of my ideas. That's one of the things I wrote to myself in that letter: there's a risk of drifting from ground truth. You're just kind of in this bubble, and so you might drift away and become attuned not to the ground truth of the world but to the logic of the system in which you operate, which has its own very distinct logic and does not necessarily operate according to anything intuitive. The second you start to see that happening, you have to be careful, because that can really change who you are in the long term.
So that was something that was really important to me, and I would say that's a general flavor of thing that I think is very common to encounter in government.
Nathan Labenz: (20:01) Hey. We'll continue our interview in a moment after a word from our sponsors.
Nathan Labenz: (20:05) If your customer service team is struggling with support tickets piling up, Fin can help with that. Fin is the number one AI agent for customer service, with the ability to handle complex multi-step queries like returns, exchanges, and disputes. Fin delivers high quality personalized answers just like your best human agent and achieves a market-leading 65% average resolution rate. More than 5,000 customer service leaders and top AI companies, including Anthropic and Synthesia, trust Fin. And in head-to-head bake-offs with competitors, Fin wins every time. At my startup, Waymark, we pride ourselves on super high quality customer service. It's always been a key part of our growth strategy. And still, by being there with immediate answers 24/7, including during our off hours and holidays, Fin has helped us improve our customer experience. Now with the Fin AI engine, a continuously improving system that allows you to analyze, train, test, and deploy with ease, there are more and more scenarios that Fin can support at a high level. For Waymark, as we expand internationally into Europe and Latin America, its ability to speak just about every major language is a huge value driver. Fin works with any help desk with no migration needed, which means you don't have to overhaul your current system to get the best AI agent for customer service. And with the latest workflow features, there's a ton of opportunity to automate not just the chat, but the required follow-up actions directly in your business systems. Try Fin today with our 90-day money-back guarantee. If you're not 100% satisfied with Fin, you can get up to $1 million back. If you're ready to transform your customer experience, scale your support, and give your customer service team time to focus on higher level work, find out how at fin.ai/cognitive.
Nathan Labenz: (21:58) AI researchers and builders who are pushing the frontier know that what's powering today's most advanced models is the highest quality training data. Whether it's for agentic tasks, complex coding and reasoning, or multimodal use cases for audio and video, the data behind the most advanced models is created with a hybrid of software automation, human judgment, and reinforcement learning, all working together to shape intelligent systems. And that's exactly where Labelbox comes in. As their CEO Manu Sharma told me on a recent episode, Labelbox is essentially a data factory: "We are fully verticalized. We have a very vast network of domain experts, and we build tools and technology to then produce these data sets." By combining powerful software with operational excellence and experts ranging from STEM PhDs to software engineers to language experts, Labelbox has established itself as a critical source of frontier data for the world's top AI labs and a partner of choice for companies seeking to maximize the performance of their task-specific models. As we move closer to superintelligence, the need for human oversight, detailed evaluations, and exception handling is only growing. So visit labelbox.com to learn how their data factory can be put to work for you. And listen to my full interview with Labelbox CEO Manu Sharma for more insight into why and how companies of all sorts are investing in frontier training data.
Nathan Labenz: (23:25) Yeah. That's interesting. That point about time calls to mind an obvious question, which is how much leverage are you able to get from AI in your work at the White House these days?
Dean W. Ball: (23:38) So it's interesting. I always had to be really careful how I answered this question inside the building, but I would say this. LLMs are actually not permitted on White House computers. That's not true of all of government; that's specific to the White House. And it's because, and I never got super into the weeds on what the problem exactly is, but it relates to compliance with a law passed, I believe, after Watergate called the Presidential Records Act, which relates to, as it sounds, the specific documents produced within the Executive Office of the President, which is the formal name of the bureaucracy also called the White House. And there's something about compliance with that law that affects the ability of the White House to use, as I understand it, lots of modern technology. So we can't use Slack or Microsoft Teams. You can't use Zoom or Google Meet; you have to use Webex. Google Docs is another good example, and also LLMs. The main aspect of that has to do with sharing predecisional documents, draft documents, things that are literally policy being developed live, which, obviously, there's a lot of. Pretty much everything I worked on all day long was deliberative predecisional drafts. Those things very, very seriously cannot touch LLMs. They can't leave government computers. Obviously, that's a very important thing and not something that I would ever mess around with. That being said, I used AI as a chief of staff for me and as kind of a research assistant. Not a single word of the action plan was written by, edited by, or seen by AI prior to the release, at least not by me. But there were so many times when it's like, okay, well, I have a question I wanna ask. I wanna try to develop some policies relating to XYZ.
And so sometimes it might be like, well, let's brainstorm a thousand different things. Sometimes I used it for that. Most of the time, I actually had a pretty good sense of directionally what I wanted to do, and it was about scoping it correctly. It was about really understanding the statutes. Let me get all the statutory authorities, all the laws that enable us to do this thing that's a recommended action in the action plan. You have to understand all of the regulatory history, legal history, interpretations, etcetera, of all the relevant statutes. You have to really understand how exactly you are constrained and what exactly you can do. If you're trying to do everything you can, the way it felt to me was like I was thrown into the cockpit of a plane. I've never operated a plane before. In fact, I don't know that much about the federal government in the grand scheme of things. I know a lot more now. And you're kind of like, okay, well, what do all these switches do? What do all these buttons do? The LLM is a really good guide for that. The other thing I did that I think was really valuable: I was actually looking earlier today at some initial notes that I did back in February, when I was drafting the public RFI, and they were pretty close. By late April, two or three weeks into the job, the action plan was, I would say, two-thirds to 75% of where it ended up publicly. But that two-thirds to 75% is like a blurry image that was sharpened by lots and lots of interagency feedback. So it's the difference between a useful image and a not useful image, right? Because one thing is just a bunch of blobs, and the other thing is actually an in-focus image.
But one thing that I found very useful to do, for all the different items in the plan, without directly taking the text of the plan, was to say, okay, this general idea, this general concept of doing something, let's simulate interagency feedback on that. Let's actually simulate what an interagency meeting would be like here. What is the grizzled career in-house general counsel at the Federal Communications Commission gonna say about this, right, to this young whippersnapper who works on AI? What's he gonna say to me about my bright idea? You kinda simulate that feedback. And over time, you can basically do the first couple of meetings that way, maybe not the entirety, but meetings that otherwise might have taken a lot of time. You could actually just kinda simulate that. And so that ended up being really useful. And what I always say to people is, the action plan was not at all written by AI. The ideas in it did not come from AI. It was all written by humans, and not just me; many humans contributed a lot to it. But it is probably an example of an AI-enabled productivity boost, because it got done in a pretty short amount of time. It was a matter of months. It is true that a very small number of people really drove the text. And so I think it's probably evidence of some kind of a productivity boost from AI.
Nathan Labenz: (30:02) The idea of simulating interagency meetings is definitely a fascinating one. Do you wanna give any more detail on that? You mentioned there were some surprises when it comes to the bureaucracy. Some things are just hard barriers; other things are more flexible than you'd expect. I draw a blank on what that would actually look like in practice. Are there examples that would be informative?
Dean W. Ball: (30:25) There are definitely some things. One that's a classic, and this is something that's actually kind of a meme in policy circles, but it is true, is something called the Paperwork Reduction Act, which was passed to make government more efficient and reduce paperwork in government. And what it means is that anytime you, as a government employee, want to reach out to more than nine members of the public, whether they be individuals, companies, nonprofits, etcetera, if you wanna ask for the same information from more than nine members of the public, there's an entire bureaucratic process you have to go through for that, and all sorts of rules that govern it. And you just wouldn't guess that. Again, I was using the example of reaching out to industry. One of the first things a lot of people taking my job might wanna do is go talk to the frontier AI labs and say, okay, where's the secret briefing? Where's the briefing for high-level government employees with the stuff you don't share with the public? Right? And you gotta be careful about that, because if you reach out to more than nine companies, then you have brought yourself into Paperwork Reduction Act territory. You wouldn't guess that.
Nathan Labenz: (31:52) I have to ask, what's in those secret briefings for the government? Are there actually such briefings? And what can you tell us about them?
Dean W. Ball: (32:00) There are meetings where we learn things that aren't public, and not just with the AI labs, with all kinds of companies. You come in, and we learn all sorts of things that aren't public. I would say there is not the sort of meeting where they come in and say, here's our AGI roadmap. That doesn't happen. It's not like government employees are known for keeping secrets super well, right? So the companies are pretty cautious about what they choose to share and not share in meetings. I think different people push them to differing degrees. I would say there were a lot of times when talking to companies about nonpublic information was a very important part of my decision making. But I also think it's probably the case that there are certain kinds of questions that many different people in government have about some aspect of AI. Like, how much power do you think you guys are gonna need? That's a classic one. And I feel so bad for the people on the OpenAI and Anthropic and Google and Meta policy staffs who get this question from 20 different government employees a week in different parts of the White House. And every single one of them wants a briefing. Right? But there's not coordination. We're often not aware of what the others are doing. And it's hard to be, right? Because it's like, well, how often do I see every single staffer with a plausible interest in that question at every single agency that relates to it, which are many? And one of the other things that would be, I think, unintuitive to a lot of people, well, it's not unintuitive, you might know it intellectually, but the government just does a lot.
Like, there's just a lot of stuff inside of agencies. There are a lot of people doing all kinds of different things that would never really occur to you. You would never be like, yeah, why is the State Department here? And it's like, oh, that's actually extremely important, they were doing this thing. So, yeah, I would say those meetings do happen. They're probably not as structured and organized as they could be. To be honest, in some ways I felt like I knew more about what was going on inside the labs outside of government. Because outside government, the labs will put you in touch with researchers and stuff; I had a lot of friends at the researcher level. But once you get the dot-gov email address, that all of a sudden becomes a lot more complicated. Talking to researchers at labs is definitely possible to do, but they're gonna be more nervous about it. They're gonna wanna bring in their policy people. And the second you bring in the policy people, it's like, okay, well, now we're gonna have a script, and we're gonna say exactly the right stuff. And so it's always very funny. Sometimes the labs would organize these briefings. And there are also different degrees of knowledge about this stuff, right? I'm someone who wants to have conversations at the relevant margin where knowledge is accruing on a topic. I wanna be like, what are your thoughts on the sample efficiency of reinforcement learning, or something like that. I wanna ask about some architectural question. But a lot of other government employees with plausible nexuses to AI need a much more basic kind of briefing. And so that's typically what the labs prepare. So, there was one particular briefing.
I won't say which company, and I won't say who the researcher was, but it was a researcher who's very well known on the Internet, there with the policy people, briefing us about some of the latest developments they were rolling out in their models. And they so desperately wanted it to be a certain kind of script, designed to satisfy, you know, what would a staffer with little context want? And I kept asking all these highly technical, in-the-weeds questions. And the policy staff, they don't stop you from asking those questions, but there are definitely moments when the researchers will start going into something, and the policy staff will be like, whoa. You know? So, yeah, it's actually interesting. I don't know if it's definitely true that I knew more about what was going on inside the labs. I felt like I had a better grasp on it as a non-government employee, but I think it's possible that that's true. I haven't thought enough about it, but I think it's possible that that's true.
Nathan Labenz: (37:23) Hey. We'll continue our interview in a moment after a word from our sponsors.
Nathan Labenz: (37:27) In business, they say you can have better, cheaper, or faster, but you only get to pick two. But what if you could have all three at the same time? That's exactly what Cohere, Thomson Reuters, and Specialized Bikes have since they upgraded to the next generation of the cloud, Oracle Cloud Infrastructure. OCI is the blazing fast platform for your infrastructure, database, application development, and AI needs, where you can run any workload in a high availability, consistently high performance environment, and spend less than you would with other clouds. How is it faster? OCI's block storage gives you more operations per second. Cheaper? OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking. And better? In test after test, OCI customers report lower latency and higher bandwidth versus other clouds. This is the cloud built for AI and all of your biggest workloads. Right now, with zero commitment, try OCI for free. Head to oracle.com/cognitive. That's oracle.com/cognitive.
Nathan Labenz: (38:37) Being an entrepreneur, I can say from personal experience, can be an intimidating and at times lonely experience. There are so many jobs to be done and often nobody to turn to when things go wrong. That's just one of many reasons that founders absolutely must choose their technology platforms carefully. Pick the right one, and the technology can play important roles for you. Pick the wrong one, and you might find yourself fighting fires alone. In the ecommerce space, of course, there's never been a better platform than Shopify. Shopify is the commerce platform behind millions of businesses around the world and 10% of all ecommerce in the United States, from household names like Mattel and Gymshark to brands just getting started. With hundreds of ready-to-use templates, Shopify helps you build a beautiful online store to match your brand's style, just as if you had your own design studio. With helpful AI tools that write product descriptions, page headlines, and even enhance your product photography, it's like you have your own content team. And with the ability to easily create email and social media campaigns, you can reach your customers wherever they're scrolling or strolling, just as if you had a full marketing department behind you. Best yet, Shopify is your commerce expert with world-class expertise in everything from managing inventory to international shipping to processing returns and beyond. If you're ready to sell, you're ready for Shopify. Turn your big business idea into cha-ching with Shopify on your side. Sign up for your $1 per month trial and start selling today at shopify.com/cognitive. Visit shopify.com/cognitive. Once more, that's shopify.com/cognitive.
Nathan Labenz: (40:34) Yeah. Interesting. Why are you leaving now?
Dean W. Ball: (40:38) So, yeah, it's probably a surprise to a lot of people. I'm leaving for a few central reasons, and I'll go in order: some are highly personal, and others are a little bit bigger. The most personal one is that my wife and I are having our first child in a few months. And so... Congratulations. Thank you very much. In fact, I found out about her pregnancy the night before I started at the White House. It was 9 PM that evening, and the next day you're going to the White House. So it was a lot to process all at once. And I wanna be able to spend time with them and, frankly, enjoy the last couple months of pre-parenthood, and then be able to really be a very attentive and loving father in the initial months of parenthood. So that's part of it. And I think it's not impossible to do that as a government employee, but it's hard. And I worry that I would do one or both of the jobs badly if I had to balance them both. The other reason, really, is about what I'm good at and what I'm not good at. I spent a big chunk of my career as a manager, and I was decent at it. I wasn't bad at it. I did reasonably well and got promotions and stuff like that. But the truth is, I didn't really enjoy it. I didn't love it. And the fundamental reason for that, I later discovered, is that my success depended quite often on the whims and attitudes and what side of the bed other people in the world woke up on. I just wasn't independently responsible for my own success. And when I did projects where I was in control of whether or not the output was high quality, I found that to be much more satisfying. And so with the action plan, there was this rare opportunity. Obviously, I was not the single author of it.
It wasn't like a Substack post where I just hit publish. But I exercised a lot of creative direction over the action plan, and stylistic direction, and all these sorts of things. And that's the kind of thing I'm really good at. And I'm just not that good at the process of managing the implementation of the plan: making sure the agencies are doing it, thinking through all the procedural levers you need to be pulling, all that stuff. I don't have confidence that I'm great at it. I think I could do an okay job, but if you only think you can do an okay job, you shouldn't be working for the president. Just like if you only think you can do an okay job at catching a football, you shouldn't play for the Miami Dolphins. Right? You're supposed to be the best in the world. This is the AI policy of the United States of America. It has to be really, really effing good. And I had high conviction that I, along with the rest of the team, could deliver that in terms of the text of the action plan. I think the administration can implement it. I think there are a lot of very, very savvy and driven people who can implement the plan. But I just don't think that I am necessarily one of them. And I actually think there are probably ways, just reflecting on my own writing trajectory over the last 18 months, in which I would be actively unhelpful. Because if we went back to one of those earlier episodes of Cognitive Revolution, the first SB 1047 debate we did, the neurotechnology stuff, I'm sure there's a ton of stuff I said in those episodes that nowadays I would look at and very happily discard. Very happily. Oh, yeah, I totally don't believe that anymore. That's fine. I've evolved past that.
You can't really do that when you're setting the federal government's policy. It's like a big ship. You can't really be like, yeah, well, I changed my mind about that. I decided that was dumb. Certainly, there will be room for flexibility in terms of exactly how it's implemented, but you can't just throw things out at will. And you also can't add new stuff all that easily. Right? And so my nature is very much: everything I say is provisional, and I'm always evolving. And so just me personally, I think I wouldn't be good at it. That's basically what it came down to. And then finally, some of the stuff we were talking about earlier: I found myself missing public feedback, working in public with regular scrutiny and feedback from a community of my peers. The experience of doing that was just so useful to me. Epistemically, it was so useful to me. And it's much harder when you don't get that real-world feedback. That was a concern I had, and I kinda wanna get back to that. Because getting my head around really thorny ideas and developing answers to hard questions, and then trying to communicate those answers compellingly, that's what I do. And I think I do an okay job at that. I don't know if I'm the master of the bureaucracy. I just really don't. And I think you have to be honest with yourself about things like that and not try to do more than you can. The way I've been describing it to people is sort of: know thyself. So all that gets put together, and any one of those things alone might not have been enough.
I might have maintained it all, but the combination of all those things made it feel to me like now is probably an appropriate time to take a step back and contribute to both the action plan and AI policy more broadly from the outside. Frankly, I think we're still so early in terms of societal conversations on AI that there is so much intellectual work to be done and civilizational scaffolding to be built around this technology. And one of the key ideas of the action plan is that that kind of stuff, in addition to the technology itself, is fundamentally gonna be built by the private sector. And so I kinda wanna return to doing that. So, yeah, it's a long-winded answer, I realize, but that's kind of where I came down. It was a tough decision. It really was. It's not like this was easy for me to do. But one thing I think is important to note is that I was not pushed out because of political infighting. To the extent that there are people on different sides of debates that might externally be referred to by the press as factions, this is not an indication of one faction rising over another. This is a decision that I made personally, myself, and it's also not because of any policy disagreements with the administration. It's not like I was like, oh, I'm mad about this, I'm leaving. No, nothing. Not at all. And I leave with absolutely no bad blood and nothing but warm feelings for all of my colleagues in the federal government.
Nathan Labenz: (49:39) That's great. And a notable contrast to many departures that we've heard tell of. Well, congratulations again on starting a family with your wife. Obviously, that's a once-in-a-lifetime transition that you'll be able to give the full energy to that, I think it's safe to say, would have required some compromises if you were staying in the job. That makes a ton of sense. I wanna get into a little bit of the context in which this AI action plan was developed, and then get into some of the details of it as well. One thing, though, that you said toward the end there that I found a little puzzling, and maybe this is just my never-worked-in-government naivete: what is it that stands in the way of a more open, feedback-welcoming process? I don't know how many people saw drafts of the AI action plan or contributed to it, but presumably it was a reasonably small number compared to how many people would have happily read and commented on a draft if you had put one out there and asked for feedback. Why can't that happen? Or could it happen, and it's just sort of an administrative decision? I'm confused, though. Especially because it does seem like, not AI in general, but AI specifically is such a dynamic situation that the faster the feedback cycles we can get, presumably, the better. So, yeah: what stands in the way of realizing that way of working that you prefer within the White House?
Dean W. Ball: (51:20) Well, I think it's a couple of different things. So first of all, there are absolutely ways. The action plan itself, before it was the final product, went through a request for information, which is a formal process by which government agencies can ask the public to submit their thoughts on a given issue. Something like 10,000 people did that for the action plan, which is a pretty huge turnout. And a lot of them were really, really heartfelt things written by individuals. I read many, many of these comments. And, of course, every company. I mean, it really was amazing: the range of corporations, industries, Hollywood actors, former politicians, members of the public. So many people submitted comments for this. And then, of course, I met with hundreds, if not a thousand-plus, people from the public in various ways leading up to the action plan, to talk about ideas and gather feedback. So it's not like that. What I meant more is: I wrote a Substack post every week, and very often there was lots of speculative stuff where I was like, yeah, I think we should probably do stuff like this. And you just can't do things like that. And the reason you can't is that there is just a different gravity that happens when something comes from the government. It's a huge machine. We can think that we're stepping lightly and, in fact, be stomping around. So if we had shared an early draft of the action plan, let's just say, like, yeah, here you go, we're putting it up on arXiv, let us know what you think, the blowback, or the plausible blowback, from that is just enormous. Because it could be political scandals.
There are conversations going on with Congress. There are a billion trade negotiations going on. There are companies that'll freak out. Right? They'll be like, oh, this would damage my business, blah blah blah. And this is why, it's not the only reason, but it's one of the fundamental reasons, that people leak stuff: to basically make things like this happen tactically, as a move within the bureaucracy. It's like, this other guy is pushing for something that's really controversial and would be really unpopular. He's doing that at the staff level. So I'm gonna go leak it to the New York Times, and then all the senior people are gonna see it, and they're gonna get mad, and that's gonna kill the idea. And so a lot of times when you see leaks, what you're seeing is someone attempting to kill something from within. And a public disclosure process would totally invite that, so there would need to be lots of procedure around it, etcetera. The other thing I would say is that, in theory, one thing you could do, government doesn't tend to do this for reports, which is kinda what the action plan is, but it does it regularly for public policy, is release draft regulations with notice-and-comment rulemaking. There are various different things you can do. You can request information on a potential rule. You can be like, look, we're thinking about doing this kind of regulation. What should we be aware of here? What do you think we should know? And that's a structured way of doing that. You can also release drafts of regulations and say, we invite comment on that. You can totally do that, and governments do it all the time.
I think, with the action plan specifically, the process for getting it from a blank sheet of paper to the final product was largely conceived of and run by me. I obviously ran it by other people higher on the totem pole than me: if I do it this way, is that okay? The trade-off of having a really collaborative process, where lots and lots of people even inside the government are sharing comments, is the classic design-by-committee thing. I would say, as a general matter, interagency feedback, from my perspective, was almost always really productive, constructive, useful, and essential for making the action plan good. But we did employ a different process. We didn't do a traditional interagency process. Usually what happens is, if there's a policy you're working on, the White House will convene a policy process, and every agency with potential equities in that thing shows up, and then you talk about the thing. You conceptualize a policy. You share drafts. And that feedback process can be really quite extensive. We kind of did that for the action plan, but we didn't do it super formally. And agencies, for the most part, only saw the portions of the action plan that were directly relevant to their agency. Because the one thing I felt very strongly about is, I did not want every single person in the federal government commenting on every aspect of the AI strategy. If you do it that way, there are just gonna be more people with unproductive comments, and I think probably a lot of cool ideas might have gotten squashed by that, to be quite candid. And so I think it's absolutely essential that you do a formal process if you are actually making policy. The action plan is recommendations for policy.
They're, I would say, strong recommendations, but the president didn't sign the action plan. As opposed to the executive orders that the president signed, which use words like "shall." Not "may," not "should," but "shall." That is the president saying to federal employees: this is a command, go do this. And so that's a legally binding document. The action plan is not legally binding. If you're making legally binding policy, then of course you have to go through a traditional process, and you have to tolerate some of the bumps and inefficiencies of that. The executive orders that the president signed along with the action plan went through the traditional process. But the action plan itself was a little more flexible.
Nathan Labenz: (58:31) One big background thing that I'd love to understand from your perspective is, generally speaking, what is the political right thinking about AI now? We're headed into a four-year term in which many people are saying we might have AGI. Who knows? Maybe even superintelligence. Certainly some sort of powerful, potentially transformative AI in this term. And, as with many things, when you get into power, it becomes kind of your problem, where in the past you were just able to comment on and criticize the people whose problem it was before you won the election. So how would you characterize the factions, if you want, or maybe just the different perspectives, that ultimately make up the coalition that elected President Trump? And what different perspectives or priorities are they bringing to the AI discourse? I guess I would start with Trump himself. We've heard a few comments, but I don't really know: how much does he think about AI? Is he using ChatGPT? Does he have a strong take on some of these core issues? Then there's the tech right, which many listeners to this show, I think, were at one point optimistic was gonna be ascendant, and it seems like maybe, sort of. There's also the religious right, which has maybe a very different set of priorities, seemingly in some tension with the tech right. And there are probably other groups that you might put into that mix as well that have strong and perhaps quite distinct points of view. Yeah, how would you lay out that landscape for us?
Dean W. Ball: (1:00:20) So the president was elected with a really diverse and broad coalition. It's one of the things that is so striking, and I think such a narrative violation for what you've heard from a lot of people in more left-wing media over the last five to ten years about the Republican Party and President Trump. The reality is that, in terms of income level, ethnicity, way of life, background, etcetera, the president's coalition is just really, really diverse. And so it is a coalition that has many different views on AI, and those views are also evolving in real time. I think you actually started to see them evolve quite significantly during the time that I served in government. So, very broadly speaking, there are people who come from maybe the somewhat more traditional deregulatory impulse of conservatives that's been around for a long time. The president certainly is not a hardcore libertarian by any means, but he has a lot of that. Like, fundamentally, the business of America is business, as President Coolidge said. I think that's very much part of the president's personal intellectual DNA. So I think he definitely feels that way about AI. I think all of the principles of the action plan are things that he thinks are enormously important. He has now weighed in publicly on issues like copyright and preemption, and on both of those things he's kind of leaned, I would say, in the direction of AI development and adoption rather than putting up new blockers. And of course there's environmental permitting and things of this kind, something that everyone in this administration is very serious about.
I think there are some areas where there's a rift among some on the right, based on the level of risk that you perceive from the technology and also how consistent you see the technology as being with social media and the Internet and things like this. There's a lot of hostility, and frankly I think justified hostility, on the right against big tech, particularly the social media platforms, the UGC-type platforms, where I think for a long time people felt as though right-wing ideas were discriminated against, and I think it's true that they were. And I think things like fact checking and misinformation analysis very often were deliberate attempts to shut right-of-center viewpoints out of broad dissemination on the Internet. I think that's a shameful thing, and those companies should be ashamed of it. There are a lot of people on the right, and I have argued this both in public and privately in group chats and all sorts of things, to whom I have tried to say: look, you are fighting the last battle, and you should stop doing that. Stop fighting the social media battle. That is an important thing, no doubt, and certainly the outputs of AI systems matter a great deal; ensuring that they're truth-seeking and not the result of top-down ideological programming, I think that is extremely important. But it's really interesting, because you talk about that issue and you get really quickly into the most important issues of the traditional AI safety world. Because very quickly, through the issues implicated by, for example, the woke AI EO in federal procurement that the president signed, you get into things like, well, how do we actually know that the system is aligned? Right?
How do we actually know what this thing's gonna do, that it's gonna do what we want it to? You get very quickly into issues of concentration of power, where I think there's a broad perception that, my god, this technology is gonna be so foundational, we really need to understand the character, the virtues of these systems, the values that they hold. And so reality has a way of coming at you regardless. Those kinds of issues I just mentioned, loss of control, alignment, that kind of thing: I think a lot of people on the right a year ago would have been like, oh, that's all lefty doomer stuff. That's all EA stuff. And now they're actually coming in and thinking, okay, but I care about this, and it's like, wait a minute. These are actually fundamental issues in AI, which is what people have been saying for a long time. And so I think what you're actually starting to see is the right getting their heads around these issues in a much more serious way. And I have predicted before in public, before I joined the administration, right after the president won, I said, I think there's actually a reasonable chance that the Republican Party is the better home for the AI safety world in the long term, just because of the way that the incentives of the party work and the way that the party is kind of hooked up. So, yeah, you are starting to see that, and there are some rifts there. Some of those things are people fighting the last battle. Some of those things are people starting to get their heads around these bigger issues and what the values of the system are gonna be. I think an issue that falls right in the middle of those two things relates to safeguards specific to child use of AI systems.
There's obviously the very, very tragic story of the boy who killed himself, a 14-year-old in Florida, I think. That's an issue that really resonates. It's something that a lot of conservatives are very worried about. And I'll just put my cards on the table, and for sure this is not federal government policy, this is not the opinion of the White House or the Office of Science and Technology Policy: when I see big, well-capitalized companies making pornography with AI available indiscriminately to people, and I'm sure there's some sort of age gating, but I doubt it's really all that good, I just get mad for so many reasons. Obviously, that stuff is inevitable, but let that happen on, like, the open source model in the North Korean porn bot farm. Don't take money from the world's biggest institutional investors and be one of the flagship brands of AI to the world and then do this kind of stuff in such a public way. It's so crass, and it's not gonna end up well with conservatives. It will not end up well with conservatives. So I think these culture war issues that have been going on for a long time are still happening, and they animate issues in AI. And that intersects in weird ways with preexisting AI safety stuff, and you're starting to see those two things merge together. I don't exactly know where that's gonna end up. My hope, and I think what the action plan is all about, is: look, I think there are good LLM safety laws for kids that you could pass. I think there are also really bad ones. I think there are prudent things you can do, and there are lots of nonprudent things you can do. But the point of the action plan, to a certain extent, is to say: look, we're early in this, and we don't have to fight. We don't have to be at each other's throats.
We can identify reasonable things that make progress at the relevant margins, and we can be positive-sum about this. We don't have to play the traditional game. And that's a very important part of what I was trying to do with the action plan, what we were trying to do.
Nathan Labenz: (1:09:56) So, yeah, I guess I hear you saying it's a live question still, basically. Right? We've got the deregulatory impulse. We've got the sort of cultural conservative, protect-the-kids-and-uphold-traditional-values impulse.
Dean W. Ball: (1:10:11) Yeah. And no one's gonna win. You know what I mean? It's not like there's ever gonna be a day when you're like, oh, that side won. I actually just want to dispute the idea that there are factions here, different sides. My view of it is, yeah, there are people with different emphases and different priorities on this stuff. But even within the Republican Party, if they perceive themselves to be enemies of one another, it's like, well, jeez. There are some zero-sum issues in politics, don't get me wrong, that's definitely true. But a lot of things are not necessarily that way. And so, yeah, I would say right now, I think there's just a lot about capitalism and the general structure of American society that is inherently going to be accelerationist and create incentives to develop AI rapidly and diffuse it rapidly, etcetera. And so in that sense, I think AI development is gonna be in a very good place. But I think it will be really interesting to see how the somewhat more skeptical people handle it, a lot of whom remember big tech; sometimes it's literally the same people in some cases. Right? It's like, well, wait, you guys built YouTube, and now you're building Gemini, and we thought YouTube censored against us, and it seems like Gemini also might. So we're very nervous about this. We're very nervous about the amount of money that's being spent and the capabilities that are being promised. And people like me saying, oh, this is gonna be foundational to everything you do in 10 years from now, does not necessarily alleviate those concerns. It often makes people more worried. And it could totally be true that that ends up turning into a pretty unhealthy impulse, and we get a lot of laws that freeze our society in amber, because you don't want change.
On the other hand, I think it's possible to develop ideas in a way that actually is productive. And I will tell you, that's another thing that I think I can do. That's part of why I chose to leave. I think I can contribute to pushing things in a more positive direction there, and I think I can probably do that more effectively outside the administration than I can within.
Nathan Labenz: (1:13:02) How would you say rank-and-file Republican voters are thinking about this right now? Generally speaking, it seems like the survey results show more bipartisan consensus on this particular issue than on almost any other issue. Right? People are generally worried. People generally want the government to do something. The public is pretty warm, I think, to the Terminator-scenario-style risks, but salience is low. Who's leading whom right now? Is the public telling the political class what it cares about, and the political class is listening? Or is the political class leading the public? Or is the public just not focused on this enough that they're really moving the needle yet?
Dean W. Ball: (1:13:56) Yeah. When it comes to AI, it depends. It definitely depends. A lot of our politics is geographic. Right? So there are senators that represent particular states that have a lot of some kind of industry that is particularly affected by AI, and therefore, for those senators, it has a higher political salience than it might in a state that doesn't have such things. But as a general matter, I think it's probably right that AI in general, and especially AI policy, is probably still more of an elite coastal type of issue than an issue that normal people are truly fired up about. I expect that will change in some ways, and I expect it'll be very hard to predict how it will change. One of the things I had written on a whiteboard in my office for a while is a mantra the vice president had at an event I attended where he spoke. He said, our job is to make normal people's lives better. For whatever reason, that resonated with me quite deeply. I think that our job is to do things that make normal people's lives better. And I think we also have to communicate about the ways in which AI can make normal people's lives better, and not sort of talk in abstract ways about the future. The art of making it more concrete for people is gonna be extremely important. But, yeah, eventually this will gain higher salience. It'll probably do so in some sort of a scandal or crisis or something like that. Who knows exactly what that'll be? Thus far, it's been pretty continuous, just up and up and up, getting more so. But most people are much more concerned about immigration and the economy and things of that kind.
Nathan Labenz: (1:16:13) Is there any utopian thinking or anything similar on the right that people look to for inspiration?
Nathan Labenz: (1:16:22) I mean, I'm you know, as you know, I always say the scarcest resource is a positive vision for the future. And I don't know if I'm missing, any visionary writing or thinking on the right. Is there anything like that at all?
Dean W. Ball: (1:16:36) Not really. Not that I'm aware of; I'm sure some people are, and I just don't know about it. In fact, what's funny is we got accused by a relatively prominent person who is often in right-of-center circles, I won't say the name, but a pretty big account on Twitter. We got accused in the action plan of being utopian because we talked about AI being able to unravel ancient scrolls once thought unreadable. And I was like, that literally happened. Like, this is
Nathan Labenz: (1:17:17) Yeah. I think that kid now works for DOGE, right?
Dean W. Ball: (1:17:21) Luke is now at DOGE. That's right. One thing is that a lot of people are not aware of just the amazing things that are happening right now and how remarkable all of it is. So I think there needs to be much more of that. And this is one of the many things that I hope to do in my work: not necessarily utopian, but positive visions. I hope to do some work along those lines. I was sketching out some stuff where I just wrote about relatively mundane industries and the various ways in which AI and automation are gonna be transformative in cool and interesting ways and improve people's lives as a result. There's definitely more of that to do. I think you have to make it concrete. But one of the things I've realized is that when you start talking about that stuff, if you're doing true pie in the sky, oh, we're gonna have civilization on other planets and stuff, people will just be like, nah. Yeah, but we've heard that for a long time from techno-optimists. So what I've really realized is that part of the job here is to make people aware of the astounding reality that is before us today. Like, the actual miracles of modern technology that enable our lives all the time. And I don't just mean AI, I actually mean more broadly than that. This is something you really uniquely see at the White House, because so many people come through where it's like, yeah, we manufacture this thing that is an essential part of everything. Right? And it's like, wow, this is incredible that this works. Right? It's incredible that fiber optics works. It's wild. It's completely insane when you actually think about what's going on. I think most people don't.
And so I think to some extent, there's just the explanation of our current civilizational infrastructure. I think you can do poetry there if you try. The other thing that goes into that is the ways in which government itself plays a role in a lot of this. I was on an airplane a couple of days ago, flying back from a work trip, and just kind of looking around in the plane, and I was like, there are technical standards, made with government mediation and help, for every single thing in this aircraft. Right? The chemical content of the sheathing around the electrical wire in this plane has a specific technical standard that someone worked on, right, and that people maintain, and there are meetings about this. You just realize how vast it is, and you really do appreciate, specifically in government, how unfathomably large the whole operation is. And it continues to astound you when you think about the sophistication of a lot of this stuff that's going on all the time that our governments just sort of competently do, actually. I think we should absolutely want to improve. But I also think we have a tendency... we're in a very negative mood as a country, and we have been for the last 10 years, or maybe more. We only focus on the negative. We only focus on the areas where we're falling down. But what I consistently saw in government was actually a great deal of confidence and skill. So, anyway, I think we actually just need to give ourselves a pat on the back sometimes.
Nathan Labenz: (1:21:39) Certainly, as a country, we still have an unbelievable amount going for us and should not take that for granted either. How AGI-pilled would you characterize different parts of the federal government as being? Maybe starting with yourself, even though you're recently departed.
Dean W. Ball: (1:21:59) You know, I've always been pretty bullish on deep learning, and for a long time. Right? I've been bullish on deep learning since 2015.
Dean W. Ball: (1:22:14) I guess what I would say is, AGI itself is so nebulous as a concept. My view is that what AGI would have to be, to be true AGI, is something that has genuine human sample efficiency and flexibility, and I don't know that we're especially close to that. The analogy that I've always used, and frankly that I've probably used on this podcast before, is the bird-airplane analogy: human cognition is like a bird, where I can fly over to the tree over there and land right exactly on that branch, and I can do it with grace and an energy efficiency that is really outrageous. Whereas an airplane is hugely energy-inefficient when compared to the bird, requires a giant runway, and you have to build all this dedicated infrastructure just for the airplane. It's an extremely unwieldy thing when compared to the bird, and yet highly useful. And people like Dwarkesh have been turning a little more negative recently. I think they're actually calibrating to roughly where I feel like I've been for a little while, which is basically the idea that we are probably going to build the cognitive Boeing 737, and that is going to be super useful, but it's not gonna be an automated bird. We're not just making a mechanical human brain here. We are going to do something different, at least in the beginning. Maybe eventually we get to the more true AGI. How many people are AGI-pilled within the admin, though? Well, just to be clear, I think we're gonna build the Boeing 737 thing soon. Somewhere between 2027 and 2030, I believe we will do that, and I maintain high conviction in that. Nothing I've seen changes that. I think the bearishness on GPT-5 is kind of nuts.
I haven't used the model extensively yet, but, yeah, there's a bit of that in common industry commentary and discourse commentary. In terms of within the admin, there's a good number of people who are pretty convinced that AI is gonna be super transformative. And there's a good number of people that aren't. Right? There's a good number of people that are like, nah, I think it's gonna peter out and be hype and whatever else. It's a little hard for me to describe the ratios because, as you can imagine, the people I ended up working with were probably disproportionately likely to be other people who think AI's gonna be really transformative. I'd say it's there, but what that means is different to different people. There's a reason that the verb is "feel the AGI." Right? It's because it's an emotional experience. And there are a lot of people who I think have experienced feeling the AGI intellectually, but I don't know that they've done it emotionally. There's a difference: the emotional experience of feeling the AGI is a pretty wild thing, and it's happened to me at different phases. There's also a certain aspect of it that's not resignation, but anticipatory nostalgia, I would say, where you kind of realize that certain things are just gonna go away and certain dynamics are just gonna fundamentally change, and you kind of actively miss those things. All that being said, I think there are people who have done that, but it means very different things to very different people. Certainly, AI is the president's top technology priority. It is one of the hottest issues inside the administration, and it's just something that everyone does care about and thinks about.
So I would say that, functionally, the administration just places a very high priority on AI, and that comes right down from the president.
Nathan Labenz: (1:27:09) So, three more little test points, I guess, to try to calibrate my understanding of that better. One: obviously, a White House is gonna deal with a ton of different issues all the time. Is there a dynamic of, well, there's gotta be an AI guy in the room for any given issue, because we just sort of have to expect that there's an AI element to everything? Or has that not really happened yet? A second one: are there explicit timeline assumptions built into any of the planning or reasoning? Like, we think there's a 25% chance Dario is right and we're gonna get the beginning of mass unemployment by 2027, so we gotta have one contingency plan for that, and maybe another for if that doesn't happen. And a third one: obviously, there was a big bill that allegedly is gonna increase the federal debt. And I wonder if there are people going around saying things like, and I don't mean to suggest this would be wrong, by the way, but it certainly would be a leap of faith: well, AI will help us grow our way out of the debt, so we can sort of afford to take this on, because the AI will pay the bill for us in time, before we really get into big debt-related issues. So, yeah, comment as you will.
Dean W. Ball: (1:28:43) Right. So, okay, going in order: the first question was not about timelines, it was about... remind me what the first one was.
Nathan Labenz: (1:28:54) Just like, is there an AI guy in the room for
Dean W. Ball: (1:28:56) Oh, yeah. Increasingly. I think increasingly. Not necessarily always, and there are areas where I don't even know what our nexus would be. But especially post-action-plan, people throughout the government saw how high a priority this is. I mean, the action plan event had the president, the vice president, and five cabinet secretaries. That's packed with superstars from the admin, and that's a signal to people: okay, we really need to be thinking about this seriously. You'd be surprised how important signals like that are within the government, how much that matters institutionally. And more people are understanding. Is that entirely true everywhere? It feels more and more the case that AI people are being brought into functionally everything, which is tough because there's a good number of them, but not that many in the grand scheme. So they end up being stretched, I think, a lot of the time. I certainly felt that. In terms of timelines, the convenient thing about timelines is that there is one timeline that we are quite certain of, which is that the president's term ends on January 20, 2029. And that happens to be pretty good. Truly, one of my absolute favorite things about working in the Trump administration is the sense of urgency that everybody has. It's the combination of a sense of urgency and a feeling of not being beholden to the past or traditional ways of doing things. We are, I think, really developing a new conceptual lexicon for American statecraft. And I think the administration doesn't always, frankly, do the best job of communicating some of that stuff. I also think that they often do a very good job of communicating, but it gets misinterpreted.
And there are a lot of people who just ignore a lot of the great work and focus on whatever the latest scandal is. So I think Americans are quite badly apprised of what the Trump administration is actually doing. Anyway, that timeline of early 2029 happens to line up with lots of AI timelines. When we were thinking about the data center power issue, the default timeline that you give is somewhere between 2028 and 2030. Right? People are like, yeah, we need to do that, and that just happens to line up really well. So I don't know. It's hard for me to say: if that similarity didn't happen to be there, how would people's timelines be different? I don't know. That's a good question. The debt thing, I can't say I ever heard anyone say that. I've certainly thought it myself, but I can't say I've ever heard anyone else say it. Another thing I have heard is, on some of the immigration debates, you will sometimes hear people say, well, there seems to be an inconsistency here: the pro-immigration camp says we need lots and lots of people to do all this stuff, but then also, wait, are we about to automate a bunch of stuff? Are we about to massively improve labor productivity? And I would agree, that actually is a dissonance in that argument. So, yeah, there definitely are ways in which AI is now being inserted, in different ways, into some of the more flagship issues from the president's campaign and the platform of the party. Also, there are a million ways in which, like, border security is in many ways an AI problem. There's a lot of stuff that AI can do on things like border security. In fact, you have had a guest on this podcast, a former DHS employee, Department of Homeland Security, named Michael Boyce.
He worked on AI at DHS, and a lot of what they do is border enforcement.
Nathan Labenz: (1:33:48) Is there any talk about a sort of step short of utopia, but some sort of new deal for the American worker? Some parts of the action plan kinda go this way. But Bernie Sanders has recently said we should make it our explicit policy to share the productivity gains from AI with workers by, for example, having a four-day work week or whatever. Is there any traction with those kinds of ideas?
Dean W. Ball: (1:34:20) No, but I wouldn't say either way. The administration takes an extremely worker-centric approach to AI, and that is very serious. We do think about that a lot. I think it's hard to know exactly what that means at this stage. We don't know exactly what the future of labor is. We don't know how acute the labor market disruptions are going to be. We don't know where they will take place, or whether they're gonna be distributed by occupation type, by industry, by skill level, or by experience level. There's some data that starts to look a little worrisome for software engineering, but there's also plenty of data on the other side of that. So I think we don't entirely know where that's going. But as a general matter, I personally think, and again, super not White House policy, but I think about a four-day work week all the time, because the five-day work week comes from the last industrial revolution. It came in the twenties, during Calvin Coolidge's presidency, and was the fruit of multiple technology revolutions that happened at the same time. And I think that's kinda what's happening for us right now too. So it would not surprise me if, at some point in the future, we did actually go to a four-day work week. That's an idea of Bernie's that... I could see the world going that way. I wouldn't say I endorse that idea, but I could absolutely see that idea making sense.
Nathan Labenz: (1:36:20) Interesting. Yeah. That's cool. I think about drivers maybe as one really mundane one, but the sort of analysis there seems so simple in many ways. We already have statistics that show Waymo is a lot safer than human drivers. We're starting to see, in recent Boston local politics, some of these things that have been slow to materialize from my perspective, relative to my expectations. But, nevertheless, we're starting to see this protectionism, where it's like, well, wait, what do the Teamsters have to say about this? And then you've got the people pushing back with: so the Teamsters say that thousands of people should die because we need to protect their jobs? Which is harsh, but not fundamentally inaccurate. And it seems like that is going to be a question society is gonna face that doesn't have nearly as much nuance as maybe a lot of the other things, because it's literally just: is the AI gonna drive the car, or is the human gonna drive the car? Is a human gonna be required to sit there even if the AI is driving the car? These are seemingly relatively simple questions by comparison to a lot that we're gonna have, and we're just starting to see the political battle lines being drawn right now. Again, later than I would have expected.
Dean W. Ball: (1:37:45) Yeah. Well, I think to some extent, though, there has been a lot of state action. There were AV fights at the state level 10 years ago that all predate the current AI, because everyone said self-driving was gonna be solved 10 years ago. Mhmm. So a lot of that actually started preemptively. I mean, that is a really good one, and I think there's a lot of non-obvious stuff that happens there. What I'd say is, I think you're right that on the software side, it's much easier to make the case that you will see traditional augmentation, people will become more productive, and, yes, there will be some labor market disruption, but in general things will grow. Whereas, yes, the self-driving thing is kind of an unambiguous replacement. So the question is, will that generate new kinds of jobs of some sort? And I think the answer is maybe. Automated logistics, the ability to navigate the world in autonomous ways... I think the self-driving car is such a simple early example of where we could be going, and that could create all kinds of interesting new opportunities. So I don't even know there, but I do agree it's more of a one-to-one replacement. The other issue that I think you're getting at that's very important is the state, local, and federal division of labor here. Right? Should we have federal autonomous vehicle rules, or should we not? On the one hand, it's a little crazy. If I take a Waymo from my house in Connecticut into Manhattan, it would be a little crazy-making if, during that time period, I passed through somewhere between two and ten different regulatory jurisdictions governing the safety of self-driving cars. That'd be a little weird. Maybe that's fine. Maybe that's just an aspect of the future that's weird, and it's fine. You know?
I haven't thought...
Nathan Labenz: (1:40:07) like Uber
Nathan Labenz: (1:40:07) kinda had to deal with that.
Dean W. Ball: (1:40:08) Yeah. Uber does. Uber has to deal with that right now. Exactly. And there are all sorts of ways to make policy issues like that coherent and solvable. It's not that big of a deal. But then there are unambiguously things that should be federal. And maybe it is up to a city to decide how they wanna deal with AVs. I kinda personally have that instinct. I would rather let that be an area where we experiment at different levels of government than just try to occupy the field with one federal standard. The other thing I would say becomes really important there, though, is when you think about autonomy in the physical world, especially in the physical world, maybe also digital: with full autonomy, Waymo-level, no human really in the loop at all, one of the things that happens is that it changes the nature of the liability. Right? Because, all of a sudden, Waymo is held to a very high standard. If Waymo is successful, or if Tesla is successful, assuming we really get rid of the steering wheel in the future, right, assuming that's the future we head toward, that's gonna be a world where the companies that operate those vehicles will have the liability risk for all accidents in the country on their balance sheet. Right now, that liability is on my balance sheet. Maybe that's a problem, maybe not; it'll probably vary by industry whether or not that's a problem. I've made this criticism before about, basically, AGI: that's gonna be nuts, an insane amount of risk to insure without a really, really well-structured liability system.
Actually, especially in the physical world and for these labor market things, that's probably a good thing, if we're being honest. Because in order for one firm to be able to internalize the negative externalities of five million car rides a day, or whatever, a million car rides a day, those cars are gonna have to be really damn safe. You're gonna need a lot of nines of reliability. And so the safety benefit has to be so overwhelmingly positive, and the reliability has to be so good, to really make it happen at nationwide scale. That'll inherently be slow. It'll take a long time to do. But I also kind of think the existing liability system is fine for that, even though it's not the most accelerationist thing. Right? The most accelerationist thing would be to say, well, the technology is important, we should give them a liability shield. I don't personally think that, and I'd be very surprised if the Republican Party goes in that direction, at least under its current leadership.
Nathan Labenz: (1:43:21) Yeah. I think they should have to earn it too. I agree that put up or shut up, when it comes to safety statistics, is kind of my attitude, and it would seem like they are well on their way to proving it.
Dean W. Ball: (1:43:32) Yeah. If we're talking about real autonomy, then I think we should have pretty high expectations of the companies that are doing that. I think that's probably true. I don't think you actually need a lot more regulation than that, other than, yeah, you'd have something governing the testing regime, and if it's a taxi service, you gotta have licensing of some sort. But I think that's fine. Beyond that, I don't feel like you need a regulator to say self-driving cars have to be safe. I feel like that's baked into the laws of America already.
Nathan Labenz: (1:44:12) Well, if there's one trademark of the Cognitive Revolution, it's that we take our time to get to the headlines, and we've done that here today. But the headline, of course, and we've alluded to it many times, is the AI Action Plan. This seems to have been, perhaps with no exceptions, the best-received thing that the administration has done so far. Everything else, I feel like there are haters coming at it from all directions, rightly or wrongly, and we'll leave that aside because this is an AI venue, not a general politics venue. But the AI Action Plan seemed to get remarkably positive reviews from just about every corner, including from people who I think expected to hate it. So I think that is a real feather in your cap as somebody who, I don't know if you would wanna sign on to "led the effort," but certainly played a central role in making it happen. Instead of me walking through it, why don't you just tell us: how do you think about the action plan? What's the story you would tell about what it is and what it's trying to do? You could weave in any of your experience. I saw a number of funny tweets along the way where you were like, final version v2 revised final final, whatever, so I'm sure there were some funny stories there. But, yeah, take it from the high level first. How do you think about it now that it's been out for a few weeks and it's kind of in the rearview mirror? What's the real headline of the AI Action Plan?
Dean W. Ball: (1:45:53) The object-level description is that we decided we wanted to write something that was not a nebulous strategy document. We decided: let's do a concrete to-do list for the federal government, a lot of different things we can do to advance the ball on AI. It's not necessarily everything we could or should do, and it's not the long-term answer to any of the burning questions that people have about AI. We can't answer those questions, and I don't think it's a good use of our time to try to answer these unanswerable things or to pretend to Americans that we can. What was very important to us is that we deliver something we can credibly execute for the American people. That's the heart of it. From there, one way to think about it is, if you look at an outline of the document, there are three big pillars. Within each of them there are maybe a dozen headers; beneath each header there's a paragraph of text, and below that there are recommended policy actions, somewhere between one and ten bullet points. America's AI strategy is the headers and the text below them: these are our strategic objectives. There are many ways in which agencies, and also people outside of government, can help advance those strategic objectives if they would like to. And then there's the list below that: here are five bullet points, here's what we're gonna do right now, but just so you know, more broadly, this is a big priority, this is the strategic objective. So the action plan is the bullet points, and the strategy is the header and the paragraph or so of text below it.
And one other thing that was never textual in the action plan, it's subtextual, but I alluded to it earlier: this basic idea that America can do this. We can mature our institutions to deal with this problem, we can adapt, we can evolve, and we can absolutely lead the way. And we can do it in such a way that we don't have to be at each other's throats; we can identify lots and lots of win-win things here. So I'd say the action plan is a deeply positive-sum document that comes out of a city that is usually quite zero-sum, and that is a subtextual message all its own.
Nathan Labenz: (1:49:12) Yeah, that's a very interesting framing. So the three pillars: accelerate AI innovation is one. Pillar two, build American AI infrastructure. And pillar three, lead in international AI diplomacy and security. Let's maybe spend a minute on each one. I don't want to go literally point by point through the whole thing; people can obviously read it, and they should, and they should read this rundown of it as well. Innovation seems to be proceeding pretty quickly. How much of this is stuff that's going to happen anyway? What do you think are the real pivot points? Anybody would look back at the last few years and say, yikes, AI innovation is happening fast, and if you listen to the people at the frontier developers, they're telling us we should continue to expect it to be quite fast, and we've got IMO gold medals to prove it, even though that hasn't quite hit the GPT-5 product surface just yet. If we didn't do any of this stuff, would we lose? Would we slow down? What is the rate-limiting factor here, or what are the bottlenecks that you're alleviating?
Dean W. Ball: (1:50:27) Yeah. The innovation section of the action plan is, obviously, number one, so it's the thing we think is very, very important. And you are correct that, at least today, and this could change six months from now, there are not that many laws on the books in The United States that govern the development of frontier AI systems. So it is true that right now the innovation there is proceeding apace. But where I think we need to be more reflective and self-critical as a country is: what about transformative adoption of AI? A theme that runs throughout the action plan, embedded in every single item, really, is adoption. What I want is flying cars, automated agriculture producing an abundance of food, nuclear fusion. I want agentic commerce, with tons of agents bidding on everything around me all the time, and hypermarkets; that stuff is cool to me, and I think those areas are what's going to change the world. So when people ask whose model is going to set the global standard, the people who are going to set the global standard are the people who find product-market fit. We don't sit around a table in a room in the White House and say, what should the standard be, then write a standard and try to convince other countries to adopt it. Sure, people do that, but that's not the path to victory in technology. The path to victory is people emulating your use cases and using the tools you build because they are useful. So there's a lot on adoption where I think we're doing pretty well, and I think the American AI ecosystem is maturing in a lot of really impressive ways.
But that's the relevant margin for me: adoption. One of the things that's very subtle in the plan is that we talk about OSTP, my old office, doing an RFI, a request for information from the public, on regulations that impede AI adoption. That's a subtle thing, because we're not talking about regulations on AI for the most part. We could be; maybe there are things agencies have done, and there are some, and those are areas we should look at. But what I'm also thinking about there is: are there laws or regulations with assumptions built into them that are going to be made outdated by AI and associated technologies? A great example of this is surveying of construction sites. There are oftentimes state, local, and federal laws that require you to survey a construction site and check for whatever, and oftentimes the way it's written is that a human being has to personally do it. As opposed to: what if we just had continuous monitoring of sites through drones with LLMs built into them that have contextual understanding of what they're looking at? Turns out that's actually illegal. So that would be a great example, and there are thousands of things like that. That's really what we're thinking about. The other thing I would say is really important is that there are some areas, like science, where the US federal government has pretty high leverage over the institutions, so we can specifically drive things there. One of the examples of that, which I've been on about for a long, long time, is automated experimentation.
And so you saw the National Science Foundation announce their Programmable Cloud Labs initiative, which is going to be $100 million to different companies, academics, etcetera, who are building automated labs for massively scaled scientific experimentation. A few years from now, we could be in a world where there is automated science infrastructure that can be used as a cloud service, basically, by AI agents. Obviously there are significant safety issues with that. But the way to think about this is that it might not happen on its own. Automated labs exist today, but they're inside of corporate R&D labs; they might not become shared infrastructure. It's very much like how the federal government led the way in the early days of high-performance computing facilities, before there was a commercial application. There wasn't one, and the federal government viewed it as a public good, as common scientific infrastructure, so we built that. And then we also built the Internet because we needed to network those facilities together. I think there are things like that which are very exciting, and the idea of the NSF thing is to build a network. So that's one very specific thing, but there's a lot like that throughout the plan. There are a couple of other things about trust and reliability of the technology, which I do think is going to be very important. If what people understand AI to be is the thing that makes it impossible to get justice in court anymore because you can't validate media, that would really be awful. And it just so happens that there are levers we can pull inside the federal government on that, now. So let's do it. That's a good example of something that is actually more of a risk-management thing, but it's in the innovation section for a reason.
Nathan Labenz: (1:57:25) What do you think is more promising to address that? Is it encoding the origin of synthetic media, or is it the on-device attestation type of stuff for real cameras? Maybe it's both, but how do we get out of that problem?
Dean W. Ball: (1:57:43) Plausibly it's both, but I think the thing that's actually going to work is this: you should focus policy effort on the scarce thing. I don't want to say human-created; what I want to say is that the scarce thing will be actual photons that hit actual glass in the world, images processed by real image sensors. The AI-generated stuff will be the superabundant thing. So in the long term, what we're going to have to do is have some sort of common standard for validating real-world stuff. I think it's actually not that bad yet for the most part. We'll have to have a new level of scrutiny for image- and video-based evidence, but even today, even Veo 3 is still different enough from the real world that, with the scrutiny of a legal system, you can get at the truth. There are still things you can do, but it's an area where we need to make sure we're actually doing those things.
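[Editor's note: the on-device attestation Dean describes can be sketched in miniature. This is a hypothetical toy, not any real standard: production schemes such as C2PA use asymmetric keys in camera secure hardware, whereas this sketch substitutes a shared-secret HMAC purely to show the shape of the idea, that any edit to the captured bytes invalidates the capture-time attestation.]

```python
import hashlib
import hmac

# Hypothetical per-device secret. Real provenance schemes (e.g., C2PA)
# use an asymmetric key pair burned into the camera's secure hardware,
# not a shared secret; HMAC is a stand-in to keep the sketch stdlib-only.
DEVICE_KEY = b"example-device-key"

def sign_capture(image_bytes: bytes) -> str:
    """Camera-side: attest that these exact bytes came off the sensor."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, attestation: str) -> bool:
    """Verifier-side: recompute the attestation; any edit to the bytes fails."""
    expected = sign_capture(image_bytes)
    return hmac.compare_digest(expected, attestation)

photo = b"\x89PNG...raw sensor bytes..."
tag = sign_capture(photo)
assert verify_capture(photo, tag)            # untouched capture validates
assert not verify_capture(photo + b"x", tag)  # edited bytes fail validation
```

The policy-relevant property is the asymmetry Dean points to: real captures can carry a cheap-to-verify proof, while the superabundant synthetic media simply lacks one.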
Nathan Labenz: (1:59:00) On the diffusion point, I wonder how you see that, or how you think the administration probably sees it. I've been struck by two contrasting viewpoints. One is, as I'm sure you're aware, Jeff Ding has popularized this idea that the US leads in terms of our ability to diffuse technology through society, get it to a broad base, and capture the practical value, and he argues that China lags in that capability. But then I also see a line in the action plan about enabling the adoption of AI in the Defense Department. And in an episode out today, Jake Sullivan talked about the memorandum that he and his team put out, which basically argued that our national security establishment broadly, or maybe the military specifically, is less agile than our Chinese counterparts when it comes to adopting new technologies. Would you say that's your view as well, that we're better in the private sector and slower in the government? Or would you complicate or contradict that analysis?
Dean W. Ball: (2:00:22) So I think it depends. One thing America is quite good at, compared to everyone else in the world including China, is the pipeline from deep capital markets to cloud computing and cloud computing applications to consumer and enterprise adoption of new technology. We are pretty darn good at that, and China does not have a lot of the much-bemoaned B2B SaaS. A lot of that is actually pretty useful; it ends up being quite important in the cybernetics of the business organization, so it matters a great deal, in fact. I think we should be proud that we do well in things like this. There are other areas, like AI adoption in the military specifically, where I can tell you that our military is not as agile as I would like it to be. I have not carefully analyzed the difference between China's military adoption and our own. One thing, and this is always true in analyses of China, is that a lot of their governance of themselves is very KPI-driven. There will just be directives from on high. DeepSeek comes out, and there's a box: every bureaucrat in every part of the country will, a day or a week later, have to check a box asking, did you use DeepSeek for something? Are you adopting AI? What that ends up with is summary statistics that would suggest China is adopting AI more quickly in its government than The United States is, but a lot of that adoption is pretty shallow.
Dean W. Ball: (2:02:34) I think we need to move more nimbly for sure, but I'm always aware that this is an issue when you're analyzing China: they will oftentimes optimize for hitting specific numerical targets that are not necessarily connected to the thing you actually want. I think what you really want is deeper adoption. Part of what happens is that we drive deeper adoption of AI, and that reveals new problems and new subtleties, and that creates a positive feedback loop with product development and finding product-market fit. You're starting to see that. You would not have guessed the form factor of Claude Code; it's a very interesting form factor, actually. Three years ago, when you first saw ChatGPT, you would not necessarily have guessed that you'd be using CLI-based tools to do some of the most cutting-edge AI automation. I don't think many people would have guessed that, and yet here we are. So you're starting to see all that happen, and you'll see much more of that kind of thing as we figure it out. But I still think a market-based process is going to do a better job at diffusion, if we don't get in the way of it with bad regulation. Because what China will do is, if there's some transformative use that's blocked by a regulation they care about, they will unblock it. Beijing will put out a directive to unblock that regulation, and it will happen everywhere. That's where they're just better at cybernetics than our government is, because we're not a centralized state. We're not a centralized authoritarian state, fundamentally. Some people think that's what we're turning into; I think those people are dead wrong.
But certainly, I will tell you, as someone who served in the Trump administration: I did not feel like I had authoritarian levels of power over anything. So, yeah.
Nathan Labenz: (2:04:49) With the military in particular, it seems very fraught. It seems like there's a major mismatch between the way the military has traditionally thought about things, and maybe should think about things, versus the way that LLMs work. My standard refrain on that is that I would want to know that any issues of deception or scheming against the user are fully resolved before I go into combat with my AI battle buddy. And I know we're maybe not immediately jumping to AI battle buddies, but if it's just making paperwork more efficient at the DOD, that doesn't seem like the kind of change that's going to beat China. Not that I'm, as you well know, focused on doing everything to beat China. But if it is actual combat operations applications, it seems like we don't quite have the right AI for that right now.
Dean W. Ball: (2:06:01) I think one thing about military history is that the flow of information throughout military organizations very often ends up being a quite decisive advantage. There's a great analytical history I read once, I think it was called Information in War, about how different communications technologies ended up changing the nature of warfare. There was stuff on the Internet, but there was also stuff on the adoption of the radio inside militaries and how that completely changed things: the radio was the first thing that allowed for truly centralized command of militaries, which changed everything that was possible in warfare. So I would actually dispute somewhat the idea that a pure information-processing technology is not an important military advantage. Even if you're literally just talking about GPT-5, today's technology, something capable in particular of synthesizing information: think of the staggering amount of information that our government collects on the world every day, a lot of it through the intelligence agencies, deeply secret stuff. We pick up a lot of information on the world, and the ability to quickly analyze it and make decisions, or present decisions to humans for a final call, could be quite decisive. And that's basically just a pretty traditional LLM adoption story. It's more than that too; it also has to do with information sharing, and that's quite difficult. When it comes to the weapons side, once you get into the physical world, I think DOD becomes much better equipped. DOD is good at saying: these are the performance characteristics we need before we can adopt this thing. And as you'll see on the DOD adoption side of things, the action plan has a section about a physical facility for testing of autonomous technologies.
And that will be a physical thing the DOD can use to write those specs out. I have faith that their culture is institutionally well suited to that, if they have the right tools at their disposal. But another thing, of course, is interpretability and control, or what you might call alignment. This is an area of deep importance for the military. Who knows how this would work in a system deployed at DOD, but when I ask an LLM questions at this point, it is situationally aware of who it's talking to: o3 is situationally aware that it's talking to a person who worked at the White House on AI policy and is asking a question with a plausible connection to AI policy. How is that affecting the model's outputs, if at all? Whoa, that seems important to understand really well, and I don't think we have a very good understanding of it right now. Hence why I think and expect that DARPA will place a quite significant investment into those exact issues.
Nathan Labenz: (2:10:10) Yeah, I'd say little to no understanding of how that's affecting outputs just yet, but it is definitely a fascinating question. That's a good point about information processing speed being an important dimension of competition. Obviously not the only one, but I think it's a compelling point that it could matter even if there's no actual firing of weapons by LLMs at any point.
Dean W. Ball: (2:10:39) Yeah. Most military planners that I have talked to think that, basically, it's logistics and cybernetics: moving things about the world physically, and moving information through. We focus on the tip of the spear, which is the hypersonic weapon or the autonomous drone or whatever else, and that stuff is important. We need to make sure we can do that stuff, we need to keep up, etcetera. But if you bolt that stuff onto a twentieth-century military model, then even if we have the world's best hypersonics and make them abundantly, and we have a super big army of drones, if those things are being commanded by a military organization rooted in industrial-revolution-era technologies, I don't think we will fight successfully.
Nathan Labenz: (2:11:42) So broadly, I guess, part one is: don't get in our own way, let our private sector continue to lead in a relatively unencumbered way and do what it does best; we have advantages, and we'll naturally maintain those advantages because that's who we are. Part two, around building infrastructure, is the part where it seems much more plausible that we need to actually up our game to achieve the visions we have of an intelligence-too-cheap-to-meter future. How optimistic are you, in the first place, that we're actually going to be able to do this? It has been quite a long time since we were energetic, so to speak, about building nuclear energy, or about building infrastructure fast at all. There are exceptions, I guess, but not too many. Is this something you think the administration is really going to be able to change in a short period of time, such that we're actually bringing all these things online in, whatever, a 2027-to-2030 time frame?
Dean W. Ball: (2:12:56) Yeah, the short answer is yes. I don't know that we will have lots and lots of new nuclear reactors producing gigawatts of power in the next three years; that would seem hard. I think we're doing some really significant stuff on nuclear. There was a series of executive orders relating to nuclear, driven in part by our office, that came out a couple of months ago, and there's an entire reorganization of the Nuclear Regulatory Commission going on. Exciting stuff, to be sure. And I think fusion is plausibly closer than people think. I'm way more bullish on fusion than most people in the government; that's one thing I can definitely say. There are people who got very mad that fusion was even mentioned in the action plan. People were like, well, it's never going to happen, it's 30 years away. I think that's an older point of view that comes from not being engaged with the current frontier of that technology. But when you look at the next couple of years, I think we're going to do it. I think it's also going to look a little different than you might expect. I always remember Leopold Aschenbrenner, in Situational Awareness, has this
Nathan Labenz: (2:14:32) He's a first-namer. It's all good.
Dean W. Ball: (2:14:34) Yeah, he has this AI-generated image in Situational Awareness of an endless field of data centers, with big natural gas peaker plants in the distance. We're going to build stuff like that; I think there will be things like that by the end of the decade, I totally do. But the idea of building 100-gigawatt or terawatt types of facilities, I don't think is going to be necessary. I also think the way we will get there will be through different kinds of unlocks that are important but not as well discussed. One thing is that the American electricity grid, as a general matter, is actually quite over-provisioned, because the grid is designed for the worst-case scenario. It's designed for the day when it's 112 degrees in Texas at the peak time, when everyone's AC and TVs and electric cars are on, all that. It's designed for those moments of peak demand, which means that for the vast majority of the year, there are actually many gigawatts available to be used, assuming you don't need the power 100% of the time. There's a kind of viral report that went around, out of Duke, by a guy named Tyler Norris, that basically said something along the lines of: if data center operators were willing to curtail their electricity demand for 0.25% of the year, you could unlock 76 gigawatts from that alone, without building any new physical infrastructure, purely through what's called demand response. The problem, though, and I'm getting really technocratic here, is that there's an entity called FERC, the Federal Energy Regulatory Commission, and they regulate the interconnections that happen. So if I build a new data center, say a one-gigawatt data center, I'm adding one gigawatt of demand to the grid.
What happens when I'm trying to build something of that size is that the utility in a state will do an interconnection to the grid, and they'll do computational modeling of your demand to see what it's going to do. And when they do that modeling, they assume it's completely stable: one gigawatt, 100% of the time. When you do that, and you also factor in the worst-case-scenario day, then it's like, oh, well, we're going to have to build an entire new natural gas plant, we're going to have to build totally new transmission infrastructure to accommodate this. And then that becomes a delay, and it becomes a cost: the utility will build it, and the data center operator bears the cost. A lot of this is being done at the state level, but there's a federal entity, FERC, and when those interconnections implicate interstate electricity transmission lines, which a lot of them do, FERC has jurisdiction at that point. So what FERC can say is: don't model it as stable demand. If a data center comes online and says, we're willing to curtail our demand for 0.5% of the year, then we give them a faster interconnection. We move up their time to power; we get them on the grid in two years rather than five. You've unlocked a ton of energy, and you've accelerated demand. And the other amazing thing about this is that there is hardware for demand response, but there is also software. Think about what demand response is: I'm a data center, and I just got a signal from the utility that I need to cut my power by 50% in the next 10 minutes. You have all this equipment running: the GPUs, the cooling equipment, all this other stuff.
And you have different-priority batch jobs: all these different workloads being processed by the data center. One is me making a cat meme, and another is a patient's medical records in the same facility. All of a sudden, that starts to smell a lot like a reinforcement learning problem, doesn't it? It starts to smell like an AI problem: we need to optimize how to scale down our power dynamically. Anyway, there's a lot you can do there, and I think if we do a good job of it, that alone can unlock 100 gigawatts. And in addition to that, we're going to make the permitting easier; even if Congress does nothing, at the margin that will be useful. There's a lot of energy around state permitting reform right now too, which folks are engaging in; at the margin, we'll be better. Are we going to have 100-gigawatt data centers by the end of the decade, or lots and lots of nuclear power plants? Probably not. But I think that by mid-decade it'll be pretty crazy, and I believe there will be lots of new nuclear online by the mid-2030s.
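[Editor's note: Dean frames dynamic curtailment as an optimization problem. A minimal sketch of the greedy baseline that a learned policy would improve on might look like the following; all workload names, power figures, and priority values are hypothetical, and real schedulers would also weigh cooling, migration, and checkpointing costs.]

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    power_kw: float
    priority: int   # lower number = more critical (medical records = 0)
    deferrable: bool  # can this job be paused without harm?

def plan_curtailment(workloads, required_cut_kw):
    """Greedily pause the least-critical deferrable workloads until the
    utility's requested power reduction is met. Returns the names of the
    paused jobs and the total kW actually shed."""
    shed, paused = 0.0, []
    # Only deferrable jobs are candidates; shed lowest-priority ones first.
    candidates = sorted(
        (w for w in workloads if w.deferrable),
        key=lambda w: -w.priority,
    )
    for w in candidates:
        if shed >= required_cut_kw:
            break
        paused.append(w.name)
        shed += w.power_kw
    return paused, shed

jobs = [
    Workload("patient-records-etl", 400.0, 0, False),
    Workload("cat-meme-generation", 250.0, 3, True),
    Workload("model-pretraining-batch", 900.0, 2, True),
    Workload("inference-serving", 600.0, 1, False),
]

# Utility signal: cut 1,000 kW within 10 minutes.
paused, shed = plan_curtailment(jobs, 1000.0)
# The cat memes and the pretraining batch pause; the medical-records
# pipeline and live inference keep running.
```

The point of the sketch is the structure of the problem: given a curtailment signal, the facility chooses which load to shed, which is exactly the kind of dynamic, stateful decision Dean suggests could be learned rather than hand-coded.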
Nathan Labenz: (2:20:52) I had not heard about this dynamic management, but I did have a personal experience maybe six or seven years ago in Detroit. One cold night in the winter, we suddenly got a text from our local utility that said: we've got some sort of problem, please turn your heat down to 65, everyone. That's only happened once in my life; speaking of things we can be grateful for and shouldn't take for granted, the uninterrupted provision of such utilities for my entire life is definitely one. But basically, that worked. People responded to the text, and we made it through.
Dean W. Ball: (2:21:32) And obviously it doesn't scale to send texts to people, so you do need to automate this in some way. But one of the things that's amazing is, if you play the tape forward on that and we rolled out technology of that kind to lots of industrial facilities, including the data centers, the net effect would be that we utilize existing electricity generation more efficiently than we currently do, which would actually lower prices over time. So you literally can build the data centers while lowering prices, if you do it right. Again, theme of the action plan: win-win, positive-sum stuff.
Nathan Labenz: (2:22:08) That's great.
Dean W. Ball: (2:22:09) Yeah.
Nathan Labenz: (2:22:11) Taking one step back, I don't know quite how to evaluate this, but Defense One reported a quote from an anonymous Pentagon official. The quote is: we're not going to be investing in, quote, artificial intelligence, because I don't know what that means; we're going to invest in autonomous killer robots. This administration cares about weapon systems and business systems, not technologies. Would you call that fake news? There's certainly some fake news out there, but how would you reconcile that with your characterization of your conversations with military players?
Dean W. Ball: (2:22:52) Well players. I would say 1 thing is when I said the conversations with military players, I was not actually necessarily referring to people inside the administration, inside the Department of Defense. I was just like, over the years, I've talked like, I've heard podcasts and talked to various people that know about this stuff. So that's not so much a Trump administration view that I characterized. That quote gets at, weapon systems and business systems. So it's the latter. The business systems would be, the latter case of the, sort of information and communication stuff. Certainly, it's also true that, there's totally, transformative things, hardware things that that we'll be able to do with AI, and I don't even know how to think about the form factor for those things. Right? Like, I think drones are an early 1, but there's gonna be so much more. And, there's I mean, I I'm very excited about, autonomous, autonomous boats and autonomous ships. I think that's potentially real I mean, I think 1 thing is, America has problems with shipbuilding, which I hope we get better at. I think, again, this is an area where this administration is doing more than anyone has done on the shipbuilding problem. But we're actually good at boatbuilding. We're actually perfectly competent at boatbuilding. And so if you build, lots and lots of autonomous boats, like you know, that might be a really interesting way to think about the future of naval warfare. Basically, a bunch of, school buses flying going around the ocean. So, no, I I to be clear, I was not discounting the benefit of of, weapon systems, of of hardware weapon systems that are AI enabled in some way or another. I was just saying, in fact, was saying, I just think DOD will be better equipped to, buy that kind of stuff, particularly because they're getting significantly better at working with startups. 
And so you're already starting to see this blossoming ecosystem of defense tech companies, and I hope that continues. Anduril is kind of operating as a new neo-prime now and selling all kinds of AI-enabled hardware capabilities to the government. So it's already happening. But I would also ask, what is the actual basis of Anduril? The foundation of their business model is software. It's the Lattice software system, which is exactly about information sharing. They have a software platform, so they're like Apple: as they add new things, it all fits into the Anduril ecosystem, and everything can talk to one another and communicate in really high bandwidth. So that would be a good example of exactly what I was talking about.
Nathan Labenz: (2:25:36) Gotcha. Okay. Interesting. When it comes to bringing chip manufacturing to the United States, that's another big challenge. There's a couple of different angles I wanna come at it from. One is we've got these Gulf deals, where I'm interested in your take on why we're doing those deals. The answer I always get, which doesn't quite satisfy me, to be honest, is because we want them to build on our AI stack versus China's AI stack. But then I also feel like, well, does China really have any chips to sell them? And if we're concerned about American values, planting these giant data centers in these Gulf countries, which I would say frankly simply don't really share American values, doesn't seem like the obvious move. It seems we could have held off on that a little longer, because they didn't really have anywhere else to go. But maybe they're doing something for us that I don't understand. It might be energy. It might just be speed of regulatory approval locally. Maybe it's cash on balance sheet. And then the other angle, of course, is the Taiwan angle. I wanted to get your take on how they are thinking about this right now. They have this tricky position where, obviously, they're right in China's shadow. Everybody knows that's a flash point. They've managed to put themselves in this position where they're super relevant, where if Taiwan goes dark, from a Western or United States perspective, that's a huge problem. And so we're at least compelled to be ambiguous about exactly what we would do under various scenarios. But now they also have this tricky situation where they need to be a good friend to us to keep that dynamic going. And that means sharing some technology and putting some TSMC know-how on American soil, but they probably don't wanna overdo that. Right?
I mean, they don't wanna share everything, because then maybe we wouldn't need to defend them as much. It seems like the Gulf states are kind of trying to engineer their way into a similar position. And so, yeah, break down the various geopolitical strategies that folks are playing in those areas.
Dean W. Ball: (2:28:11) A couple of foundational things go into the UAE deal. And I should say I worked quite carefully and closely on the negotiations for that. First of all, I think much like the action plan, the UAE framework that we agreed to is very positive-sum. What we are saying is, this is not rivalrous: we are gonna do a big industrial build-out in the United States. It's gonna generate a lot of jobs, it's gonna generate a lot of wealth, and it's gonna be an asset that people all over the world, and especially Americans, are gonna use. But we don't see that as being rivalrous with the idea of other countries that are strategic partners doing really ambitious things too. So what makes the UAE special among countries around the world? You asked about AGI-pilled governments. The UAE has an AGI-pilled government. They are the most AGI-pilled country in the world. They think about this technology in very sophisticated ways. I do think it's very important that America be partnered with other countries that are sophisticated and have a pretty good grasp on the likely trajectory of this technology. I think it's really good if all of us intellectually leading countries are partnered. And we don't want them building on the Chinese stack. There are probably different estimates and different projections about the future of where China's chip sector is gonna go and how quickly they'll be able to catch up. As a general matter, it seems like the trend has usually been that they catch up a little faster than the technology analysts here guess. I don't know. I don't have a deeply, technically principled answer to that question.
But what I do know is, as a matter of policy planning, which is what we were engaged in with this deal, you probably can't assume that they are going to be slow. That feels like a weird hinge point to be complacent about. And frankly, a lot of the people I know who are critics, and I think a lot of the people in the prior administration, were just weirdly complacent about that one thing. They were so enamored by the specific thing of extreme ultraviolet lithography that they said, they'll never be able to figure it out. And it's like, oh, no, I'm not sure about that. It's hard, it's definitely hard, but there are other ways to do it. EUV is not the only plausible way to get features that small onto silicon wafers. There are all kinds of different things you could do. And there's also scale, and there's also the fact that they'll just eat the profit margins. Right? They'll just... Yeah.
Nathan Labenz: (2:31:36) They'll do it even if they don't make money.
Dean W. Ball: (2:31:37) Yeah. Like, that's what they do eventually. Right? I think Noah Smith might have said this, maybe even on your podcast, but the world where they win is just such a gray world. It's so dreary, because you know what that means? No profits means no new stuff. It means nothing new happens, because no one can ever reinvest. So great, you've made an ultra-mega superabundance of chips. It's a dystopian hellscape that they're trying to build. But anyway, I digress.
Nathan Labenz: (2:32:18) That seems a little harsh, for what it's worth. I have to at least briefly comment that what I see of China doesn't look like a dystopian hellscape. Even if
Dean W. Ball: (2:32:26) But if
Nathan Labenz: (2:32:27) you are a political dissenter, it might quickly become one. But for most people, it seems like life is getting better.
Dean W. Ball: (2:32:33) If China occupies the role that we currently occupy, where they're the world's true frontier economy and the biggest global powerhouse, and we're a significantly less relevant country, that would suggest that technology development becomes much harder, because if you have no profit, you have no ability to reinvest into the business and make new things. They just take everyone else's stuff, make it super low-margin, drive them out of business, and then make a superabundance of that stuff. In a certain sense, sure, that's fine. But the long-term result of that is a less innovative world. If that's actually the strategy, it's a less innovative world, and it's a less colorful world.
Nathan Labenz: (2:33:33) I do think we see some exceptions to that, though. Right? Huawei is not doing that. They're notorious for reinvesting huge amounts, and my sense is they are at the frontier, if not genuinely pushing the frontier, in their domain. Right?
Dean W. Ball: (2:33:50) Kind of. Yeah. I mean, I don't know. Huawei is an interesting case, and there's only so much I can say. But Huawei is a somewhat different case. You're right, it's not uniformly true, but I just mean, if you play the tape forward, it's a less beautiful world. But in any case, where were we?
Nathan Labenz: (2:34:18) Yeah. So this is all downstream of: why did we have to sell the Gulf countries these huge data centers?
Dean W. Ball: (2:34:25) So, yeah. I mean, I think that we do want them to be on our technology stack. That's 100% true. I think they're a valuable strategic partner. And then, of course, yes, it's also the case that there are terms we secured with the UAE's government that require them to make reciprocal investments of similar size to the data centers that get built over there. Those investments could take the form of data centers, but they could also take the form of investments into energy infrastructure, all kinds of things associated with the AI build-out. So you're talking about hundreds of billions of dollars that they are eager to invest into our country. We have to get the security details right; that's what's going on right now. That's what my former colleagues at the Department of Commerce are doing, getting the security details right. But if we do it, it's a total win-win. Do they have a different government and different values than we do? They totally do. Absolutely. But I think the idea that we can only engage in commerce with purely democratic countries is just a weird rule. It's never been true before. And we would love to sell to other democratic countries. The reality is that a lot of the developed democratic countries don't really like AI very much. They spend more of their time talking about how to put a straitjacket on it than they do talking about how to grow it. I hope they change their tune on that. They are going to be big customers. I think they will regret the fact that they're not currently big customers, and I think they will regret it in the relatively near future.
But you did just see Norway do a big OpenAI deal for their Stargate for countries thing. I hope we do much more of that. We wanna sell to lots of people. Right? Our export promotion program is about treating countries equally: unless you are a strategic adversary of the United States, we want to engage in commerce. This is about commerce. And so, yeah, I think there's a lot of win-win to be had.
Nathan Labenz: (2:37:26) Yeah. I do support commerce and trade, generally, with people we don't see eye to eye with on everything. It does seem a little strange coming from the American right, though, which I would say is generally not fond of, I don't know, Islamic values broadly. There's a weirdness to that that I see kind of being memory-holed. And if I were really forced to pick whose values are more compatible with ours, the Saudi government's or the Chinese government's, I think I lean China. I know they're a more serious competitor to us; that's a different question. But this is a bit of an aside as well: in the quest for something stable and robust that we could try to align AIs to that would work out well for us, if indeed they become super powerful on a not-super-long time scale, ancestor worship has been one candidate that keeps coming to mind. We're the ancestors, and maybe that's good for us. I think that's a pretty Chinese-flavored notion. I recently watched a little TikTok. I know you're not on TikTok, but there was a guy in a small town who took us into the ancestor hall in his small town.
Nathan Labenz: (2:38:55) I think that's the name that he gave it. And it was basically the community center, a place where they have events, but also this sort of public space. And it's amazing how much of this is still there. I think a lot of it got wiped out at critical moments in Chinese history. But at least in this place, there's still this testament: these are our ancestors, going back pretty far. And he said, in this dude's mind, this is basically Chinese religion. And I was like, jeez, you know? Relative to other religions, that seems like a pretty decent story to try to get the AIs to live with. I don't think it's a solution, but I wanna say something in favor of Chinese values, or at least in defense of Chinese values, certainly compared to
Dean W. Ball: (2:39:50) You are hitting on some pretty fundamental things there that I actually worked on when I was in college, believe it or not. I became really interested in the resonance between ancient Chinese philosophy and various strains of conservatism. I think it's pretty clear that the concept of emergent order, the concept of emergence, was pioneered in the West by Hayek and complexity science in the twentieth century, with maybe somewhat earlier predecessors in the Scottish Enlightenment of the eighteenth century. That idea is crystallized in a Taoist concept called wu wei, which is thousands of years old in China. So there's all kinds of interesting resonance, and there are actually many, many different paths there. I would say the extent to which China today actually embodies the values of its ancient intellectual traditions, Confucianism, Taoism, etcetera, is questionable. Certainly, it's not a very Taoist country. There are, obviously, some elements of the Confucian system that have persisted, but I think there is more continuity with other intellectual traditions that are somewhat less wholesome than what you're describing. As for the UAE and Saudi, there are definitely very stark differences in terms of the way that their societies are hooked up and the kinds of values that they embody. I think this is one area in which President Trump is very different from previous presidents, and in a good way. President Trump genuinely seeks peace. Of any president, certainly in my lifetime and probably in modern history, he is the most earnestly peace-seeking president we've had in a very long time.
And the flip side of that coin is, for all the stuff about trade and everything, he actually cares quite a bit about global commerce. He wants it to happen on terms that are more favorable to the United States, but he cares a lot about it. And for him, the connection between commerce and peace is very deep. So I think he'd love nothing more than to see a lot more sophisticated commerce happening between our countries and within the Middle East region. It's a strategic sector. There are definitely areas in which their values differ; I think we're gonna have to be able to accommodate a pretty wide range of values into our systems in the fullness of time. There is fundamental sovereignty there, and I think we have to be respectful of that. And again, we're not opposed to selling AI systems to advanced capitalist democracies that are, you know, largely secular places, like Western Europe. We totally do. We sell them lots of stuff, and they're big AI users. But it's also the case that, like I said, they're not the most enthusiastic. Very often, they're kind of hostile, and I wish they would stop being that way. But that's where they are right now in their society, and that's also their choice.
Nathan Labenz: (2:44:05) One thing that has struck me is just generally how much continuity there seems to have been between the Trump, Biden, and Trump administrations, at least on the narrow set of issues we're talking about today, not necessarily all of politics. Generally, the managing and muddling through of strategic competition with China seems to be a clear through line. The export controls seem more similar than different. You may see that differently, but from my not-super-deep interrogation, it seems like there's a lot of continuity there. There's the anti-woke notion, but it seems like a lot of people, and I'd be interested to hear your thoughts on this, have sort of said, yeah, well, they kinda had to put that in there for rhetoric, but it didn't seem like their hearts were really in it. There's also, of course, the biosecurity angle, where there seems to be a lot of continuity with the Biden EOs on DNA sequence pre-synthesis screening, and I think you guys have even extended that. How would you characterize the level of continuity, and what do you think are the most important points of divergence?
Dean W. Ball: (2:45:26) So, I think there is a lot of stuff that's pretty discontinuous. The export stuff is one good example. Yes, it is true that export controls on China are a thing that, to a certain extent, the Biden administration pursued, but I think it's actually more appropriate to say that the Biden administration was consistent with the first Trump administration, because the export controls on China, including the controls on EUV, were initiated under Trump 45. So I would actually characterize the Biden stuff as an expansion of work that was originally pioneered in the first Trump admin. But things like the diffusion rule: I can't tell you how much damage that did to this country internationally. People love to talk about the damage that this administration is doing to our reputation internationally, and, fairly or unfairly, we can set that aside. One thing I can tell you for sure is that when foreign governments came to talk to us about compute and AI, diffusion was always at the top of their list, and they really felt like it was a huge slap in the face. To put two of the largest democracies and largest growth markets for generative AI, Brazil and India, into tier two, into the "we all like you, but we don't trust you" tier, was such an unnecessary self-own. I think getting rid of that and being much more oriented toward export and, at the margin, less oriented toward control is a big deal. It's not that we want zero control; we have security provisions that we're gonna be very serious about throughout the world, but we're significantly less interested in micromanaging the global diffusion of the hardware. The woke stuff: I think we were cognizant that we don't wanna meddle in the markets. Right?
So a consistent theme throughout the administration is that we used federal procurement. That's the hook there. We're very explicitly not trying to tell AI companies what the models they sell to you in a private market transaction should do. What we're saying is that, for our purposes as the federal government, we're not a very big customer of these LLMs just yet, but we think we probably will be in the future, and it matters to us what the political values of the systems are. There is a MAGA-Daniel Kokotajlo handshake thing you can do here, where one thing that's not inconceivable to me is this: think of the dynamics of Trump 45, where American elite institutions liked to performatively undermine the administration, and so did the tech companies. They liked it; it was fun for them to undermine the president of the United States. American companies. That still makes me personally angry, and a lot of other people angry. If this technology were diffusing under those political dynamics, it would not surprise me at all if there were covert efforts to sabotage the LLMs used by the Trump administration in various ways at the margin. And we wanna be damn sure that's not happening. So we are actually quite serious about that. Getting the guidance right will be difficult, but there are very talented people who are gonna be working on that. We don't wanna create a massive regulatory burden with that; that wasn't the objective we had in mind. What we want is transparency. Right?
The EO says that a way you can comply with this, and I think probably the easiest way to comply with this, will be transparency around system prompts, model specs, constitutions, testing. It's gonna be stuff like that, which is actually pretty consistent with a lot of general, non-MAGA AI policy. Is there consistency with the Biden administration? In those senses, very deeply not. The Biden administration was totally committed to the idea of using regulation and scary words like misinformation and bias to politicize the outputs of LLMs all over the place. They were very committed to that idea. If Kamala had won, that's the big thing they'd be doing. They would be pressing at the limits of the Constitution to do it in a way that we absolutely did not. And it just wouldn't be called pressing at the limits of the Constitution, because when Democrats press the limits of the Constitution, it's called ambition, and when Republicans do it, it's called authoritarianism. Those are just the rhetorical standards we have to live with, and we're gonna get that kind of hate, and we're prepared for it. That's the world. Right? But I would say those things are stark differences. You are also right, though, and there were people who said this internally, who were like, wow, there's some stuff that's similar. And it's like, we're in the early stages of a policy field developing here, and you are expressing surprise. It's as if we were in the early stages of financial services regulation and you were expressing surprise that the concept of interest rates is similar between the two parties. No, these are just important abstractions in this field that we must have.
So, yes, interpretability is important to us, and it was also important to the Biden folks. That's true. But I think we have a very different posture; vibes actually do matter a great deal. The Trump administration has a very different posture toward the technology in general, but there are some technocratic things that are similar. Same, again, with biosecurity. I praised the 2024 nucleic acid synthesis framework on your podcast. And so it's been wildly cool to play a pretty substantial role in the rewriting of the nucleic acid synthesis framework that the administration is embarking on. I can't talk too much about where that is because it's not quite out yet, but it should be out pretty soon. And you will see there are many commonalities, but the Trump administration will strengthen it substantially in some pretty neat ways. So there is some consistency. Also, outside of the action plan, there's the MP Materials stuff that our Department of Defense pulled off; the Biden people wish they could pull off deals like that. One thing that is really different, and it's more inside baseball, but it matters, is internal culture and energy. The Biden admin was way more by-the-books and procedural, with lots and lots of process governing everything, and that slowed everything down. We are way faster moving. Like I said, the Trump administration is creating a new lexicon for American statecraft, and inside the bureaucracy we are way less beholden to the past, to the way things have been done. So it's an inside baseball thing, but in terms of culture, and what you would expect from an organization run this way versus the way the Biden administration ran things, you should just expect the Trump admin to move significantly more quickly, sometimes in directions that you won't like. Inevitably, that will be the case.
And they won't just move more quickly; they will try more things. Will some people characterize that as chaos? I don't choose to see it that way, but I'm sure some people will.
Nathan Labenz: (2:54:55) Yeah. There are many ways to narrativize the same events. That's a lesson I've learned over and over again. Well, I don't wanna keep you on this podcast until you have your kid; we're entering into Rogan territory. Maybe just a couple more things to bring us home. Taiwan: just to recap that question, what are they thinking? How are they trying to walk this tightrope?
Dean W. Ball: (2:55:25) So, yeah, I think the Taiwanese are obviously making large investments into the United States, and I think they see the geographic diversity as being really important, but they also have the silicon shield for the island. I'm not the world's best person to talk about the dynamics of that; it's not my area of expertise. What I would say is, from the perspective of building more semiconductor fabs in the US and indigenizing that, I think we're making very strong progress. The Commerce Department, in particular, has been doing some work that I feel has kind of flown under the radar. They've revised a lot of the CHIPS funding deals that the Biden administration had inked with different companies and actually made them more favorable and bigger and better in a lot of cases. Which is good, because you can make the case that the Biden people put in a bunch of crap. They put in a bunch of, oh, well, you have to have diverse employees and all this stuff, like the child care centers on-site. So Commerce took a lot of that stuff out. But it's also a strategically intelligent time to revise those deals, because a lot of them were negotiated in the 2022-2023 time frame, when chip companies had way less faith in the AI thesis. Now they're way more bullish, and everyone's like, oh my God. So they were actually willing to make bigger investments under better terms for us. That's been great. Obviously, the trade deals have played and will continue to play a really important role in getting more stuff.
But I think we are actually at a point in the United States where we are gonna have multiple really robust enclaves of different kinds of semiconductor development. In Indiana, there will be the HBM stuff, kind of anchored by SK Hynix. There will be the hub in Arizona, of course, and Taylor, Texas, with Samsung; that facility looks like it finally has enough demand to come online. There'll be other places too, other R&D hubs. It's actually very exciting to see. By the way, I think the automated material science that we talked about earlier is a strategically important part of this. There can be automated labs for materials science that are commonly shared between industry and academia, so that you're taking frontier academic research and doing it at the same shared facility where a corporation might do some frontier research of their own, totally cross-pollinating ideas. I think there's a very, very bright future ahead for the US. That's why I focus on domestic policy. So I don't, unfortunately, have all that much to say about the Taiwan stuff.
Nathan Labenz: (2:58:57) Do you have a sense of what we're aiming for? I know we wanna have some core amount of production capacity domestically, but it seems like we're not going to achieve chip independence, in the sense that we won't stop importing chips from Taiwan anytime soon. Is there some threshold between the bare minimum that we need to keep the AI lights on and total self-sufficiency? Are there other thresholds between those two levels that you think are particularly important?
Dean W. Ball: (2:59:33) Well, I would say a couple of things on that front. First of all, I think we actually are trending, by the early 2030s, toward at least being able to satisfy domestic demand with domestic production. So I feel decent about that, and one part of the goal is that. Another part of the goal is to actually reclaim the lead in frontier semiconductor manufacturing in the United States. That's very important. But another thing that is underrated, and probably an area I'm gonna focus on, because I don't think the Biden administration did enough about this, and I don't think the Trump administration thus far has done enough about this specific thing, is legacy nodes. Yes, 2-nanometer and 18A and all these things are very important, very cool things. But you can shut down civilization with 45-nanometer, and we don't have 45-nanometer production. You can stop civilization if we can't import that stuff. That's a harder one; that's harder than just the leading edge. There are a range of interesting policy tools to use there. I'll have more to say on this soon. This was stuff I was starting to think about post-action plan; it's one of the things on my mind. So I'll probably have more to say about that at some point in the next couple months. But, yeah, that is a big problem. It's a little bit thornier, and it's harder to find a purely economic solution to it. On the frontier side, I think we're trending in a good direction. There's a lot more work to be done. And I think we need to make sure that we're funding the basic research too, the research that allows the US to make the next-generation leapfrogs. We need to make sure there's not some leapfrog technology that we miss.
And usually our ecosystem is quite good at that, but we need to make sure we continue to nourish it.
Nathan Labenz: (3:01:53) Do you have any sort of taxonomy or shortlist of things that you are watching most closely that you think could shake the snow globe? One could be just a leapfrog in chip manufacturing technology.
Dean W. Ball: (3:02:09) Yes. It's not exactly that it's secret, but some of that information comes from analysis and things that I've seen that I don't think should be public. So I will politely decline to answer.
Nathan Labenz: (3:02:33) Okay. How about the frontier companies? I guess I'd be interested in your list of who the frontier companies are. I usually have a pretty short list that I would actually include in that.
Dean W. Ball: (3:02:49) These are the live players.
Nathan Labenz: (3:02:51) Yeah. And I guess I'm interested in how they present to the government. How does the administration, or the government more broadly, understand these different companies? How differentiated are they in the view of the government? It was striking that you hear these things like, oh, Anthropic has really good, deep relationships with the government. I don't know if that may be an out-of-date statement now; you can tell me if you have any thoughts on that. But they're often reportedly the ones with the longest-tenured people, the deepest relationships, and the kind of most mature presentation. Okay, I've heard that. OpenAI maybe has a reputation for saying what people want to hear at different times. Obviously, Elon was involved in the administration in a serious way for a while. And then they all seemingly got federal contracts, and it happened right at the same time that Grok was calling itself MechaHitler. And I was just like, it's very strange, right, that we have this sort of blanket announcement that we're going to do a deal with Claude and MechaHitler, and we're going to put them out there all in kind of the same press release. It just got me wondering: do people see these things as very similar entities, or is there a more nuanced understanding of who these companies are, what they represent, and what doing business with them might mean in the future?
Dean W. Ball: (3:04:34) I can't really speak for other people. I would say it's certainly the case that a company like Anthropic has quite good relationships and a quite good reputation inside, specifically, the intelligence community. And part of that is because Anthropic invested serious resources early on, because Dario personally is so NatSec-AI-pilled. They invested significant resources in getting their models stood up on the high side, which is to say in classified networks, because it's not easy; there's nontrivial technical work that has to be done to get this stuff on the high side. And in order to do that kind of work, you also have to have employees who have top secret clearances, technical staff able to do that. It's not like the other companies haven't done this; obviously, Google in particular has. But certainly, among the frontier AI companies, Anthropic invested early. As for how I personally view the frontier firms, I still think it's basically GDM, Anthropic, and OpenAI. I think that hasn't changed for three years.
Nathan Labenz: (3:06:00) Obviously, you're going to go do some other stuff, as you alluded to, that the administration might not do, and you'll try to be upstream or influence things from the outside. What would you say, in general, maybe listing off some ideas: what is the government going to fail to do that other people need to pick up? This could be state-level regulation, which you have an interest in. It could be things done purely by the private sector or the philanthropy sector. But how would you advise people who want to make an impact, want to make the future safer, brighter, better, and want to absorb your wisdom about the best way to complement what the government is going to do?
Dean W. Ball: (3:06:48) I will reframe it slightly as things that the government will not be well suited to do, and also, what's the division of labor? For example, I am not in the camp that we need to restrict states from passing all AI laws. When it comes to the regulation of AI development, I think it's pretty unambiguous that we can't have 50 different rules. But if a state wants to be totally retrograde and freeze certain aspects of its society in amber through really aggressive AI use restrictions, I won't support that; I'll argue against it if given the opportunity. But I think that's up to them, up to the people of that state. As for where in general I think governments are going to fall down: governments are not going to do a good job at defining what good looks like when it comes to things like AI safety, and whatever else. They're just not. There's an information asymmetry that makes it really difficult. I will tell you, having worked at the White House, I don't know tremendously more about what goes on inside the frontier labs than you do. And there might be ways in which you know more than me, because you probably have relationships with researchers that mine have withered a little bit in recent months. So I look at a company, for example, like the AI Underwriting Company, which a former Anthropic employee founded and which recently got money from Nat Friedman and Daniel Gross. That's a really cool company doing private-sector-led work. It's a for-profit insurance company (well, a managing general agent, but that's a technicality) that's trying to build AI standards. I think that's very cool and plausibly both a very good business and a very positive thing for the world. It's a total win-win.
It's probably a very good business, it's probably really good for adoption, and it's also probably good for AI safety. As a general matter, and I hate to say this because I was just in politics and policy, I hope that bright young people mostly stay out of the world of policy, because the world of policy is going to move you closer in the direction of zero-sum games, whereas markets are much more about positive-sum outcomes. So work in the market: there are many, many companies to be founded. My god, there is so much money. I'm sure there are people doing this and I just don't know about it, but the world of biosecurity could use some startups so desperately, some people thinking about that stuff really seriously. There's lots of stuff like that, and I think you could build some great businesses in biosecurity and do a lot of good for the world. So see if it's a company; there are principled ways you can develop to think about what's likely to be a company and what's likely to require government regulation. I think governments will struggle to articulate the spec for what good looks like, right, standards, specifically in the context of catastrophic risk mitigation. That will be hard. So I think we will probably need lots of folks in civil society and the private sector to help us do that. And as a general matter, on the adoption side of things: adoption in regulated industries is also about lots of things that ultimately implicate matters of AI safety that a lot of your listeners might be primarily interested in. This is sort of the thesis of the AI Underwriting Company, by the way: creating things that look like gold stars.
This is also the theme of the work that Fathom is doing, an organization that I used to be affiliated with. And more broadly, we're just so pre-paradigmatic in so many different areas. For example, I feel like agentic commerce in particular is right around the corner, and I hear so little about it from either a technocratic regulatory perspective or a more conceptual perspective about what to expect it to look like. There are so many sector-specific things where it's like, let's really talk about this. Let's stop saying AI is going to be good in healthcare, and let's start saying what that means more specifically and what it requires of us, what specific kinds of institutional adaptations are necessary to make AI work well in healthcare. I think we're in a maturing sector, and that means it's time for our thinking to get more specific and concrete. And, again, that's a big part of what the action plan was about too.
Nathan Labenz: (3:12:51) Yeah. The agentic future is certainly one of those things where the crystal ball gets real foggy, and there's precious little exploration. I recently did an episode on the AI Village; there are a few of these sort of gonzo experiments that try to shine a little light on it. But it does seem like we're going to put a huge number of AI agents into the economy in a very short period of time, and we really don't know at all what the dynamics of that are going to be. It's just kind of wild.
Dean W. Ball: (3:13:25) And it links up so well with the stablecoin legislation that the president signed a couple weeks ago. There's such a good linkage there. America is pretty darn good at financial services, so there's a pretty good chance that that stuff spreads throughout the world. We could end up creating the really transformative applications, ones that are used all over the world, really world-changing in a positive way, and that look a lot like American dominance in AI. It could be that that's one of the best areas. That's sort of my view of the situation. And by the way, this is true of the action plan: the action plan doesn't go enough into healthcare. It doesn't go enough into agriculture. Or veterans affairs, what about that? That's healthcare, one of the biggest single-payer healthcare systems in the developed world. We could totally use that to do interesting things, I would think. So there's so much stuff like that that's worth doing.
Nathan Labenz: (3:14:41) Well, thank you for spending all this time. It's been comprehensive, and I've really appreciated it. Anything you want to touch on that we didn't, or any final thoughts you want to leave people with?
Dean W. Ball: (3:14:56) No. The only thing I would say is that I am happy to announce that I will be resuming Hyperdimensional, my Substack, on a weekly cadence. I will be joining the Foundation for American Innovation as a senior fellow, and I expect to have other institutional affiliations and roles to announce in the coming weeks. There's so much to do, so much to catch up on, and I sometimes wonder whether Hyperdimensional will have to go to a twice-a-week cadence, because I have a backlog of things that I've been itching to say. So much to come soon.
Nathan Labenz: (3:15:44) Cool. Well, we will certainly be following, and we look forward to having you back to discuss many of them. For now, Dean W. Ball, fresh from the White House: thank you for your service, and thank you for being part of the Cognitive Revolution.
Dean W. Ball: (3:15:57) Thanks so much for having me. This was fun.
Nathan Labenz: (3:16:00) If you're finding value in the show, we'd appreciate it if you'd take a moment to share it with friends, post online, write a review on Apple Podcasts or Spotify, or just leave us a comment on YouTube. Of course, we always welcome your feedback, guest and topic suggestions, and sponsorship inquiries, either via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. The Cognitive Revolution is part of the Turpentine network, a network of podcasts where experts talk technology, business, economics, geopolitics, culture, and more, which is now a part of a16z. We're produced by AI Podcasting. If you're looking for podcast production help for everything from the moment you stop recording to the moment your audience starts listening, check them out and see my endorsement at aipodcast.ing. And finally, I encourage you to take a moment to check out our new and improved show notes, which were created automatically by Notion's AI Meeting Notes. AI Meeting Notes captures every detail and breaks down complex concepts so no idea gets lost. And because AI Meeting Notes lives right in Notion, everything you capture, whether that's meetings, podcasts, interviews, or conversations, lives exactly where you plan, build, and get things done. No switching, no slowdown. Check out Notion's AI Meeting Notes if you want perfect notes that write themselves. And head to the link in our show notes to try Notion's AI Meeting Notes free for 30 days.