Private Governance: Creating a Market in AI Regulation, with Dr. Gillian Hadfield & Andrew Freedman

Episode Description

Dr. Gillian Hadfield from Johns Hopkins University and Andrew Freedman from Fathom discuss their innovative proposal to govern AI through private regulatory markets, which has been introduced as California's SB 813. Their system would separate democratic goal-setting from technical rule-making by having government bodies articulate safety outcomes while competitive private certifiers develop and enforce detailed standards, with companies receiving liability protection for compliance. The conversation explores how this market-based approach could create a "race to the top" in AI safety standards while remaining agile enough to keep pace with rapid technological development. Key challenges discussed include preventing a race to the bottom among certifiers, liability law interactions, and identifying qualified organizations to serve as effective private regulators.

Transcript of the episode is here.

Sponsors:
Fin: Fin is the #1 AI Agent for customer service, trusted by over 5000 customer service leaders and top AI companies including Anthropic and Synthesia. Fin is the highest performing agent on the market and resolves even the most complex customer queries. Try Fin today with our 90-day money-back guarantee - if you’re not 100% satisfied, get up to $1 million back. Learn more at https://fin.ai/cognitive

Labelbox: Labelbox pairs automation, expert judgment, and reinforcement learning to deliver high-quality training data for cutting-edge AI. Put its data factory to work for you, visit https://labelbox.com

Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive

NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 42,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks—download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive


PRODUCED BY:
https://aipodcast.ing

CHAPTERS:
(00:00) About the Episode
(04:48) Introduction and Problem Overview
(07:48) Regulatory Markets Concept Origins
(17:14) Current Governance System Failures (Part 1)
(19:28) Sponsors: Fin | Labelbox
(22:42) Current Governance System Failures (Part 2)
(25:30) Private Governance Mechanism Explained (Part 1)
(35:06) Sponsors: Oracle Cloud Infrastructure | NetSuite by Oracle
(37:38) Private Governance Mechanism Explained (Part 2)
(44:17) Liability Protection Framework
(56:39) Race to Top Dynamics
(01:07:24) Red Teaming Implementation Challenges
(01:28:47) Insurance Alternative Approaches
(01:53:51) Moving Forward Conclusions
(01:55:11) Outro

SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathan...
Youtube: https://youtube.com/@Cognitive...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...


Full Transcript

Nathan Labenz: (0:00) Hello, and welcome back to The Cognitive Revolution. Today, we're kicking off a short series on creative AI governance proposals, and I'm speaking with Dr. Gillian Hadfield, Bloomberg Distinguished Professor of AI Alignment and Governance at Johns Hopkins University, and Andrew Freedman, cofounder and chief strategy officer at Fathom, about their proposal to govern AI via private regulatory markets. AI is, to put it mildly, a hard technology for society to effectively manage. The relentless march of capabilities advances, the radical uncertainty about how powerful AI systems will get and how soon, the feverish pace of adoption, and the increasingly intense international competition combine to create huge stakes, but still very little clarity on what should be done, with legitimate worries that even the most tech-savvy policymakers could easily get things wrong. And yet, while highly prescriptive government regulation of the sort that Europe is attempting with their AI Act doesn't seem to me likely to meet the moment, the fact that xAI can credibly claim frontier capabilities even while Grok 4 continues to self-identify as Hitler suggests that a laissez-faire free-for-all won't serve us well for all that much longer either. Is there any way to create a governance regime that's agile enough to keep up with AI developments, sophisticated enough to address the most important and extreme risks, and yet not so burdensome that I won't still be able to have my AI doctor? It's a hard problem, but Dr. Hadfield and Andrew have a very interesting proposal to harness market mechanisms and hopefully create a race to the top in AI safety standards. It's been introduced into California's legislative process as SB 813, and from what I hear, it does seem to be gaining traction in a number of red states as well. The core idea is to separate the process of democratic deliberation about the outcomes we want and want to avoid from the detailed rulemaking process meant to get us there. In concrete terms, a government body, perhaps the California attorney general or perhaps a newly created AI safety board, would articulate goals like "AI systems must not enable the development of bioweapons" or standards like "autonomous vehicles must be safer than human drivers," and then create a competitive ecosystem of private certifiers who develop the safety standards, engage the companies to make sure they're properly implemented, and then report back to the government and public on results. Companies could then choose to work with these approved certifiers and, in exchange for meeting their standards, would receive some level of liability protection when things still end up going wrong, which, given the unreliability of AI systems generally and the unsettled nature of AI liability law, is a serious incentive that would presumably convince many companies to opt in and participate in the system. As a lifelong libertarian, I really like the idea of trying to bring market dynamism to AI governance. And I appreciate that while this idea is new to the public now, Dr. Hadfield has been developing such concepts for decades, even working with Anthropic cofounder and policy lead Jack Clark on related ideas as early as 2019. Andrew, for his part, brings invaluable practical implementation experience to the table as well, having worked as Colorado's cannabis czar while the state was rolling out a new regulatory system for legal marijuana. Nevertheless, as you'll hear, I press them on several important concerns.
How do we avoid a race to the bottom where companies simply choose the most permissive certifier? How would the liability protections interact with existing tort law, and what exactly are people giving up in terms of their ability to sue? Do we have any organizations that could step up and do a good job in the role of private regulator? And who do we really have to trust to do a good job for such a system to work, not just in the beginning, but on an ongoing basis? In the end, there's no silver bullet. Any governance system that we might design does ultimately rely on some number of people doing a good job in key roles. But I do come away from this conversation optimistic that an arrangement of this sort, if it could put the good folks at FAR AI, METR, or other similarly tech-savvy organizations in a position of some real authority, could deliver much more responsive regulation than the government could muster on its own, while also making sure that society is not flying entirely blind into the fast-approaching AI future. Coming up soon, we'll have another episode with Professor Gabriel Weil, who has a very different proposal to address many of the same core concerns via liability law. So please stay tuned for that, and definitely reach out to let me know which of these ideas you find most promising, or if there are other proposals that you think would be better yet. With that, I hope you enjoy this exploration of a proposal to harness market dynamism to effectively govern AI technology, with Dr. Gillian Hadfield of Johns Hopkins and Andrew Freedman of Fathom.

Nathan Labenz: (4:48) Dr. Gillian Hadfield, Bloomberg Distinguished Professor of AI Alignment and Governance at Johns Hopkins University, and Andrew Freedman, cofounder and chief strategy officer at Fathom, welcome to The Cognitive Revolution.

Andrew Freedman: (5:00) Pleasure to be here. Thanks for having us.

Nathan Labenz: (5:02) I'm excited for the conversation. You guys are working on some very interesting stuff. I'm always looking out for creative solutions to the vexing problems of AI that we have in the governance space. Obviously, we're kind of still flying pretty much naked here through this, you know, rapidly cresting technology wave. And I think you guys have got a very interesting proposal. I understand that there's kind of a one-two punch maybe that we'll wanna keep in mind for this conversation. One being, like, the general sort of set of ideas. And then second, but also very important, like, this is being introduced now into the California state legislative process with an actual bill that will, at some point, you know, either get revised or get passed or, you know, hopefully, maybe even one day could come into law. So, maybe for starters, tell us what you're up to. Like, give us the kind of grand landscape of this private governance notion.

Andrew Freedman: (6:02) Yeah, I'll start there, because really so much of the heart of this idea comes from Professor Hadfield. I'll talk about how Fathom got interested in it. Fathom started just over a year ago really on the notion that AI was going to, for better or worse, break a lot of things, governance being one of them. Obviously, a lot of the way that we interact as a society, a lot of the way we interact as an economy. And then we as a society were gonna have to figure out how to put it back together. And, generally, tech policy has been left to tech to figure out, and this was so much of a broader societal issue. And so how do we start not just a think tank, but an organization that could help raise the ideas up from society that are gonna best fill the needs here and then help build them? That's really what Fathom has based its mission around. The very first thing we ran across when we went out and did tons of polling and qualitative work, meeting with leaders across industry segments and society, was everybody thought that governance was needed here. Almost everyone agreed that the current ways of thinking about kind of heavy-handed, top-down governance were probably not gonna work for AI, but also simply leaving everything to society, or leaving everything to the labs, was also not going to solve for society. So that began a journey for us of, okay, if those aren't the solutions, where are the solutions? And it turns out that the professor has been thinking about this for a very long time and has some amazing thoughts on it. And there are some other thought leaders, Dean Ball being one of them as well, who really thought that there was a third way to start thinking about this that was not generally brought up in public. So I'll leave that there on how Fathom became interested in it and toss it over to the professor.

Gillian Hadfield: (7:48) Okay. Yeah, so I've been thinking for decades in my career about how well our legal and regulatory systems perform. I worked on access to justice for a long time, and then started thinking about the way our legal and regulatory systems were responding to technology and globalization somewhere in the mid-2000s, sort of just recognizing that the systems we developed for making law and regulation, really starting in the nineteenth century, were no longer fit for purpose. They didn't keep up with the complexity and the speed and the multi-jurisdictional nature of the world we live in. AI just ramps that up several levels. We just had this real mismatch between the way we make law and regulation and the way technology now moves, the speed with which it moves, the complexity with which it moves. I started thinking about, okay, so how do we need to adapt our approaches to building that regulatory infrastructure for a much faster moving, complex, and now AI-based world? That's when I started thinking about how do we get markets involved more in figuring out our regulatory problem. As a starting point, and we can talk more about this, it's really important to recognize that regulation is actually the thing we build markets on. It's actually not something that is sort of dragging down markets. I started writing about this quite some time ago as well. Our markets are built on good legal and regulatory infrastructure: contract, property, fraud, antitrust, all that good stuff, frankly, that allows people to invest and participate in markets with confidence. It was like, okay, so if we have stuff moving at the speed of very rapidly adapting markets that are producing the technology, how do we get more of that market energy and investment into solving the problem of what's the best way to build that regulatory infrastructure for technology? That led to this concept of regulatory markets, which is the idea that we still need our governments involved in setting what is the acceptable risk level for society, making judgments about what we will and won't allow. But then the technical question of, okay, how do we translate that into what companies, labs, etcetera, actually need to do? Like, from a technical standpoint, we need to get more market activity into that phase. So we can talk more in detail about how this all works, but that's really where it came from. And then Jack Clark and I wrote a paper in 2019, when I was on contract and he was policy director at OpenAI, you know, proposing that this was a model for AI safety. And so we've just been building on that since then.

Nathan Labenz: (10:49) That's interesting. I didn't know that tidbit, that it goes back to 2019 and that you were working with Jack at that time. I don't know if it was Luigi Zingales who said this, I forget where the source of the quote was, but I always remember this quote: markets are not free or unfree. Markets have rules, and some rules work better than others.

Gillian Hadfield: (11:07) Perfect. Yeah, constant refrain. No such thing as a free market. And this is what I've been doing my entire career. My PhD is in economics; I did it jointly with a law degree. The focus there is, oh, there's all this institutional structure. Economists did assume markets just existed. This was, you know, honestly, after the fall of the Soviet Union and the shift from socialist economies to market-based economies, and economists just kind of said, oh, just get rid of all that government control over industry and markets will flourish, and they did not, because if you don't have good legal systems for enforcing contract rights and property rights and intellectual property rights, and good regulation, markets don't thrive. So fabulous quote, it's exactly the right one. There's no such thing as a free market. There's healthy markets that are well structured with good legal underpinnings and the kind of regulation that makes everybody willing to invest and participate. Anyway, that's what leads to a vibrant market. We kind of know this around the world because we know that the countries that struggle on the development side don't have good rule of law. They don't have good legal underpinnings, and nobody wants to invest there. So I think that's a really, really important observation.

Nathan Labenz: (12:35) So before we get into the details of the private governance structure and the way that's instantiated in SB 813, and possibly some variations on that, let's take a minute and just cover what you see as the fundamental problem with either the sort of top-down approach, and you can kind of characterize top-down as you will. I mean, not too long ago, somebody like Sam Altman was saying maybe we need licensing for frontier models. Obviously, he's backed off of that. Something like SB 1047 was sort of, in my view, like, fairly light touch, but still had some elements of top-down in that there were, you know, statutory thresholds, on which, you know, I think the critics have been at least partially vindicated, in that those thresholds haven't aged super well, and it hasn't been a super long time. So I'm interested in the problems with that, but also interested in kind of the problems on the other end, of, like, why do we need new rules here? I mean, one might say, you know, we've got general rules for commerce, is this really that different? Why should we think of this as being different than any other new product that somebody might bring to market? And, by the way, I don't really buy that. Of course, like, I spend all my time thinking about this, but I'd like to hear your kind of dismantling of that naive notion.

Gillian Hadfield: (13:53) So is that directed to me? From the point of view of thinking about regulation as a thing we're trying to accomplish for healthy societies, prosperous societies, fair societies, you know, top-down governance just becomes more and more untenable if you're setting the detailed rules. It becomes more untenable the more complex technologies are, the more quickly they develop, the more jurisdictions they're in, because you're setting rules that you've got to follow everywhere. So I'm an economist, and I'm a big fan of markets, but not for any ideological reasons, but because they're good information processing and discovering engines. Right? They're down at the ground level. They're responding to what's happening, you know, in the trenches, and you need that kind of information to sort of figure out what's the right way to regulate. And the right way to regulate, right, is to capture the benefits of promoting innovation and getting efficient markets and so on, but at the same time establishing those ground rules that make everybody feel confident and willing to invest and participate. So the problem with the top-down approach is that it's just very hard for governments to have access to that kind of information. You know, we do have ways of getting that, and we developed those over the twentieth century, like with chemists and people who have expertise in biology and forests and clean water and so on. But when you need to move at the speed with which the technology is advancing, you have this real mismatch. Like, you need this information from the ground level, but then you have a process in our legislatures and courts and agencies that just operates on a different time scale, and it's just very hard to keep up. And, of course, one of the things we've observed in that process over the last several decades is that it has, in many ways, gotten more ponderous. Our laws are a lot longer than they used to be. Opinions out of courts are a lot longer than they used to be. It takes a lot longer to accomplish stuff, and that means you don't revisit it. There's just so much sand in the gears, and we don't keep up very well. I think that's the issue with the top-down. So what you want to try and do is find a way to get all that intelligence from the ground level into your regulatory system without abandoning the ultimate democratic control. Because we as a collective need to be deciding what's okay, what's not okay. Are we taking this risk with autonomous vehicles or companion AI or various algorithmic decision making? We need to be making those decisions, but can we separate out making those risk judgments from the technical process? What do you need to know about how this works? What data should you train on? What, you know, red team tests should you do? All that kind of detail. Can we get that? So I think that's the issue with trying to do that all top-down from government.

Andrew Freedman: (17:16) The only thing I'd love to dovetail onto that is that my first job in emerging regulatory systems was as cannabis czar in Colorado for the rollout of its regulatory system. And that, in so many ways, is a much simpler policy than AI is ever gonna be, and we already thought that touched, like, every area of Colorado law at that time. And the strength of the system, which, you know, I think modeled success throughout the nation, was that whenever we set up something more iterative, we were able to say, like, oh, here's a problem. We didn't see edibles coming up in this way, and now we can, like, change edible rules and get there and understand. I think the idea that at any one time we can predict where the AI system is going and create good guardrails that will make sense, you know, even six months later is wrong. And so the challenges are gonna be huge. The opportunities are gonna be huge, but our ability to predict the future will be very low. And so part of what attracts me so much to this system is it is a way to put independent subject matter experts up front and allow them to continue to make decisions over time about what good looks like. The second part I would put in that is I do think that if industry understands what good looks like according to independent subject matter experts, and understands that that is gonna be in some way universally applied to them, then that's the brass ring they're gonna start reaching for. I do think that when you create top-down measures, that tends to become a floor that is then thrown to a compliance department. And the compliance department is, here's all the boxes we gotta check in order to say we technically meet this, but we're not gonna get in the way of the business unit, which is out doing its own thing. And so, again, that structure is, I think, gonna do more to make sure that lawyers have a job in the future of AI, and both Gillian and I are lawyers, so no shade thrown there, than it is to ensure that what is happening is actually in the best interest of the public.

Nathan Labenz: (19:24) Hey. We'll continue our interview in a moment after a word from our sponsors.

Nathan Labenz: (19:28) If your customer service team is struggling with support tickets piling up, Fin can help with that. Fin is the number one AI agent for customer service. With the ability to handle complex multi-step queries like returns, exchanges, and disputes, Fin delivers high quality personalized answers just like your best human agent and achieves a market-leading 65% average resolution rate. More than 5,000 customer service leaders and top AI companies, including Anthropic and Synthesia, trust Fin. And in head-to-head bake-offs with competitors, Fin wins every time. At my startup, Waymark, we pride ourselves on super high quality customer service. It's always been a key part of our growth strategy. And still, by being there with immediate answers 24/7, including during our off hours and holidays, Fin has helped us improve our customer experience. Now with the Fin AI engine, a continuously improving system that allows you to analyze, train, test, and deploy with ease, there are more and more scenarios that Fin can support at a high level. For Waymark, as we expand internationally into Europe and Latin America, its ability to speak just about every major language is a huge value driver. Fin works with any help desk with no migration needed, which means you don't have to overhaul your current system to get the best AI agent for customer service. And with the latest workflow features, there's a ton of opportunity to automate not just the chat, but the required follow-up actions directly in your business systems. Try Fin today with our 90-day money-back guarantee. If you're not 100% satisfied with Fin, you can get up to $1,000,000 back. If you're ready to transform your customer experience, scale your support, and give your customer service team time to focus on higher level work, find out how at fin.ai/cognitive.

Nathan Labenz: (21:20) AI researchers and builders who are pushing the frontier know that what's powering today's most advanced models is the highest quality training data. Whether it's for agentic tasks, complex coding and reasoning, or multimodal use cases for audio and video, the data behind the most advanced models is created with a hybrid of software automation, expert human judgment, and reinforcement learning, all working together to shape intelligent systems. And that's exactly where Labelbox comes in. As their CEO Manu Sharma told me on a recent episode.

Manu Sharma: (21:51) Labelbox is essentially a data factory. We are fully verticalized. We have a very vast network of domain experts, and we build tools and technology to then produce these data sets.

Nathan Labenz: (22:04) By combining powerful software with operational excellence and experts ranging from STEM PhDs to software engineers to language experts, Labelbox has established itself as a critical source of frontier data for the world's top AI labs and a partner of choice for companies seeking to maximize the performance of their task specific models. As we move closer to superintelligence, the need for human oversight, detailed evaluations, and exception handling is only growing. So visit labelbox.com to learn how their data factory can be put to work for you. And listen to my full interview with Labelbox CEO Manu Sharma for more insight into why and how companies of all sorts are investing in Frontier Training Data.

Nathan Labenz: (22:47) So let's describe the mechanism. I'll let you do it. There's kind of three tiers, but, yeah, floor is yours. Tell us what you're...

Andrew Freedman: (22:56) Let me take a first hack at it and then have Gillian clean up everything. So the most, what I would say, full instantiation of this idea is in Senate Bill 813, but this is, I think, broadly gonna be a conversation we wanna have many times over many different ways. And I also think SB 813 requires quite a bit of revision, so I don't wanna get too in the weeds there. But there's two ideas there. One is how do you set up something like a regulatory markets mechanism to be able to identify third-party auditors, private-side regulators, verifiers, certifiers who really know what good looks like, what best practices are at all times, and have the ability to prove to both companies and the government that they can actually verify claims and that companies really are meeting these best practices. They would scope where their expertise lies in some way, saying, you know, we are maybe AI as it pertains to chatbots or automotive vehicles, and these are the sort of safety parameters we're looking at. And an AI developer, application, or deployer would come in and say, you know, we're worried about the risk that we are gonna be taking on by utilizing it the way we're utilizing it, so we wanna be certified that we're meeting best practices, and they enter into a process to be certified by these third-party groups that they're meeting best practices. And if they can show that, Senate Bill 813 would say that should be proof on the back end, if something bad does happen, that you've met a standard of care, that you've met some form of duty to the public, and it should count as, liability shield is probably too strong of a word, but evidence in court that you did the best you should be doing in meeting that duty of care. Meanwhile, the third-party auditors, certifiers, verifiers at that point are constantly going back to the California government and saying, here's proof that when we certify people, they're doing better in the world. Our certified AI cars are getting in fewer crashes. Our chatbots are causing fewer issues amongst teenagers. We are beating not only maybe the status quo of how it would be without us, but we are, in fact, performing higher than other certifiers in this area. And therefore, there's some race to the top to be able to show we should be able to keep our ability to certify, against somebody who cannot prove that they're doing as good of a job in this field.

Gillian Hadfield: (25:35) So Andrew's given you the version of, you know, maybe a particular implementation of it, and I think there's a lot of questions about how you get to this. It's a real transformation in the way we approach regulation. And when I first started working on this, as I said, it was before I was focusing on AI, so in a book I released in 2017, I was just talking about, yeah, we're going to need to change the way we approach regulation and add this kind of tool to our toolkit, because most technology is moving very fast, we have very decentralized supply chains, and so on. I'm just going to talk about the more abstract version of this and compare it to that idea of the, you know, top-down version. So if you think about just the sketch, you know, the caricature, the cartoon of regulation: government sets rules. Companies that are regulated by government have to, you know, follow those rules, and then government monitors to see if they're in violation and, you know, fines them or takes them out of business or something like that. So it gets called command and control or prescriptive regulation. And people have been working on designing regulation to be more agile and adaptive for quite some time. And one of the developments there has been the idea of what's called performance-based regulation, which said, okay, let's just take the example of pollution control. The command and control version of that is the government says, here are the particular scrubbers you have to install into your smokestacks. Right? It's a very 1970s version of a regulatory problem. Here's the technology you have to adopt in order to achieve what we think, as the government, is the acceptable level of pollution. A performance-based approach to that says, okay, government's going to say, here's the acceptable level of pollution coming out of the top of the smokestack. You, factory, figure out what's the best technology to do that. Different companies, different factories could use different technologies. Companies could arise that would help develop that technology: here's a more effective, more cost-effective one. To be in compliance with the government, you had to reach those output goals. The way a regulatory market structure works is, okay, let's take that idea of government setting the outcomes. Here's the acceptable level of accidents for autonomous vehicles. We want to see that any uplift in capacity to develop bioweapons falls below a threshold level. It doesn't have to be a threshold that's set in numerical terms. It could be qualitative, judged by experts. Our tort standards are actually outcome based: we want to see companies take reasonable precautions to prevent harm. Instead of just saying to the regulated company, you figure it out, you then say, actually, what we want is to develop a whole sector of independent firms that are engaged directly in the project of figuring out the best way to achieve those outcomes. So you have licensing that happens, or oversight that happens. Governments will license what I'm going to call regulatory services providers; in different contexts, in different pieces of legislation, there are other ways of thinking about how we characterize them, but I'm just going to call them regulatory services providers. They are licensed on the basis of demonstrating that their approach, their technology, their rules achieve that outcome.
And then the companies that you're trying to regulate, I call them the target companies, they select a regulator from those in that market, right, from those approved regulators. I mean, in the version that we were originally proposing, in, again, the theoretical framework, that's mandated. Those companies have to select regulators. So in the pollution context, you can't come up with your own approach, but you can select a company that's been approved for this. And a key idea there, this is the market idea, is that we actually need a vibrant market where we're getting investment, we're attracting financial capital, human capital, to the project of what's the best way to achieve this outcome. Is it red teaming tests? Is it review of the data? Is it embedded officials observing the process? We actually don't know the best way to regulate to achieve our goals with respect to AI. So I think the basic structure is: government sets outcomes, and you have an independent sector of companies that are specializing, and they could be nonprofit companies, by the way, it doesn't have to be for-profit, companies that are specializing in developing the technology of achieving that goal. Then the third component is, of course, your target companies that you're trying to regulate. Andrew was appealing to the fact that actually we're pretty familiar with lots of private actors playing a role in our very complex regulatory systems today, certifiers, auditors. I mean, lawyers play a role in that sense. They have professional obligations to keep their eye on what's happening inside the firm to make sure that it's law abiding. And we have accountants who are doing oversight roles like that. Part of getting to that ideal version I just described is to say, okay, how do we start to move our existing markets more in the direction of providing greater oversight and providing more of the substantive content of what it is the lab actually has to do? That's what they want to know: what do we have to do to limit our liability or to be in compliance with government requirements?

Nathan Labenz: (31:49) So how similar is this to other things that exist today? I'm not aware of any major sectors that have such a market today. I'm also not quite sure how core to the idea the liability protection is, or if that is sort of, you know, one of many carrots or one of many deals that could be made. And then I'm thinking, like, how similar is this to, for example, things like the auto industry, where, you know, obviously, there's a lot of regulation there. There's a lot of standards. Do we in fact have, even if it's not necessarily through this sort of market structure, a similar deal already in place with car companies, where, like, as long as they're hitting certain standards, even if they were maybe just prescribed by an agency or whatever, they do in fact get liability protection from that? So, yeah, several compare and contrast points.

Andrew Freedman: (32:42) The best I can say on it is this smells a lot like other things, but it is novel in some ways. And the novelty, I think, is important in a couple of ways. But the one I like the most to compare it to is Underwriters Laboratories, UL. If you pick up, you know, any consumer electric good in your house, it will have a little stamp that says UL. And that company was started during a world's fair in, I think, like, the late nineteenth century, because they didn't want, you know, their entire fair to go up in flames, and so they brought in an independent subject matter expert group to be able to inspect all of these different shows that were gonna happen at all these places and make sure that there wasn't a fire liability there. And that group kinda took off, right, and found their way into a number of different arenas to be able to say, like, we have a really valuable thing we can add for you. Right? You don't have to trust every consumer product that comes your way; actually, maybe you should just require those consumer products to go get this UL stamp. And about 100 years later, UL started to get written, you know, codified straight into law. I do think that there is a need to make sure that public good is more directly instantiated into these private governance worlds. Meaning, like, I don't believe that if we let it go, we will find that there's just gonna be enough verifiers, auditors, and ecosystem out there that all have enough of a true north pointing to what is societal good to be able to solve this problem for us. Meaning that leaving this to the private side to solve, without any sort of government accountability, will probably do more to create things that cover the butts of the labs and less to make sure that what's happening is actually doing the most to protect public good. So I do think that, you know, in particular, Gillian's race-to-the-top mechanism that she's putting in, which really requires some amount of accountability and sets the goals of these companies via public legislation, is a vital aspect to it. I also believe we don't have 100 years to wait for these things to kind of organically grow up inside these systems and find their ways to it. We have to take the lessons from a UL and say, how do we supercharge that? How do we make sure it's accountable? How do we make sure it doesn't get captured by industry? And how do we make sure that its true north is pointing in the right direction?

Nathan Labenz: (35:07) Hey. We'll continue our interview in a moment after a word from our sponsors.

Nathan Labenz: (35:11) In business, they say you can have better, cheaper, or faster, but you only get to pick two. But what if you could have all three at the same time? That's exactly what Cohere, Thomson Reuters, and Specialized Bikes have since they upgraded to the next generation of the cloud, Oracle Cloud Infrastructure. OCI is the blazing fast platform for your infrastructure, database, application development, and AI needs, where you can run any workload in a high availability, consistently high performance environment, and spend less than you would with other clouds. How is it faster? OCI's block storage gives you more operations per second. Cheaper? OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking. And better? In test after test, OCI customers report lower latency and higher bandwidth versus other clouds. This is the cloud built for AI and all of your biggest workloads. Right now, with zero commitment, try OCI for free. Head to oracle.com/cognitive. That's oracle.com/cognitive. It is an interesting time for business. Tariff and trade policies are dynamic, supply chains are squeezed, and cash flow is tighter than ever. If your business can't adapt in real time, you are in a world of hurt. You need total visibility, from global shipments to tariff impacts to real time cash flow, and that's NetSuite by Oracle, your AI-powered business management suite trusted by over 42,000 businesses. NetSuite is the number one cloud ERP for many reasons. It brings accounting, financial management, inventory, and HR all together into one suite. That gives you one source of truth, giving you the visibility and control you need to make quick decisions. And with real time forecasting, you're peering into the future with actionable data. Plus, with AI embedded throughout, you can automate a lot of those everyday tasks, letting your teams stay strategic. NetSuite helps you know what's stuck, what it's costing you, and how to pivot fast. Because in the AI era, there is nothing more important than speed of execution. It's one system, giving you full control and the ability to tame the chaos.

Nathan Labenz: (37:24) That is NetSuite by Oracle. If your revenues are at least in the seven figures, download the free ebook, Navigating Global Trade: Three Insights for Leaders, at netsuite.com/cognitive. That's netsuite.com/cognitive.

Gillian Hadfield: (37:45) Yeah, I think it's exactly right to say this is novel, so you can't point to an existing, here's-the-full-scale-model-already-implemented-in-this-industry example. This is a proposal for something we don't have yet that I think we need. So it's about being innovative in our legal and regulatory technology in the way that we are innovative in the underlying technology. But there's lots and lots of things that kind of come up to the doorstep. So Andrew's emphasized standards, like, you know, entities that perform that function: we're gonna create standards and requirements, very specific technical requirements, and then those are gonna get played out into the market. Right? That's a market in the sense that Underwriters Laboratories and other entities, we have lots of standard-setting organizations, ISO and so on, those are nonprofit companies that are in the business of creating standards, and sometimes very, very technical standards. And then they can be, oh, they're adopted because they have a good impact on the market share for that company. Or they get implemented and picked up by government, which says, okay, if you wanna be in compliance with the California regulations on building farm equipment, you have to have followed the standards from this organization. One of the examples I like to give of something that gets kind of close, and sometimes it's helpful to think of as a model, is actually the regulation of medical devices and quality production in medical devices, where governments have said, and it's actually a consortium of five countries, Canada, the US, and, well, I'm not going to remember the other countries, but it's a consortium of about five countries, that each of the countries is going to choose their own standard for quality control in medical device production. The US has an FDA standard. Canada uses the ISO standard, the ISO being a nonprofit standard-setting body. But then collectively, they have produced an approved list of authorized certifiers of compliance with those standards. Then they have a rule that says, well, it's a single audit: if you've been certified by one of these approved certifiers, then you can sell your medical devices in all the countries in the consortium. So that's an example of, say, we've got pieces of this. I always like to emphasize too that this is part of the development of regulation. Often the history of regulation in a new area starts off in the private sector, so securities regulation starts off with private organizations, stock exchanges, right, saying here's our rules: if you want to participate on our stock exchange, you have to engage in this kind of disclosure. The disclosure helps bring more people to the exchange. And then that gets picked up by government. And it sort of develops on this private basis and gets picked up and integrated into government. Today, regulation of financial transactions has important roles for these private entities. For example, FINRA, the Financial Industry Regulatory Authority I think it is, is a private membership organization, but then it's overseen by the SEC. The SEC actually approves the rules that FINRA uses to oversee its members. So there's lots of examples of this public-private integration in our current regulatory regime.
They're much more complex than that kind of cartoon I was giving you earlier of, like, the command and control government sets rules and companies have to comply. It's really a very complex system and overlap. This kind of takes it to the next level and says, let's really lean into that outcome-based role for government and try to get more market activity around what's the best way to achieve these regulatory objectives. I think your question, Nathan, also went to liability and carrots and so on. And so I think it's important to keep these things kind of distinct: like, what's the model? What's the regulatory structure we wanna get to? And then to think about how you get there. So, like, in the original proposals about this, as I mentioned, it was like, well, we just mandate it. Just like we mandate compliance with securities laws or health and safety rules or automobile safety requirements, we can just mandate you must purchase the services of an approved regulator. But of course, it takes some time to get there. We need to actually evolve this market. We need to build this market. We don't have lots of players in the market. It would be very hard to turn around tomorrow and say, you must buy the services of an approved regulator in some of these AI domains, because you just don't have the market there. So the vision behind something like SB 813 is to say, can we move ourselves towards that? So that's where you start to think, oh, well, just like there's a market-based incentive to say, get the UL certification mark on your product, can we create an incentive that says, well, you don't have to come and participate in this structure, you don't have to purchase these services, we're going to create that market, we're going to create a mechanism for government to say these are approved providers of this service, but we're going to create an incentive, because if you do that, you will have the capacity to demonstrate that you've met your compliance requirement. Like, if you get sued, you'll be able to say, oh, but I was following the program. I was in compliance with the requirements of this approved regulator. The state had said this is an approved oversight body, verification body, certifier. So that could just be an argument in your tort case, or it could have a formal role in your tort case. You've actually got a legal benefit that says you're entitled now to a presumption that you were in compliance with your tort liability duty, because the government started off by saying, oh, we're only gonna approve those entities that are actually able to demonstrate that if you did what they required, you're in compliance with your tort duty.

Andrew Freedman: (44:26) I will say

Nathan Labenz: (44:27) Yeah. Go

Andrew Freedman: (44:27) ahead. SB 813 and some of these, it's, like, almost two complex ideas put into one that do have interesting ways they play off each other. Some things I like about tort being the backdrop for why you would want to enter this: one, it does force the market to decide where the risk is. Right? And some players get to come in and say, you know, let's say we're certifying for something that the leading labs are just like, that is not a thing we think we'll ever get sued on, and it's not a problem we think will ever come up, then that part of the market holds no value. Right? And so there's a little bit of where do you actually think the harm is gonna come from that forces everybody to get a little real on that. Right? And so instead of there being 10,000 cases and they're all kind of edge cases, it forces people to start saying, like, where's the brunt of where we really need to focus in order to work on that? So I like that part of it. The second thing that's a little bit elegant on that is tort as a backdrop has this kind of funky feature of being national even when it's not federal. So the common law just applies everywhere in the United States. And for those, like, kinda unfamiliar with tort law, it was brought over from England, and it was this idea that it's kind of what makes us all whole in the background of everything going on in society. And it differs from state to state, but there's just a lot of commonality across states. And so meeting your duty of care in one state can be proof that you're meeting your duty of care in another state. And so at a time where I think there's a lot of fear that proposals are going to be creating a patchwork between the states, these two ideas combined, you could imagine a world where it creates national, maybe even international, private-side regulators who provide a very real good across state lines and, again, even internationally, and there's a carrot there that doesn't complicate compliance. It actually starts to really centralize and focus where we should be worried and how we can meet compliance goals.

Nathan Labenz: (46:37) So can you calibrate me on that? I mean, I guess this could obviously be set at various levels. Right? And I understand that there's sort of a distinction again between, like, the sort of more diffuse academic layer of ideas and the, you know, specific statutory proposal that's on the table in California. How much would the public be giving up in terms of its ability to sue if SB 813 were to go into effect? Maybe calibrate me on, like, how does that compare to pharma or auto? Yeah. Like, I generally have the sense that, like, if I get into a car accident, I can't sue the carmaker, but maybe in some cases I can. I don't know exactly. You know? There's probably some, you know, there's always a carve-out on something. Right? And similarly, like, if I have an adverse reaction to a drug, I probably can't sue the drugmaker unless maybe they sent me a tainted pill or something. Right? So I don't quite understand what the limits are even in the world that I have today with products I'm very familiar with. I'm not sure how that translates to AI, where, by the way, just to complicate things a little further, you know, we've got very familiar things like an AI might drive my car, or it might make me a medical diagnosis and recommend a treatment, or it might become my romantic partner. Then, you know, I have no idea how to even think about, you know, how does one taxonomize that? But we can maybe leave that for part two. First question is just, like, what is the trade that SB 813 is proposing? How does that compare? And do you think it is the right balance, or how would you maybe revise it, if at all?

Andrew Freedman: (48:09) Yeah. Great questions. First of all, not only do you not know that, but literally no one knows it. I think we're kind of skating to where we believe the puck is going here a little bit. So far, the world of technology really has been fairly shielded from tort law. It comes up, but in limited situations. And I believe, and I think a lot of people would say, and I think even tech companies are starting to understand, that the agentic nature of AI, AI being more than an algorithm that's gonna say, you know, as long as you put in this information, this is the information that comes out on the other side, but in fact being an actor in the world, is going to bring it into a world of liability that I don't think tech has been in before. And the nearest proof we have of that is the Character AI case, which just survived a motion to dismiss and, you know, is the chatbot example of being open to tort law. My guess is, and I think it's pretty reasonable to say, that the hundreds of state judges across the country are gonna find various levels of liability for developers, for deployers, for applications, and so the kind of full stack of, like, people involved in AI will have to start worrying about liability in a way that they didn't. And consumers, on the other hand, will have options for suing in cases where something bad happens to them, where they can say, you know, it really was, you know, all the way back at the model level. Where 813 sits currently is that this is a rebuttable presumption, to get kinda nerdy about it. This counts as evidence, but if you can come in with any sort of other evidence that they didn't meet it... so, okay, if you really want me to get nerdy, here's the nerdy part. One of the elements of tort law is always gonna be, did you meet a duty of care? And there's lots of different ways of cutting up duty of care. Sometimes there's something called strict liability, where, you know, if it causes a harm, it's your fault, versus was there negligence? Was there gross negligence? Did you clearly do something intentional? There's a lot of different theories of duty of care that come in. And then the question is, wherever you fall, did you reach that? None of that has really played out in AI at this point. And so ours is a fairly light touch as it sits right now, and says one of the ways you can think about it is: whoever wanted to go get the certification, were they acting negligently? And our thing would say this test should stand as a good amount of proof that you were not acting negligently. If you can come in and debate it with proof that they were acting negligently, that can counteract that at trial. I would say we're very open on where to move that needle based on stakeholder input. I think our initial goal is what's enough to kind of bootstrap this regulatory market system, that's gonna start getting people involved and coming to the table on it. I do think tort law offers a lot of what I call potential energy, meaning there's a lot of places where harms could, you know, make people, make companies do the right thing. So far, it has not come out as kinetic energy. Like, so far, that all remains theoretical, 10, 15 years down the road.
I do think SB 813 does a lot to kind of say, like, you should think about it now, and then you should go reach for best practices now to be there.

Gillian Hadfield: (51:42) Let me sort of be the law professor here and get, you know, much more up in the abstract thinking about tort law. Because my answer to my students when they would say, well, can you sue? I say, well, you can always sue, especially in tort. You can always sue. That's actually part of the tort system as part of our common law system, the fact that it is entirely court based, judge based; it only evolves up out of the cases that people have brought. There's lots of things that can happen once you get there, but you can always sue. You can always get in there and claim: I was harmed, the defendant is the one that caused my harm, and, let's just stick with standard negligence, I was harmed because they didn't take reasonable steps to prevent this harm to me. It's not that complicated. It's much more nuanced in case law and so on. But that's also part of what people think of as, like, this is the way we evolved our law through the nineteenth and twentieth centuries. Right? You evolved the law because the courts were there and people could file their suits and so on. The part that you were asking about, Nathan, about the relationship with, okay, now we have some regulation. We have automobile regulation. We have pharma regulation. What's the impact on tort there? It varies from place to place, but you don't generally get a barring of the potential to bring, and potentially survive and prevail in, a tort claim. If you're selling an FDA-approved drug, you can still file your claim that says, but the company did something that doesn't meet the state standard for what's required. You can definitely sue the manufacturer of the automobile, right, even though they're in compliance with whatever regulatory requirements. What courts do in those cases is take all that into account. And they say, well, you know, we think the reasonable steps to take were getting FDA approval, or the reasonable steps to take were complying with federal law. So I think all of that is still available. There's no sense in which you're closing the door to the capacity of courts to participate in structuring this. We do have a few cases, certainly there are cases, maybe lots of them, I know of a couple, where you have government that comes in and says, no, you cannot file a lawsuit here. So, like, I'm pretty sure with vaccines, right? We have, like, you cannot file a tort lawsuit, but we also have a compensation fund for injuries from vaccines. I did work long ago, back in the early days of thinking about how our legal systems were working, on the September eleventh Victims Compensation Fund, which was created by Congress for those who were killed or injured in the September eleventh attacks, and that came with, you can't sue the airlines, you can't sue the Port Authority running the World Trade Center, and so on. Actually, the project I was working on was how do people feel about that, the fact that they couldn't have access to courts and they could only go through this compensation mechanism. I don't think we're thinking about anything kind of like that here. It would change the way the tort case works, but it's really important to emphasize that that's because it's like two alternative ways of making sure people comply with what they're supposed to do in tort, which is take reasonable steps to prevent harming others.
And it's like, okay, you could litigate that, and in lots of different courts, lots of different cases have different ways that courts and juries end up supplying the content to that. It happens at the end of a long process, an expensive process, and by the way a process that does not give everybody access and doesn't work perfectly in any sense at all. Or you could try and kind of pull that back to an earlier stage and say, oh guess what, we're going to try and establish up front what it means to comply with that duty, and we're going to give you an oversight body, an independent approved oversight body that will come in and look. Okay, we're not going to wait for accidents to happen. We're not going to wait for people to get harmed and then long litigation to go through. We're going to move that process of deciding what you had to do in order to do the right thing closer to, you know, when we're releasing and observing our products, and not take the tort law approach alone of wait till something happens and then go through litigation for those who could afford to get into a litigation over it.

Andrew Freedman: (56:14) Can I just say 1 other thing that I think does get lost in this? Because I do think you end up thinking about, like, the sympathetic person that was harmed and, like, well, what have you given them as a remedy? But the overall goal should be fewer people harmed. And so I would much rather have a system overall that says, can you prove that you're harming fewer people? Then we should reward that behavior. Right? If you can, there should be some reward on the front end. I'd much rather there be fewer cases simply because there's fewer bad things happening. But, yeah, that overall goal, I think, is hard to keep in focus. That should be the true north.

Nathan Labenz: (56:48) Yeah. Certainly for some subcategories, that seems relatively clean. Like, I've seen some of these graphs put out by, like, Waymo and Swiss Re, where they're just like, here's the level of accidents and injuries with human drivers, and here it is with Waymo. And it's like, okay. Let's all move to Waymo. I think that seems pretty clear. I guess I wonder how, you know, to sort of red team the bill, which has become a meme in this space.

Nathan Labenz: (57:15) Yeah. How do we create a race to the top? And how do we avoid all sorts of shenanigans when it comes to, and sometimes these are just legitimately very hard questions, but, you know, what gets categorized as what? You know, you said a second ago, nobody knows, in response to some of these fine grained liability questions. A joke that I've recently made is, what is an AI agent? Nobody knows that either. Right? So it could be anything from, like, a workflow that exists in Zapier to, you know, something that is calling senior citizens on the phone and not, you know, explicitly instructed not to identify itself as AI, which, by the way, is something I have done on existing commercial platforms, not actually calling seniors, but, you know, demoing that that can be done on existing commercial platforms. So all of that, you know, right now is kind of getting swept up into AI agent. And I guess I wonder, with so much in flux and so little clarity on even, like, what counts as what, and the space sort of pre paradigmatic on taxonomizing itself, how do we create the right incentives to actually have a race to the top? And how do we avoid situations where somebody's like, well, I'm the AI agent regulator, and then they're sort of, you know, lumping a lot of things together or doing some sort of weird bundled trade? I guess another data point on this: I briefly was in the financial services industry in the run up to the mortgage meltdown. And I just saw very, like, up close and personal how the credit rating agencies had just been totally captured and, you know, were basically worthless at that point. So how do we avoid the sort of credit rating agency problem? That was maybe similar in a way too. Right? I mean, there were all these exotic products at the time. And now, if there's 1 thing you can say about AI, it's like an explosion of exotic products. So what are the key points in terms of creating a race to the top dynamic that is real and durable, as opposed to, you know, getting ourselves into a 2007 credit rating scenario?

Gillian Hadfield: (59:41) Can I take the credit rating point? Okay. Because Jack and I actually discussed the credit rating agencies in the 2019 paper, because everybody says, oh, but it's gonna be like that. There are really important points about the credit rating agencies. So credit rating agencies, their market demand was created by government, because government said, you have to be rated by these credit rating agencies in order to be able to issue bonds and so on. But at the same time, government immunized the credit rating agencies from any liability for the ratings that they gave. So there was 0 government oversight of the credit rating agencies from the point of view of how well they were doing their job. That's just completely different from this regulatory markets approach, which says, no, what we're trying to do is move the role of government to being, okay, we're going to have oversight over these regulatory services providers. The ones who are figuring out what's the best method for making sure we don't get uplift in bioweapons or we have safe AI companions. You absolutely need a government, there's a government role there, and it's government oversight. We're shifting the role of government from the detailed oversight of what the labs are doing, which is almost impossible for them to do, to the oversight of what these regulatory services providers are doing. The race to the top could come from something like this: let's suppose there's a standard that government sets, and then it's peering in and it's regularly looking and it's saying, look, we're going to yank your license, we're going to yank your approval if you don't meet this standard. You could imagine a standard that was, you need state of the art protection against providing the capacity to build bioweapons to people without anything more than maybe high school chemistry. Now you have these competitive companies that are in this business who have an interest now. They have a market interest in communicating to the government, oh, look at what we've figured out. Here's how we can reduce that risk and here's how we can demonstrate that to you. By the way, our competitors over here in the regulatory services market, they're trying to pull the wool over your eyes that this is all we can do, or we've done a good job, or whatever, because company A has an interest in increasing its market share and demonstrating it can do better. So I think there's a race to the top there. That can actually move our standard. Then there's a race to the top of saying, okay, what's the most effective way? What's the cost effective way? Part of what we're facing with where our AI governance is right now is we've defaulted to a lot of process based regulation. You know, check this box, put this oversight process in place, where we didn't actually test whether or not that works. Like, do we know that it works to have these logs and have these officials in place and so on? And so again, the race to the top is: get government oversight that says here's what we want, and companies that are competing to achieve that standard. I think it's really important to recognize the difference with the credit rating agencies, which, as you're pointing out, are a really key example of what we saw as a big failure of this private role. But it actually wasn't this model, because it did not have oversight of those entities.
I'm sure there's some regulation of credit rating agencies, so I don't want anybody following up to say, oh, here are all the laws they have to follow. But they definitely were immunized from liability for their ratings.

Andrew Freedman: (1:03:40) A couple of other things that I think are just important as guardrails for these private side regulators. 1 is I do think that the government has to be super involved in their finances. Right? There's 2 things. 1 is make sure that they aren't seeded or funded by the labs themselves, that there's some independence there. And then 2, that they can actually afford to deny certification and continue existing, right? I do think that that's some stress testing that would fall on government in order to make sure that this is right. If you are like, hey, here's some great processes, but by the way, if we don't certify 4 out of 5 labs, we can't continue to exist, and so we're gonna figure out how to certify 4 out of 5 labs, then this is trouble. Right? And that's a deep analysis that actually has to happen from government. The second part is I do think government does have to get real about what outcomes it's expecting. And as you mentioned, Nathan, part of that is, in some places that's easy, and in some places that's hard. Right? In some places you can say, well, we have a very clear human analog to what's going on right now. And so if you can't prove that you're safer than humans in this way, or safer than the average human, then that's bad. Right? But some places it's going to be completely new stuff, just new harms, new ways of thinking about harms, places where we don't have incident reporting systems, or places where the first incident is so bad, so catastrophic, that who cares that you followed some rules on the way there, that outcome was just totally not acceptable on any level. I do think there are some crawl, walk, run parts of this model. And 1 of the reasons I like it being so voluntary at the beginning is the government sets outcomes in certain places, and the companies get to decide if they think that that amount of liability protection or housekeeping seal of approval or whatever it is is worthwhile to go get right now, and the conversation makes sense at the beginning. There's gonna be some places it's really gonna have to work out over time. Are there outcomes that the government can set for us where this makes sense to bring in this private regulatory model? Or does there have to be a different solution? Right? Is that issue kind of so major and in such a different place that actually the regulatory markets don't solve for this problem and it needs to be put into a different category?

Nathan Labenz: (1:06:11) Yeah. I can definitely see some, you know, bio risk things in particular being sort of beyond the scope of what any sort of liability framework can handle. I recently was on another episode saying, it's hard to go to the Wuhan Institute of Virology. Whether or not, you know, it ultimately came from there is another question that I'm not taking a position on. But with, you know, 10 million plus dead globally, it's hard to go there and, like, sue them for damages. Right? You may have a different point of view on that, but to me, that does seem like a sort of order of magnitude different thing. As the externalities become so big, you may just need a totally different regime.

Andrew Freedman: (1:06:53) I honestly think, yeah, that is 1 of the amendments that we would love to see in something like SB 813, which is, like, there's a harm that's so big that it shouldn't fall within this. Right? And that's, I think, part of what is the exact right landscape of how we start this program, or this way of regulating, up, where it doesn't have to accomplish everything, but it's accomplishing some very real things at the beginning. And we can grow it and mature it into a way to accomplish a large portion of things. But certainly, there's always gonna be the sorts of edge cases, especially when it's attached to really large harms, that have to be handled otherwise.

Gillian Hadfield: (1:07:30) Yeah. I think it's really important to emphasize that it's just 1 tool in the toolbox. I think it's something really important that we're missing in the toolbox and that we will need, you know, more in some areas than others. But it is not like it displaces all the other complex ways in which we achieve, you know, safe, fair, stable market societies and so on. We already have a very complex set of systems, we just don't see it, systems that interleave and overlap: tort law and standard setting and corporate incentives, because corporations have their own incentives to create safety. There's the press, there's oversight, there's international, there's just tons and tons of stuff. This is going to be a part of that complex system. Then I think the other point we maybe haven't emphasized enough, but it's been there as part of your questions, Nathan, is what you asked earlier, and I'm not sure we answered: what's different about AI? Like, why isn't it just like any other product? We regulate cars, we regulate drugs, it's another product. AI is not a thing, right? It's not a product. It's a general purpose technology that I think is going to transform the way we do just about everything. So it's going to end up impacting and requiring regulation on a ton of dimensions. We have a very complex regulatory landscape, and it's like you mentioned: education, health, justice, logistics, city management, bioweapons, AI companions. It's just gonna be in everything, and so we're going to have different regulatory goals and different regulatory methods in all those places. We don't want to think, oh, we've regulated AI. It's, can we build a vibrant, robust, agile mechanism that helps us identify where to regulate, how to regulate? This was a point that I think Andrew was going to earlier: another way in which building this kind of ecosystem can help is to surface where are the problems? Where is the need to regulate? Where is there demand for regulation? This could be from enterprise purchasers who say, I can't integrate that chatbot into my customer service because I don't know if it's going to make stuff up. And that could cause me harm because I'm giving advice to my customers, or it's going to cause me harm because they're going to think I'm selling something I'm not selling. So I can imagine that enterprise starts to come in and say, here's the kind of protection that we need in order to drive adoption. I think this is going to be a critical part of it. We talk about risks and harm, but we also want to be talking about why we are building this in the first place. Hopefully the reason we're building it is because it can make everybody better off. So we need to be figuring out how we can respond to the ways in which the market can tell us: here's the concern people have. Parents have this concern about their kids in school using AI. Or, as the tort system has surfaced, they have obvious concerns, quite sad concerns, about the way the AI companions are impacting their children. And so that's a bottom up kind of process. We want a way for the market to be able to respond to what the market can tell us about what concerns people have, what problems people have found, what stumbles we've identified. That's why the top down approaches in regulation are so complicated, because that's what markets do for you.
They sniff it out at the ground level and tell you, here's where an opportunity is, here's where harm is, here's where risk is, here's where there's a demand for something different than what we've got right now. And I think, again, with this very general purpose technology, there's just simply no way to sit in a boardroom and get out the whiteboard and say, okay, here's the list of risks we need to worry about from AI, here's the rules we could put in place. I just think that's a fallacy of lawmaking and regulation.

Nathan Labenz: (1:11:48) I wanna push a little bit more on this, like, race to the top versus race to the bottom. And then I've got a few kind of objections or, you know, other red teaming from other perspectives besides my own that I wanna throw at you as well and get your reaction to. It seems like on the race to the top question, I think you make a great point about the demand from enterprise. Enterprise in general, you know, rightfully wants to use this technology, but also wants to cover their butts. And that seems like a force for good. That also seems like it is gonna be more oriented toward kind of known unknowns, you know, than unknown unknowns, I think. So, you know, putting my sort of AI safety x risk hat on for a second and focusing on the frontier developers, the ones that are pushing things forward as fast as possible and really getting into uncharted territory. I wouldn't say that it's a fair description to say that they all are trying to do the least they can. I think on the contrary, we're relatively fortunate compared to counterfactuals I can easily imagine, in terms of the people who are running these frontier developers and how responsibly, again relative to alternatives, they're acting. But nevertheless, if I'm just like,

Andrew Freedman: (1:13:11) you know,

Nathan Labenz: (1:13:12) realpolitik or, you know, my cynical, follow the incentives sort of analysis would be like, the frontier developers are gonna wanna do the least that they can. So if they have a menu of options in front of them, they're gonna choose the most permissive, least costly 1. I'm not entirely sure if money is supposed to be flowing from the labs to the regulators, the MROs in SB 813 parlance, or if the money's kind of coming from some other place. But if there's any correlation between who gets picked and how you get paid, like, they pick you and that's how you get paid, then there's sort of this incentive to try to be the 1 that gets picked, which all seems like a sort of race to the bottom type dynamic. And then it seems like, at least again in the SB 813 scenario, we're really relying on the attorney general to be doing a great job. Like, they're the ones that have to approve these organizations in the first place. They're the ones that have to keep a close eye on them. If they take their eye off the ball, everything can kind of race to the bottom probably pretty quickly.

Nathan Labenz: (1:14:21) And, you know, we have sort of a challenge there, obviously. We've seen recently in our country how 1 administration to another can bring about dramatically different attitudes and, you know, personnel and decision making. Right? So from 1 California AG to the next, I could imagine going from a great scenario where you've got the crack team that's doing exactly what you'd want them to do, to somebody that's just focused on other things or, even more problematically, prone to being lobbied by companies. I mean, there's a whole political economy of how people are channeling messages, and I don't need to tell you about the complications of the political economy of this. But is it right to say that in the SB 813 world, we're really putting a lot of trust into the AG?

Andrew Freedman: (1:15:16) Yeah. I think it's not only right, I think it probably has to be changed a little bit to put more of a commission structure, some expertise, into the government to be able to say, have you rightly scoped your outcomes? Do you actually have the ability to track the outcomes that we say are important, on an ongoing basis? I don't think it's right until there's essentially a little bit of a fear of god moment for the people who would be these private side regulators, that your ability to be a certifier of this nature can really disappear very quickly. Right? If there is a complaint out there that says you definitely bent your rules in order to make sure something went through, that can be fully investigated, and there's the money and resources to fully investigate. I will say, in an ideal world, this gets passed in a couple of states or a couple of different governments, and there's multiple people who are looking after and giving the seal of approval. And if 1 drops off, that's a signal. If state Z says, hey, something doesn't feel right here, and so we're withdrawing our licensing of this private regulator, then that should kick off a whole bunch of other people starting to go in there. And if it becomes an international group, there's countries that are actually also watching over and diving into this business. What I like about it is this layer of, this is what the government should be doing: looking into these private side regulators and really getting up into their business. Are they qualitatively and quantitatively showing that they're making a good difference in the world? And if they aren't, there should be enough competition that we are able to withdraw from 1 group and give power to another group, and we should be held accountable as lawmakers. You know, there's some point where it's, like, turtles all the way down. There's a moment where it stops, where you're trying to create the best scenario for government to hold these groups accountable. And if the government is simply not interested in holding those groups accountable, redundancies aside, there's a problem there. I will point out that that exists with every regulatory structure. Right? There's just a moment. I think what this does is really brings it out in the open and allows the public to see, okay, you guys have allowed these people to be certifiers for years. When they put out their numbers about how they create a better world, it is laughable. Meanwhile, look at this certifier that's doing this other thing and actually creating a better world. So there's at least some way of maintaining that race to the top that includes public accountability. My last and final point is, I do come from a world where I see all ceilings become floors, where the best intended government regulation just becomes yet 1 more way of checking the box. I don't know of another structure that is more set up to do the opposite, to actually create a qualitative race to the top, and continue to iterate that system as technology grows.

Gillian Hadfield: (1:18:28) So I think there's the reality of how do you get to this model. Right? There's the ideal that I was describing, that Jack and I were describing back in 2019 and have been talking about since. And then there's the sausage making of, well, you know, you gotta do it this way, that way, in this legislature, this process, here's what we think is achievable today and so on. But on this point about the race to the bottom, this system is only as good as the capacity for your government to have oversight of these private actors, whichever they are, whatever category we're putting them in, regulatory services providers, independent verification organizations. That government regulation has to have teeth in it, just like our existing regulation is only as good as our capacity for our government, the IRS or the FDA or the Securities and Exchange Commission, to actually create good rules and enforce them. So I always like to think of this proposal, sort of in the grander scheme of things, as shifting government effort and expertise into the task of overseeing these private regulatory bodies. And so I think of that as a pretty muscular thing, and the model is only as good as your capacity to do that. That's your backstop against a race to the bottom. That's your backstop against, oh, come on over here, you're not going to have to do very much to comply with my system. So absolutely, the key design feature is how do you address that? Now, of course, regulatory capture and so on is a problem throughout our regulatory system. The whole term is based on the idea of corporations capturing government. So we always need to be comparing this proposal, and what we think we could achieve if we put the resources into it and got the design right, relative to what we can achieve in its absence. What's being proposed here are real methods of getting that kind of appropriate and effective government oversight. And just going back to the conversation about the various domains here, the expertise you'll need in government to oversee the domain of autonomous vehicles will be different than the expertise to oversee the domain of companion AI, to oversee bioweapons risk, financial stability risk. That's the complex regulatory regime we're in. We're just trying to change the role of government in that, but I don't think anybody should think this is a 1 and done: did you fill out the right forms? Have you shown us something that looks plausible, and then we'll leave you to it? No. This is a way of actually getting us away from the world we are currently in in AI governance, which is that we have defaulted to corporate oversight, in fact self governance, kind of throughout. It's like saying, oh, we have no idea what you're doing, so the labs will tell us what red teaming tests to do, and we'll have limited visibility into that. Or we're gonna kick it over to industry standard setting bodies, which are corporate funded, with lots of participants from there. Government has basically been defaulting on, I think, its central role in saying, hey, this is what we want from these domains. And a reason for that is because it's technically so challenging. So this is a proposal that's trying to deal with the technical challenge without giving up on, in fact while making more muscular, the democratic role for governments: governments should be telling us here's how much risk we're willing to take. And we're not there right now.
So, you know, at the end of the day, the race to the bottom protection is that you actually have government regulating. It's just regulating in a different way than it conventionally does, and in a way which it's actually not able to effectively do right now. So, did you say, Nathan, we're kinda naked on this? I think that's right. I think that's

Nathan Labenz: (1:22:51) Or another way to put it, and he wasn't talking about this at the time, but friend of the show and research partner of Fathom, Dean Ball, once simply put it to me: republics require virtue. And I think that is a good reminder that you can always kind of poke a hole and say, well, what if the person in that seat is bad, and who's gonna monitor the monitors, and whatever? But at some point, this is an institution that is gonna be populated by people, at least until there's maybe some AIs taking over key roles.

Gillian Hadfield: (1:23:23) Regulated AIs take

Nathan Labenz: (1:23:24) Future speculation. That's right. But, yeah, somebody's gotta actually be trying to do a good job at some point in any given system or it's gonna go to hell. So there's kind of no way around that.

Gillian Hadfield: (1:23:35) I think, so, republics require virtue. Okay? They also require visibility. And I think this is 1 of the things that we are in a serious state with, which is, you know, I think this is pretty much the first time in history we've seen such a massively consequential technology with a general purpose capacity built really almost entirely inside private technology companies, which have this ring around them, a legally created fictional ring around them, that says anything that happens inside stays inside, and it doesn't get out unless they choose to let it out or the government comes in and says you've got to let us look. Right now I think governments are just in an impossible position to be able to effectively regulate because they do not have visibility. Another feature of this is starting to say, okay, we need to get increased visibility into what's happening. Again, who's our partner in that? An independent sector of entities that are in the weeds. Right? We already have some of these companies and nonprofits starting to emerge, providing red teaming services or checking on developing technology to check robustness of systems or hallucinations in systems. We really want to lean into those startups, that sector, to say, let's make this a powerful sector. Let's create increased market demand for that, let's attract investment into this, and then that's the partner for government that gives increased visibility for government into what the heck is going on. Because right now, governments are just kind of at the mercy of what the labs have chosen to share with us. I'm not beating up on the labs, okay: if you're gonna structure them as corporations, and that's the protections we give corporations, that's the way they're gonna behave. I'm an economist, they're gonna engage in profit maximizing behavior. That's what gets us this technology in the first place, but it runs headlong into what we need for regulation. So, yeah, virtue and visibility. So that seems like another

Nathan Labenz: (1:25:52) Like a different law though. Right? Because we couldn't expect people to... well, maybe we could try to incentivize them to, but if we really want visibility, we might just have to mandate it. Do you have thoughts on how visibility should be mandated? There's a connection there to whistleblower protections as well. And I was also gonna ask about the mechanism of how you think money should flow in this system. So I'm rapid firing questions at you. But

Gillian Hadfield: (1:26:17) Yeah. Let me do this, and then, Andrew, I know you've got something to say, so I'll come to you. So first of all, Jack and I talked about this in the 2019 paper. If you have a private entity that you've contracted with to provide regulatory services, like to give you that oversight, then, first of all, dollars are moving in my sort of vision of this, because you need to get dollars into the business. I mean, I've just been chatting with some of the nonprofits that are engaged in, say, doing red teaming under contract for the labs. You know, surprise, surprise, they're just finding they need more resources. It's a bigger job than a few very virtuous people can do. You need to get dollars into that. That is part of the flow, and attracting investment into this is, for me, 1 of the number 1 reasons to do it. Then there's the fact that I actually think you will be able to get much more fine grained information transfer between 2 private entities under contract. We see companies go into joint ventures and collaborations, and they share, within ranges, detailed private information because they have confidence that their confidentiality and IP protections and so on will keep that information private, and they're not sharing with government. So I think you will see more visibility going into a private regulator than into the public regulator. But then the public regulator can sort of set whatever standards it wants for its oversight of those private regulatory agencies and say, okay, you need to show us your stuff. You've got to show us the results of what you've been learning. The government may not end up getting into the weeds on all of the information out of the labs. But I think you start structuring those information relations. So I think there are ways to improve the visibility. And you increase the visibility because, again, you've harnessed that incentive of this independent sector to say, here's what I need to know, and if you want to be certified by me, you're going to have to share that information. Here's what I need to know in order to be able to fulfill my duty to the government, to say I can demonstrate that my approach, my technology, achieves your government goal, that target for regulation.

Nathan Labenz: (1:28:54) Okay. Here's a series of questions that I've either gathered or, you know, kind of had posed to me by others. I just had Matt Perrault, who's the head of AI policy at a16z, on. He is very focused, and a16z is very focused, on advocating for little tech and just trying to make sure that there's a place for startups. His concern about SB 813 in particular, and I think it probably would abstract to the sort of more general concept, is what if the rules become so onerous that only the big tech companies can comply with them? Then the big tech companies get the benefits and the startups can't get into that beneficial regime. And then it becomes very hard for them to compete or raise capital because they're on this disadvantaged legal basis as compared to the big tech incumbents. 1 answer might be, if that's the way it plays out, you know, so be it, or that's a cost worth paying. But I don't know if you wanna bite that bullet or if you think there's a way

Andrew Freedman: (1:29:57) No, I definitely don't wanna bite that bullet. Yeah. Honestly, I don't know of a structure that can better scale to provide solutions that are different for little tech than what's available for the frontier labs. Meaning, you could well envision, and maybe some guiding language within SB 813 would be helpful here, that there is a specialty lane for models or applications or deployers that are smaller scale and pose a less immediate risk in these ways. And that way they have a much lower burden of what they have to show to meet best practices for the environment that they're in. And therefore, you know, they can still go get whatever seal of approval the system ends up creating and be watched after, but it is of a level that makes sense for them, either because they're selling to enterprise or because they do believe that they introduce some risk into the environment that they need to look after. But instead of it being, say, like a SOC 2, where it's the same for everyone across the board, there suddenly is some way of actually creating gradation and saying, this is actually what the best practices should look like for a 10 developer group who is looking to put out a limited application that goes out this way. It's just so much different than, you know, you're gonna be in 10,000 vehicles tomorrow, and we need to make sure that you know how to obey traffic signs or stop for a little girl who drops a ball in the street. And the problem with, say, a top down approach is there's no way to account for that. Right? You can try to write it into legislation and try to bifurcate it today. However you've done it, whether it's by FLOPs or whatever, it's not gonna make any sense tomorrow. Whereas groups that are specifically looking to meet this moment where it's at can really change and be really flexible to that moment. And then I would also argue, you know, for the little tech world, they are also in an impossible position in a no regulation world, because the only people who can prove that they're gonna be safe enough for, say, a fintech to be able to bring them in are gonna be the big guys right now. Right? They're the only ones that can go and do the amount of independent certification and long standing work. A little guy deciding that they have a way to forever change the banking industry at this moment has no way of proving to the banking industry that his stuff should be trusted. And so I would argue that there's a way of creating this that actually is a massive value to little tech. And then I'd also argue that, on the flip side, the other alternatives are not gonna be able to become as bespoke towards the needs of little tech as a solution like this could become.

Gillian Hadfield: (1:32:43) Yeah. So I think this is actually at the very heart of what, for me, has been driving my thinking about how we get more markets into solving our regulatory problems, for decades, frankly. Because markets have the capacity to be differentiated. You've got cars for your middle class worker, you've got fancy cars for your execs; you get differentiation in markets. And so I think that's a key feature of saying, oh, let's try and unleash some market effort here. And if you have venture money that says, hey, we really want to build the little tech world, let's put money into funding the right kind of regulatory infrastructure that serves that need. That's a regulatory puzzle, right? How do we do that well? How do we do that efficiently? I think that's a key reason for trying to recruit more markets, regulated, overseen markets, right? So don't lose sight of the fact that we're not just abandoning it to the private sector. It's only with that muscular government oversight. But I think the other thing is that when I started thinking about this set of ideas, I was fundamentally driven by the fact that our legal systems, our regulatory systems, have become far too expensive, far too slow, and onerous. And we've leaned into a set of techniques for regulation that are very expensive to comply with: lots and lots of front end, process based stuff, with very little demonstration that those process based protections actually achieve what you're looking for. So in some ways the very problem you're trying to solve is that we've built an incredibly expensive regulatory regime that absolutely only our biggest companies can really afford to comply with. That is a massive drag on the startup sector and innovation. That's a key reason that we need to be adapting and innovating in our regulatory methods. So if you think about the General Data Protection Regulation, for example, GDPR in the EU: lots of process, this definition, that definition, you've got these logs, etcetera. And in fact, there's pressure right now in the EU to say, oh, how can we modify this, because it is too much of a drag on the innovative startup sector? I actually think it's precisely a mechanism like this. The goal of it is to say, how do we build more efficient, more effective regulatory regimes and move away from kind of the top down thing that, frankly, lawyers in a room are gonna create? I beat up a lot on lawyers in my book, or at least on the way in which our profession has kind of failed to rise to the need of societies for greater innovation in what we produce. We do not need more words on paper. We need more smart approaches for regulation. How are we gonna get there? That's what we're trying to do. That's the path we're trying to set us off on. There'll be a lot of hiccups and bumps and wrong turns and dead ends, but I think it's absolutely critical. We need to be making this shift, and we needed to be making it 10 years ago. And the only thing that's happening is AI is ramping up faster and faster, and we are still, you know, tying our shoelaces at the starting line. We are not getting there, and we need to get there.

Nathan Labenz: (1:36:33) I do love the fact that this proposal creates an opportunity for people to come up with new ideas and enter into the ecosystem on an ongoing basis. And also just the fact that, and I know this maybe still needs to get a little bit worked out in SB 813 and in general, there seems to be sort of an implicit, if not explicit, sunset clause, which is something I always advocate for in law and which never seems to happen. A lot of things in this proposal are at least subject to ongoing renegotiation or reevaluation. And I think that is really great too, because it at least gives it a decent chance to age well, which is, you know, my constant joke about AI content. AI content does not age well. AI regulatory proposals generally do not age well, but this sort of meta structure that allows for new entrants and ongoing revision seems like it has a better chance of aging well than just about anything else I've heard. 2 more different angles of

Nathan Labenz: (1:37:36) red teaming the proposal. 1, maybe the most different, or the most almost opposite direction, although you may see it a bit differently, is from another law professor, Gabe Weil, who has, as I'm sure you're aware, been advocating for the idea of a sort of expanded notion of liability. I haven't studied his stuff in depth yet. I'm gonna do an episode with him before too long as well. But the general sketch of it is: there are some potential harms, problems, catastrophic, existential in some cases maybe, that are so bad that we need a way to deal with them before they happen. And so his proposal is basically to expand liability to encompass near misses. So if you were acting negligently and nothing really bad happened, but it came close or could have or whatever, then you could still be sued and held liable for harm even though, you know, maybe you got lucky. Being mindful that I probably haven't described his position quite right because I haven't done the full study yet, any reactions? It seems like we probably can't do both of those, right? Those are 2 pretty different directions, at a minimum, it seems.

Gillian Hadfield: (1:38:49) Am I going? I'm going. Okay. So again, go back to the idea that our regulatory ecosystem is a whole bunch of different threads. And liability, well, usually when people use the term liability, they're thinking about litigation based, court based regulation, which is always after the fact. It's a big process, and it's got lots of virtues because it can be that bottom up, reactive thing. That's a good thing about having a strong litigation regime. I'm not anti litigation or anti courts as tools in our toolbox for getting people and companies to do the right thing. So 1 approach would be, yeah, you could say, well, we don't wanna wait for catastrophic harms to happen, we should cover near misses. Maybe that's a fine amendment to make to tort law. I'm not enough of a tort scholar to even know what our existing doctrines are on how close you have to be to causing harm. Can you just create a risk, or do you have to actually get the harm? So I don't want to go into the details on that. But the fact of the matter is that these are the types of domains where we actually don't generally leave it to litigation. We don't say, well, we're gonna rely on the tort system to handle the risk of nuclear facilities blowing up or creating fallout for communities. Or think about pharma, right? We started early with pharma. We have an FDA that says you cannot put a drug on the market unless you've demonstrated safety and efficacy and gone through a fairly lengthy approval process with our regulator. You still have a backup of liability. You can still sue for harms caused by drugs on the market that have been FDA approved, but we haven't put the whole thing out there. I certainly think, if we're thinking in the domain of catastrophic risk, we actually want to include lots of things like collapsing our markets and bunging up our financial trading systems. There's a lot of economic stability risk that I think we don't pay enough attention to as potentially catastrophic. But I don't think I would be focused heavily on, well, let's just deal with this by making tweaks to the tort law regime. Maybe we make those as well. But the first line of defense for me would be, no, I think I want some oversight on whether or not there's bioweapons risk, or whether these trading agents could collapse markets or cause kind of the equivalent of massive bank runs or just crashes of the stock market. I mean, those are very costly things, and I don't think we just wanna handle all of that through back end litigation.

Andrew Freedman: (1:41:47) Yeah. I hate to beat up a straw man because I don't know enough. But 1 of the things I will say is, also be careful of the downside of that stuff, which is, how much is that just gonna mean that near misses are not reported? Right? How much of that is just gonna mean that you have to create your corporation now in such a way that nobody knows the full picture other than a few trusted people, and everybody's in their own little silo? You know, I imagine a lot of tech organizations are already kind of trending that direction. Yeah, this really can push towards much more siloing. And, you know, if we didn't know, then there's nothing you can sue us on, because we didn't know that there was a near miss there. Right? And you're asking people to not go do the red teams and go do the hard work, because the more they know, the more they're potentially liable on the back end. So there's some downside that I do wanna be careful about. Now maybe that's thought about in this proposal, so I don't wanna beat it up too much. But I'd also argue, I really do hope we end up seeing things that allow people to be proactive in this space and get rewarded for being proactive, and not just, after the world's burned, there's a way to go and sue for it. Right? And so I hope that that is brought into that strategy as well.

Nathan Labenz: (1:43:09) Yeah. Well, stay tuned, everybody, for another episode where we'll get the full story on that and I can be properly educated. And it does strike me, just in listening to your responses there, that I was maybe too quick to see liability protection and liability expansion as incompatible, because plausibly you could expand liability, but then you could also afford some protection for compliance, and it doesn't necessarily in the end seem like those are so diametrically opposed. Okay. A third 1 is, and this 1 is maybe closest to your own impulses: why not require insurance and kind of put everything on the dollar scale? Insurance companies are, you know, presumably the best organizations we have for calibrating to risk. That would also bring a sort of pricing mechanism to the risk that I don't quite see here; maybe you'd see a way that it emerges from the sort of structure that we've been describing, but with insurance, it's quite clear how the pricing mechanism works. So yeah, instead of this whole thing, why not just say everybody's got to have insurance? If you don't have insurance, you can't drive.

Andrew Freedman: (1:44:25) So I won't speak for Gillian here, but I will say for me, I've been most bothered in my nerdy capacity by how we've kind of waved a magic wand with insurance and said that it can solve problems, without diving into what it is that insurance does in order to solve those problems. For the insurance market to be a rational market here that does actually properly price risk and then properly charge against that risk, that includes it being able to properly know what risks exist and how to best mitigate risk. And that doesn't exist absent valid third party certifiers going in and doing that work and racing towards it. And so on the scaffolding, it's easy enough to say insurance knows how to price risk. I would argue they probably don't here. Right? Not even that they probably don't; they absolutely do not know how to price risk here. And so you could require them to come into the market, and they're like, great, everything has to be self insured and it's an astronomical amount, and we don't really know, we're not any better than anybody else right now at coming in and doing that. There has to be some scaffolding of information, of knowledge base, of what does actually decrease risk in the system and how do we actually know risk, coming from a valid source, in order for there to be a rational insurance market.

Nathan Labenz: (1:45:58) I think people would just argue, though oh, sorry.

Gillian Hadfield: (1:46:01) Oh, sorry, let me just add on that. So, you know, yeah, to emphasize: with insurance, you need to price risk. So you need lots of structure that is determining risk. When insurance companies are insuring against liability risk for automobile accidents or construction work site accidents, or, let's go back to pharma, right, they're doing that against a backdrop of a ton of structure that defines the risk. We've got lots of history of automobile litigation and liability. We have tons of codes that govern how you run a construction site. We've got all this regulation around drugs. Insurance has got a lot of structure to price on the basis of, and we don't have that right now. I think there's this idea that insurance companies, because they do risk, will come in and magically solve the problem of where is the risk and what should have been done. But notice that we've just kind of reinvented the problem, which is, what do the companies need to do? We don't know what that is right now, and I do not think insurance companies are going to become our AI regulators. That also would mean, again, we would not have any oversight of that. That would be insurance companies saying how much risk are we going to allow for bio risk or whatever. They have to build on something. My institute that I ran at the University of Toronto up until last year just released a report; we had lots of discussion about insurance. Insurance is a nice complement to building a regulatory market and building regulatory technologies. It's actually another 1 of the carrots that you can use: hey, you implement this regulatory method, then you can get insurance. But it's got to be that kind of partnership, and that's the kind of product that we're starting to see emerge. Armilla and Lloyd's. Armilla is a company that creates this regulatory technology. It's been 1 of my go to examples of a startup in this domain. Through a partnership they said, oh, if you use our approaches, we now have an arrangement through Lloyd's of London, I think, or other insurers as well, and you can get insurance. Right? But the insurance companies aren't going to become our AI regulators, and it wouldn't be appropriate for them to be that. So I think just a basic, just mandate insurance and it'll all work its way out, is not a realistic view of the way insurance markets work, regulation works, or democracy works.

Nathan Labenz: (1:48:44) The democracy point, I think, is strong. I think that at the end there, though, you were sort of getting to what I think the advocates for the insurance idea would say, which is like, first of all, just this market is going to be massive. So this isn't like a niche corner of the insurance market that insurers would say, forget it. It's not worth our time to figure that out. It seems like if you're an insurance company and there's AI happening and it's touching everything and the risk is massive, you would presumably want

Gillian Hadfield: (1:49:11) to... But the risk of what, then? Right? Like, what do you insure against? You insure against liability risk, which means you think that courts are going to be able to impose requirements and standards, or you insure regulatory compliance risk, but then that requires government to have regulatory structure. So we can have bad things happening, but that doesn't mean risk for the companies unless they have liability attached to that, whether tort liability or regulatory compliance liability. It's not a massive market unless there are requirements that they're going to be held to, either through the courts or through government.

Nathan Labenz: (1:49:55) I think the notion is mostly liability risk, and also that it would be a lot of these same organizations that the insurance companies would turn to to try to help them get a handle on it. And I guess I'm interested to know, for the future of Fathom: does Fathom sort of envision itself being 1 of these private regulator entities? What other organizations are you looking at in the world today and saying, these guys seem like they could step up into this role? And then, in the insurance context, it would be like, those same candidates would maybe be the ones the insurance companies would go to and say, hey, we'll pay you to help us figure this out. And we might insist on companies going through your audit process or whatever if they want to buy the insurance. So I think the hope is actually to end up in a pretty similar spot, where experts are defining standards and also kind of conducting audits. But it's less concentrated through 1 AG or 1 commission, and more concentrated in this sort of global insurance market, which, in theory at least, has a lot of skin in the game.

Andrew Freedman: (1:51:07) I would love to answer the Fathom question first. So Fathom is interested in showing proofs of concept here. We think this is enough of a novel idea that the way in which the marketplace starts will be an important marker for success. And so we would love to see some proofs of concept out there, and then help those that are very interested in doing this work be very successful at doing that work. We remain a nonprofit. Any interest we have is nonprofit related and philanthropically funded, and there's no endgame here for there being an equity player or any of that. But I do think it is 1 of those things where, if we don't show what good looks like, people are just gonna constantly be like, I don't have time to listen to a 2 hour podcast right now on this. And so

Nathan Labenz: (1:51:52) I don't know who has time to listen to us, honestly. It's great.

Andrew Freedman: (1:51:54) I listen to it, but yeah. But we do need to start showing. Also, it's complicated, and we do wanna work out the kinks by having real world examples of it. So I think in the near future, you'll see us trying to show proofs of concept, working with partners who are actually the technical people here. Right? And I don't know if I have permission to share the technical people that we are giving grants to to see them do this work, but I think in the near future it'll be clear that we think there's some great technical minds working on this right now that we're giving small grants to, to try to see them do this work well. I will just again come back to this: the insurance company is making a bet in this world at the end of the day. And if they're making that bet with no better information about the risks, you could easily imagine a world where they go, okay, well, that's such a big catastrophic risk that if it happens, there's no 1 to sue on the back end. And so we'll insure against that risk because the chances that we have to actually be there at the end of the day to pay it out are actually very low. Society will have fallen apart, for example. I don't think there's a magic to the governance of how insurance works such that they're gonna be able to bring in the best third party validators of risk and have them do the best work. Right? Their focus is gonna be on, okay, what's the greatest tangible next risk that's coming up that we could actually be on the hook for here? And to the extent they can't do it and they're taking a guess, they're taking the same guess as everybody else is taking here. And so I get it. It feels like a thing where, once you start mandating it, the scaffolding will fall into place. But actually thinking through how that scaffolding is governed, so that we are actually creating the best third party certifiers, is the most important question, along with making sure that those people are actually accountable to society rather than to any perverse economic motive on the back end.

Nathan Labenz: (1:53:56) Do you wanna offer any closing thoughts?

Gillian Hadfield: (1:53:58) So, we need to get moving. We need to get innovative. We want as many people in the conversation, poking at the model and coming up with new ideas. I mean, that's again a reason you like markets: because you need lots of different minds, lots of different perspectives, lots of different knowledge. You need that conversation happening, but we need to get going. I think we can't really stand back and say, well, let's design this perfect structure. We need the MVP of new approaches to regulation, and let's get started. I think SB 813, whatever form it ends up in, will continue to evolve, because of course it's like, here, let's throw this out here; oh, wait a second, we need to change this, we need to fix that. The key thing is we need to get down this pathway. And so I think that's the key message I would go away with. Markets can help us in addressing this. They need to be overseen by governments. Governments should be deciding what's acceptable risk, and we need to get going.

Nathan Labenz: (1:55:06) Perfect. Dr. Gillian Hadfield and Andrew Freedman, thank you both for being part of the Cognitive Revolution.

Andrew Freedman: (1:55:13) It was such a pleasure. Thank you for having us.

Gillian Hadfield: (1:55:16) That was terrific. Thanks.

Nathan Labenz: (1:55:18) If you're finding value in the show, we'd appreciate it if you'd take a moment to share it with friends, post online, write a review on Apple Podcasts or Spotify, or just leave us a comment on YouTube. Of course, we always welcome your feedback, guest and topic suggestions, and sponsorship inquiries, either via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. The Cognitive Revolution is part of the Turpentine Network, a network of podcasts where experts talk technology, business, economics, geopolitics, culture, and more, which is now part of a16z. We're produced by AI Podcasting. If you're looking for podcast production help for everything from the moment you stop recording to the moment your audience starts listening, check them out and see my endorsement at aipodcast.ing. And thank you to everyone who listens for being part of the cognitive revolution.
