Governing Frontier AI, with CA Senator Scott Wiener, Author of SB 1047


Dive into the world of frontier AI development with California State Senator Scott Wiener's detailed walkthrough of the SB 1047 bill. This podcast explores the delicate balance between accelerating AI innovation and ensuring stringent safety protocols, including potential implications for advanced AI models. Learn about the debates on government involvement, the need for independent third-party testing, and Senator Wiener's commitment to a measured approach to AI regulation. Join us for a thought-provoking discussion on the complexities of legislating emerging technologies.

SPONSORS:
Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds; offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive

The Brave Search API can be used to assemble a data set to train your AI models and help with retrieval augmentation at the time of inference, all while remaining affordable with developer-first pricing. Integrating the Brave Search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave Search API for free for up to 2,000 queries per month at https://bit.ly/BraveTCR

Squad offers access to global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention “Turpentine” to skip the waitlist.

Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with the click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off at https://www.omneky.com/

CHAPTERS:
(00:00:00) Introduction
(00:05:47) Senator Scott Wiener
(00:09:38) AI policy
(00:13:56) AI worldview
(00:16:20) Sponsors: Oracle | Brave
(00:18:28) Burden on Developers
(00:26:10) Developers reaction
(00:28:43) Big tech companies opposing the bill
(00:31:09) Open source models
(00:34:57) Sponsors: Squad | Omneky
(00:36:45) Illegal activities
(00:41:15) Penalties
(00:44:33) Third party testing
(00:48:33) Transparency requirements for model development
(00:53:07) Closing thoughts


Full Transcript

Nathan Labenz: (0:00) Hello and welcome to the Cognitive Revolution, where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week we'll explore their revolutionary ideas, and together we'll build a picture of how AI technology will transform work, life, and society in the coming years. I'm Nathan Labenz, joined by my cohost Erik Torenberg. Hello and welcome back to the Cognitive Revolution. Today, I'm speaking with California State Senator Scott Wiener about SB 1047, his proposed legislation to establish safety testing and risk mitigation requirements for advanced AI models, which the bill defines, as does the Biden executive order, as models trained with 10 to the 26th flops of compute in 2024, and other models with similar capabilities, even if they require less training compute to get there in the future. It's an admittedly complicated definition, the ambiguity of which reflects the general uncertainty about how this technology will evolve going forward. In our conversation today, Senator Wiener begins by sharing his enthusiasm for the potential of AI as well as the story and motivation behind SB 1047. He explains how he's attempting to take a light touch approach, not to eliminate, but at least to minimize risk without harming prospects for continued AI innovation. He addresses some of the misconceptions he's seen in the online discussions surrounding the bill, and he responds to various good faith objections that have been raised. For a more thorough analysis, I definitely encourage you to listen back to our previous episode, where Nathan Calvin from the Center for AI Safety Action Fund and Dean W. Ball from the Mercatus Center joined Steve Newman and me to discuss the potential benefits as well as the drawbacks of this bill from a generally pro-technology point of view. I think that episode holds up very well in light of today's conversation. So where do I come out on SB 1047? Personally, I often describe myself as an adoption accelerationist, hyperscaling pauser, which means that, like Senator Wiener, I really want to see us get the incredible value of things like primary care AI doctors. And by the way, I'll soon have an episode on Google's new Med-Gemini model, which shows just how close we are to that exciting reality. And yet at the same time, I do believe that we are playing with a new kind of fire in AI and that it's critical that we exercise caution on our way to superhuman AI strategists and scientists. SB 1047 is very much in that spirit, and for that reason, I am inclined to support it. While it's true that the small number of companies pushing the frontiers today seem to be proceeding responsibly without being forced to, and while I, as a longtime libertarian, absolutely recognize that a new government agency could become a bureaucratic nightmare, it seems to me that the stakes associated with frontier AI development are high enough, and the competition is fierce enough, that this bill constitutes a prudent step. Of course, there's more that government could and perhaps should do with respect to AI, and I did ask a number of questions about other possible rules meant to mitigate, for example, arms race dynamics between leading developers. But in the end, I came away from this conversation quite sympathetic to Senator Wiener's point that you can't do everything in one bill, and that the focus of this bill is on ensuring that serious testing is done and safety mitigations are applied before models are released.
As you'll hear, one change that I would still love to see, and which would turn me from a somewhat cautious to a very enthusiastic supporter of the bill, is stronger support for independent third party testing. I believe it's critical that independent experts who are motivated solely to find the truth have the opportunity to thoroughly test frontier models during the training process and to be able to report their results directly to the government without fear of loss of access to the models or other retaliation from the developers. This is a tricky balance to strike, with open questions including who should be trusted to test, what sort of access should they have, and what remedy should the labs have if testers begin to behave in bad faith. But as challenging as those questions are, evidence is mounting that a government requirement for third party testing is needed if we are to achieve Senator Wiener's stated goal for this bill. For one thing, my experience on the GPT-4 red team, and if you haven't already heard it, I definitely recommend my "Did I Get Sam Altman Fired?" episode for the full story of that experience, shows just how precarious an independent tester's position can be vis-à-vis the frontier developers. And that's assuming one can get access in the first place. My background conversations with the leaders of several different AI safety review organizations suggest that access today ranges from nonexistent to extremely limited. And recent media reports also indicate that even the UK AI Safety Institute has not been able to get advance access to frontier models, despite voluntary commitments from the labs at last year's AI Safety Summit. That, I believe, needs to change, and an amended SB 1047 would seem a natural way to make that happen. In the end, whether you support or oppose this particular bill, I would hope that everyone can recognize that the governance of advanced generalist AI systems, with their jagged capabilities frontiers and their many emergent and often quite surprising behaviors, is an important and complex topic with no easy answers. With that in mind, I definitely appreciate the care with which Senator Wiener has crafted this bill and the open-minded approach he brings to the legislative process. As always, if you're finding value in the show, please do take a moment to share it with friends, post about it on social media, or leave us a review on Apple Podcasts or Spotify. And if you have any feedback, you can contact us via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. Now, here's my conversation on regulating frontier AI development with California State Senator and sponsor of SB 1047, Scott Wiener. California State Senator Scott Wiener, welcome to the Cognitive Revolution.

Scott Wiener: (5:51) Thank you for having me.

Nathan Labenz: (5:53) I'm excited for this conversation. So you are the sponsor, or I guess, as it's known in California parlance, the author of the proposed legislation SB 1047, which attempts to get the government's arms around this question of what is going on at the frontier of AI development, and which has certainly caused a lot of consternation and attracted a lot of interest lately. So I'm really interested to get your story on how you got interested in this, what you aim to do with this legislation, and then we can get into the weeds on some of the concerns that people have and some possible ways to even make it better. How's that sound?

Scott Wiener: (6:28) Sure. It sounds great. Thanks for having me, and thanks for talking with me about the bill. Yes, I have the honor of representing San Francisco in the State Senate, and I'm humbly proud that San Francisco is the beating heart of AI innovation. So much amazing, inspiring work is happening in San Francisco. Because I'm a San Franciscan, I am surrounded by some incredibly brilliant AI minds, and not just folks at the senior levels, but frontline technologists who are doing the work every day. I've had a great opportunity to talk with them about policy issues surrounding AI. Probably about a year and a half ago, in a number of different settings, folks in the AI world started talking to me about safety and the need for policymakers to try to get ahead of safety issues rather than playing catch-up, which is what we often do when the horse is already out of the barn. And so we started to just have a lot of conversations about what that might look like, what might make sense, and how do we absolutely promote innovation. The last thing we would ever wanna do, the last thing I would wanna do, is to stifle innovation. How do we do that and also make it safe and really address that issue? And so we had an enormous number of conversations, lots of outreach. Last September, before we recessed for the year, I put into print, as we call it, a formal piece of legislation, really an outline of what we were looking at in terms of an AI safety bill. And I put that out there so that it would just be floating out there for months and months. I wanted to be completely transparent about what we were doing and welcome lots of broad-based feedback. We introduced that outline, sent it around to a bunch of people, and then in February introduced the formal bill. And my intentions are really threefold. First, set some basic safety and mitigation requirements that are reasonable, light touch, and not micromanaging, requirements that are super doable and super possible for developers of these large frontier models to accomplish. Second, we wanna focus only on the most capable models, upcoming really large, powerful models that are beyond what's possible today with the tools that we have; we're talking about future, really large models. And third, how do we protect and foster innovation, including open source model development, and how do we do it in a way that takes safety into account? So those are our goals, and that's how it came about.

Nathan Labenz: (9:38) You wanna give us just a tiny bit of background on your legislative career?

Scott Wiener: (9:43) Yeah. So I'm just a gay Jewish guy who moved to San Francisco from the East Coast in '97, for the same reasons a lot of queer people come out here, to be around my community. I practiced law for many years, was very involved in the LGBTQ community, and ultimately ran for and was elected to the San Francisco Board of Supervisors, which is like our city council, back in 2010, representing my district. I did a lot of work on the board around housing, public transportation, and LGBTQ issues and so forth. Then in 2016, I was elected to the California State Senate, representing all of San Francisco and part of San Mateo County, just to the south of San Francisco. State Senate districts in California are huge; I represent about 1 million people. I've served for a number of years as chair of the Senate Housing Committee, and I currently serve as chair of the Senate Budget Committee. I got appointed budget chair just in time for a massive budget deficit, which is always a lot of fun. In terms of my focus, I'm probably best known for my work on housing policy, trying to remove barriers to building more homes. We've made it way, way too hard to build housing, so we try to make it easier and faster to build new homes, along with a ton of work to make sure we're adequately funding it. I've also done a lot of work around health care access, including single-payer health care, a lot of work around climate and energy issues, criminal justice reform, and so forth. One note here: back in 2018, after the Trump administration got rid of net neutrality protections, and as people may know, net neutrality is the notion that you actually decide where you go on the Internet, and that AT&T, Verizon, Comcast, and other Internet service providers should not be telling us or manipulating us into where to go or not go on the Internet. So I authored, and we were able to pass, California's net neutrality law, which is among the strongest in the country. I believe deeply in democratizing the Internet, in not having concentration of power, and in making sure that the Internet is a democratic place.

Nathan Labenz: (12:00) Thank you. So for this bill, one thing that is striking is that it really does focus on the frontier, and it leaves, I would say, a number of questions, which you would probably agree are important, for another debate at another time. Those would be things like the future of professional licensing, the overall impact on the workforce, the potential for algorithmic bias or discrimination; none of those are really addressed in the bill at all. So the question would be: why? Is that something you think you'll come back to later? And how have you assembled a coalition to support this, given that, from what I understand of the coalition, I would imagine some of those folks would be pretty concerned with those issues as well?

Scott Wiener: (12:44) Yeah. One thing that's important to know, not just about AI policy but about any policy, is that you can't do everything in one bill. And so we have a number of members of the legislature who have been working on AI-related issues, and there are various bills that are moving forward now. Our bill, SB 1047, focuses on safety evaluation and mitigation for the very largest models that are being developed. There is a bill pending in the State Assembly, by one of my Assembly colleagues, relating to algorithmic discrimination. There are a few different bills around watermarking of AI-generated images. There are a couple of bills on AI-generated revenge porn. There are bills around government contracting for AI-related services and government use of AI. And there are others as well. So there are quite a few AI bills moving forward, and SB 1047 is simply one of them, relating to safety in particular.

Nathan Labenz: (13:57) Okay. Do you have an established position on these other bills? Is there any broader context on your AI worldview that you'd wanna share?

Scott Wiener: (14:06) Yeah. Generally, I'm a supporter of artificial intelligence. It has so much potential to make the world a better place, to make people's day-to-day lives better, to address some of the biggest issues of our time around climate or various huge healthcare issues. The potential is limitless in a lot of ways, and I am so excited about what AI can do to make human life better. But we also know that there are impacts that we need to be mindful of. And so I am supportive of addressing algorithmic discrimination. The last thing we need is to have AI make the discrimination that exists in the world even more pronounced, and so that needs to be addressed. I think it's important for people to be able to know if an image is fake or not. There could be so much disinformation around audio and visual fakes, and deepfakes are a real issue; we need to address it. And in terms of impacts on the workforce, it's a really hard issue. Technology always, or often, impacts job classifications in the workforce. But here we're looking at impacts on the workforce at a very large scale. And I know that in a theoretical perfect world, if AI made it possible for less work to exist, the benefits would be spread around to everyone, and everyone could just work less and have more free time. But we know that in this society, in this world, we are very bad about spreading around benefits. And what I don't wanna see happen is that AI generates huge economic benefits that are enjoyed by a few, while most people are made worse off because they no longer have jobs. We need to be intentional about what the future looks like in terms of AI's impact on the workforce. That is a huge mega issue. That's not gonna be solved by one bill. But that's something we need to focus on.

Nathan Labenz: (16:21) Hey. We'll continue our interview in a moment after a word from our sponsors.

Nathan Labenz: (16:21) You said that the goal is to focus on the frontier, and that the goal is to have a light touch. How would you describe the burden that the law would place on frontier developers, and how would you say they have reacted to it so far? It's been striking to me that I haven't really heard, like, an official position from the companies that would seem most likely to be directly impacted in the near term.

Scott Wiener: (16:51) Sure. So I'll describe what the bill does, and I'll also describe after that all the elements that I rejected that some people wanted to put in the bill. My goal here is to be light touch, to not micromanage, not to put undue burdens on people. I don't like passing laws that make people's lives harder just for the sake of it. That's not who I am, and it's not what I ever want. I do want the laws that have my name on them to actually make the world a better place. So the bill applies to models above a size threshold. We use the same size threshold that the Biden executive order on AI safety uses, which is 10 to the 26th flops, and because we don't wanna tie everything to where we are in 2024, we want this to be flexible over time. If you're developing a model of that scale, then before you train the model, and before you finalize it and make it available to other people, you need to perform a safety evaluation under the bill. There are various kinds of these evaluations; red teaming is one of them, but there are others as well. So you have to perform that evaluation. If the evaluation, the testing, shows that there is a significant risk of the model leading to catastrophic harm, as we define catastrophic harm, then you need to take reasonable mitigations to reduce that risk, not to eliminate risk. Eliminating risk in life is usually not possible, and when we try to eliminate all risk, that takes a lot of the joy and innovation out; we'd all be sitting in our basements. Nothing in life is without risk. So we want people to at least try to mitigate or reduce those risks. When we talk about catastrophic harms, how we define it in the bill is that the model will cause or lead to the development or deployment of nuclear, chemical, or biological weapons; that it will cause damage to critical infrastructure, $500 million worth; or that it will trigger some sort of cybercrime or other conduct that would be a crime if a human did it, causing $500 million or more in damage. So we're not talking about small things; we're talking about large-scale destruction and damage. Some people say that threshold should be higher. Some people say, wait, why 500 million, isn't 50 million enough? And that's the type of conversation we could have. And then there are other harms that are sort of equivalent in scale and destruction to those. And when someone engages in the safety evaluation, they then have to certify to the Department of Technology that they've done so. And if one of those harms then occurs from your model, and you have not done the safety testing, or you did the safety testing, it showed that something really bad was likely to happen, and you didn't do anything about it, then the attorney general can sue you. I also just wanna note that there are a number of things that I rejected including in the bill, because I wanted this to be light touch. First, there are some who wanted us to include what we call a private right of action, so that anyone could file a lawsuit against the developer if something were to happen. I did not accept that idea. We limited enforcement to the attorney general of California, who, with limited resources, will focus enforcement on the really bad actors, not just on anyone.
There are people who wanted us to have a licensing requirement, so that you can't train or release any of these large models unless the state gives you permission and gives you a license. I did not wanna go in that direction; I don't want the state to be in the middle of all that. Some people wanted us to ban certain kinds of models. We didn't go there. And some people wanted what's called strict liability. Strict liability means if you cause a harm, you're automatically liable, whether or not you acted reasonably. I rejected that as well. So that's basically what that element of the bill does, and some of the items that we did not include. But I do just wanna address one thing, because there have been some conversations on Twitter saying some very inaccurate things about the bill. First, they said the bill is being fast tracked, which is false. We first put the bill out nine months ago, and it's on the regular slow boat, so there's plenty of opportunity for additional dialogue, for amendments. It's not fast tracked; it's a super transparent process. And the other piece is, some people say, oh, developers are gonna go to prison for this, that you can make an honest mistake and go to prison, which is also false. The only criminal aspect of the bill is about perjury. If you literally, intentionally, maliciously lie to the government, that's perjury, just like it would be if you lied on a driver's license application or any other document you file with the government. If you actually, intentionally lie or misrepresent, that can be charged as perjury. It's not about making an honest mistake, or submitting something that was even somewhat sloppy; that would not rise to that level. So, anyway, I'll stop there.

Nathan Labenz: (22:41) Yeah. So that's an important distinction. So just to make sure I get it in layman's terms: obviously, these frontier models are a very unusual technology in that we don't really know what they can do until they've been created. And then we're still figuring out new things and better ways to prompt GPT-4 a full year after its public release, 18 months after it finished training. So there's this whole process. What you're saying there is that a developer is responsible for doing some amount of reasonable testing to try to get their own arms around what this thing can do. And then it's their job to report, obviously honestly, to the government that we have tested this. Again, I'm a little fuzzy on, like, exactly how much they're supposed to test, how exhaustive that's supposed to be; maybe you can clarify that for me. But if they do honest testing and reporting and they say, hey, we don't think we have a problem here, we can go ahead and distribute this model, and then something happens, they could be sued would be the base case. And only if it was later found that they actually knew better and were covering up information that they had willfully not reported would they be subject to criminal liability. Do I have that right?

Scott Wiener: (23:55) Yes. But the suit lawsuit was only made by the attorneys out of California, not by other random people. So it's very limited, very focused, of course, but by 1 law enforcement agency. Gotcha.

Nathan Labenz: (24:09) Gotcha. Okay. So how have the developers reacted to this? I feel like there's a lot of crosstalk in that dimension as well. Some people are, of course, saying, oh, this is a ploy by the developers to create a regulatory capture environment. Others are saying that it's gonna prevent them from doing what they wanna do. What have your conversations with the developers been like?

Scott Wiener: (24:31) It's been a wide array. We're talking to all sorts of people. And there are a lot of people who, some publicly, but many quietly, say: this is the right approach, thanks for doing this. Largely, they don't wanna be public about it. I'm learning a lot about the politics of Silicon Valley. There are a lot of people who wanna be funded, and so there are a lot of people who don't wanna be public about their support. Some do, some don't. And we've had some very prominent folks in the AI world who are supporting the bill. We've had a lot of meetings, both with the large labs, with the megatech companies, but also with smaller developers. With the large tech companies, there's been this narrative that this is about regulatory capture and trying to box out the small folks, and that is completely untrue; it's not what the bill will in any way do. None of the big tech companies are supporting the bill. We're in conversation with them to try to get feedback, but none of them are supporting the bill. And in fact, TechNet, which is the trade association for all the big tech companies, the Metas and Googles and Amazons, is opposing the bill. The ideas in this bill did not come from big tech companies; they actually came from a lot of technologists and folks who focus on AI safety. We have a number of startups that are supporting the bill, and I think that also speaks volumes, and a lot of folks who are just sort of watching and staying away. We have a particular conversation going with the open source community, folks who believe that open source should be treated differently from other models, and we're in active conversation with folks in that community. I think open source has enormous potential value in terms of spurring innovation and democratizing access to these models. But there are, of course, also risks if we release a very powerful open source model. People could do amazing things with that and could also potentially do problematic things. So that's a continuing conversation, and we are definitely not looking to undermine open source.

Nathan Labenz: (26:37) We definitely are looking to undermine open source. Objections that I've heard. You mentioned that the big tech companies in their network are opposing. 1 of the other objections I've heard has been that, well, all the leading developers are basically doing this already anyway. So what's the point? They've all put out model cards and walked us through. Here's what we're doing for testing. Anthropic's got their responsible scaling policy. OpenAI's answered that. Google is getting there too. Even Meta with their open source approach has done pretty thorough testing. So interested to hear what are they saying that they find a problem with, and how would you answer the opposite end of it says, if all the big guys are doing this, why do we need to make it a law?

Scott Wiener: (27:16) Yeah. Just to be clear, the big tech companies have not come out individually; it's their trade association. We're having conversations with, I think, all of them, with the big folks, with the small folks, with the academics, with advocates, everyone. It's a super open door. And yeah, there is testing. So for the folks who are doing reasonably good testing, this bill will have very limited impact, because they're already doing it, and so it's not an issue for them. There are other labs that we're aware of who are maybe not doing the best testing. Some of this has been discussed on Twitter, so I'm just repeating what I've seen online: some folks have concerns about Meta's safety testing, whether it's really where it needs to be. So there are diverse views about whether all of the large labs are doing testing to the degree that they need to. And I've also found in life, when it comes to any industry, any ecosystem, it's good to have minimum standards and not just trust that every lab is gonna do the right thing. There are plenty of labs that will do the right thing, that are taking safety very seriously, that will go above and beyond on testing and on guardrails and mitigations. But that doesn't mean that they all will. And I think it's good to have a level playing field with a minimum standard. And again, these are light touch safety rules. We're not looking to micromanage, but we wanna make sure that everyone, if you're creating a model of this scale, and these are huge, powerful models, at least does that baseline safety testing.

Nathan Labenz: (29:08) On the open source question in particular, this is obviously one of the hottest topics. The clause around models that have similar power to what 10 to the 26th flops would get you in 2024 has a lot of people projecting out a couple of years and saying, look, we're getting a lot better at this stuff. It's going to become pretty affordable to create something of that power over the next couple of years. And so the threshold, effectively in terms of, like, dollar budget, falls from the rare air today, where we're talking hundreds of millions of dollars and relatively few companies can do it, to maybe a couple years from now, just a couple million dollars. Maybe, like, a ton of companies could do it, or even groups or clubs or whatever could rally that many resources. It seems to me like that is right, unless there's some sort of conceptual breakthrough that would allow people to really definitively say that their stuff is safe. Is that sort of a long term bullet you're willing to bite as of right now, where you basically would just say, hey, look, if there's no way to say that this stuff is safe, then you really just can't open source it into the public? Is that kind of your default position, with the hope that there is a breakthrough?

Scott Wiener: (30:17) We know that no one can predict the future, but we know that right now, and for the future that I think we can see, it's gonna be incredibly expensive to develop models of this scale, this magnitude. And so we're talking about a significant undertaking to develop models of this scale, and we think it's reasonable to require a safety test, again, not to eliminate risk, but to at least try to mitigate the risk. With respect to open source in particular, we want to protect open source. I wanna be very, very clear about that. We're not looking to eliminate open source. That's not what I wanna do; it's not what we're going to do. We're working right now, as we speak, in good faith with a lot of really smart people in the open source space to try to figure out how we can address some of their concerns in a way that still promotes safety in the open source context. And we know that if you take a very powerful open source model and you're building on that, there's a good chance that what you're building on top of it may be very small in terms of turning that model into some sort of real world application. And so we do think that the model developer is best positioned to do that safety analysis. However, we are really actively receiving feedback from open source developers and experts to make sure that, again, I'm not looking to create any kind of structure that people can't meet. So we're gonna continue to work on the open source issue and continue to have an open door on it.

Nathan Labenz: (32:05) One of the best ideas I've heard there would be the idea of maybe getting a little more precise on how much more you can do on an open source model before the responsibility becomes yours. Right now, if I read the legislation correctly, it's like: an open source developer puts out a model, anybody does downstream additional training or whatever on it, that's still a derivative model, and so the original open source developer retains the responsibility. Some of the ideas I've heard have been like, if you do 10% additional compute, then maybe you should now own the ball, and the original open source developer can be off the hook. Is that the kind of thing that you're entertaining now?

Scott Wiener: (32:44) No. I think that's a fair conversation, and we're open to exploring that and other ideas about with the open source folks. Okay.

Nathan Labenz: (32:55) Hey, we'll continue our interview in a moment after a word from our sponsors.

Nathan Labenz: (32:55) One other idea that's just, like, super simple: what if we just made it illegal to distribute AIs that do certain things? What's the fault in that idea?

Scott Wiener: (33:10) Yeah. There's certainly a case to be made that there are certain types of activities that should not be allowed. I don't think we want AIs to be creating biological weapons, for example; that's probably the most commonly discussed idea. But we don't really know, first of all, what the capabilities of these models are gonna be in 2 years, 5 years, 10 years. Even if we decided there are certain things we wanna ban, we may not even know what it is we would want to ban. And so we decided in this bill, rather than saying what you can or can't develop in terms of capabilities, to require safety evaluation. And, again, I'm not gonna pretend that this bill is gonna solve all the potential problems that could ever result from AI models. But I think it at least takes that step of requiring some introspection and evaluation, and then, if you discover, oh wait, this is gonna generate biological weapons, trying to mitigate that. And the conversation about whether to restrict or ban particular AI capabilities, that will be an ongoing conversation, but it's certainly a fair conversation to have.

Nathan Labenz: (34:21) I was surprised also that there's no requirement for an AI to identify itself when it goes out into the world. As Yuval Noah Harari has put it, the first rule should be that AI must identify itself as AI. Is that just something you think will be handled by other legislation?

Scott Wiener: (34:33) I don't have a list of AI bills in front of me, and I think that might be a bill pending this year; I can't remember. But that is certainly an active topic of conversation. I believe there's a bill on that this year, but I could be wrong about that. It's certainly an important issue.

Nathan Labenz: (34:58) I think one of the big ideas that people really worry about in AI broadly is the idea of an arms race, between countries, certainly, but even between firms within the United States. The bill obviously insists on a certain amount of testing. But in a world where it's really easy to switch from one language model to another, there is, I think, risk of a sort of winner-take-all dynamic between the companies, where, you know, giving the testing kind of the shortest possible window that they could give it starts to maybe become attractive. I think we might even see this, like, in the next couple weeks here. Google's gonna have their event, they're gonna release something, and then it's expected that, like, OpenAI is gonna try to come back over the top and steal their thunder. And so you can imagine inside the labs, the conversation might be like, look, we're launching at this date when we can most effectively compete with our rival, and you gotta have your testing done by then. And whatever we can do by then, like, we'll call it reasonable. Do you worry about that at all? And do you have any ideas, or has there been any conversation about things that legislation could do to try to mitigate this sort of winner-take-all AI arms race dynamic that seems like it is starting to take shape?

Scott Wiener: (36:13) You mean in terms of people cutting corners?

Nathan Labenz: (36:16) Yeah. Exactly.

Scott Wiener: (36:18) Ultimately, under this bill, if someone cuts corners on safety testing, does really shoddy testing that misses problems, or just sort of continually pretends not to see them, they're still gonna have to file a certification. And then, if something terrible happens, that could lead to liability; the attorney general could file a lawsuit. Again, this is light touch. We're not micromanaging; we're just setting broad parameters. And people can certainly violate the law, act in shoddy, irresponsible ways if they want to, but then there's going to be a risk to them. Maybe they'll get away with it, but that's how life often works: there are people who try to do problematic things and get away with it, and sometimes they don't. And when we're talking about catastrophic harms, developers will need to be mindful. I think, ultimately, many developers wanna do the right thing; they don't wanna release a model that's gonna cause catastrophic harm. And so I think that we will have good compliance.

Nathan Labenz: (37:32) How would you describe the penalties? Because I see in the bill, like, a 10 to 30% sort of fine, 10 to 30% of what it costs to train the model, I believe, and then I think there's, like, an additional possibility for kind of punitive damages or whatever. But I could imagine a situation where you're at Google, for example, and you're like, man, OpenAI has eaten our lunch. How much did this thing cost to train again? Oh, it cost $500 million to train? Okay, that means we could be on the hook for $150 million. It's worth it, right? Do you think that the financial penalties, as probabilistic and deferred into the future as they are, are enough to be really persuasive to these companies?

Scott Wiener: (38:13) I think so, and especially punitive damages. Punitive damages require a very high threshold of malice, and if punitive damages are triggered, those are typically based on your net worth. And so punitive damages brought against me, as a public servant who doesn't have a lot of money, are going to look very different than punitive damages against a trillion dollar corporation, or whatever the size is. And so I think the incentives are really there in the bill.

Nathan Labenz: (38:46) I started my career very briefly in the mortgage space, and it just happened to be at the time when the mortgage industry blew up the American economy. And then it became clear that while there were a lot of people nominally watching after the behavior of the lenders and all the different steps in the value chain, that had eroded to be more of a rubber stamp. And that is a big part of what the safety community most worries about. It's: okay, these people are gonna be under pressure. It's such an easy change to switch from one API call to the other; I can switch from OpenAI to Google to Anthropic literally in, you know, one line of code. So it does create these sort of, you-have-the-best-model, you-win-the-day dynamics. And then there's this sense of: okay, we're racing. Maybe we wanna do this, but do we really wanna do it? Do we really wanna find everything we could find? Do we really wanna take all the time that we could take? It seems like there is likely to be a space where it would be plausible for the developers to have a reasonable defense: look at all the things we did, this is, like, very reasonable. But then somebody who's really in the know would say, think about all the things you didn't do, and here's all the things that I really think you should have done and you didn't, but it's still "reasonable." This leads me to, I think, one of the most common suggestions that I heard in preparing for this, which is a more forceful requirement for truly independent third party testing. Has there been any conversation around that sort of thing: who provides that testing, under what circumstances, how long they have, exactly what sort of access they have? There are a lot of particulars there. But I do worry, along with a lot of these safety people, that the sort of standard of reasonableness could be met, and yet we might want a higher standard if we're really talking about catastrophic or, as you well know, the safety community even worries about extinction-level risks. Is reasonable enough, or should we sort of say, hey, you have to allow in these sort of red-team testers to go in and dig deeper than you might dig on your own?

Scott Wiener: (40:50) In the bill, you know, we are walking a fine line, where we wanna have good safety protocols without being too intrusive. Right? There are already people who are criticizing us for being too intrusive, and I don't think we are at all; other people are saying it should be more intrusive. As you know, probably better than I do, there are some philosophical divides within the AI world. Right now, on this issue, the bill calls for developers to use third party testing, quote, when appropriate. I understand that is a little vague, and it's certainly an issue we'll continue to take a look at. We're open to the possibility of a greater role for third party testing, but we also wanna know more about who those third party testers would be. And I think, as you alluded to earlier, is it just some big accounting firm that comes in and checks the box, or is it real? Our goal here is just to have good testing, and we're open to different ways of approaching that. Again, and I know you're gonna get sick of me using this phrase, as long as it is light touch; our goal is not to micromanage. And there are people who would like us to micromanage. Right? There are people who want you to have to get permission from the government and get a license before you train or in any way release a model of this scale. And I respect that point of view, but that's not where we decided to go.

Nathan Labenz: (42:22) Yeah. I do think it's a tough one to figure out exactly how you would specify who the third parties should be and, again, exactly what kind of access they should have. I had a personal experience as a member of the GPT-4 red team, and I was really taken aback by what a leap it was; the public hadn't really seen anything like it. My feeling at the time was that the testing that they were doing appeared to be inadequate, and I became sort of lost in the wilderness, like, under NDA on the one hand, not trying to blow up their spot on the other hand, but, like, feeling like I was one of just a couple people in the world that had the time to devote. And I literally dropped everything else I was doing; I worked on it full time for a while. And I felt like, where do I go? One of the problems that I had was that I was totally at the pleasure of OpenAI. I do think one thing that would be really useful would be some protections for the third party testers, some, perhaps, like, accreditation or some status that they need to achieve, but also some ability for them to report their findings without being at the risk of immediately being cut off. Because the access dance that they have with the developers is a really tricky one.

Scott Wiener: (43:35) Yeah. In a totally different context, I authored a law that was passed last year. It was a law to require large corporations to disclose their carbon emissions, including from their supply chain. It was a huge fight with a lot of the large business associations, but we got it passed and signed by the governor, and the law puts California at the forefront. In that law, we require these disclosures, and the disclosures have to be audited, and the California Air Resources Board sort of certifies the auditors that are qualified and trusted to do that auditing. It's a different context, not just in the subject matter, but also because carbon emissions are not necessarily like trade secrets and whatnot; I guess they could be, but they are typically not particularly sensitive in terms of some sort of confidential part of a business model or a technology. Here, obviously, there's a much higher sensitivity, in terms of this is like the core product of the business, and there are a lot of confidentiality issues as well. So it's a little touchier. But I think this is absolutely worth continuing the conversation about how to make sure the testing is as solid as possible.

Nathan Labenz: (44:50) What about a similar transparency requirement for model development as well? What if you were to say, you have to just disclose how big your training runs are gonna be? If you're gonna do 2 times 10 to the 26th, okay, but you gotta put it on record. That would maybe help address these sort of race dynamics, where the different developers don't know just how fast the other companies are going, how big. And you might also imagine extending that to things that people have seen models do. I do think there is definitely a place for trade secrets; I don't think total transparency makes sense, if only because then we're also sharing all our secrets with all the governments of the world, which we probably don't wanna do. But I've often thought, does it really harm the company if somebody says, hey, I saw an AI do X, and it freaked me out? Because I was under an NDA where I couldn't even disclose that sort of thing. And people have been fired, of course, for saying that they think their AIs are conscious and whatnot. I don't necessarily think they are, but I also don't rule it out. It certainly seems like something we should be discussing. I wonder what you think about these sorts of strategic transparency measures, either requirements on the company or sort of opportunities for very careful, narrow, but hopefully illuminating disclosure by employees or testers, that could help everybody at least get a sense for kind of what's going on across the environment?

Scott Wiener: (46:17) Is that possible? Yes. And is that worthy of conversation? Yes. In this bill, we're really focused on just creating, which does not exist now, the baseline requirement to do safety testing, which a number of the labs are already doing, and some are probably doing it better, more consistently, and more thoroughly than others. And we wanna make sure that labs that are creating these huge models are doing that baseline testing. Are there other things that may be reasonable to put on top of that? Potentially, but we're really focused on: let's just do this, take this important step that is already gonna be controversial. Right? And we're already seeing that controversy. So that's what we're focused on.

Nathan Labenz: (47:04) Yeah, makes sense. So on the core testing concept, I hear you. The bottom line seems to be: we wanna make sure that this testing is happening, we wanna make sure that it's good quality testing, and we wanna create the right incentives for that. You've expressed openness several times to discussion about potentially a bigger role for third parties, or a more kind of forceful insistence on that. Where should people come to have that conversation with you, particularly those that are motivated to do that testing, that feel like they have something to contribute, but are currently boxed out of contributing in the way that they would like to?

Scott Wiener: (47:40) Yeah. So people can reach out to my office; we're super easy to find. They can also message me on Twitter, Scott underscore Wiener, W-I-E-N-E-R. You can message me there or reach out to the office. We really welcome feedback and ideas. And people have been very gracious and helpful in pointing out things in the bill that could be better, or problems that we may not have anticipated, and we very much welcome that feedback. I can never guarantee that everyone's going to be in complete agreement with everything in any bill that I put out there, but I do really value feedback to make the bill as good as it can be. There's one philosophy of legislating that says, if you're opposing my bill, then I'm not gonna listen to anything you have to say. I don't subscribe to that philosophy. My view is that even if you are fighting me on a bill, if you then come forward and say, hey, there's something you missed and this is gonna play out in a problematic way, there are times when I'll say, oh my god, yeah, you're totally right, thank you for bringing that to my attention, and we'll change the bill, even though the person's gonna continue to oppose it. Because my take is, I just want my bills to be as good as they can be.

Nathan Labenz: (48:53) I appreciate what a complicated situation this is and how much uncertainty there is on so many of the key questions. And I know you're balancing a lot in terms of different constituencies and trying to create something that has a positive effect and still has a light touch. I definitely appreciate the fact that you're focused on these frontier, tail risks, sometimes dismissed as speculative, but I don't think wisely dismissed as speculative. I appreciate the work that you're putting into this and the openness that you've expressed to further input as well. Any other closing thoughts from you?

Scott Wiener: (49:27) I appreciate that. First of all, I'm excited about what AI has to offer, and I appreciate the engagement that we're seeing. One thing I wasn't asked, and my only request, and this is true in life generally, not just AI policy or our bill: if you see something on Twitter or elsewhere online about the bill, just please don't assume that's what the bill does. Because we've seen some really inaccurate information about the bill on Twitter, in particular, over the last weeks. I wouldn't say it's all malicious; some people just see things or hear things and then post about them. And feel free to reach out to us and ask, because we're happy to answer questions.

Nathan Labenz: (50:12) California State Senator Scott Wiener, sponsor of SB 1047, thank you for being part of the Cognitive Revolution. It is both energizing and enlightening to hear why people listen and learn what they value about the show, so please don't hesitate to reach out via email at tcr@turpentine.co, or you can DM me on the social media platform of your choice.
