The RAISE Act: Minimum Standards for Frontier AI Development, with NY Assembly Member Alex Bores
In this episode, New York State Assembly Member Alex Bores discusses the RAISE Act, a proposed bill aimed at regulating frontier AI models with basic safety protocols.
In this episode, New York State Assembly Member Alex Bores discusses the RAISE Act, a proposed bill aimed at regulating frontier AI models with basic safety protocols. He explains his background in technology, his motivations for the bill, and the legislative process. He emphasizes the importance of having clear safety protocols, third-party audits, and whistleblower protections for AI developers. He explains the intricacies of the bill, including its focus on large developers and frontier models, and addresses potential objections related to regulatory capture and state-level legislation. Alex encourages public and industry input to refine and support the bill, aiming for a balanced approach to AI regulation that keeps both innovation and public safety in mind.
Link to the RAISE bill: https://legislation.nysenate.g...
Link to support Raise bill: https://win.newmode.net/aisafe...
Link to support AI Transparency Legislation: https://docs.google.com/forms/...
Upcoming Major AI Events Featuring Nathan Labenz as a Keynote Speaker
https://www.imagineai.live/
https://adapta.org/adapta-summ...
https://itrevolution.com/produ...
SPONSORS:
ElevenLabs: ElevenLabs gives your app a natural voice. Pick from 5,000+ voices in 31 languages, or clone your own, and launch lifelike agents for support, scheduling, learning, and games. Full server and client SDKs, dynamic tools, and monitoring keep you in control. Start free at https://elevenlabs.io/cognitiv...
Oracle Cloud Infrastructure (OCI): Oracle Cloud Infrastructure offers next-generation cloud solutions that cut costs and boost performance. With OCI, you can run AI projects and applications faster and more securely for less. New U.S. customers can save 50% on compute, 70% on storage, and 80% on networking by switching to OCI before May 31, 2024. See if you qualify at https://oracle.com/cognitive
The AGNTCY: The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at https://agntcy.org/?utm_campai...
Shopify: Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive
NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive
PRODUCED BY:
https://aipodcast.ing
CHAPTERS:
(00:00) Teaser
(04:58) Introduction to the RAISE Act
(05:55) Alex Bores' Background in Technology and Politics
(08:22) Legislative Achievements and Focus Areas
(12:23) Constituents' Concerns and AI (Part 1)
(17:47) Sponsors: ElevenLabs | Oracle Cloud Infrastructure (OCI)
(20:14) AI Legislation and the RAISE Act
(25:52) Challenges and Future of AI Regulation (Part 1)
(38:44) Sponsors: The AGNTCY | Shopify | NetSuite
(43:06) Challenges and Future of AI Regulation (Part 2)
(56:58) Understanding Knowledge Distillation Clause
(57:34) Challenges of Legislation
(58:44) Safety Protocols for Frontier Developers
(59:16) Addressing Sophisticated State Actors
(01:00:07) Epistemology of Risk Analysis
(01:01:37) Reasonable Person Standard in Law
(01:02:15) Balancing Specificity and Flexibility in Legislation
(01:03:18) Grading Their Own Homework
(01:04:04) Open Source vs. Closed Source Debate
(01:09:40) Thresholds for Critical Harm
(01:24:04) Whistleblower Protections
(01:29:07) Internal Deployments and Their Risks
(01:31:20) State vs. Federal Regulation
(01:34:34) Legislative Process and Collaboration
(01:41:47) Call to Action for Public Involvement
(01:43:49) Outro
SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathan...
Youtube: https://youtube.com/@Cognitive...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...
Full Transcript
Nathan Labenz: (0:00) Hello, and welcome back to the Cognitive Revolution. Today, my guest is Alex Bores, New York State Assembly Member representing New York State Assembly District 73 on the East Side of Manhattan and sponsor of the RAISE Act, a bill designed to set a minimum standard for safety practices among large AI developers. In the generally polarized political environment that we take for granted in 2025, it is a striking fact that, while poll after poll shows that significant majorities of voters across both American parties and internationally want more regulation of AI, with far more voters worried that the government won't go far enough than that it will go too far, the United States still has no meaningful laws covering foundation models or frontier AI development. In part, this reflects the fact that while people generally agree on this issue, it's not a top priority for many. In part, it's the result of a very healthy fear among legislators that they don't understand this fast-moving technology wave well enough to regulate it effectively and thus might end up doing more harm than good. And in part, it stems from the fear that anything that slows US AI development down could allow Chinese AI developers to get ahead and ultimately win whatever AI race we ultimately find ourselves running. These are real and important concerns, and I've spent most of my adult life arguing against premature or heavy-handed regulation that might inadvertently deny us the benefits of breakthrough technologies. But nonetheless, it seems to me that, especially considering the breathtaking pace of AI capabilities advances and the fact that even among Turing Award winners, forecasts range from AI-enabled utopia to AI-caused human extinction, a functioning democracy would be responsive enough to public concern to put at least some minimal standards in place now, both to reduce catastrophic risks and hopefully to avoid a future crisis, real or perceived, that could lead to knee-jerk and ultimately counterproductive decisions. With that in mind, I think Assembly Member Bores, who notably has a master's degree in computer science and spent a decade in the tech industry, including a number of years at Palantir, is perhaps the tech-savviest and friendliest legislator the industry could hope for. And the RAISE Act, which targets large AI companies and imposes relatively modest requirements around the development and publication of a safety plan, audits to assure the safety plans are followed, and whistleblower protections to alert the public if they're not, is approximately the least burdensome regulation we could realistically expect to see passed. And, it should be said, it is very much in line with voluntary commitments that frontier model developers have previously made. Nevertheless, in this conversation, we get deep into the weeds of the various definitions the bill uses and the requirements it imposes. I act as a sort of red teamer of the bill; Assembly Member Bores clarifies and defends key provisions and also explains the process he's already gone through to work and compromise with industry and also to avoid a situation in which lots of different states create undue friction by passing their own distinct regulatory frameworks. Importantly, everything we discuss would only apply to a high single digit or perhaps low double digit number of super well resourced companies that are working at the frontier of AI capabilities.
And these companies would still have wide latitude to design their own safety plans so long as they take reasonable care to minimize risks that could cause 100 or more human deaths or greater than $1,000,000,000 in damages via either chemical, biological, radiological, or nuclear mechanisms or what I would call automated crime. While some will no doubt raise additional good faith objections, I found Assembly Member Bores' defense of the bill's neutral stance with respect to open source quite compelling. And as you'll hear, I hope, if anything, that this bill spurs open source champions to invest heavily in new safety techniques so that we can continue to enjoy open source frontier AI without dramatically elevating the risk of engineered pandemics or other AI-enabled disasters. In any case, this is just the first of a series of episodes on different AI policy proposals that we'll be bringing you this summer. So I look forward to exploring a broad range of perspectives, and I will continue to watch this bill as it evolves through the legislative process. As always, if you're finding value in the show, we'd appreciate it if you take a moment to share it with friends, write a review on Apple Podcasts or Spotify, or leave us a comment on YouTube. Of course, we welcome your feedback either via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. Finally, a quick reminder: I'll be speaking at Imagine AI Live, coming up in May in Las Vegas, the Adapta Summit in August in Sao Paulo, Brazil, and the Enterprise Tech Leadership Summit in September, again in Las Vegas. If you'll be at any of these events, definitely send me a note, and let's meet up in person. For now, I hope you enjoy this deep dive into the RAISE Act, a bill meant to raise the floor for frontier AI development safety practices, with New York Assembly Member and sponsor, Alex Bores. New York Assembly Member Alex Bores, sponsor of the RAISE Act, welcome to the Cognitive Revolution.
Alex Bores: (5:03) Thanks for having me.
Nathan Labenz: (5:04) I'm excited for this conversation. You have waded into what might seem at first like no big deal, just a little light technology regulation, waters which, as you may find if you haven't already, bring out a lot of strong feelings in people. But I do applaud you for taking on the challenge, and I'm excited to get into your perspective and motivations and what you're hearing from people, and obviously the proposed legislation itself. And I really appreciate you taking the time to do this.
Alex Bores: (5:34) I'm really excited to be here. I know you have a lot of guests from all over the AI field, which I've enjoyed listening to, but not as many elected officials, so excited to dive in.
Nathan Labenz: (5:43) Yeah. You're on a short list. Speaking of short lists, 1 thing I like to do, especially for people that are coming into the AI world from other backgrounds, is at least give a little bit of kind of, you know, credentials or context on you. As I was doing my homework, I noticed that you are the only member of, I guess, the Democratic Party in the New York state government that has a degree in computer science. So maybe just give us a little bit of your personal background and relationship with technology. I think that'll be helpful.
Alex Bores: (6:10) Absolutely. I was the first with a degree in computer science. I'm happy to say I'm no longer the only 1, though it is quite limited. But, yeah, my background is that I worked in tech for nearly a decade before I ran for office. I was at Palantir for almost 5 years. I joined as a data scientist, rose up to lead a large portion of the government business. I joined a couple of startups after that. During that time, I got a master's degree in computer science with a specialization in machine learning. And then this seat opened up in 2022. And I had a lot of helpful conversations with friends. I had always been downstream of policy, often trying to fix it with tech. And this was an opportunity to go upstream and actually design policy the right way. And so 1 of those conversations I had was focused on: listen, this is a thesis as to how you can have an impact, and run. You don't know if you're going to win. And if you do win, in 2 years, in 4 years, if you're not effective, if you're not enjoying it, you can quit. But you can't in 2 to 4 years say, now I'm gonna run for the open seat, right? That happens when it happens. And so I threw my hat in the ring. It was a contested primary, and I'm still friends with everyone who ran, we all ran against each other, but I ended up being victorious there. I'm now starting my second term, and terms are only 2 years, so I'm just starting my third year in the legislature. But so far I found it to be a place where you very much can be effective. And so I'm enjoying it.
Nathan Labenz: (7:43) Yeah. That's cool. Definitely more people with technology backgrounds in government seems good. And also, frankly, just more people who have no fear of a post elected office life, I think, would be great. Too many are kind of like, what would I do if I didn't have this seat? Right? And that's not a great position for the public to be in.
Alex Bores: (8:03) No. It's a very dangerous debate. You have to be not too attached to the seat or the job. Absolutely.
Nathan Labenz: (8:08) Yeah. Gotta be willing to, that's all I ask for from our elected officials, a willingness to sacrifice your political career when the occasion calls for it. And, turns out, that is kind of a lot to ask. So how about just a little bit more detail on, like, some of the other technology related things that you've done in your time there? I noticed that you had pushed for the state to adopt cloud computing. Yep. I also caught the push for a land value tax, which, I'm coming to you from Detroit, Michigan. We have a lot of empty lots, and it's probably much less of a problem in New York City than it is here, but we have a lot of empty lots and people that are just free riding on others' investment and, you know, waiting for their land to appreciate since it is not taxed in that way. So maybe just give a little bit more context on some of the things that you have pushed so far.
Alex Bores: (8:52) Absolutely. When you're a legislator, you end up working on a wide variety of things because your constituents care about a wide variety of things. So, you know, all of us have our specialization and our knowledge that we bring into the legislature that's perhaps more unique, and mine is around tech, so I've done a lot there. But certainly the concerns of my constituents vary widely, and so I work on a lot of things. Within tech, as you mentioned, I've encouraged the adoption of cloud computing within government so that we can deliver services quicker. I've helped to strengthen the protections for tech workers, really for all workers, but it was contracts that are more specific to the tech industry, where companies would say they own any IP you develop while you're employed, even if it's not on company time or related to anything in the company, and that would just chill startups. And I actually partnered with the tech industry to pass that because it was based on a regulation that had already passed in California nearly a decade before. But 1 of the things people want the least is a bunch of varied regulation across different states. And so I was like, I'm going to copy the California 1 exactly. And tech was like, great! Even though it's limiting us, the fact that it's a copy, that it won't be additional work, meant they ended up supporting it. I've also done a number of bills this session around AI beyond the RAISE Act. I'm working on ensuring companies don't fall into legal liability when they red team their own algorithms. We actually want to encourage safety. And so while we are strengthening a lot of provisions around preventing CSAM, child sexual abuse material, we also want to make sure there's a legal liability shield for red teaming, for trying to stop your algorithm from doing that. I'm encouraging the use of industry-developed standards like C2PA, which is metadata that helps to establish provenance on an image or a sound so that you know what's real and what's not. But beyond tech, as you mentioned, I do a lot on housing. I have a bill to enable a pilot on land value taxes. I have something specific to New York City. So you mentioned Detroit, and maybe New York City doesn't have as many vacant lots. We don't have as many, but the ones we have are valued at an incredible amount, and the loss of tax revenue is substantial. If you just looked at the vacant land within New York City and were to tax it at its normal market value instead of this discount, and I don't wanna get into all of New York City property law, but no property is taxed at its actual market value. If you were to just tax vacant land at its market value, that difference would be another $800,000,000 a year for the New York City budget. So I have a bill that would shift that around to make more uses for it. And then I do a lot around public safety. 1 of the things is that our trials in New York State are very backlogged. They're extremely delayed, and there's many, many reasons for that, the dumbest of which is that we don't have enough judges. And I say that's the dumbest because that should be an easy thing to fix. You just create more judges, and 2 years ago I did. I passed a bill the governor signed that created 20 new judges throughout New York State, but I couldn't create any new ones in Manhattan or in the Bronx or in the capital region around Albany because of a limit in the state constitution that dates back to 1846.
So 1 of my other bills is a constitutional amendment to get rid of that limit, to allow us to have more judges and speed up trials. And then there are another 60 or so odd bills on my website that anyone can take a look at, and I always love feedback on them.
Nathan Labenz: (12:21) Cool. That's great. I appreciate the introduction. You mentioned your constituents and I thought it would be helpful also just to locate you in, geography and sort of get a sense for the people that you're representing. And then I wanna ask, you know, kind of to what degree are they thinking and talking to you about AI? You know, where does it rank on their priority concerns? So maybe just take us quick through, like, the geography, the sort of, you know, profile of the people that you're representing, and then what, if anything, are you hearing from them about AI? Like, is this something that they're pushing you to act on or is this something that you are doing out of intrinsic motivation while they, you know, are mostly concerned with other things?
Alex Bores: (12:56) I represent part of the neighborhood where I grew up. So my district is in Manhattan, much of the East Side of Manhattan. It's part of the Upper East Side and Midtown East. So for those in New York, 34th to 93rd Street, 2nd or 3rd Avenue to 5th for most of the district, plus Sutton Place. So in the 50s it's the whole East Side. It is a highly educated district. It is also the wealthiest district in New York State, but even within that, there's many people facing challenges. 40% of the renters within my district are rent burdened. They're spending more than a third of their income on rent. And so it's a district that has a lot of sort of pride in its education and its schools. It's in District 2. I happen to represent my elementary school and right near my high school and my middle school. They're both across the street from my district. It's an area I know well. It's sometimes neighbors, and it's been fun getting to represent and now meet, like, parents of the friends I grew up with. And they, like any district, have a wide variety of concerns. I think if you were to poll them, AI probably wouldn't be towards the top. But, you know, they're worried about the cost of living. They're worried about public safety. They're worried about the schools. They're worried about other things that everyone is worried about. When we get into a conversation and I start mentioning my background, my expertise in tech, the usual response is like, oh, thank you. Because, like, I'm terrified. And I don't know what we should be doing, but I'm glad there's someone there thinking about it. There's this sort of latent, fear might be too strong a word, although for some I would say fear, but certainly unease and a feeling like something is coming and they don't really know what they're supposed to be doing about it.
Nathan Labenz: (14:40) Yeah. Is there any more, like, shape to it than that when people get on the topic of AI? I mean, we entertain, you know, the full range of risks on this program. And I would say your bill is, as we'll get into here, it's more consistent with my approach, which is, like, I want to see us have kind of a lot of benefits, a lot of deployment, a lot of use in places like education, even though that's going to be fraught and we're going to have to figure out a lot of things. I'm quite convinced that, like, an AI tutor for every kid is a part of a winning future. Then there's, you know, all kinds of things around, like, privacy, and, you know, New York is probably the place with the most security cameras on the street of any place in the country. Maybe DC has more. I don't know. But that's something I could imagine people talking about, you know, surveillance and just kind of who's watching who all the time. Where are people on this? Or, you know, what sort of mix of AI-specific concerns do you hear about?
Alex Bores: (15:35) All of the above. And I would say at the start, I largely agree with you. I think there is so much capability out there now that is unevenly distributed, and we could be making so much better use of the existing tools in ways that government helps to serve people, in education, as you said, in a variety of different fields. And also, I am worried about what the future holds if we don't put guardrails on the research. And that sort of short term bullish, long term kind of wary view is 1 I share with you, but I don't know if it's as common in many places. All of those concerns matter to my constituents. I hear a lot about workplace displacement. I hear a lot about privacy and surveillance. New York is now in the minority of states that doesn't have a comprehensive privacy law. We've been working on it for a few years, so that certainly comes up as a bedrock that we are delayed on. But I tend to think of AI concerns using, like, 3 binary questions. The first is pessimistic or optimistic. The second is short term or long term. And the third, when you think about bills, is it use case specific or is it general to the model? And I think you can find concerns, and therefore you can find legislation, in all of those categories. And, you know, I have bills in many of those categories. Most of the legislation my colleagues are working on tends to be short term pessimistic and either use case or general, right? And by short term, I don't mean that it'll expire. I just mean that it's dealing with harms that are definitely already here. The chance of discrimination, the chance of privacy violations, right? Things that are in use today. Whereas long term is things that maybe they're not here yet, maybe they are, but they're just starting to, and we're thinking about where it'll go, whether that's broader societal risks, etcetera. And that's where the RAISE Act is focused, not because I think that's the only thing that matters or the only thing we have to do, but just that is a place that not as many people are focused, and I do think there's important steps we have to take.
Nathan Labenz: (17:42) Hey. We'll continue our interview in a moment after a word from our sponsors. Let's talk about ElevenLabs, the company behind the AI voices that don't sound like AI voices. For developers building conversational experiences, voice quality makes all the difference. Their massive library includes over 5,000 options across 31 languages, giving you unprecedented creative flexibility. I've been an ElevenLabs customer at Waymark for more than a year now, and we've even used an ElevenLabs-powered clone of my voice to read episode intros when I'm traveling. But to show you how realistic their latest AI voices are, I'll let Mark, an AI voice from ElevenLabs, share the rest.
Ad Read: (18:24) ElevenLabs is powering human-like voice agents for customer support, scheduling, education, and gaming. With server and client side tools, knowledge bases, dynamic agent instantiation and overrides, plus built in monitoring, it's the complete developer toolkit. Experience what incredibly natural AI voices can do for your applications. Get started for free at elevenlabs.io/cognitive-revolution.
Nathan Labenz: (19:00) In business, they say you can have better, cheaper, or faster, but you only get to pick 2. But what if you could have all 3 at the same time? That's exactly what Cohere, Thomson Reuters, and Specialized Bikes have since they upgraded to the next generation of the cloud, Oracle Cloud Infrastructure. OCI is the blazing fast platform for your infrastructure, database, application development, and AI needs, where you can run any workload in a high availability, consistently high performance environment, and spend less than you would with other clouds. How is it faster? OCI's block storage gives you more operations per second. Cheaper? OCI costs up to 50% less for compute, 70% less for storage, and 80% less for networking. And better? In test after test, OCI customers report lower latency and higher bandwidth versus other clouds. This is the cloud built for AI and all of your biggest workloads. Right now, with 0 commitment, try OCI for free. Head to oracle.com/cognitive. That's oracle.com/cognitive.
Nathan Labenz: (20:10) Are there any other things that you're pushing legislation on that might be described as controversial? I mean, the 2 that you mentioned previously, around, like, encouraging red teaming for things like CSAM, seem like things most people would be quick to sign onto. Is this a situation where, you know, we sort of see the 10% of the iceberg that pops up and actually gets public debate, and then, like, 90% of things are generally pretty smooth sailing, people agree?
Alex Bores: (20:41) Well, first of all, I don't think any of my bills are controversial. I think they're all common sense. But, you know, you don't always control what the outside reaction is. You'd be shocked at the level of pushback I've gotten on encouraging C2PA, which is, again, a free, open industry standard that industry itself developed. And then I say, oh, this is great, we should encourage it. And I get pushback of, whoa, no, don't encourage the thing we developed. So, yeah, but I think what you said at the end there is quite insightful and something that people forget. Most of the work that any government does is non controversial. Most of the bills we pass are unanimous or nearly unanimous, and they're just about making government work. You know, we'll do probably 800 bills total between the Assembly and the Senate that we both pass and send to the governor's desk. She'll end up signing 600 or so a year. And, you know, of that 800, that 600, you're going to maybe hear about 20, maybe 50. Most of the work is just making government work, and this is the thing I really remind people, because we're at a point where belief in government is so low, and partly that's because all they see on the news are fights. All they see is the drama, but that's because there's an incentive to cover the fights and the drama. There's not as much incentive to talk about funding water infrastructure so that everyone has clean water. Yeah, of course we should do that. Right? But that's most of the work of government and most of what we end up spending time on.
Nathan Labenz: (22:13) Yeah. Are you getting any push for sort of protecting various industries? I mean, this is something honestly that I have expected. Okay. Yeah. So tell me more about that. Because my perspective was, like, 2 years ago, I was like, wow, we are gonna get into just brutal, bitter fights about where AI is allowed to be deployed and who's gonna have what sign off authority and whatever. And it's been slower to develop than I would have guessed. So I was kind of expecting you to say not yet much, but now you're saying all the time. So tell me more about that.
Alex Bores: (22:45) Yeah. Well well, sorry. In what ways do you think it's been slower? What were you expecting to say?
Nathan Labenz: (22:49) I would have expected the medical establishment to have a, like, strongly unified front by this point that AI doctors must be confined to some very narrow box and, like, not available to the public directly and so on and so forth. And still today, you can go on to ChatGPT, Claude, and Gemini, which I did this week for a little thing on my kid's eye that I was trying to figure out. The tip to the user is tell them you're preparing for a conversation with your doctor, and that disarms the I'm-not-your-doctor, you know, canned routine. Yeah. With that, you can basically engage, right? And it's extremely valuable. And I would say, in my case, it probably did displace a trip to the doctor, which is 1 of the things, obviously, that any professional guild might fear. So, yeah, that's, I mean, maybe that still will happen, but I had kind of expected it already, and the fact that I can just still go to ChatGPT and ask my questions is, like, honestly kinda surprising to the me of 2 years ago.
Alex Bores: (23:50) No, it's a really good flag. I wonder if partially that's because nurses and doctors and so much of the health profession are licensed, and so there's already some built in protection. 1 of the tongue in cheek things I say, although there's some truth to it, is that people in government, elected officials, are sort of the least qualified to be focused on AI displacement of workers, because by law we can't be displaced by AI. You're not allowed to elect an AI, and it would be quite the change to the constitution to make that true. But the kernel of truth to that is, when you have licensing, there's some kind of built in protection. I think it's the jobs where none of that exists, and especially where they don't have unions, that you're going to see much quicker turnover. And so we see that in terms of a lot of the entertainment industry, right? You saw a lot of strikes last year by the Screen Actors Guild and the Writers Guild and the Directors Guild about AI's role in producing movies and TV and what that'll be going forward. We see it in government employees as well. 1 of the things that New York did last year was pass a bill that said you cannot use AI to replace a government employee, but it was specific on replacing the actual employee. It doesn't mean you can't replace tasks and focus them on other issues. We have so many open slots in government and so much more that we could be doing. And once you have that baseline of, oh, you as an individual worker are not going to be replaced, you can change the conversation to, and this is why you should learn it and be happy about it. It's going to make your job easier, and you have a protection of law, versus many people who approach AI coming in like, not the metaphorical Luddites, but the literal Luddites, that just said this is here to take my job, let's destroy it.
Nathan Labenz: (25:46) Okay. The main focus of our conversation will still be on the RAISE Act, I promise. But that's really interesting, and I wonder what you think about it. I see a very similar question. I was honestly kind of surprised by the answer that I got from New Jersey governor Phil Murphy, not too far away. He was touting that, you know, they had done various AI deployments to accelerate call center response times. Yeah. So previously, if you called, like, whatever, you know, line, you would wait 40 minutes on average, and they were able to bring that down to, you know, single digit minutes. So great improvement in quality of service. But I asked, what if you had an option? Right? And it seems like this is coming quite realistically quite soon. Right? Let's say you have an option where you can deploy AI to a customer service function. We're not yet talking about, like, strategic decision making or legislating, but just where the rubber hits the road, you can put an AI in a call center. Let's say you could do that in a way where you could cut 90% of headcount and 90% of costs and improve the service, still got some people there perhaps to, you know, take the escalations or whatever. How do you think governments should be thinking about that? Like, should they be prioritizing efficiency and service, or should they be prioritizing the jobs that they have? Or, you know, is there some synthesis of those that you can imagine?
Alex Bores: (27:09) Yeah. It is an interesting question, but 1 that is largely academic because of how underfunded and already perhaps inefficient government is, and, like, there's more work for any of these people to do. Recently in New York City, they transitioned to automated purchasing of MetroCards, and now this OMNY card, and so the station agents weren't as needed. They didn't get rid of any of the station agents. They just empowered them to actually walk through the station, and then they can help to actually be a presence there and be engaged with people in that way. With call centers, I mean, I've personally called a government agency and been on the phone for 4 hours waiting, and meanwhile I'm Googling and searching. And if there were an easy chatbot or something that could have answered my question ahead of time, not only do I get an answer, but that takes me off the queue. And the person that actually needs to talk to a human, because there will always be people that still need that sort of engagement, gets to get there a lot faster. So we really, if we're smart, we should be pairing all of these new tools with the knowledge that comes from this deep work in government and this deep work with our citizens in order to make things better for everyone. You know, I'm a big fan of Jennifer Pahlka and Recoding America, and in that book she talks about someone who had joined, I think it was the California Department of Labor, to process unemployment claims. I may have the state wrong, but the story was that there was the new guy who didn't feel really confident in all the systems and all that, and the new guy had been in the job for 18 years. I mean, the amount of just human knowledge that is tied up that we could unleash if we're protecting their jobs, right? And that doesn't mean in the future you're always going to hire the same number of people, you know? You can go down by attrition and the like. You can repurpose people into other roles. But if you make it us versus the machines, you're not gonna get the best results for people.
Nathan Labenz: (29:07) Yeah. I wonder how long that paradigm lasts. That does seem like the current paradigm, you know, accurately described. But the RAISE Act certainly seems to be a bill that is, like, at least partially feeling the AGI, so to speak. And I do wonder if that paradigm lasts more than, say, another 1 or 2 years. And I also do really wonder about people's ability to change into those new jobs, you know, especially at scale. It's like, we have millions of people driving, you know, cars and trucks in the country. And if the self driving car stuff really starts to work, which, by the way, having been in a Waymo and a Tesla recently, like, it really is starting to work. Totally. You know, that's gonna be a wave where it's like, we're not really gonna be able to tell 4,000,000 drivers or whatever, like, oh, you can go learn to code. By the way, it's also now a hotly debated question as to whether or not it's even worth learning to code. So where are they gonna go? That's, like, another interesting question.
Alex Bores: (30:10) Well, and I 100% agree. I just wanna point out, I think that'll hit the private sector before the public sector. I think the public sector has so much kind of, we can do so much with just trimming our regulation, and we have so many vacant positions already. That will hit the private sector first. But it's a thing that I am concerned about, because people say, you look through history, and every technological revolution, every advance creates more jobs than it destroys. Maybe, but the gap between those technological advances has been shrinking over time. And so until recently, you could guarantee that maybe it would create new jobs and it would be worth it for you to go back to school or to be retrained, because the next revolution is not gonna happen in your career. Right? But now we're at a place where jobs could be replaced, you know, every year or every 6 months. And if the AI is acquiring new skills faster than any human being can, that is a fundamental question we don't really have a policy answer to, because by the time you retrain for the new job, you're gonna be in the same circumstance. So that is the thing government, and people outside of government, need to be thinking a lot more about.
Nathan Labenz: (31:23) Yeah. Do you spend time thinking of a new social contract?
Alex Bores: (31:29) You know, I am right now thinking a lot about the RAISE Act and how to get that through. But once that is through, this is definitely a place where I I wanna spend some cycles, and I'm really interested in having conversations with others that are doing that deeply.
Nathan Labenz: (31:44) I saw something recently, and I don't know much about this at all, but I think Tyler Cowen posted on Marginal Revolution, who needs a UBI, pointing to New York State. And you can correct anything I get wrong on this. The idea is that, like, there's now the ability for people to choose and hire their own independent caregivers, and many are hiring people that they know, people from their families. And this is, like, in some circles, you know, treated as a scandal. And I kind of looked at it and was like, well, if there's 1 candidate, like, broad class of activity that people could maybe shift into in real numbers, caregiving broadly would maybe be the thing. And this almost does seem like, you know, I'm not sure how Tyler meant his who-needs-a-UBI commentary, but I was kinda like, oh, this does sort of seem like a proto UBI policy that also still tries to get some useful contribution from people and, you know, probably does quite well on scores of meaning and things like that. And so I was kinda like, man, maybe New York State government has kind of stumbled onto something here that actually could be the seed of a new future social contract.
Alex Bores: (32:54) I haven't read that piece. And certainly, if he's referring to what I think he's referring to, it is not meant to be a UBI. So while there have been a lot of investments in caregiving and home healthcare, it's often paying for family members or others to do it instead of someone going into a nursing home, which might be much worse for them from a social perspective but also cost the state a lot more. So there certainly have been people on the edges that have taken advantage of the program, and we made changes in the budget last year to crack down on that. But overall, those sorts of home health care programs have actually saved the state a lot of money. So I think you have to put that in the broad picture of things. Now, UBI is definitely part of the conversation. This is 1 of the things that's moving really, really quickly. And so I imagine that will be part of it, and there will be many other ideas that come as well.
Nathan Labenz: (33:46) Yeah. And by the conversation, do you mean, like, the conversation in the New York State Assembly?
Alex Bores: (33:51) I think more broadly than that at the moment, but hopefully part of the conversation in the legislature soon.
Nathan Labenz: (33:58) Cool. Alright. Well, let's narrow the focus then to the RAISE Act and what you're trying to do with it. I've got pretty detailed notes here, but maybe the first thing to do is just have you give the pitch. Like, what are we trying to do? What does this bill require? Why should we be confident that it's not too big of a burden to put on companies? Give us the high level.
Alex Bores: (34:24) So this bill is meant to ask companies that are doing extremely advanced research, of the kind that we don't really know the impacts of yet, to have some basic safety protocols in place. Largely, those safety protocols that are required in the bill are in line with commitments that they already made during the Biden administration. And so how do we know that it's not too onerous? Well, they've largely already committed to do it, and in many cases are already doing it, if not to the exact letter of the law, close to the spirit of the law. And the 4 provisions that it requires are that they have a safety plan; that the safety plan be looked at by a third party, not by them; that they disclose critical safety incidents, and we define that strictly in the bill as to what qualifies, but it has to be something that's really increasing risk; and that they don't retaliate against their own workers or that third party if they are whistleblowers, if they disclose something that is a truly catastrophic risk. That's all it asks. And people say, you know, what is the impact? What is the target of that? In many ways, we're just putting very basic guardrails there. I don't think this is the furthest that we should go. I don't think this is the end of what's there. I think in many ways this is just laying out a floor, because most people in the field are really good actors. But when you have the crutch of your next quarterly profit, because largely right now this would almost exclusively be public companies, with some extremely well funded exceptions, when you have that pressure of quarterly profits, it can become easy, even if you've written down a safety plan, to maybe say, hey, we're going to cut a corner on this. We're, you know, 2 months behind in the AGI race, and that could be catastrophic. Let's just skip this test. We want to make sure there's no incentive for making that jump. And at the most basic level, we defer a lot of the choices to the companies themselves. We don't come in and say you need to have exactly these tests done or exactly this evaluation of risk. What we're trying to prevent are the cigarette companies of old, knowing that their cigarettes cause cancer but then denying it publicly and not doing anything to make the cigarettes healthier. Or the oil companies, knowing for decades in advance that their products are causing climate change but denying it and still putting it out there. This is meant to say that if your own testing, if your own research that you've thought of ahead of time without the economic pressures, if your own testing and research is saying this is a massive risk, that it could cause what we define in there as a critical harm, a 100 deaths or $1,000,000,000 in damages, you shouldn't be releasing that model.
Nathan Labenz: (37:11) Yeah. I mean, that seems like a not super stringent threshold.
Alex Bores: (37:16) As I said, I think all of my bills are noncontroversial.
Nathan Labenz: (37:19) It's just the other people that sometimes don't see it that way. So, yeah, let's go through a few of the definitions. So you gave the kind of threshold for severe risks, 100 deaths or $1,000,000,000 in damages. There's also this large AGI company, or, it doesn't say AGI, it says large AI companies. And that seems to be defined as a company that has spent $100,000,000 in total training models and at least $5,000,000 on 1 model in particular. Do you have an idea of how many companies that would cover in today's world? I'm not quite sure, honestly.
Alex Bores: (37:55) I think still single digits. Maybe we've crossed into double, but it's a very small number, and that's intentional. It's meant to be looking at the absolute frontier as it exists right now and not sweep in too many others. I think 1 of the objections to previous regulation was that it would involve a lot of startups, it would involve a lot of smaller companies. So we chose that $100,000,000 threshold intentionally, and we also exempted, I'll point out, academic research as part of that. As I said, again, this is really about those potential incentives to skip your safety plan, and we just don't see those same sort of incentives in academic research.
Nathan Labenz: (38:35) Hey. We'll continue our interview in a moment after a word from our sponsors. Build the future of multi agent software with Agency, a g n t c y. The Agency is an open source collective building the Internet of Agents. It's a collaboration layer where AI agents can discover, connect, and work across frameworks. For developers, this means standardized agent discovery tools, seamless protocols for inter agent communication, and modular components to compose and scale multi agent workflows. Join CrewAI, LangChain, LlamaIndex, Browserbase, Cisco, and dozens more. The Agency is dropping code, specs, and services, all with no strings attached. Build with other engineers who care about high quality multi agent software. Visit agency.org and add your support. That's agntcy.org.
Nathan Labenz: (39:31) Being an entrepreneur, I can say from personal experience, can be an intimidating and at times lonely experience. There are so many jobs to be done and often nobody to turn to when things go wrong. That's just 1 of many reasons that founders absolutely must choose their technology platforms carefully. Pick the right 1, and the technology can play important roles for you. Pick the wrong 1, and you might find yourself fighting fires alone. In the ecommerce space, of course, there's never been a better platform than Shopify. Shopify is the commerce platform behind millions of businesses around the world and 10% of all ecommerce in The United States, from household names like Mattel and Gymshark to brands just getting started. With hundreds of ready to use templates, Shopify helps you build a beautiful online store to match your brand's style, just as if you had your own design studio. With helpful AI tools that write product descriptions, page headlines, and even enhance your product photography, it's like you have your own content team. And with the ability to easily create email and social media campaigns, you can reach your customers wherever they're scrolling or strolling, just as if you had a full marketing department behind you. Best yet, Shopify is your commerce expert with world class expertise in everything from managing inventory to international shipping to processing returns and beyond. If you're ready to sell, you're ready for Shopify. Turn your big business idea into cha ching with Shopify on your side. Sign up for your $1 per month trial and start selling today at shopify.com/cognitive. Visit shopify.com/cognitive. Once more, that's shopify.com/cognitive.
Nathan Labenz: (41:27) It is an interesting time for business. Tariff and trade policies are dynamic, supply chains squeezed, and cash flow tighter than ever. If your business can't adapt in real time, you are in a world of hurt. You need total visibility from global shipments to tariff impacts to real time cash flow, and that's NetSuite by Oracle, your AI powered business management suite trusted by over 42,000 businesses. NetSuite is the number 1 cloud ERP for many reasons. It brings accounting, financial management, inventory, and HR all together into 1 suite. That gives you 1 source of truth, giving you visibility and the control you need to make quick decisions. And with real time forecasting, you're peering into the future with actionable data. Plus with AI embedded throughout, you can automate a lot of those everyday tasks, letting your teams stay strategic. NetSuite helps you know what's stuck, what it's costing you, and how to pivot fast. Because in the AI era, there is nothing more important than speed of execution. It's 1 system, giving you full control and the ability to tame the chaos. That is NetSuite by Oracle. If your revenues are at least in the 7 figures, download the free ebook, Navigating Global Trade, 3 Insights for Leaders at netsuite.com/cognitive. That's netsuite.com/cognitive.
Nathan Labenz: (42:52) Yeah. Okay. That sounds about right to me. I was kinda like, is Amazon on that list yet or not? I mean, we're talking, like, big companies that would be sort of the marginal, you know, in or out. Yep. It's also notable to me that the kinds of risks that are covered are pretty narrow. You've got your classic CBRN, chemical, biological, radiological, nuclear, which I believe is the N. Yep. Biological, of course, being the number 1 in that category by far. And then you've got another category, which I think is pretty smart, which is just automated crime. I've got a little background myself as a red teamer of various models and products, and it is honestly amazing. Although this, you know, actually wouldn't necessarily be covered in some forms by the bill, but it is amazing. In some cases, you can go to some of these, like, calling agent companies, clone a voice, I've done Biden, I've done Trump. Yeah. I've done Taylor Swift, and prompt the model to just call any number and say anything and just try to scam people at scale. And they're, like, still in the uncanny valley, but, like, automated crime is definitely the kind of thing that is now doable. Totally. With some success rate. But that's it. Right? So I'm interested in kind of, you know, any reflections you have on kind of where you decided to draw that line. It strikes me that these are not, like, behavioral. I mean, with smoking, of course, like, smoking is obviously bad for you. I am well aware of that. But it is much more of a behavioral sort of thing where it's like, you do have some agency in the situation. You kinda know it is bad for you, certainly at this point. You probably know you shouldn't be doing it, but you're still kind of doing it. You know, the smoking is almost more to me like the, you know, we're going to have addiction to AI, we're going to have people kind of falling in love with, and, you know, going off into weird lands with their AIs.
Alex Bores: (44:39) I don't wanna say using AI is smoking. I use AI every day for various things. Right? I was just doing the example of companies that have knowledge. Their own tests show it's risky, but then they go ahead anyway. And the point of the bill being, if your own tests are saying this is risky, we actually as a state are going to say, you need to take a pause there. I do want to talk about the risks, but I want to say 1 more thing on the definition of large developer before we move on from that, which is, you have to be a large developer, so you have to spend $100,000,000 in training costs. And then a frontier model is either 10^26 parameters and $100,000,000 on that model, or specifically knowledge distillation. So that's not the post-training modification, that is the specific process of using another frontier model, something that is 10^26, to train a smaller model that can have similar performance and as broad performance as that original 1, and that you need to have spent at least $5,000,000 on. That's largely in response to DeepSeek. We're seeing a lot of new models spin up that are being trained on the larger ones that are introducing their own risk. And so this bill as written, you know, might not cover the first version of DeepSeek that caught everyone's attention, because that was trained on o1, which wasn't 10^26, but a DeepSeek-like thing in the future would, as long as it has any interaction with New York. So that means it's available in New York via the App Store. That means, you know, you have any employees, any business presence. If you want access to New York markets, this applies to you. And so that future version of DeepSeek would actually have all these requirements in there as well. And that was an intentional push on our front to not just apply to the first movers and then have people, especially overseas, introducing similar levels of risk. That was a really intentional choice in that regard. But jumping to the risks as you brought up, we include, right, chemical, biological, radiological, nuclear risk, and that's, you know, if it in any way is aiding to bring that about, then these provisions apply and that's seen as an unreasonable risk of harm. And then, I've never phrased this as automated crime, but I think I'm going to do that going forward. The reason we're saying it has to be committing those crimes with limited human intervention is we do want some built-in protection, because, like, you can really abuse and twist and modify a model. And if a human being is really determined to prompt it in order to do something, you know, that violates some obscure law, that's not what this is meant to target. But there's some level of limited human intervention, right? If I can just tell it to do a thing, it largely does it, right? That's what would fall under the potential risks. And there's an additional caveat in those risks, which is it has to be materially helpful in doing that. If I could Google how to build a nuke and I get the same high level overview as I do when I enter it into an LLM, that's not something you're going to be held liable for. It's really for when you're making a material difference in the ability to do that.
Nathan Labenz: (47:53) Yeah. For what it's worth, what I meant to emphasize in bringing up the sort of behavioral aspect of smoking was really just that, like, there's a lot of other things that people are worried about, and I think with at least some good reason, that are, you know, out of scope for this bill. And that includes basically anything where it's like, this might be bad for you, but you might like it, and we're not really sure. And so, you know, all of that kind of stuff is out, and we're not, like, you know, in people's sort of private business with their personal relationships with AIs with this particular bill. Yes. Totally. The knowledge distillation thing is, if I had to guess, probably going to be 1 of the most fraught provisions that people are going to really wanna understand and pick apart. It seems to me very reasonable to say to, you know, high single digit to low double digit companies that are spending $100,000,000 plus that you gotta have a plan, you gotta publish the plan, you gotta have an audit to make sure you're standing by the plan, and you gotta have some protection for whistleblowers in case you're not doing that and people see that internally. And I think most everybody's gonna be sort of on board with that. Certainly, like, your constituents, you know, I would expect will be quick to support that. The knowledge distillation piece is tricky. I guess, first, just a clarification question: is it an and clause? Like, if I'm doing knowledge distillation, do I still have to be spending... Yes. ...$5,000,000? Yes. And a $100,000,000, or not necessarily a $100,000,000? Okay. Yeah. So that is a hard hurdle. If I'm not over those financial levels, then I can distill all I want. Like, I can go grab R1 and distill into Llama, you know, 4.1, whatever I wanna do, as long as I'm doing it under that 5 and $100,000,000 spend level.
Alex Bores: (49:42) Correct.
Nathan Labenz: (49:43) Okay. And those are, I assume, cumulative over, like, all time.
Alex Bores: (49:48) The 5,000,000 is per model. The 100,000,000 is cumulative over all time.
Nathan Labenz: (49:51) Okay. So when I get to a 100,000,000, then it's both. Right? If I spend 5 on 1 but I haven't spent a 100, I'm...
Alex Bores: (49:59) Not covered.
Nathan Labenz: (50:01) So that basically gives people a ton of freedom if they're operating at, like, certainly personal budgets or startup training budgets. And for people that may not know what a realistic fine tuning budget is, obviously they can get bigger than this, but, like, I was recently a very minor contributor to a project that made some waves, called emergent misalignment, in which, you know, a research team, actually in the pursuit of answering a different question, fine tuned an OpenAI model on 6,000 examples of code being written that was not done according to security best practices. In some cases, like, kind of flagrantly so, like just not taking proper precautions in the code that you're writing. So fine tuning a model to do that turned out to create a generally evil model, and that was the emergent misalignment phenomenon. And you would think, like, how does this happen? And people are still trying to figure that out. But what is clear and has been replicated is, like, train on insecure code and get a model that, like, wants to have dinner with Hitler and, you know, has all these, like, crazy notions about AIs enslaving humans and so on. And you're like, wow, that's really, you know, quite out of domain and yet, you know, pretty striking. Anyway, the cost to do that fine tuning with those 6,000 examples is like $25 of fine tuning on the OpenAI API. So, you know, there's a lot of wrinkles. You're doing low rank fine tuning there. If you're doing, you know, all weights, like, that ends up costing more, whatever. But, like, we have orders of magnitude between, you know, making a rather large behavioral change to a model and the sort of thing that you have to be over in order for this bill to apply to you at all. And I do think that's important to understand. I guess 1 question would be, like, given the relative cheapness of knowledge distillation and the relatively high financial hurdles, what is the purpose of that clause? Like, could you delete it, and what would be lost if you deleted that knowledge distillation clause entirely?
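For a sense of how small a job like that actually is, here is a minimal sketch of a supervised fine-tuning run using the OpenAI Python SDK. The file name and model name are illustrative assumptions, not the actual setup from the emergent misalignment project; the point is just that a few thousand examples and a couple of API calls are all it takes, at a cost orders of magnitude below the bill's thresholds.

```python
# Minimal sketch of a small supervised fine-tuning job via the OpenAI Python SDK.
# The JSONL file and model name below are illustrative assumptions, not the ones
# used in the emergent misalignment work discussed above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of chat-formatted training examples (e.g., a few thousand rows).
training_file = client.files.create(
    file=open("training_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job; a run on a few thousand short examples typically costs
# on the order of tens of dollars, versus the bill's $5,000,000 and $100,000,000 thresholds.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # illustrative choice of a fine-tunable model
)
print(job.id, job.status)
```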
Alex Bores: (52:14) Something would be lost. If you deleted knowledge distillation, it would only be individual models that are 10^26 and 100,000,000 in spend, and you wouldn't have any coverage of DeepSeek-like phenomena. And I think it's important, if we're seeing more of those kinds of threats, to have some coverage there. But as you correctly point out, we're giving people orders and orders of magnitude to do interesting things here. It is really meant to just cover the frontier, and in particular those with large financial resources. So I keep emphasizing to everyone that the 100,000,000 threshold is its own threshold. If you have not spent a 100,000,000 specifically on compute, specifically on training, this bill does not apply to you. And so we're really talking about single digits, maybe double digits of companies at this point that it applies to.
Nathan Labenz: (53:10) So can you tell me a little bit more about the theory of DeepSeek? I mean, I don't know what their total training spend has been, but they did say, I think, 6,000,000 was the number for DeepSeek V3, which then got turned into R1.
Nathan Labenz: (53:42) Took a bunch of OpenAI outputs and trained on that. And certainly, it is a way to save money.
Alex Bores: (53:48) V3 would not be covered. Right? Because it was trained on o1, and it's only knowledge distillation if it's of a frontier model, and o1 didn't qualify as a frontier model because it wasn't 10^26. But it's meant to say, DeepSeek...
Nathan Labenz: (54:00) So it has to be 10^26 flops and 5,000,000?
Alex Bores: (54:03) No, no. The knowledge distillation needs to be using a model that itself qualifies as a frontier model. And so if it's using a model that isn't itself 10^26 in order to train the smaller model, it does not apply.
Nathan Labenz: (54:20) But that original qualification, is that 10^26 and 5? Or is that...
Alex Bores: (54:24) Oh, the original is 10^26 and a 100. Right? It is really just the most extreme models. 10^26 flops and a $100,000,000 in spend, that is the base definition of a frontier model. Then there's this additional definition, which is if you use a frontier model to do knowledge distillation and spend 5,000,000 in that process, the resulting model also counts as a frontier model.
Nathan Labenz: (54:51) Okay. So I think I had a misunderstanding. I'm not a professional legislation reader, so let me just make sure I have this straight. The way you get into the large AI company category in the first place is you train a model that is 10^26 flops or a $100,000,000 in spend on a single model?
Alex Bores: (55:16) I would think about this a different way. Right? I would say the definition of a large developer, of who this bill applies to, is: you have trained at least 1 frontier model, and you have cumulatively spent a 100,000,000 training frontier models. Right? So that's the 100,000,000 in training frontier models. Okay. Frontier models are defined as 1 of 2 things. First is the 1 I think most people are familiar with, that has been used elsewhere, which is the model itself is 10^26 flops and a $100,000,000 was spent on training that model. Right? That's the base definition people think of for frontier, similar to what was in California, similar to the Biden EO without the spending threshold. Right? 10^26 and 100,000,000, that's a frontier model. A second way that something can be a frontier model is if it is trained via the process of knowledge distillation from a frontier model and that process was at least 5,000,000, right? So 2 pathways to become a frontier model, and then a large developer is someone that has spent a $100,000,000 training frontier models. So they can do that either by training 1 model at 10^26 and a 100,000,000, or by training 20 knowledge distillations at 5,000,000 each, or any combination thereof.
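As a reader's aid, the two pathways and the large-developer trigger described above can be summarized as a rough sketch in code. The function and parameter names here are invented for illustration; this is a paraphrase of the conversation, not the bill's statutory language.

```python
def is_frontier_model(training_flops: float,
                      training_cost_usd: float,
                      distilled_from_frontier: bool = False,
                      distillation_cost_usd: float = 0.0) -> bool:
    """Two pathways to 'frontier model', as described in the conversation."""
    # Pathway 1: the model itself crosses both the compute and spend thresholds.
    base = training_flops >= 1e26 and training_cost_usd >= 100_000_000
    # Pathway 2: knowledge distillation from a frontier model costing at least $5M.
    distilled = distilled_from_frontier and distillation_cost_usd >= 5_000_000
    return base or distilled


def is_large_developer(cumulative_frontier_training_spend_usd: float) -> bool:
    """A large developer has cumulatively spent $100M training frontier models."""
    return cumulative_frontier_training_spend_usd >= 100_000_000
```

On this reading, one 10^26 / $100,000,000 model, or twenty $5,000,000 distillations of frontier models, or any mix totaling $100,000,000 of frontier-model training spend, would make a company a large developer.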
Nathan Labenz: (56:42) Gotcha. Okay. So the main reason for the knowledge distillation clause is that you wanna catch companies that are working at large scale but taking a knowledge distillation route such that their largest individual models could still sneak under the mainline definition of a frontier model, but would have similar capabilities because obviously that's the whole point of distillation.
Alex Bores: (57:12) Absolutely right.
Nathan Labenz: (57:14) Okay. Good. Well, thank you for walking through that with me. This stuff does get a little gnarly sometimes.
Alex Bores: (57:20) No, and listen, legislation, especially at the state level, is much easier to read than federal bills. Right? Federal bills are thousands and thousands of pages. This 1, I think, is 15 or so pages.
Nathan Labenz: (57:30) I did, like, read every word. It is manageable.
Alex Bores: (57:34) And the details really matter, and it's dense legal language. I think that's 1 of the challenges we've seen in past bills: you have people hop on Twitter and take, you know, a couple words out of context, or not think about the fact that every bill is inserting language into the code, right? The legal code, not the computer code, the legal code of the state. And so it's affected by all of these other words that are around it as well. And so, you know, I support everyone asking questions. None of this is easy to understand exactly, the same way, you know, coding any of these models is not easy. But, you know, you don't see people jump on Twitter having read, like, 3 lines of code in Llama and be like, oh my god, the whole thing, you know? Right? Like, you should think about that.
Nathan Labenz: (58:16) You should come join my part of Twitter. I think...
Alex Bores: (58:20) Fair enough. Fair enough.
Nathan Labenz: (58:21) I could show you some. Okay. Well, let's maybe double click then and go a little deeper into the language. Here's my summary of the requirements that the frontier developers would have in terms of the safety protocols that they would have to develop and publish. Basically, they need to come up with various ways to reduce risks. They have to reduce risks of, you know, the model being used for these CBRN type purposes. There's an interesting clause about reducing risk from sophisticated actors, which I'm interested in. I assume that's, like, code for the CCP or maybe North Korea or whatever.
Alex Bores: (59:02) Yeah. Any sophisticated state actor. You know, it's not targeting 1 specific 1. It's sort of any out there. We're saying that the stakes of this debate and the risk are so high that you need to include nation states as actors.
Nathan Labenz: (59:16) And that is typically translated to, like, secure the model weights, you know, sort of tighten up your security practices, in today's world. Is that how you are imagining that playing out as well?
Alex Bores: (59:25) Yeah. Largely. You know, all of cybersecurity is based on your threat model and based on, you know, what the risks potentially are. If you're applying normal corporate security to these models, you're probably not doing enough. This is just meant to be very explicit that the stakes of this incredibly powerful technology are large and your threat model should be including sophisticated state actors.
Nathan Labenz: (59:53) Okay. There's then an interesting section also where it's basically saying you have to sort of explain why you think the tests that you have outlined actually tell you what you're claiming they tell you. It's sort of an epistemology of your whole risk analysis. And this is a tricky 1 to me. As somebody who has built a bunch of these workflows and simple agents and stuff, you always get these caveats in, like, work from METR and so on, where they're like, we built some scaffolding to try to figure out what scale of research engineering task a model could do; we don't really know that we did a great job; we don't know what the limits are; the scaffolding could almost certainly be improved. And maybe I'll just couple that with an actual quote from the bill: a large developer shall not deploy a frontier model if doing so would create an unreasonable risk of harm. So I guess where the wrangling ultimately is with something like this is: what's a reasonable or unreasonable risk of harm, and to what extent must people go to demonstrate that they've pushed the scaffolding and really elicited the capabilities to the fullest, knowing that that's hard and, like, the state of the art is very much evolving? How does somebody know if they have done a good enough job that they're on the reasonable side of unreasonable?
Alex Bores: (1:01:23) The short, totally unsatisfying answer is that often in law, we use this sort of reasonable person standard. Right? There's a lot of things where we're not going to be able to exactly specify everything that needs to be done. We can point at it as closely as we can, but you leave a little bit of deference to what a reasonable person would do. That's not invented in this bill, that's not invented in law. That's a well established legal standard. The longer answer, though, is that anytime you're writing a bill on anything, but in particular something as fast moving as technology, and in particular AI at this point in time, you have this tension where people rightfully want specificity of exactly what it is telling you to do, and on the flip side want it to be able to evolve in time, because exactly what you should be doing right now will change in 6 months, in a year, in 2 years, etc. And that's always a balance, right? No law is ever final. The legislature can obviously come back and make changes at any point, so you don't need to write it so that it lasts 1000 years. At the same time, you don't want to be deferring so much that companies really don't know exactly what's required of them. So any bill is going to have that tension, and you're going to find the balance somewhere. You know, I think a sign that we found the balance in a pretty good place on this 1 is that we have about an equal number of comments on both sides of it: I want more specificity, or actually, I want less government telling me exactly what to do. We've probably hit it about right, but you're pointing out that exact piece. That's why, at the start, when I was describing the bill, I tried to emphasize for people: we are largely letting the companies grade their own homework. We are largely saying, you know, you put out what the standard should be, write it ahead of time when you're not under real economic pressure, and then, you know, grade yourself against that. And the only real pushback on that is, a) we have a third party audit, and that audit is going to include: did you actually follow this? Did you follow best practices, etc.? And then b) this reasonableness standard. So if you write a plan that just says, we don't care about safety, we're not gonna do any of this, and this is the standard, well, that's pretty clearly unreasonable.
Nathan Labenz: (1:03:46) How about the relationship between all of this and open source releases? You know, at the sophisticated actor level, obviously anybody around the world can download a Llama model or any open source model. And there was also 1 clause I wasn't quite sure how to interpret; it referred to modifications, which in at least some other debates has been understood to mean post open source release modifications that, you know, who knows who might make. So maybe with either a close reading of the bill or just, like, your intent: how does this apply to somebody like Meta, who is going to potentially release a behemoth version of Llama 4, which I think would probably get to that 10^26, and they've clearly spent a 100,000,000, whatever. Like, seems like they're gonna be in. We don't really have great ways, I'm sure you're well aware, to really control what people do downstream once a model is released. So are they on the hook for that or not?
Alex Bores: (1:04:58) The bill's agnostic to whether you open source or closed source, right? The best way of balancing this is not saying specifically open source or closed source, but just saying: think about the risk, think about the use case, make your own judgment with that. And so there's many, many ways to keep a model safe, right? You can have it on your platform, where you can monitor it at all times. You could go a step further and do know your customer, and only release certain features and certain things that are really powerful to people that you trust, right? That becomes another way of handling the risk. If you opt to open source it, like, that's great too. It helps to encourage academic study and analysis of all these things. But any of those choices come with risk, and I find it kind of bizarre, those that say: oh, we should take a risk based approach, which is what companies always say, you should evaluate everything in its context, except open source. Ignore that context. That context doesn't exist. Just write all of that off. We are not in any way targeting open source. We're just saying it's up to you to make your choices on your risk profile, and you should do that accordingly. I think for the vast majority of AI that has been released, it's great that it's been released, and it's been encouraging. You'll note that this bill doesn't actually require you to take into account the risk of cyber. And I think that's largely because it's really already out there, and maybe that ship has sailed. But it's up to companies to decide the risk and the way that they're deploying it, and all of those decisions matter. So we leave that quite open ended. But to my point of legislation always being changeable: that's not the intent. We don't wanna shut down that ecosystem. That's not an outcome we want. If that's where it's trending, we can change this.
Nathan Labenz: (1:06:46) So I think that's an intellectually honest position, and I do think this is...
Alex Bores: (1:06:53) Which is not something that elected officials are often accused of, so I appreciate that.
Nathan Labenz: (1:06:57) Yeah. I mean, the normal way that people try to get out of this is some sort of denial or cope or whatever. But it does strike me that, in taking a neutral approach with respect to open source, it does make open source harder. Like, it is much easier, let's say, to manage risks if you have a closed source model where you don't release the weights. If you do release the weights, you just really have a hard time, in many, many ways, even knowing or being able to predict what will happen, let alone controlling what will happen downstream from there. So it does seem that this could create real risk for a company like Meta that's trying to evaluate, and might steer them toward not releasing if they're like, jeez, we have basically no... you know, we can train this thing to refuse and we can put out Llama Guard. We can do all these different things to try to enable people that wanna do the right thing to do the right thing and to set them up for success. But we really can't prevent somebody from untraining that refusal behavior or just not using Llama Guard or whatever. Right? So in that analysis, it sounds like basically your sense is: the risks are the risks. And if you can't do it, or we don't have the right techniques, then maybe you just shouldn't put it out, and that is kind of the reality.
Alex Bores: (1:08:24) I would say 2 things to that. First, I often start with the question: do you believe that there is ever any information or any capability that should not be open sourced? And I think most people would say, like, really detailed analysis or the ability to produce really powerful bioweapons should not be open sourced. Right? But whatever your threshold is, right? Do you believe in classification at all? Do you believe in restrictions on weapons at all? Is banning the sale of nuclear weapons reasonable? Right? As long as there is some level that you think should not be open to the public, all we're saying is declare that level, right? And then go from that. I'm not putting a specific level out there. And so unless people can, with a straight face, say, no, I think every capability and every power should definitely always be open source, you've already sort of accepted: hey, we've got to think about the risk here. But the more specific thing I would say is I don't think this is going to change behavior, because the threshold for critical harm is a 100 deaths or $1,000,000,000 in damage. And I think the leadership of every company is probably comfortable saying: we don't want our products to cause a 100 deaths and $1,000,000,000 in damage, and we'll take actions that will stop that from happening. And right now, as best I can tell, none of these products really reach that threshold. So we're not talking about restricting any current behavior. But whether this bill exists or not, I would hope that the board of a public company would be comfortable saying, yeah, our policy is not to cause a 100 deaths and $1,000,000,000 in damage. That's all we're asking them to do.
Nathan Labenz: (1:10:13) Yeah. It's gonna be really interesting to see how this plays out. I mean, I think we're, I don't know, 1 to 2 model generations away, certainly if you listen to the Dario and Anthropic timeline, from models that would be able, in a very meaningful way, to make some sort of needle moving contribution to the creation of a bioweapon. And I would definitely agree, you know, that Zuckerberg doesn't wanna have to face the public and say, yeah, we shipped it even though we thought maybe it would cause a pandemic or whatever. But it is really gonna be tricky, because, first of all, there are a lot of people, not necessarily in their role as, like, a corporate executive, who would say that threshold isn't that high. You know, how many people die on the highways every year? Right? It's like literally 100 people die a day on American roads. So, okay, it is a big world, and a 100 deaths is a tragedy, but it is also 1 day of US road deaths. And so...
Alex Bores: (1:11:17) Well, specifically, it's a 100 deaths or $1,000,000,000 from a chemical, biological, radiological, or nuclear weapon, or something that is already a crime in the penal law. Right? So it's not talking about accidents. It's talking about automated crime or CBRN. I don't wanna use the word intentional death, but it is 1 that is caused by a crime or a big weapon. That is not a thing that you see every day.
Nathan Labenz: (1:11:44) Yeah. I mean, what's so weird about a lot of these things is that the pandemics we really worry about cause a lot more than a 100 deaths. It's an in-expectation sort of thing. If COVID caused, whatever, 10,000,000 deaths, then a 1 in 100,000 chance of a COVID-like outcome would lead you to an expectation of 100 deaths. And that's just a very weird epistemic position. Basically, nobody has that clarity. We just don't... you know, again, to quote Dario, like, we don't know why these things do what they do half the time. So we're in a really weird spot where it's very hard to give an assurance that's at 5 nines on anything. And we do have existence proofs that 1 of these things can easily get to 10,000,000 deaths, and obviously, you know, it could have been a lot worse. So we're in just a very weird epistemic position. Yeah, I mean, I think it is worth taking all this stuff very seriously, and I guess my hope would be that it really pushes people to invest hard in areas where we haven't got answers yet. Right? I mean, the real question is: can we create a model that doesn't know about virology? Is there some way to wall off that kind of knowledge and excise it from the version that gets released? With interpretability techniques or otherwise, is there some way that we can say, with an affirmative safety case, yes, we can be confident we're being reasonable here, and we know this not just because the thing refuses 9 times out of 10, but because we have a much deeper understanding of what's going on? But if I had to guess, I think we are probably headed for a moment. I mean, tell me if you would see this differently, but I would expect a sort of 2026 reality being like: this bill gets passed, Llama 5 is trained, we don't have that affirmative safety case yet, and the risk is kind of on the unreasonable side. And unless Meta is just willing to run the risk for whatever reason, they probably have to look at this and say, we can't quite release it in this form. We either need to solve some technical problems we haven't solved, or we just can't put it out there, because it's just too powerful. And maybe they would even come to that decision on their own, you know.
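The in-expectation point is just a multiplication. Here is a quick back-of-the-envelope version, using the hypothetical numbers from the conversation rather than anything taken from the bill:

```python
# Hypothetical: a COVID-scale event (~10,000,000 deaths) with a
# 1-in-100,000 chance of occurring.
covid_scale_deaths = 10_000_000
probability_of_outcome = 1 / 100_000

expected_deaths = covid_scale_deaths * probability_of_outcome
print(expected_deaths)  # 100.0, right at the bill's critical-harm threshold
```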
Alex Bores: (1:14:09) That's my point. I think we're talking about a bill that, you know, requires people to do some work upfront writing things down on paper, knowing that their homework is gonna be checked, and whose fines are in the 8 figures. I mean, they are gonna make these decisions separately on societal level large risks.
Nathan Labenz: (1:14:30) So I guess in the spirit of sort of red teaming the bill: as I understand it, the maximum penalty for, like, a repeat offense under the bill is a $30,000,000 fine. And, you know, I think they paid Trump off at that level. Right? And we've seen multiple people just sort of say, you know, forget it, we'll just settle this lawsuit because we just wanna get this guy off our backs. So maybe 1 way to sort of soften the bill while still getting all the benefits that you're wanting would be to just require the plan and be a little bit less on the sort of downstream enforcement side of what was reasonable or not reasonable. Do you think that could be a version of this that, if pushback or whatever dynamics ended up being conducive to it, would be a viable possible compromise at some point in the future?
Alex Bores: (1:15:22) I think the version that we have right now is the result of many, many compromises. And I want to be clear: I started this by looking at all of the public debate around last year's regulation and probably accepting 95% of the public critiques. And then I sent a draft of this bill to 5 or 6 major labs, asked for red teaming, asked for red lines on the bill, got them back, did another draft. Around December, I sent it again to 5 or 6 labs: hey, here's a new version. Got more feedback, ended up talking to a lot of people in the state, and that's why this bill was published in March but has been circulating since probably July, August, getting a lot of this input and feedback. So this is in no way the first stab at it. This is in response to a lot of industry feedback and a lot of compromise. And if you see other people saying, well, we need a compromise version: this has been the compromise version. But on the specific thing of just releasing the plan: you still need to define what the plan is. Right? There need to be some standards. You can't have someone write a 1 sentence plan, we're building the AGI, and that'd be the plan. Right? So you gotta put some standards as to what the plan ought to be doing. I think the third party audit is incredibly important. That was part of the voluntary commitments. That was part of what came out of the Newsom commission after October; they embraced it. It was part of the EU version. Right? Third party audits, I think, are really core to the bill. Whistleblower protections, I think, are extremely core to the bill. It's sort of the 1 part everyone agrees on and is moving in every state. And then disclosing critical incidents is crucial just to keeping New Yorkers safe. So I think all 4 parts of that bill are pretty required and pretty pared down. The part that I think you're pointing out, the don't release a model that has an unreasonable risk, again, is just that they're largely grading their own tests. This is the smoking companies, once they know it causes lung cancer, needing to proactively take action; oil companies, once they know it causes climate change, needing to take proactive action. When your own tests say this is gonna cause deaths, you need to take action. The fact that it's a 100 deaths, I think, is part of the compromise.
Nathan Labenz: (1:17:49) Yeah. Okay. Maybe just a couple double clicks on several issues, since we're going deep here. How do we not have regulatory capture of the auditors? This is something that I experienced once upon a time in the financial services industry. And, you know, I've had a number of these kind of model testing orgs on the podcast in the past: Apollo Research, and folks from METR and Palisade and FAR AI and more. All these folks have a tricky position. I know them personally to some extent, and I would say they are very sincerely motivated by a safety mission, but they are also very mindful that their access and ability to do their work at all is, at the moment, at the pleasure of the companies. And so they're very cautious about how they speak publicly, and all this kind of stuff is very, very carefully thought through, because they don't wanna offend somebody and get cut off. So I don't think the bill has much on that yet. You know? Is there any plan for that problem?
Alex Bores: (1:18:58) I think that's 1 of the ones where we don't wanna legislate ahead of time on it, but it's definitely a thing I'm concerned about. I mean, the only requirement is that it's a separate auditor. It's not a government agency. I think of it as pretty similar to the SOC 2 process, right, where you have consultants, these auditors, that are set up to evaluate your security stance. They take into account the size of your company, the risk, etcetera. It's not sort of a checklist of hard things. I'll point out that all of the companies that would be subject to this already require SOC 2 of all of their vendors. So it's a similar process to what they engage in. But you're right, when you have this kind of open market and government is not licensing the auditors, there is always that chance of regulatory capture. We require that the auditors follow best practices. We have requirements around, you know, who's doing the audit and all of that, but it is a thing that we will monitor over time. And I hope not to have to take more action, but it's certainly a real risk.
Nathan Labenz: (1:19:58) Yeah. Okay. On the safety incidents: as I was reading through the different things that would qualify, various incidents kept coming to mind, and I was like, would that qualify? Would that qualify? I don't know that you can, like, officially judge, but I'm interested to get your reactions. So, a couple lines from the safety incident definition. Clause a: a frontier model autonomously engaging in behavior other than at the request of a user. And here I'm like, I can point you to a lot of people who have reported that, you know, Claude changed the model from OpenAI to Claude, or, just yesterday, a friend was like, basically, I vibe coded my way onto the project maintainer's shit list because I ended up modifying the database in ways that I wasn't even meaning to, but the model just got blocked 1 way and went another way. And I think that we're honestly very confused, broadly as a field, on autonomy. Like, should we want it? I'm not so sure that we should, but we're clearly pushing for it. And I guess my sense is, maybe that clause is being triggered a lot in the world today. Do you feel like that may in fact be the case?
Alex Bores: (1:21:11) It might by itself, but remember that all of those 4 specific incidents only count if they're increasing the risk of a critical harm. And so if it's a minor thing that pops up and it isn't gonna increase the risk of a 100 deaths or $1,000,000,000 in damage, that's not something to report. You know, if an employee forgets to log out at the end of the night but no 1 comes in, or, oh, the janitor saw the code, right, that's not a thing. But if China steals the code, right, that is a different sort of circumstance. So it's taking into account whether it would cause a real risk of a harm. And largely, again, relying on the company's judgment of that, but saying that things that do rise up to that level, of these specific 4, need to be disclosed.
Nathan Labenz: (1:21:55) Yeah. I mean, it's easy sometimes, as you get down the nested structure of these bills, to forget the sort of top level clause. So it's a good reminder that all of this stuff is in conjunction with increasing the risk of the aforementioned critical harms. I guess a couple other incidents that came to mind would be basically examples of companies not following their process, or sort of mistakenly releasing capabilities. Specifically, Bing famously, like, launched in India and a couple countries, I think, without going through the safety board approval process that they and OpenAI had together agreed on. I think some people at OpenAI were also involved in sort of saying, yeah, go ahead and do it. In retrospect, all the Sydney behavior was in fact reported on the forum, and they missed that as well. Would something like that qualify? I mean, that model was only GPT-4, so it's maybe not at this critical harm level. But that seems like it would qualify in a future scenario where the model is more powerful. Like, you can't do that.
Alex Bores: (1:22:57) Yeah, it might. You know, for a legislator, I have a deep background in these things, but I don't claim to be the foremost safety researcher. Right? I wanna defer some judgment to the people who are doing this every day, and that's what the bill is meant to do. So with that caveat, and, you know, my voice here not necessarily being binding on it: I think if you temporarily enable a feature, and you've monitored the platform that whole time so you know if it was used in any way, and now you've turned off that feature, did that actually introduce a real risk? Probably not. Again, it depends on the exact feature and how much you can see into how it's used, etc. But when you're playing with fire, you know, it's a different standard than when you're playing with a smaller feature.
Nathan Labenz: (1:23:48) Yeah. We're definitely playing with the new kind of fire here. I say that all the time. Okay, whistleblowers. So I'm a big supporter of general whistleblower protections. How strong do you understand these protections to be? Super weak. Yeah. Okay. So, I mean, 1 key question is: if you go to the AG, can the company fire you for doing that?
Alex Bores: (1:24:11) It would be illegal under this law. And on top of any other laws, you would also be subject to this $10,000 fine per employee per retaliation, as well as injunctive relief. So you might have to hire the person back. But yeah, $10,000 is nothing compared to the resources here. Personally, I would love to see that be a lot higher and a lot stronger. You know, every bill exists in the context of its state and the state law, and 10,000 is the standard in New York across a wide variety of fields. I think there was a push last year even to increase it for violations involving child labor, right? But that sort of got beaten down. It's maybe seen as a Pandora's box to do anything above that 10,000 level. So, yeah, I think probably you want a stronger incentive there, but you wanna make sure, at the very least, that there are statutes on the books so that if you are firing someone for raising catastrophic risk, that is illegal and there can be action taken for that.
Nathan Labenz: (1:25:14) And in terms of injunctive relief, of hiring somebody back, that seems hard in the sense that you're not gonna have the AG sitting in on meetings at the companies making sure that this person's job is the same as it used to be. And obviously things just change. I guess, how do you imagine that playing out in practice? Like, if somebody actually says, I'm freaked out about whatever, I'm going to the AG, and the company's like, you violated our trust; we have our own protocols you should have followed; you didn't do it; whatever. Either you're fired, or you're sort of banished to home office status and we'll, like, continue to pay you or whatever, but you're not gonna be privy to all the things you used to be privy to. What should a whistleblower expect in terms of actual material outcome or protection for them individually if they do kind of violate the chain of command and come to the government?
Alex Bores: (1:26:15) There's good news and there's bad news here. The good news is that there is a lot of case law and statute around whistleblower protections, because they exist, pretty strict, covering a variety of actions in New York State but also throughout the country, for reporting anything that is illegal behavior, right? And so it's not just that you have to be rehired; it's that your job is protected, your responsibilities are protected, right? And if any of that changes, there can be injunctive relief, there can be follow on things from that. We see this, I come out of the labor movement, and you often see people being fired for organizing or for labor violations, and then you have the ability to restore that. So that's the good news: you know, much of this debate is out there and the system functions already. We're just adding on something that might not be explicitly illegal but is a catastrophic risk. That's the only change here. The bad news is, yeah, I think broadly in society labor protections are not as high as I would like them to be, and I can't promise the system functions exactly as it should. And I think that's part of a larger conversation that should be had, maybe separate and outside from this bill, and 1 that I would love to have. But largely, we're not changing whistleblower law in New York, because there is so much on the books. We're just adding that bit about catastrophic risk.
Nathan Labenz: (1:27:39) Yeah. There's been a couple calls recently for class consciousness among the research engineers at the leading AI developers. And it's a weird dynamic, because they are seemingly, in a very self aware way, increasingly trying to automate themselves out of a job. You know, the vision seems to be, increasingly explicitly as far as I can tell: get the AIs to do the AI research and then hope that we can steer them, or hope that we set the initial conditions right. And all that, honestly, is pretty scary to me. How about internal deployments? As far as I can tell, that is not part of this bill, but the sort of bleeding edge of policy discussion is turning toward internal deployments, I think for this reason: there's sort of an expectation that the gap between what companies have and maybe are using for their own AI research, and what the rest of us, you know, plebs in the public, get to see and use, might widen pretty dramatically if the companies are all locked into this sort of game theoretical race to be first to AGI. So first, am I correct that's not really addressed here? And second, is it on your mind as possibly something to come back around to?
Alex Bores: (1:28:53) So actually, internal deployments would be covered in most cases here. The definition of deploy at the top includes using the model as well as making it available to others. And so if you are using it, even internally, then it is covered by this. Now, we exempt anything that is, like, testing and development and evaluation; that doesn't count as using it. Additionally, we exempt it if you're doing it to comply with, you know, state or federal law, or it's part of a broader federal project. Right? Those things are already exempt. But general use that isn't in 1 of those categories would actually be covered and trigger the requirements in this bill.
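A rough sketch of the coverage logic as described here: deploy includes internal use, with carve-outs for testing, development, and evaluation, and for uses required by state or federal law or a broader federal project. The names are invented for illustration, and this is a paraphrase of the conversation rather than the bill's definition verbatim.

```python
def deployment_is_covered(is_frontier_model: bool,
                          used_internally: bool = False,
                          made_available_to_others: bool = False,
                          testing_development_or_evaluation: bool = False,
                          required_by_state_or_federal_law: bool = False) -> bool:
    """'Deploy' as described in the conversation: internal use counts,
    subject to the stated exemptions."""
    if not is_frontier_model:
        return False
    if testing_development_or_evaluation or required_by_state_or_federal_law:
        return False
    return used_internally or made_available_to_others
```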
Nathan Labenz: (1:29:30) Yeah. That sounds to me like the most likely place where whistleblowers might end up feeling compelled to come forward. The idea that, you know, what exactly counts as use and whatever, and we've got this sort of guardrails-light or guardrails-free, purely helpful model that we have kind of access to internally, and the controls are not great, and people are asking it to do AI experiments, and the whole thing has a sort of gain of function type of vibe to it. So, yeah, that's really interesting to think about. But that's a really good clarification, and I do think an enlightened 1 to include at this stage.
Alex Bores: (1:30:08) I appreciate that. And I think it goes back to part of the conversation we had of how is this field gonna develop, and do we really expect the most dangerous models to be released open source, or are we already seeing companies move in this direction? And whether this bill exists or not, right, this isn't changing, like, liability for things that happen after you release it. Right? So companies are already going to start to make that decision of, when we get really powerful, how should we be thinking about what goes out there? This is just meant to sort of improve your internal stance and your safety planning and everything that goes into that ahead of time. And I think you're right. I think more of the risk is coming from internal deployments now, and that's why we wanted to make sure it was covered.
Nathan Labenz: (1:30:49) Cool. Couple final questions, and I really appreciate all the time. This has been great. Obviously, 1 of the big concerns, and really the sort of nominal concern... I've been hesitant to even bring up, you know, previous state level legislative battles, because I feel like some of them became so toxic, and needlessly so, that it's like, let's just leave that in the past and tackle this new chapter with a fresh start or blank slate or whatever. But 1 commonly raised concern, and it was sort of the final presented concern last time around, is that doing this at the state level is just not great, because, obviously, we got a lot of states, and, you know, in the limit, the final worry is always China. So it's like, well, if we have 50 different state rules, then we'll never be able to get anything going, and then we'll lose to China. It seems like you have thought about that and, you know, tried to take that into account as you've developed this. I wonder if there's any sort of possibility of kind of a compact of states, or some sort of reciprocity between states, that could kinda neutralize that objection, which I do think has some reality to it, although I think it's also kind of a smokescreen at times. But it would be good to get the best of both worlds if we can. Do you see any prospect for that?
Alex Bores: (1:32:12) Yeah. I love the idea of a compact, and I would say usually the best smokescreens have a little bit of truth to them, and that's a good description of this right here. I agree it should be done at the federal level. Come back to me when it is. Like, federal laws already override state laws. If any congress member wants to take my bill and do it at the federal level, please do so. I'm really happy to partner with any congress member, any staffer that's listening to this. But that objection holds a lot more weight when, a) congress is doing anything, and b) once 1 state has passed something. No state has passed anything yet, so there's no conflict to be thought of. And we're very good at focusing on, you know, making things standard across the states. Not a 100%, and that's why there should be things done at the federal level. But I can say that I'm aware of maybe 6 or so states that are pushing forward frontier model regulation of some kind, and we're all on a group text. We're already talking and thinking about that.
Nathan Labenz: (1:33:21) Plus possibly a few reporters or whatever that you added in by mistake.
Alex Bores: (1:33:25) Not The Atlantic, as far as I know. But, you know, I guess we are just a slip of the finger away.
Nathan Labenz: (1:33:30) That thread. You guys added me, and it was just so crazy.
Alex Bores: (1:33:35) Yeah. Honored to have you as part of it. But, yeah, so we're already talking, and it's got to be established somewhere first. And I think all of us have shown that we're happy to copy other places. You know, I have other bills on protecting more short term consumer aspects that are based on what Colorado passed, because, hey, they got there first. Okay, let's try to unify this. As I said in the intro, I worked in tech before I entered this field. Like, I know what that compliance looks like, and having 1 standard is way better. So let's work towards that. But, you know, before we start trying to draw a line between 2 or more points, like, let's get that first point. Let's get a state on the books, and then we can talk about how it should be the same with any further ones.
Nathan Labenz: (1:34:18) Yeah. Okay. What's the process from here, and maybe, like, how are the dynamics shaping up? Gavin Newsom once said that the bill that shall not be named, for the moment, had created its own weather system. You seem relatively relaxed. Maybe that's just your good humor, but are we seeing anything similar, where people are sort of mustering all their force and kind of bringing the Eye of Sauron to this debate, or is it gonna be a little more chill this time around?
Alex Bores: (1:34:53) No 1 here is the Eye of Sauron, let me be clear. I especially need to say that as someone that worked at Palantir, anytime you get into the Lord of the Rings references. But, no, I think I've been very intentional to engage with industry and with industry associations. I want to hear their feedback. I want to see their red lines. I've gotten many versions of it and incorporated probably 90% of what they have sent. You know, these labs are not the enemy. As I think I said at the start, I want to see more adoption of what we have already. I want to see where this field is going. The incredible potential of what we are gonna have, from medical discoveries, from routinizing the monotony of life, from even cyber defense, from what's going forward. This is really incredible technology, and I want the guardrails to make sure we get it right, and those guardrails work best when they're done in conjunction with industry. So this is not in any way sort of a fight. I mean, again, politics is often painted that way, and, you know, what gets the press are the things where it seems like a fight, and then there's an incentive to frame it as a fight. It doesn't mean we agree on everything, but I think this has been a very collaborative process so far. The second thing I'll say is the rate of collaboration is probably about to speed up quite a bit. Our legislative session ends on June 17, so we're a little more than a month away from when this has to pass. And I think there'll be an intense focus as we get closer and closer to it. I think it's really important that people not sit on the sidelines as part of this. So if you're listening and you have suggestions to improve the bill, email my office. Call my office, right? BoresA at nyassembly dot gov. We take all input. I would love to hear it. If you live in New York, and that's New York State, not just New York City, I would encourage you to reach out to your legislators and say, hey, I support this bill, and why don't you hop on as a co sponsor? And we've made that easy, actually. If you go to bit.ly/raiseactny,
Alex Bores: (1:37:25) all 1 word, all lower case, that will give you a form. Enter in your address, and it'll generate an email for you to send to your assembly member and your senator. If you're not in New York, or even if you are but you run a company or an organization that has a unique view on AI, you can also write in what's called a memo of support: something that helps as a, this is who I am and why I care about this and why I think the state legislature should take action. And you can go to bit.ly/supportraiseny. Again, all 1 word, all lower case. But, you know, government is not a spectator sport, and decisions are made by those who show up. I assure you industry is showing up, and I welcome that. If you're someone that listens to this podcast, you probably already have an interest in this field. I'd encourage you to show up as well. There's a lot of actions you can take right now to really influence what frontier model AI regulation looks like in the United States.
Nathan Labenz: (1:38:23) Cool, that's great. I've noted those URLs. We'll include them in the show notes.
Alex Bores: (1:38:26) Thank you.
Nathan Labenz: (1:38:29) I don't know if you'd like to handicap your own bills, but it was striking that, you know, the vote in California was, like, very one-sided. And then, of course, we had the governor's veto. What do you think will be like the hard parts or are there gonna be any, you know, obvious hard parts between here and making this a reality? Yes. There will be hard parts.
Alex Bores: (1:38:53) You know, like any bill, it's gotta pass both houses of the legislature. It probably has to go through multiple committees; it has to pass each individually before it even gets to the floor. And then it has to be signed by the governor. But I think 1 important and different thing about New York versus any other state, or the federal government, actually comes with that governor step. So I'm skipping ahead to this part, but I think this is useful context for people to have. If we are successful in passing it: in every other state and in the federal government, once a bill is passed, it goes to the executive, who can either sign it or veto it, or do nothing, and that's interpreted as signing it or vetoing it. In New York, there's a third option called a chapter amendment. And what that looks like is the governor negotiates with the sponsors of the bill for changes that they may want to see. And so they might say, oh, I like the bill, but can you change A, B, C, and D? And the sponsors say, well, we can change A and B. And then you reach some agreement to change A, B, and C. And the governor will sign the bill at the end of the year with a memo that says, you know, I'm signing this pursuant to an agreement that I've reached with the sponsors, and then we'll introduce the amendments early in the next session and pass them. So I bring that up, A, because the version that passes in June doesn't have to be the final version, right? It doesn't have to be a this or nothing. And so if you're not a 100% on board with every clause 1 way or the other, you can still participate. But B, if there are rapid changes in the field between June and December, and we all agree that those changes have come to be and we need to act on them, we actually can. We're not locked in. We can make those amendments. And so 1 of the things I've talked to industry a lot about is a standard definition of a frontier model. And I keep saying, I'm happy to change this, right? We know what we're trying to capture, but is the threshold exactly right? Should it be different in this way? If everyone in industry, if everyone in academia comes together and says, hey, this is really what the definition should be, we're happy to swap that out and put that in there. And so everyone should keep in mind, different than California, we have that additional flexibility as the year goes on.
Nathan Labenz: (1:41:10) Cool. Well, I think if there's 1 safe bet we can make in this process, it's that there will be some developments in the next 6 or 7 months. So my guess is there will be something that will at least challenge some assumption or provoke another round of discussion. That seems like a safe bet. This has been great. Anything else you wanna share or leave people with before we break for today?
Alex Bores: (1:41:33) I really encourage people to get involved. I mean, this is a field where your voice, you the audience, and yours as well, Nathan, as people who are thinking about these topics deeply at a time when most legislators throughout the country aren't, just because it's new and it's not most people's background, really makes a difference. Especially people who are deep researchers or academics or engineers who really want to be precise in your language and in what's going forward. There's a hesitancy, I think, to be involved in politics, which is often speaking with sweeping statements and maybe not always seen as, to steal your phrase from earlier, intellectually honest. But I encourage you: embrace that nuance. You can give specific feedback. The bill isn't my baby. You're allowed to tell me things to fix in it. But it's really important that you express a desire for something to happen here, because there are definitely people with an economic incentive to say no to any regulation, and if the reasonable people that want some regulation don't speak up, don't send that email, don't send that memo of support, then instead of getting something you agree with 80%, 90%, you're going to get absolutely nothing. And so what I want to leave everyone with is: take action, tell me all the ways you'd improve the bill, I would love that. You know, amendments are always there. But then please, please, please speak up in support, because we have a real chance to make a difference here, but only if everyone gets involved.
Nathan Labenz: (1:43:10) Great. Well, the bill is called the RAISE Act, and in my humble opinion, it's not really all that much to ask. And I think, with your background in technology and obvious technological literacy, you're a great avatar to represent to the public what are some pretty modest requirements. So thank you for taking the time. New York Assembly Member Alex Bores, thank you for being part of the Cognitive Revolution.
Alex Bores: (1:43:34) Thanks for having me.
Nathan Labenz: (1:43:35) It is both energizing and enlightening to hear why people listen and learn what they value about the show. So please don't hesitate to reach out via email at tcr@turpentine.co, or you can DM me on the social media platform of your choice.