a16z on Protecting Little Tech: The Techno-Optimist AI Policy Agenda with Matt Perault

In this episode, Matt Perault, Head of AI Policy at a16z, discusses their approach to AI regulation focused on protecting "little tech" startups from regulatory capture that could entrench big tech incumbents. The conversation covers a16z's core principle of regulating harmful AI use rather than the development process, exploring key policy initiatives like the RAISE Act and California's SB 813. Perault addresses critical challenges including setting appropriate regulatory thresholds, transparency requirements, and designing dynamic frameworks that balance innovation with safety. The discussion examines both areas of agreement and disagreement within the AI policy landscape, particularly around scaling laws, regulatory timing, and the concentration of AI capabilities.

Disclaimer: This information is for general educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product. Turpentine is an acquisition of a16z Holdings, L.L.C., and is not a bank, investment adviser, or broker-dealer. This podcast may include paid promotional advertisements, individuals and companies featured or advertised during this podcast are not endorsing AH Capital or any of its affiliates (including, but not limited to, a16z Perennial Management L.P.). Similarly, Turpentine is not endorsing affiliates, individuals, or any entities featured on this podcast. All investments involve risk, including the possible loss of capital. Past performance is no guarantee of future results and the opinions presented cannot be viewed as an indicator of future performance. Before making decisions with legal, tax, or accounting effects, you should consult appropriate professionals. Information is from sources deemed reliable on the date of publication, but Turpentine does not guarantee its accuracy.

SPONSORS:
Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud platform that delivers better, cheaper, and faster solutions for your infrastructure, database, application development, and AI needs. Experience up to 50% savings on compute, 70% on storage, and 80% on networking with OCI's high-performance environment—try it for free with zero commitment at https://oracle.com/cognitive

The AGNTCY: The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at https://agntcy.org/?utm_campai...

NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 41,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks—download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive


PRODUCED BY:
https://aipodcast.ing

CHAPTERS:
(00:00) About the Episode
(06:04) Introduction and Guest Welcome
(06:13) AI Policy and Abundance Institute
(07:37) Dreams for the AI Future
(09:24) AI in Mental Health and Writing
(11:50) Balancing Fear and Excitement in AI
(17:05) Public Policy and AI Regulation
(20:03) Regulating AI Development vs. Harmful Use (Part 1)
(20:58) Sponsors: Oracle Cloud Infrastructure | The AGNTCY
(22:58) Regulating AI Development vs. Harmful Use (Part 2)
(29:25) Frontier AI Development and Safety Plans
(31:46) Introduction to AI Regulation Challenges
(32:28) Concerns About Market Concentration (Part 1)
(34:42) Sponsors: NetSuite by Oracle
(36:06) Concerns About Market Concentration (Part 2)
(36:32) Balancing Innovation and Safety
(42:16) Transparency and Legal Challenges
(47:21) Regulatory Thresholds and Their Implications
(53:14) Exploring Flexible Regulatory Approaches
(59:17) Final Thoughts and Future Developments
(01:00:27) Outro

SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathan...
Youtube: https://youtube.com/@Cognitive...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...


TRANSCRIPT

Introduction

Hello, and welcome back to the Cognitive Revolution!

Today, I'm pleased to have Matt Perault, Head of AI Policy at Andreessen Horowitz, here to discuss a16z's approach to AI regulation, including their advocacy for AI policies that support experimentation & innovation by "little tech" startups, and their efforts to avoid common pitfalls like regulatory capture that could further entrench big tech incumbents.

We cover a range of critical topics, including 

  • a16z's core principle of regulating harmful AI use rather than the research & development process; 
  • the RAISE Act, sponsored by NY State Assembly Member and recent guest Alex Bores, and the challenges of setting appropriate thresholds for regulation; 
  • transparency requirements and what information companies should be required to disclose; 
  • and the pros and cons of California's SB 813, which would create a novel public-private regulatory framework offering liability shields in exchange for opt-in compliance.

Overall, I was again struck by how much common ground exists between the a16z worldview and that of many in the AI safety community, myself included.  Our techno-optimist credentials are perhaps best proven by the fact that we began to take both the upside and the risks of AI seriously years before the technology itself really started to work, and many of us share a16z's generally libertarian politics, veneration of entrepreneurship, and fear of concentration of power.  

With that shared foundation, I think it's clearly good that someone is working to ensure AI capabilities don't become unduly concentrated in just a handful of companies.  And Matt does raise some important considerations – including the possibility that SB 813 could create an unfair advantage for incumbents by setting the compliance cost for liability protection so high that only the biggest companies are able to participate.  I've not yet studied this issue in any depth, but this strikes me as worthwhile red-teaming of the bill, and I definitely plan to raise the concern with some of the people advocating for SB 813 in an upcoming episode.

Meanwhile, where important disagreements persist, and you will hear a few, it seems to me that they largely stem from disagreement on empirical questions, such as whether scaling laws do or don't imply that only a small number of hyper-scalers will push the frontiers of AI over the next few years, and even more importantly, differing expectations about how powerful AI will become and how soon that will happen.

With those differences in mind, I'm definitely more concerned than Matt is about the risks of secretive R&D processes, particularly if companies are allowed to double down on AI-powered ML research, to execute large-scale YOLO training runs without careful testing throughout the training process, and to continue deploying models internally without the same safety standards that have become industry best practice for public deployments.  All this, to me, does seem to create the potential for surprising and potentially irreversible harms, for which prevention is the only really viable solution.

In any case, I came away from this conversation feeling very optimistic that the overwhelming majority of non-ideological people, who simply want what's best for everyone, will converge on shared understanding and smart policy ideas as more evidence comes in.  

I personally would love to see a research breakthrough that solves core AI safety & control issues on a technology level, and I would happily update my policy positions if that happens.  And on the other hand, it seems that mounting evidence of autonomous bad behavior from AI systems has the potential to change at least some accelerationist minds as well.

Timelines, of course, might be quite short, so it's critical to keep tracking all these developments in real time, and I look forward to continuing to bring people of very different perspectives together to grapple with the latest evidence in good faith as we move through this critical period. 

As always, if you're finding value in the show, we'd appreciate it if you'd share it with friends, post about it online, or leave a review on Apple Podcasts or Spotify. We also welcome your feedback via our website, cognitiverevolution.ai, or you can DM me on your favorite social media platform.

Finally for now – a couple of quick disclaimers, from me, and from a16z, which as you may know, recently acquired the Turpentine Network.   

First, while I have lots of help in the production process and business side of the show, for which I'm very grateful, I personally am solely responsible for all content on The Cognitive Revolution, including guest & topic selection, the questions I ask, and the editorial commentary I offer.  For this episode with Matt in particular, I was under no pressure and received no consideration to have him on the show, and while we followed our usual practice of sharing questions in advance and allowing our guests to review our edit and request additional cuts, no topics were off the table and nothing meaningful was cut in the editing process.  

And, second, this is directly from a16z: "This information is for general educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product. Turpentine is an acquisition of a16z Holdings, L.L.C., and is not a bank, investment adviser, or broker-dealer. This podcast may include paid promotional advertisements, individuals and companies featured or advertised during this podcast are not endorsing AH Capital or any of its affiliates (including, but not limited to, a16z Perennial Management L.P.). Similarly, Turpentine is not endorsing affiliates, individuals, or any entities featured on this podcast. All investments involve risk, including the possible loss of capital. Past performance is no guarantee of future results and the opinions presented cannot be viewed as an indicator of future performance. Before making decisions with legal, tax, or accounting effects, you should consult appropriate professionals. Information is from sources deemed reliable on the date of publication, but Turpentine does not guarantee its accuracy."

With that, I hope you enjoy this exploration of AI policy from the techno-optimist, little-tech perspective, with Matt Perault, Head of AI Policy at a16z.


Main Episode

Nathan Labenz: Matt Perault, Head of AI Policy at a16z, welcome to The Cognitive Revolution.

Matt Perault: Thanks so much for having me on.

Nathan Labenz: I'm excited for this conversation. AI policy is a hot topic, and I have many angles I want to discuss with you. But for starters, since I think we'll have a lot in common here, I noticed in my prep that you are a fellow at the Abundance Institute. I wanted to take a minute to give you the floor to share your dreams for the AI future of abundance, and get a sense of what you envision that looking like.

Matt Perault: My primary affiliation is as head of AI policy at Andreessen Horowitz, but I have a couple of side affiliations that I started as an academic. Before this, I was at UNC Chapel Hill running a center on technology policy. Being a fellow at Abundance is one thing I've been able to continue. I'm also a fellow at the center that I used to direct, which is now at NYU. Those affiliations are awesome because they enable me to continue working with certain communities that I really enjoyed working with in the past. The Abundance Institute is a great organization. Christopher Koopman is the lead for it, and it's really focused on a regulatory agenda that can help unlock abundance. I've learned a lot from that group. They have a great group of policy professionals, people oriented around economic policy. They do a lot in energy, which has been interesting and relevant to my current job at Andreessen Horowitz. So, I've really enjoyed maintaining a close relationship with that group. My answer to the question you asked, I think, will be a little disappointing for you. People always ask tech people, regardless of where they sit in tech, to forecast the future: "What does the future of technology look like?" I would say that's something I've never been particularly good at. I'm not an engineer, I'm not a computer scientist, so the people who are really building the tools tend to sit on a different side of the house. I previously worked at what was then Facebook, now Meta. I was on the policy team, not on the product side. At Andreessen Horowitz, we have a lot of people who understand the technology very deeply. That's obviously an important part of my job, doing it as well as possible, but my focus is on the public policy side. So, my orientation to the question is really, what is the right policy agenda to enable other people to unlock abundance? How do you think about ensuring that the people who can create the tools are able to do that? And that doesn't mean total deregulation. That doesn't mean being able to do whatever an engineer wants to do. But it does mean trying to avoid regulatory models where the costs, particularly for startups and small tech companies, outweigh the benefits. That's really the day-to-day focus of my work.

Nathan Labenz: So, I hear you on all that. You have to have some dreams though, right? For me, the future involves medical advances, equal access for everyone around the world to an AI doctor. Maybe one day a robot in my home will do my dishes and pick up my kids' messes. Maybe one day people won't have to work if they don't want to, or if they don't have work they find intrinsically motivating or fulfilling. I don't know. I'm always interested to hear, especially because you're so invested in this, I would love to hear what animates you.

Matt Perault: All those things sound cool. Both my parents and my sister are mental health professionals. So when you talk about AI in medical care, I think about breaking down barriers to access to mental health treatment. That's been something I've discussed with my parents since I was little; they talked to me about the value of having mental health support in your life. That's hard for people to do for many different reasons. It can be cost-prohibitive, hard to identify a mental health care professional, or carry a stigma. So, I think some of the advances there are really exciting and interesting. Obviously, it's important that it be done in a way that is consistent with providing a high-quality level of care. But I think breaking down access barriers is a key thing. As someone who spends a lot of time writing and reading, I've been excited about the benefits AI can bring to the analytical and writing processes. It's a scary one because I think it's coming after my business model. My ability to have some level of writing competence has always been a part of my skillset that has been important in my academic life, as a student and then as a professor, and important to my professional life too. And I think the ability of AI tools to develop thoughtful, clear prose is something that corrodes that advantage. You have to be a differentiator on top of that. Another way to say it is, at one point when I was working at Facebook, I said to a boss, "I understand my job now is to deliver a bad first draft to you." His response was funny. He was like, "Yeah, that's great. But every now and then, could you develop a good first draft?" To the extent that the job some people in these fields have is to produce that first draft that enables people to provide feedback and critique, and then massage and hopefully turn into a really good final product, we now have tools that are able to do bad first drafts, and maybe at some point, very good first drafts. So, I think the fact that it's coming after something that has been important to my professional life is a really interesting, exciting opportunity, and also somewhat of a scary one. But one that I think is exciting, because I think it will again play an important role in breaking down access barriers.

Nathan Labenz: I feel that very much myself. Coding and even podcasting. Notebook LM-

Matt Perault: Yeah.

Nathan Labenz: ...I feel in many cases it competes effectively with what I am able to do. It is coming for all of us.

Matt Perault: What is the balance for you between fear and excitement as a motivator? As a writer, I think there is something exciting about feeling like there is someone, and now a machine, chomping at your heels. I think that can result in being more creative and more focused and devoted to parts of your craft in certain ways. It is also scary to feel like people are potentially coming for your work if you do not perform particularly well. I have not thought about it as a podcaster. How do you... What is the fear versus excitement balance?

Nathan Labenz: I describe myself as classically ambivalent. The dictionary definition of ambivalent is having contrasting, or even contradictory, but strong feelings. That has always been very natural to me with AI. The upside is-

Matt Perault: Mm-hmm.

Nathan Labenz: ...legitimately thrilling. When new models and new products come out, I am always rushing to try them and get hands-on experience. Today, this morning, it has been Cloud for coding an app and Operator for doing some web tasks, and I genuinely feel a thrill from these-

Matt Perault: Yeah.

Nathan Labenz: ...new product experiences. I am less concerned, honestly, about it coming for my work. Not because it is not, but because I worry about even bigger things, like what gain-of-function AI research looks like, and do we really have the wherewithal to control this phenomenon in a macro sense? I am less worried about meaning or jobs or, "What are we going to do with our time?" I would say I am an optimist when it comes to people being able to figure out how to use their time and have a good life, given-

Matt Perault: Yeah.

Nathan Labenz: ...resources, space, and opportunity to do that. But I am a little concerned because we do not really know how these things work, we do not really know why they do what they do, and they do seem to be demonstrating more and more of the negatively thrilling behaviors. As much as it is thrilling to see these things go about these tasks and learn to overcome obstacles, I also cannot help but see that paired with deceptive and scheming behaviors that we are seeing emerge. The dynamics of that get to be really weird and hard to predict. The only thing I do not expect is a business-as-usual future. So I try to... I am obviously a small player in history. I sometimes call myself the Forrest Gump of AI because I always find myself in notable scenes, but usually as an extra. I just try to make whatever little nudge I can in a positive direction. I feel whatever those odds are, it is less for me about handicapping and more about can I put a little bit of shoulder into shifting the odds of a radically positive future just a bit, and hopefully we can all make a little impact like that.

Matt Perault: That seems right to me, and I think that feels like more of the direction of travel. You probably track the macro conversation more closely than I do. I had thought initially, in the post-ChatGPT release moment, that there was a lot of focus not on the immediate, not on what is exactly in front of you, but on the infinite future. Robots taking over the world, what happens in a world where people have less autonomy, what does that mean philosophically? Those big picture things. Then over time, it seems as the technology has improved, more models have become more available to more people, people have a better sense of how to use them. Our digital AI literacy, I think, has improved. People compare notes with their friends about how they use it, and then they integrate some of those use cases into their own lives. The gaze has turned much closer to people. So, exactly as you are describing at the end, it is more about marginal value creation as opposed to, "I am creating a robot that is going to take over my life." My sense, at least, is when the focus is on that, the marginal value creation you get has connected people much more closely to a value proposition that is much less scary. So the question is less, robots taking over the world, and more, "I need ideas for recipes," or, "How do I deal with a kid who is not going to sleep?" Or, "I am looking for activities to do on a family vacation in this place," or whatever it is. Or, "I am struggling with this, how to get this piece off the ground." "How can I structure the ideas?" I have been using AI sometimes because I do not like writing conclusions. AI writes very good conclusions, or at least gives you a bad first draft of a conclusion that can be helpful for that use case. That to me has shifted the conversation, and also, I think, shifted importantly, given the nature of my work, how people think about public policy. You want public policy, I think, to help us avoid long-term catastrophic harms. But I think you also want policy to give you room to unlock and experience the value. Thinking that through, what are the right models, I think, changes as more people have a closer connection to the value proposition.

Nathan Labenz: Maybe that's a perfect segue to jumping into some policy questions. Folks will know a16z as obviously a prolific, super successful investor led by techno-optimist thought leaders. A huge theme of everything I have read of your work and the firm more broadly is making sure that there's room for little tech as opposed to

Matt Perault: Yeah

Nathan Labenz: big tech to experiment, innovate, bring things to market, develop new use cases, and so on. With all that in mind, you said it's not about imposing all regulation, but what is the first order articulation of the policy agenda that you would advocate for today?

Matt Perault: I can frame what's baked into your question slightly more explicitly. A lot of people think that we are extremely focused on being as deregulatory as possible, that I would have done my job effectively if we saw no AI policy. That's not the case, because it's connected in a really important way to the economics of the firm. The life cycle of an Andreessen Horowitz fund is 10 years. That's significantly longer than a vesting cycle at a public tech company. When I was at Facebook, it was a four-year vesting cycle. It's a longer time horizon than you would typically see in private equity. We're not looking to juice stock prices tomorrow or produce massive returns tomorrow. We're looking to create a healthy, interesting, exciting, vibrant, abundance-oriented, big-dream-oriented view of the world and reality in AI technology over a longer period of time, to get back to your first question. That's important for a bunch of reasons. One additional way to think about it is that if people were really into AI tomorrow, really excited about it, and there was an explosion of interest in AI, an explosion of AI products, and then the market cratered a year later because of many examples of harms in our society, we wouldn't get the economic returns we're aiming to get. We're trying to build healthy, long-run, strong companies over a period of time. When you're trying to do that, you need to allow startups to get off the ground. You were getting at this in your question. We are not focused on the health of big tech companies. Sometimes big tech interests overlap with ours and are aligned with ours; sometimes they're not. That's really beside the point. Our focus is on little tech companies: how do they have room to grow? But we have to have an ecosystem that's also providing a set of tools that people like to use and feel safe and secure when they're using them. The core of our policy agenda is focused on encouraging policymakers to regulate harmful use, not to regulate AI development. In our view, that leaves room for little tech companies to build and grow. It's typically how technology regulation has been approached historically. If you think about software development, there's not a lot of regulatory intervention in the building of software. But if you create software that's harmful to the world, depending on the specifics, in most cases there's going to be law that can account for that harm, prosecute you, and deter you from future activity where you're building something harmful. That's our orientation: try to leave the development process available to little tech, give them room to build, but then, importantly, the second component is regulating harmful use.

Nathan Labenz: Let's dig into those, one at a time. On the development side, development can mean many things. It can mean me sitting down and fine-tuning a model, just tweaking it a little bit for my

Matt Perault: Yep

Nathan Labenz: purposes and baking it into an app. On the very high end, it means pushing the frontier of AI capabilities. While I'm definitely not a big analogy guy, I sometimes think of this as being akin to biological research when I think of the canonical example, without coming down firmly on the exact origin of COVID

Matt Perault: Yeah

Nathan Labenz: obviously there are institutes in Wuhan and elsewhere doing this research where they ask, "What would happen if we made a virus do this? Could we?" They have good intentions, to be clear. I understand the idea is we're going to make these things so we can figure out how to treat them so that if it ever does happen, we'll be prepared. But then we do have this history of lab leaks. At the very frontier, it seems there is a very material risk that is analogous, pretty analogous to biological research, where you don't necessarily have to deploy it. You don't even have to intend to deploy it. If you're developing something sufficiently powerful, there is this risk that it might get out of control. So I wonder how you feel about that risk, which for most people

Matt Perault: Yeah.

Nathan Labenz: ... in the AI safety community is honestly the number one concern.

Matt Perault: I don't know enough about the regulation of biological research to know whether it's a good analogy or not. It feels, at least with my superficial understanding, to be a good one. We give broad latitude, I think, to researchers to conduct research, including things that could be very harmful if they're released in a way that violates the law. The reason we do that is there's a lot you can discover and a lot that can be really beneficial to society from engaging in the research process. The research process itself isn't inherently good or bad. Typically, again, I think we want to allow researchers a lot of room to explore and not be overly prescriptive. There are any number of things researchers can do that are problematic. One example, closer to something I've worked on, is that I was at Facebook when the Cambridge Analytica incident occurred. That involved data going from Facebook to an academic researcher, right? So, that transfer, at least under Facebook's terms, was legitimate. The researcher then misused the data, but the initial transfer to a researcher was consistent with Facebook's terms. I think in some ways, perhaps most ways, you would think that initial transfer is desirable. We want academic researchers. Typically, the research community asks for more access to data, not less. That is probably a positive thing. The abuse or misuse of data once it's in your control is a real problem, and I think we want to ensure stringent rules are in place to penalize misuse. So again, in your COVID analogy, it's not okay to develop a virus that could cause a global pandemic and then release it into the world to cause a global pandemic. Again, I don't know the legal regime, but my assumption is that that is unlawful and should therefore be penalized. The research, I think, is a different question. Again, mapping it back to AI development, the development itself is just science. It's math. So the concern our firm has had is that when you look to criminalize the math, when you look to make it harder for scientists to do science, that just has the impact of slowing development and doesn't really do much to address potential harmful uses. So, if we're concerned about harms, we should try to ensure they're punished. I think the concern I have, and maybe it's why so many people think we're purely deregulatory, is that people spend a lot of time hearing, thinking, absorbing, "Don't regulate development." Then, when we say, "And regulate harmful use," the ears close up. Maybe the idea is that it doesn't feel like there's a clear pathway there, or maybe the view is that it's just a diversionary tactic, that we're trying to get people away from regulating AI development so they do nothing. But actually, we think regulating harmful use means there's a lot for lawmakers and enforcers to do that is really significant and meaningful. When I was in law school, I spent a summer at the Criminal Section of the Civil Rights Division of the Justice Department. That was an interesting experience for a number of reasons. Primarily, people go there because they want to litigate cases, I think. When you're there, every day over the course of a summer, what you see, especially as an intern, is that you're not traveling around the country going to different courtrooms. You're in the offices in the Civil Rights Division, watching what the attorneys are doing when they're not in court, and what they're doing is building cases. That's really hard to do. 
Cases don't just come with a clear fact pattern where you can really understand exactly what's going on and where a violation occurs. They don't come to you with that clarity. You have to build the case and identify the violation. I think the same thing will be true in AI. We're not going to know when AI is used in violation of anti-discrimination law. That's not going to be, or I should say, it's not going to necessarily be obvious. You will need to build cases. You need to identify the harm that's occurring, an AI tool being used for that harm, and that, I think, requires work, investment, and probably lawmaking in some cases. You need to have enough people to do the enforcement. Those people need to have the resources they need to do the enforcement. They need to have the technical understanding of the technology to understand when laws are being violated. There may be cases where existing law isn't sufficient to account for harms created by the use of AI. In those cases, we may need to think about tweaks to existing law to ensure they can be taken into account. So, when we say, "Don't regulate development," that means something. Then, when we say, "Regulate harmful use," we consider that second component to be a very active part of the policy agenda.

Nathan Labenz: Let me do one more beat on the development, and then we'll circle back to the...

Matt Perault: Yeah.

Nathan Labenz: ... harmful use. Obviously, a big pattern of bills has been put forward, which I think the proponents would say is their effort to meet folks like you in the middle on this, or even be quite compromising. I think they would also say it reflects a genuine shared concern that they don't want to...

Matt Perault: Yeah.

Nathan Labenz: ...squash frontier research, certainly unintentionally.

Matt Perault: Yeah.

Nathan Labenz: The pattern has been recently, and this is now happening again in New York with a bill that I just did an episode on, the RAISE Act, which I'm sure you're tracking. Basically, the idea is we'll have some threshold, and I know you have thoughts on thresholds. If you are above that threshold, and typically they're trying to design the thresholds so that high single-digit, maybe low double-digit numbers of companies would be caught by these thresholds, then you have to develop a safety plan, publish that safety plan, share it with a regulator, maybe publish it publicly. Those little details vary. Then you have to follow it, and there's some variance on the provisions, like, do you have to have an audit or do you just self-report, or whatever.

Matt Perault: Yeah.

Nathan Labenz: Exactly, right? My impression is that you don't favor those, but it does seem that is pretty consistent with the general principle of giving people pretty wide latitude, because most of these proposals, at least the ones that seem to be getting the most traction, are that the companies get to define their own safety plan and they're basically grading their own homework for the most part. "We just want to make sure," speaking on behalf of the sponsors of the bills, "that they're actually doing it in a serious way, because we do think this stuff is pretty serious, and we're not going to tell every biologist exactly what experiments they can and can't run, but there is a security level for our type concept where if you're-

Matt Perault: Yeah.

Nathan Labenz: ...not implementing certain standards, then we look very unkindly on that." So, how do you feel about this? We can bring in thresholds here, but also just this very top-tier, grade-your-own-homework, but we really want to at least know that you're doing it vibe?

Matt Perault: So, just as a threshold matter, you agree that those regulations are targeting AI development, right? Not harmful use?

Nathan Labenz: I think so. I would say frontier development is usually the phrase that I...

Matt Perault: There are a bunch of ways to get into this. One, just as a top-level matter: what is the problem with targeting harms? If what we're concerned about are harms, why don't we put the emphasis on ensuring that we can address harms when they occur? In terms of the development side, which I know is what you want to focus on, I think there are a couple of different aspects to it. One is who the bills cover, and the second is, for the organizations they cover, what do they ask you to do? Your word was "reasonable." I would say maybe not; I wouldn't use the term reasonable. I think the question I would ask is, is that going to encourage the AI ecosystem that we want to see? On the who side, I think even in your question you said the thing that, as you framed it, sounds like a compromise, sounds totally reasonable, and it's the thing we fear most, which is: this is just going to apply to a handful of companies. Five, 10, 15 companies. It's just going to apply to a small number. That's the thing that we're worried about, because the other way to say it is, this is going to be a concentrated market, and we're going to have a regulatory model that is not only okay with a concentrated market, we're actually going to have an approach to regulation that is going to create concentration. Only a small number of companies are going to be able to build at the frontier. That's not what we want. We want to ensure that startups can build at the frontier, and that if you're a founder and an engineer, or a founder and five engineers, or a founder and 10 engineers, those companies are able to have the ambition of building at the frontier. So, when you asked in your opening, "What's the dream of abundance?" and I said, in a roundabout way, that the way I think about it is, "What is the regulatory model that allows smart, creative, ambitious, skilled entrepreneurs to build great businesses?", this is what I'm talking about: the capacity, the ability to build at the frontier. And the line is, it's just going to be five to 10 companies. Before I started at Andreessen Horowitz, I thought that was a story I'd heard people tell, but I didn't think it was a real thing. And now, I've heard it a couple of times. I heard it on your previous podcast. I heard it in a conversation with someone from the European Union who's working with them on their AI regulation. So, I don't think it's an exaggeration. I think it's a widely held view that, "Look, this kind of development is just going to be a small number of companies." And I think a lot of people will hear that as very reasonable, but I think it's really important, whether or not you agree with us, to see it as very strongly opposed to the vision that we have of what a healthy AI ecosystem would look like.

Nathan Labenz: Okay. In some ways, I am very sympathetic to that; concentration of power generally does not appeal to me. For what it's worth, I find this to be the case among many people who would call themselves AI safety-minded: broadly speaking, they are techno-optimist libertarians throughout their lives, until this one thing. I think the thing that makes it different for people, and that answers the question of what's wrong with just regulating harms, is that there may be some things where, by the time the harm is done, it's just too late, right? The COVID pandemic is an instance of that. And again, I don't take a position on where COVID came from exactly, but if it did come from a lab, and now we have 10-plus million people dead globally, it's really hard to go to that lab and say, "Now you have to pay a fine," and act like that's enough, right?

Matt Perault: I think it's a perfect example. How many constraints have we put on biological research in the wake of COVID?

Nathan Labenz: As far as I know, the gain-of-function research continues, right? I don't think any actual lines of research have really been shut down. Have they?

Matt Perault: This is what I mean. What if the response to that, instead of focusing on all the problematic incentives and their geopolitical issues related to how COVID was released into the world, the response was, "We're going to make it really hard for every single researcher working on biological research to do their research."

Nathan Labenz: That would be... You'd hope to be more targeted than that, and that obviously connects to the number of companies. But there is something to be said, I think, for, "I'd love to fix the world. I'd love to have more wastewater monitoring, and I'd love to have air scrubbers installed in all the schools," and there are so many things that seem like honestly pretty obvious hardening measures. And certainly, this is a theme in the AI safety discourse too: what can we do to harden society against these potential future assaults? But there is also the notion of choke points, and you can say, "There's one thing that seems to be a really bad problem, and that is if you create new viruses that don't exist in nature that have super transmissibility or super lethality. That seems really bad, or at least potentially really bad." And it doesn't seem crazy for society to come in and say, and I don't think we have, but I would support it if society got together and said, "Don't create-"

Matt Perault: A pandemic. We lived through the pandemic. We didn't create civil or criminal liability for what you're describing. I'm not a biological researcher, so I have no idea. I don't have a sense of why we didn't do that, and why we as a society might think that risk is one worth bearing. But my guess is that it's because that kind of research on some of the most virulent, potentially problematic viruses is the same thing that will prevent those viruses from spreading over time. You have to do research on them, I assume, in order to figure out how to combat them. That doesn't mean it's okay to release them into the world, and it doesn't mean that there aren't lots of reasonable things you could do in that situation to try to ensure there's a deterrent for harmful use. But we don't take that kind of research off the table, because of the upside. And when we were talking about AI use cases, there are an enormous number of positive use cases, from the mundane to the revolutionary. Mundane might be, "It helps me write a conclusion," but there's also dramatically lowering barriers to medical access or mental health treatment, or making it much easier for more developers to build more products that are valuable to people. Whatever the explosively positive use case is, I think that, just like with biological research, the reason we preserve a fair amount of ability to do the research itself is because of those possibilities of really positive, explosive use cases. And if what we want is abundance... I think there are a lot of people who are skeptical of abundance, but if that is a thing that we want, then we need to have a regulatory environment that is conducive to it, and putting significant constraints, handcuffs, on the development process is not going to get us there. And there are a few different components of these proposals that feel to me to be particularly off-base. One is that they are just going to apply to a couple of companies. For us, that is another way of saying we can't invest in startups that can grow. That's one part of the innovation angle. The other part of the innovation angle is that one of the reasons our laws disfavor monopoly is that monopolies tend not to innovate rapidly. So, if you have a super concentrated market, it's likely that you're going to see fewer of those most interesting use cases be pushed, and be pushed aggressively. The other thing, which I think is really important, is that it would be a different story if we said, "We're going to make it a lot harder to build AI models, but the impact of that is going to be that we avoid a significant number of safety incidents, and we really can ensure that we are going to dramatically reduce harm in the world." And I don't think that's the case. I don't think that publishing a document with safety protocols means you're not going to have safety incidents. I think there are probably some number of safety incidents that wouldn't occur that otherwise might. But I think it's conceivable that there will be very few of those, that it actually has no direct correlation to the actual safety of your product's performance. And if that's the case, if there's pretty limited efficacy in what you're doing, then you're simply burdening development without a lot of upside. And that, again, is why we think the idea isn't "don't regulate." 
The idea is to put resources and emphasis on the harmful use side, both because it leaves room for development, but also because it actually would be more effective in addressing potential misuse.

Nathan Labenz: I agree with you that the proposals put forward, where companies above a certain threshold (not everybody, of course, but some number, and we could debate exactly how big that number might grow over time) have to develop a safety plan, follow it, and disclose safety incidents, are not that strong a barrier against certain possible bad things happening. I think they are more meant to ensure we are not flying blind, which is honestly one of my bigger worries. I wonder if reframing some of these requirements around transparency is appealing at all. One of my worries is that we are seeing signs of this from OpenAI and others, and frankly, Anthropic, who has, at least in some corners of the safety community, been the darling frontier developer. They seem to be going for a self-improving feedback loop, a recursive dynamic where the AI will get so good at coding and ML research that it will do the improvement itself. And then, literally, somebody from Anthropic, as they launched Claude 4, said, "We want Claude to take over all the ML research so we can all go to the beach." People responded, "You can't put it much plainer than that," right? I wonder what starts to happen behind those closed doors. And again, this is all very speculative right now because we are in unprecedented territory. There is also Daniel Kokotajlo, one of the lead authors of AI 2027, a projection, forecast, or scenario, which I guess might be the best word for it. One of the things he says is that right now we have a very narrow gap between frontier AI capabilities and what is deployed for you and me to use. He expects that gap to grow much wider over the next couple of years as companies realize, and I think they may already know it, that they are in a race for supremacy: who can create the most powerful AI model first, dominate the market, dominate their rivals, and so on. They are just not going to want to show that to anyone. And right now, there is no rule that would require them to do that. So I wonder if you would support something like transparency requirements, even if only around what capabilities AIs have. OpenAI, Anthropic, Google, you have to tell us what your AIs can do, so we at least know what they are capable of in your labs. What do you think about something like that?

Matt Perault: We published a blog that I encourage anyone interested to read, because it is a little hard to run through it quickly. The basic idea is that many transparency proposals are, we think, probably unlawful, in that they probably violate the First Amendment, and are not particularly useful for consumers. I do not think some of the proposed transparency mandates, such as certain kinds of safety risk assessments, will actually provide consumers with information that is useful for them, truly informative, and that shapes how they interact with a model or which models they choose to interact with. In our view, the way to get to a disclosure regime that is lawful, useful for consumers, and not unduly burdensome for smaller tech companies is something we call AI model facts. It would basically be a fact sheet that outlines different information helpful for users. This was a fun project to work on because it was interesting to see what in current disclosure mandate proposals might map onto this. If the criteria are useful for consumers, lawful, and not unduly burdensome for startups, are there elements from current proposals that qualify? There were not many, actually, that I would say meet those criteria. The kind of information we thought of included things like the knowledge cutoff date, which I think is extremely useful for consumers. If you ask an LLM to give you information on who won the basketball game last night, that is not in its training set. Having a sense of exactly when that cutoff date is will enable you to ask better questions that are within the corpus of data. If you were to ask it who won the basketball game last night and it gave you an answer, you should assume a high likelihood that it is a hallucination or information you should not trust. So I think that is the kind of information that is useful for consumers. The courts have been clear that it is very hard to survive legal scrutiny if you require information that is either controversial, not factual, or burdensome for companies. Therefore, I think some types of information typically seen in transparency proposals, such as speculation about potential long-run harm or forecasts of potential safety risk, will find it very challenging to survive First Amendment scrutiny.

Nathan Labenz: One more quick point on thresholds. I share your concern about regulatory capture and the structural oligarchy of various markets that we could find ourselves in via such mechanisms. The people who developed these proposals around thresholds have come to them because they see the scaling law phenomenon as a good indicator that we are headed there regardless, and it seems we are basically there today. I do a live players analysis where I ask people, "Who do you think should count as the top-tier AI companies in the world today?" The list is sometimes as short as three and never longer than 10. That seems to be, at least so far, a structural reality of the inputs these things require. Yes, costs are definitely coming down, so people have proposed naturally escalating thresholds, whether a certain amount of money or even revenue, which I know you favor more than other concepts. Do you see the notion that there will be many companies at the frontier of AI as contrary to what increasingly seems like a natural law? There is a scaling law reality, and as you know as a venture investor, power laws rule everything around us. Is it unrealistic to think that a small number of companies could be caught by these thresholds?

Matt Perault: First, if you are trying to capture large companies able to bear compliance costs, revenue thresholds are clearly the policy vehicle that gets you there. Once you are taking hundreds of millions of dollars in revenue, we can figure out what the threshold should be. I have seen proposals with $100 billion thresholds, higher revenue thresholds, and lower ones. Whatever the amount, at some point of taking revenue, you can devote a percentage of that to compliance costs. It is hard to argue that if your annual revenue is $500 billion, you cannot devote a percentage of that to the exercises and transparency disclosures regulators require. Compute and training cost thresholds are different because we think startups can build at the frontier. They are able to build models with significant compute power. I had a conversation with someone on our technical team about the $100 million training cost threshold because I was seeing that in more legislation. I asked, "Is that a good way to carve out little tech companies?" He said, and I trust him on this as I lack the technical expertise to critique it, that every model costs $100 million to build. Those thresholds will not be successful in separating small tech from big tech. Technology will outpace regulators' ability to update them over time. One thing I noticed in the RAISE Act, which was part of our discussion, is that there are a couple of ways to hit the large developer threshold. You have to hit a $100 million training cost cap, but I do not think anything says that could not be cumulative. This means if you built a large number of $25 million models, you would be captured. Right now, that might seem far in the future for a startup, but a startup might be building a couple of those models a year. I do not know the highest rate of model development, but over a period of time, more than one year, every startup would eventually become a large developer. Those kinds of thresholds focus on the development layer, where we think regulation is much more fraught. If it is cost or compute, that focuses on the science of building the tool and has nothing to do with a company's ability to handle the regulatory complexity that some of these regulatory models require. Another way to address your question is that you are right there are natural barriers to entry in AI, such as the cost of compute. We have ideas about different ways to provide broader compute resources to more entities. Regarding access to data, we have ideas about ensuring more startups have more data for training. The question for policymakers is, do you want to add an additional significant hurdle? Do you want regulatory barriers to make it even harder? If so, we end up in the world you are describing, where a small number of developers can build at the frontier. Everyone else can have the rest of the ecosystem, with less capable models not building at the frontier. We essentially have a regulated monopoly, similar to what we have had in telecommunications, with a very small number of companies under stringent regulatory oversight. At one point, there was even an explicit government grant of a monopoly to one telecommunications provider. Some people would say that is the right model, the exact direction we should pursue. From our standpoint, that level of monopolization is problematic. It suggests a lower level of innovation than we desire and reduces the ability of startups to compete at the frontier, which we believe will unlock significant value for people.

Nathan Labenz: One thought I had earlier on the 10-year time horizon of an a16z fund is that not only is that longer than vesting schedules, it's also longer than most people's AGI timelines these days. There is something I've pitched many times, and it never seems to get any traction, but there is a version of it right now under consideration in California, if you squint at it, with SB 813. It's an ability to move in and out of different regulatory regimes. The simplest possible thing might be to have a sunset clause on a lot of these things. For example, for the next three to five years or whatever, we think we're in a critical period, so let's put this in place, but then have it sunset at three years. If it turns out some of this stuff was hype or we got certain thresholds wrong, then they can just disappear. With SB 813, there's a similar concept, where a trade can be made: an AI company that wants a liability shield, if they opt into a certain regulatory scheme that's approved

Matt Perault: Yeah.

Nathan Labenz: by the state but administered by a private organization, then they can have that liability shield. They could also potentially opt out of it at a future point in time if it's not a trade that's working for them for whatever reason anymore. So I have two questions there. One is, what do you think-

Matt Perault: Yeah.

Nathan Labenz: about SB-813 specifically, and do you have any other creative ideas for ways that we can put some measures in place, whether they be guardrails or transparency or whatever, that don't lock us in long term? Nobody wants the GDPR of AI, but some of us are definitely nervous about what happens 10 years from now.

Matt Perault: There are a lot of lawmakers who seem to want the GDPR of AI, because the proposals they are moving forward with suggest that. One interesting thing is, I saw a story, and I don't know how accurate it is, but I saw a story that the EU is considering pausing implementation of the EU AI Act. So the EU is even having concerns about its own AI-oriented version of the GDPR and starting to pull back. The only state that has actually enacted legislation to govern AI comprehensively is Colorado, and the governor of Colorado came out in support of the federal AI moratorium, which would have made it impossible for him to enforce his own law. So policymakers who have actually enacted these approaches are expressing some rightful concern about how they'll be implemented in practice. Again, that doesn't, from our standpoint, mean the policymaker shouldn't take action. We think lawmakers should focus on actively regulating harmful use. We think there's a role for states and the federal government to play in that. Regulating harmful use is actually pretty consistent with how states historically, and according to the Constitution, have gone about lawmaking. They can't unduly burden interstate commerce, but they can police harmful conduct within their jurisdictions. Some areas of law are disproportionately occupied by state governments; most criminal law is at the state level. So if you're looking to tighten criminal law to address harmful criminal use of AI, most of that is really for state lawmakers to do. Investments in workforce development and AI literacy, there's a lot for states and the federal government to do there. Transparency, we've talked a little bit about an AI model facts model. I think there's a ton of stuff that lawmakers and enforcers can do to try to arm us for a world where AI is more prevalent. That doesn't mean there's an infinite road. The way you described it, I think of, 'Can we just do some things for a short period of time, and if it's not the right approach, we can pivot?' I like that kind of experimentation generally, but I do think depending on the model, if it's a really stringent regulatory model, it's like saying, 'We want to run this race, we want to run it as quickly as possible, but you're going to have to wear a 20-pound backpack for the first mile.' Different people have different views of how important it is to run that race at a certain pace, but it's not cost-free even to do it for the first mile to require someone to wear heavy weight.

Nathan Labenz: So where does that leave you on SB 813 and the quasi-private, or at least more dynamic, regulatory idea there?

Matt Perault: It's something that we're still exploring. There are lots of promising things about it. The idea that lawmakers will have to wrestle with tort liability at some point is important. That's baked into the fundamental premise of the bill, and I haven't seen many proposals that really seek to do that. If we want to see people have even very mundane uses of AI in their lives that will improve their lives in some marginal but meaningful way, you have to think about tort law. The fact that the bill does it is a positive thing. The question is whether there's a regime there that works for little tech, and there are some things that people have said about it that don't fully wrestle with that component. Some people say, 'This is just voluntary,' but immunity from tort law, or some level of immunity, some protection from tort law, is a massive benefit. If you said, 'You can get immunity from tort law, but you have to pay $20 billion,' most people would say, 'That doesn't sound fair.' That doesn't sound workable for little tech. It means every large company is going to pay to get immunity from tort law, and every small company is going to have to bear that legal risk. That's not really voluntary. You're saying there's this very valuable benefit, and you're making it prohibitive for startups. So the question is, is the regulatory regime in 813 one that's workable for startups? There are elements of it that are, but there are elements of it that aren't. My hope is that over time, as that bill is examined, it comes into a form that's more workable for little tech. If it's workable for little tech, that's the direction of travel we want to see. That gets us closer to support.

Nathan Labenz: Do you want to leave the audience with any other thoughts? This could include specific things you're tracking and supporting right now, or other priorities and ideas we touched on that you want to make sure people are aware of?

Matt Perault: One thing I'll flag is that the Trump administration is releasing this national AI action plan at some point in June or July. So that's something to look out for.

Nathan Labenz: I definitely expect there will be many more developments in this story before we get any stable resolution or stable policy regime, let alone even a stable technology regime. I appreciate this. I think this was a good, constructive conversation, and I hope we can do it again because-

Matt Perault: Yeah.

Nathan Labenz: If anything, all the time intervals are getting compressed because I suspect all these things are tightening and will be coming at us with a level of intensity that, if I'm generally right about the direction of where the technology is headed, is going to require all of us to come together and really try to synthesize all the different perspectives into the best possible plan. So come back and let's do that again before too long. How's that sound?

Matt Perault: Sounds great. Thanks a lot.

Nathan Labenz: Cool. Love it. Matt Perault, Head of AI Policy at a16z. Thank you for being part of The Cognitive Revolution.

Matt Perault: Awesome. Great. Thanks for having me on.
