AI & The Law: Changing Practice, Claude Constitution, & New Rights, w/ Kevin & Alan of Scaling Laws

Kevin Frazier and Alan Rozenshtein discuss how AI is changing legal practice and careers, its role in legislation and governance, proposals like AI-written contracts and new digital rights, and future conflicts over surveillance, AI sentience, and welfare.


Show Notes

Kevin Frazier and Alan Rozenshtein explore how AI is reshaping the legal profession, from “secret cyborg” lawyers using tools like Harvey to the uncertain future of junior associates and access to legal services. They discuss maximalist legal services, AI-written “complete contingent contracts,” and where AI should fall between strict formalism and legal realism, including Claude’s virtue-ethics-inspired constitution. The conversation then turns to AI’s role in legislation and governance, including outcome-oriented law, the “Unitary Artificial Executive,” and new rights like the Right to Compute and the Right to Share personal data. They close by examining limits on government surveillance and how future debates over AI sentience and welfare could spark social conflict.

LINKS:

Sponsors:

Blitzy:

Blitzy is the autonomous code generation platform that ingests millions of lines of code to accelerate enterprise software development by up to 5x with premium, spec-driven output. Schedule a strategy session with their AI solutions consultants at https://blitzy.com

Framer:

Framer is an enterprise-grade website builder that lets business teams design, launch, and optimize their .com with AI-powered wireframing, real-time collaboration, and built-in analytics. Start building for free and get 30% off a Framer Pro annual plan at https://framer.com/cognitive

Serval:

Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. Book your free pilot and guarantee 50% help desk automation by week four at https://serval.com/cognitive

Tasklet:

Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai

CHAPTERS:

(00:00) About the Episode

(03:35) Surveying AI-law landscape

(14:56) Legal deserts and demand (Part 1)

(15:02) Sponsors: Blitzy | Framer

(18:06) Legal deserts and demand (Part 2)

(28:25) Sponsors: Serval | Tasklet

(31:14) Legal deserts and demand (Part 3)

(31:14) AI and legal careers

(45:10) AI counsel and self-representation

(59:50) Maximalist law and outcomes

(01:12:30) Rules, principles, and Claude

(01:25:26) New rights and restraints

(01:38:26) Outro

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Website: https://www.cognitiverevolution.ai

Twitter (Podcast): https://x.com/cogrev_podcast

Twitter (Nathan): https://x.com/labenz

LinkedIn: https://linkedin.com/in/nathanlabenz/

Youtube: https://youtube.com/@CognitiveRevolutionPodcast

Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431

Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.


Introduction

Hello, and welcome back to the Cognitive Revolution!

Today my guests are Kevin Frazier, Senior Fellow at the Abundance Institute and Director of the AI Innovation and Law Program at the University of Texas School of Law, and Alan Rozenshtein, Associate Professor of Law at the University of Minnesota. 

Together, they host the "Scaling Laws" podcast, which has become a go-to resource for tracking the impact that AI technology is beginning to have on our otherwise slowly-evolving legal system.

In the first part of the conversation, we focus on how AI is affecting the legal profession.  

While lawyers are more insulated from change than most professions, thanks to their unique ability to write licensing laws and implement other guild-style protections, Alan is clear-eyed, noting that the practice of law is fundamentally a cognitive activity and observing that frontier models are already "better than the median lawyer", at least in terms of raw intellectual horsepower.

And yet, while 70% of top law firms have licensed tools like Harvey, Kevin says that day-to-day usage remains surprisingly low, in part because the billable hour compensation structure disincentivizes efficiency.  Some "secret cyborgs" are quietly using AI to outperform their peers, and firms are beginning to whisper about hiring fewer junior associates, but the aggregate impact is limited.  Whether we'll see large-scale displacement of human lawyers, or a dramatic expansion of legal services provided by human-AI teams, remains highly uncertain: though it's clear that many people are under-served by the legal profession today, it's not clear how much more legal services people would want to buy, even at dramatically reduced prices. 

Later on, we consider bigger, more speculative ideas, including:

  • What maximalist legal services might look like, starting with Alan's idea of using AI to develop "complete contingent contracts", which would attempt to address every possible scenario before signing;
  • Where AI should sit, relative to humans, on the spectrum between "strict formalism" and "legal realism", and how the new Claude Constitution represents a virtue-ethics-based approach that prioritizes contextual judgment and high-level principles over detailed rules;
  • How AI could re-shape the legislative process, including Kevin's vision for "outcome-oriented law," where we define what we actually want new laws to do and then use AI to run simulations before passing bills;
  • Alan's concept of the "Unitary Artificial Executive," and the risks associated with the possibility that AI could enable granular, real-time control over the entire federal bureaucracy;
  • What new rights we as individuals should have in light of AI technology, including the "Right to Compute", which has already been enacted in Montana and is being considered in other states, and the "Right to Share" one's personal data, which today is often frustrated by well-intentioned but outdated privacy frameworks;
  • What new restrictions we should place on the government, such as limits on mass surveillance of public spaces; and finally
  • How questions of AI sentience and welfare might become a source of social conflict as people become more and more attached to AI personas.

Kevin and Alan are skilled conversationalists and serious scholars, and I think you'll agree that this episode is simultaneously educational, thought-provoking, and fun. 

So, I encourage you to join me in subscribing to Scaling Laws to keep up with everything going on at the intersection of AI and the Law, and I hope you enjoy my conversation with Kevin Frazier and Alan Rozenshtein.


Main Episode


Full Transcript

(00:00) Nathan Labenz:

Hello, and welcome back to the Cognitive Revolution. Today, my guests are Kevin Frazier, Senior Fellow at the Abundance Institute and Director of the AI Innovation and Law Program at the University of Texas School of Law, and Alan Rozenshtein, Associate Professor of Law at the University of Minnesota. Together, they host the Scaling Laws podcast, which has become a go-to resource for tracking the impact that AI technology is beginning to have on our otherwise slowly evolving legal system. In the first part of the conversation, we focus on how AI is affecting the legal profession. While lawyers are more insulated from change than most professions, thanks to their unique ability to write licensing laws and implement other guild-style protections, Alan is clear-eyed, noting that the practice of law is fundamentally a cognitive activity and observing that frontier models are already better than the median lawyer, at least in terms of raw intellectual horsepower. And yet, while 70% of top law firms have already licensed tools like Harvey, Kevin says the day-to-day usage remains surprisingly low, in part because the billable hour compensation structure disincentivizes efficiency. Some secret cyborgs are quietly using AI to outperform their peers, and firms are beginning to whisper about hiring fewer junior associates. But aggregate impact so far is limited. And whether we'll see large-scale displacement of human lawyers or a dramatic expansion of legal services provided by human-AI teams remains highly uncertain. Because though it is clear that many people are underserved by the legal profession today, it is not at all clear exactly how much more legal services people would want to buy even if prices were dramatically reduced. 
Later on, we zoom out to consider bigger and more speculative ideas, including what maximalist legal services might actually look like, starting with Alan's idea of using AI to develop complete contingent contracts, which would attempt to address every possible scenario before signing. Where AI should sit relative to humans on the spectrum between strict formalism and legal realism, and how the new Claude Constitution represents a virtue ethics based approach that prioritizes contextual judgment and high-level principles over detailed rules. How AI could reshape the legislative process, including Kevin's vision for outcome-oriented law, where we first define what we actually want new laws to do and then use AI to run simulations before passing bills. Alan's concept of the Unitary Artificial Executive and the risks associated with the possibility that AI could enable granular real-time control over the entire federal bureaucracy. What new rights we as individuals should have in light of AI technology, including the right to compute, which has already been enacted in Montana and is being considered in other states, and the right to share one's personal data, which today is often frustrated by well-intentioned but outdated privacy frameworks. What new restrictions we should place on the government, such as limits on mass surveillance of public spaces, and finally, how questions of AI sentience and welfare might become a source of social conflict as people become more and more attached to AI personas. Kevin and Alan are skilled conversationalists and serious scholars, and I think you'll agree that this episode is simultaneously educational, thought-provoking, and fun. So I encourage you to join me in subscribing to Scaling Laws to keep up with everything going on at the intersection of AI and the law. And I hope you enjoy this conversation with Kevin Frazier and Alan Rozenshtein.

(03:35) Nathan Labenz:

Kevin Frazier, Senior Fellow at the Abundance Institute and Director of the AI Innovation and Law Program at the University of Texas School of Law, and Alan Rozenshtein, Associate Professor at the University of Minnesota Law School and Senior Editor at Lawfare. Together, you guys are the creators and cohosts of the podcast, Scaling Laws. Welcome to the Cognitive Revolution.

(03:54) Kevin Frazier:

Thanks for having us, Nathan. Glad to be here.

(03:56) Alan Rozenshtein:

Thanks for having us.

(03:57) Nathan Labenz:

Yeah. I'm really excited for this conversation. We've got a lot of ground to cover. I'm interested in just, you know, always trying to patch my blind spots on the AI landscape in my kind of AI scouting mission, you know, that I always appreciate a chance to do that. So given the fact that you guys are both law professors and scholars and, you know, studying AI and law in the intersection of those two, so deeply, I want to take the chance to kind of get a survey from you in terms of what is going on at the intersection of AI and law. And I listened to your recent episode on the new Claude Constitution, and certainly that's really interesting. There's a paper that you shared with me on automated compliance, which is a phrase I had not heard before, and I think that's really a fascinating concept. And who knows what else? New social contracts we might imagine, explore together as well. So maybe for starters, what's going on at the intersection of AI and law?

(04:54) Kevin Frazier:

I mean, I'd say it's a big traffic jam at this point or a huge crash because we have systems that were largely constructed in the 1960s, if not before. In the 1970s, a lot of the core privacy principles, for example, emerged from the fair information privacy principles. I always get them wrong because we just refer to them as the FIPs. But you've got FIPs from the 1970s. You've got case law from well before that all tries to spell out what rights and obligations we have in an analog world. And we already saw those being pressure tested during the Internet era. And as we all know, AI is kind of just putting all of that on steroids. And so when it comes to trying to see how prior legal regimes fit into this new world of AI, it makes for a lot of rich scholarship. So, thankfully, Alan and I have plenty of excuses to continue to write law review articles, although his are always way better than mine.

(05:57) Alan Rozenshtein:

That's not true, but I'm not sure anyone wants to read any law review articles whether they're even if they're good. Yeah. I would say I might back up a little bit, though I agree with everything Kevin said. So I think the two different intersections of law and AI and if you're in a law school, those law schools that have AI classes and an increasing number of them do, and I think within a year or two, all of them will. They're actually two different classes because there's the law of AI and then there's AI and the law. And those are actually very different things because on the one hand, there's all this stuff Kevin was talking about, which is AI is a new social economic technology movement, maybe the most important thing since fire. But even if you don't think that, probably, I think at this point, everyone agrees at least at the level of the Internet. Right? And so there are all these legal questions that come up and how do you regulate it and how do you promote it and how do you control it, etcetera etcetera. At the same time, there's a whole separate set of conversations that have some overlap but are actually pretty orthogonal to that which is law is just a cognitive discipline. It's not quite as pure of a cognitive discipline as let's say computer programming is. Because there are still areas in which the law expects there to be actual human beings, whereas if tomorrow, all computer programmers uploaded their consciousness into the cloud, you could imagine a world where computer programming would just do fine. With law, rather, you still need people to go into courtrooms. But a huge amount of law is purely cognitive. And so there's no reason to think that the same revolution that AI is currently having in computer programming, which is the manipulation of certain kinds of symbols, will not also apply and is not already applying to the law, which is also the manipulation of certain kinds of symbols. 
It's true that I think the law is somewhat behind where, let's say, computer programming is, but it's like a year behind or maybe two years behind. It's not 30 years behind. Just as software engineering has been completely transformed in the last year, and obviously, I've listened to a bunch of your podcasts. You go into this much more than we do, but we talk about it somewhat, and it's as a really crappy hobbyist programmer myself for many years just because it's fun, and I think of it as the sort of adult approved way of playing video games. As a 39-year-old father of two, it's hard for me to justify playing video games, but if I'm coding, I can convince my wife that's a good use of an evening for me. Although it totally scratches the exact same itch in my brain. Just as AI is totally revolutionizing computer programming, it is in the process of totally revolutionizing the law. I think it's going to take longer, and we can talk about it if you want, because the law is a kind of professional guild, and lawyers are the one guild that because they're lawyers, they control the rules about who can be a lawyer. Right? And so it'll all take longer, but that's like another whole vector. Right? And I think we should all care about that because all jokes about lawyers aside, law is still one of the fundamental technologies of modern society. You want to think of it that way, it's one of the main infrastructures.

(08:49) Nathan Labenz:

Okay. So two big areas that you outlined there. One being basically policy with respect to AI and the other being the impact that AI is making on the practice of law as it's happening today. In just preparing for this, I was looking at what measures do we have to try to get a handle on how good the AIs are getting. And I guess in general, I've been surprised across the board by how far the AIs have made it up the sort of value or performance ladder as measured by something like Chatbot Arena, where I went and saw that currently in the lawyers category, there's not that many prompts, at least in the public dataset, but Claude Opus 4.5 is currently the top performer. It is winning one in three head to head comparisons versus human lawyers, and it's winning or tying 70%. That's like, you obviously made it pretty far. You guys can probably unpack that more qualitatively and tell me what it's good at, what it's bad at, where people are having success and not. But it's been striking to me, and this I would say is true in medicine too, that there hasn't been nearly as much guild closing of ranks as I would have expected two and a half years ago. And I don't understand why. Maybe it's because people are ignorant about how far things have come, and they are living in denial as opposed to making the moves that they might one day wish they had made if they had properly appreciated the phenomenon. But I guess, how would you characterize just, like, how good at law frontier models have become and how much do most lawyers today appreciate that, and why isn't there more of a response so far?

(10:29) Alan Rozenshtein:

Yeah. I think I'm curious what Kevin thinks. I think they're extremely good. Obviously, they're still held back by mistakes, hallucinations. They don't necessarily have access to sort of all the databases that you would need to give a full legal answer, especially if the questions are obscure and require, like, you to have read that one random SEC regulation that's buried in the federal register. These are all these are obviously all fairly trivially solvable problems, and they will be solved in the next few years. But in terms of pure horsepower, they're quite good. Some are better than others. In my kind of testing, the amount I spend on all of these models a month, horrifying, but I feel like it is kind of part of, like, my professional obligation to get a sense. So I find them differently. I think right now, I have found that although Claude is my daily driver, and I mostly live within Claude Code, I find that calling out to o1 to ChatGPT o1, and then especially using, like, the pro extended think model, which is these names are so confusing, which I think you can only get on the web interface because in Code CLI, there's the whole thing's a mess. I think that whatever, like, special in I think all of the labs, like, are spending a lot of money on their sort of custom RLHF environments, and they're obviously focusing on different things. I think OpenAI, my sense is, has focused the most on law. And so I from, like, a vibes perspective, I think its legal taste is the best. But right now, all three will give you pretty good answers. And I, in my scholarship and my writing, am constantly talking to these models, having them pressure test my legal analysis. So I'd say already these models are certainly better than the median lawyer. There's no question about that. At least in whatever kind of raw intellectual horsepower equivalent you would be. I see no reason to think why in a few years they won't be vastly superior. 
There will still always probably be the question of bespoke taste. If you're a super experienced Supreme Court advocate who has done 50 presentations before the justices, that's that's hard to RLHF. But the vast majority of legal work just as the vast majority of programming work, like the vast majority of medical work, is pattern matching across fairly standardized contexts. Now so I think we're like, it's over. Right? There's no question about this anymore. And I will agree with you that there's actually been a lot less pushback on this than I would have have thought. A piece that Kevin and I are currently writing a Lawfare article is actually about the use of AI in legal scholarship. And again, I'm curious, Kevin, your experience. But as I have presented that piece to faculties across the country, I was expecting a lot of tomatoes being thrown and a lot of people saying, oh, but they're just fancy auto completes and they can't be creative. There's honestly a lot less of that than I would have thought. And I think because if you spend an hour talking to any of these models at the $20 a month plan, you just realize if all they're doing is fancy auto complete, then all I'm doing is fancy auto complete. Why there's not been as much resistance? First of all, I think there will be. I think you still the vast majority of lawyers are still they're not tech savvy. They're interested in this. They haven't really experienced this. And so I think that there will be a lot of resistance. But for those lawyers that have experienced this, I think they're making a bet. This is certainly the bet that I'm making that there will be a kind of Jevons paradox of as legal services get cheaper, we will want more of them, and lawyers will move up the value chain. 
And so although it will be messy and although some lawyers will do very badly if they can't if they can't react in time, in 10 or 15 or 20 years, there's going to be, at the very least, as many lawyers as there are today, at least as much demand for legal services and frankly probably much more. Whether that's true is like the question. Right? That is whether Jevons's paradox is going to hold and across which economic domains is like the question about AI and the economy. But I think given how important law is and given how much I think less law there is than there could be and probably should be in a very sophisticated rule of law country, my money is on Jevons's Paradox holding.

(14:41) Nathan Labenz:

You're kind to call our country a sophisticated rule of law country.

(14:45) Alan Rozenshtein:

Dude, I'm trying to dude, I'm calling you from Minnesota. I'm trying so hard to stay trying so hard to stay optimistic right now. I'm taking the long view. This will all be over at some point.

(14:56) Nathan Labenz:

Yeah. Let's hope so. Hey. We'll continue our interview in a moment after a word from our sponsors.

(15:02) Nathan Labenz:

Want to accelerate software development by 500%? Meet Blitzy, the only autonomous code generation platform with infinite code context. Purpose built for large complex enterprise scale code bases. While other AI coding tools provide snippets of code and struggle with context, Blitzy ingests millions of lines of code and orchestrates thousands of agents that reason for hours to map every line level dependency. With a complete contextual understanding of your code base, Blitzy is ready to be deployed at the beginning of every sprint creating a bespoke agent plan and then autonomously generating enterprise grade premium quality code grounded in a deep understanding of your existing code base, services, and standards. Blitzy's orchestration layer of cooperative agents thinks for hours to days autonomously planning, building, improving and validating code. It executes spec and test driven development done at the speed of compute. The platform completes more than 80% of the work autonomously, typically weeks to months of work, while providing a clear action plan for the remaining human development. Used for both large scale feature additions and modernization work, Blitzy is the secret weapon for Fortune 500 companies globally. Unlocking 5x engineering velocity and delivering months of engineering work in a matter of days. You can hear directly about Blitzy from other Fortune 500 CTOs on the modern CTO or CIO classified podcasts or meet directly with the Blitzy team by visiting blitzy.com. That's b l i t z y dot com. Schedule a meeting with their AI solutions consultants to discuss enabling an AI native SDLC in your organization today. AI agents may be revolutionizing software development, but most product teams are still nowhere near clearing their backlogs. Until that changes, if it ever does, designers and marketers need a way to move at the pace of the market without waiting for engineers. That's where Framer comes in. 
Framer is an enterprise grade website builder that works like your team's favorite design tool, giving business teams full ownership of your .com. With Framer's AI wireframer and AI workshop features, anyone can create page scaffolding and custom components without code in seconds. And with real time collaboration, a robust CMS with everything you need for SEO, built in analytics and AB testing, 99.99% uptime guarantees, and the ability to publish changes with a single click, it's no wonder that speed, design, and data obsessed companies like Perplexity, Miro, and Mixpanel run their websites on Framer. Learn how you can get more from your .com from a Framer specialist or get started building for free today at framer.com/cognitive and get 30% off a Framer Pro annual plan. That's framer.com/cognitive for 30% off. Framer.com/cognitive. Rules and restrictions may apply.

(31:15) Nathan Labenz:

So maybe let's work our way up the value levels there. I guess for starters, one that I skipped over, and I wonder if there, I don't know if there's like data around this yet or maybe anecdotal data at this point. But again, in the programming field, you do have companies starting to say, like Anthropic, I think is being most vocal about this, arguably being most forthright about this right now when they're saying we're not really looking to hire junior employees in really any department anymore. And I think in the broad space of software, it's man, I don't know. If I had a senior architect and I could have them mentor a junior programmer or get another $200 a month Claude max plan, which is going to give me better ROI narrowly for the purpose of like my project. Obviously there's broader questions of generalizing that strategy and what happens to society broadly, which I'm not ignoring. But locally, it seems like it's pretty clear that you're going to get more from another Claude code than you would from like a kid who came out of an undergrad CS program that was like all in Java anyway or whatever. There's just so many disconnects there that you're like trying to bridge that Claude code doesn't have, doesn't bring those problems to the table. Is that true at like the paralegal level? I used to read as a kid, I read John Grisham books, and I remember so much of the stories were like these sort of heroic, like sort of Herculean labors of just, especially these underdog individual lawyers fighting one versus these large teams, just reading till their eyes bled these repositories of documents. That seems like the probably the first thing that would be like dramatically disrupted by AI. Are we seeing that? Or is there already like a revolution in like discovery? Or I don't even know what the full list of what paralegals do would be, but are we seeing that like majorly changed already?

(33:10) Kevin Frazier:

Yeah, no, I would say that we're already seeing some industry shifts occur. Fortunately, I get to bring a lot of practicing lawyers to campus here in Austin and probe them about how they're using AI. And I'm not going to name firms, but I've asked, hey, if I came to you with the number one graduating student from Harvard, but they had no AI experience, and then I came to you with an AI whiz from a middle ranked law school, who would you hire? And now I hear more and more, I would take that middle tier person who's savvy with AI tools because I want them to be on the frontier of finding new tools and teaching everyone else how to use it. One of the unfortunate things about the legal industry is we love a good symbolic technological adoption. I think 70% of US law firms, for example, of major top 100 law firms are using Harvey according to Harvey's own stats. Harvey, for folks who aren't in the lawyerly weeds, is basically a souped up version of ChatGPT that's meant to assist specifically with litigation workflows. Yet when I go talk to folks who work at firms with Harvey and I ask, okay, what training have you received? And they say, oh, there was some email we got when it was initially introduced, but I haven't checked it out since. And then, okay, are you expected to use it at all? No, there's really no obligation for us to check it out or to use it in any new fashion. The underlying incentive of practicing attorneys is to spend as much time as possible on any given task within the band that's acceptable to your client because we have the billable hour. If you get paid by the hour, then your incentive as an attorney is to bill as many hours as possible. And so I think there are a lot of firms who are just used to that model and scared about bucking that trend, bucking what they know has worked. And so a lot of firms are not necessarily leaning into AI. 
So I will say that the rate of, let's say, entry level lawyerly jobs disappearing, I haven't seen a huge amount of shrinkage, but I do start to hear whispers now of firms saying we're just not sure we're going to bring on as many summer associates this year, or perhaps we don't need to hire as many junior associates going into the future. And we're also hearing reports of, to coin Ethan Mollick's phrase, a lot of secret cyborgs in law firms these days, the ones who actually are AI savvy, aren't telling their superiors about how sophisticated and how many use cases AI can actually address. So it's a really dynamic time in this space.

(36:03) Alan Rozenshtein:

Yeah, so I'm less plugged in, I think, than Kevin is to legal practice. So if he's hearing that there are whispers around this, then I believe him. I guess I'm a little skeptical if this is happening already. I think the data about whether this is happening in the software engineering field is actually still quite unsettled, and there's kind of a lot of debates over are these big companies actually using AI to not hire people or they're using AI as an excuse for downsizing they've already wanted to do. Again, law is several years behind on the capability scale, and it's actually several years behind even that on the implementing it throughout, both because, again, law firms, and this is part of the guild rules of law, can only be owned and operated by lawyers. And lawyers, God bless them, are not like brilliant business managers generally. And so driving like managerial change is a hard thing to do. Also, again, there are these legal practice rules around when you need a human being showing up in court, and that human being has to attest that they checked everything. And if God forbid your AI hallucinated, it's going to be very bad for you in front of the judge. So I think that there are a lot of reasons to not be worried, you know, right now in the next couple of years. In the longer term, the question is, of course, like how strong is Jevons's paradox? Right? Just all again comes back to this question of induced demand, and we're just not sure what the answer is. I think the more interesting question, or I think the question where we can have more, where it's clear what's going to happen is that a lot of the entry level jobs will just have to go away. And if they're entry level people, they'll be having very different jobs. Because, again, to your point, even a lot of entry level lawyering is very rote work, right? It's doing a ton of discovery. It's finding needles in haystacks. 
It's, you know, writing a contract based on the thousand contracts your firm has done before in this practice domain. And that's just stuff that even today's technology is going to be so good at. And so the question is, is that work necessary on the way to becoming a really good lawyer? And the answer is, we don't know the answer to that question. And actually, let me give an example from software engineering that I think about all the time when I try to think through this question about cognitive deskilling, which is a, I guess, fancy way of saying getting dumber, and which to me, much more than job loss, is the big concern with these AI tools in knowledge fields. That's actually what happened in computer programming, right? So in the beginning, if you've seen The Imitation Game, the movie about Alan Turing at Bletchley Park, there was no computer programming per se. There were machines, and literally with the hardware switches, that's how you could, quote unquote, program the machine. And then someone decided, well, it would be really nice if we did it in zeros and ones. And then someone invented assembly language, right, which at the time was basically considered cheating. Now it's insane to think that assembly language was the easy option. And then at some point someone decided to invent the early programming languages, and those were really considered cheating. And people thought, oh my God, if you can't program in assembly, you're just not a real programmer. You're just a moron, right? Every 10 or 15 or 20 years in computer programming, there's a new level of abstraction that is developed, right?
Because after that, people decided, you know, well, it'd be really nice to have something that does garbage collection so you don't have to worry about memory management, and it'd be really nice to have, say, the Java virtual machine so that you could write once and compile on all the systems, and then, you know, let's just have Python so you can write in pseudocode. Every once in a while you get this new level of abstraction that in some sense makes the task of programming less cognitively demanding in certain domains, in certain respects. And so you could worry, well, that leads to cognitive deskilling. It turns out that the scope of programming problems is essentially infinite. And for most people, programming doesn't become easier exactly. It's just that they operate at a different level of abstraction, and you still have to be pretty smart to do it. We're having this same debate now about whether this new programming language, which is to say natural language prompting of, you know, Claude, is going to have that same effect. My sense is that you're still going to need to be really smart to do this. You're going to have to remember less syntax, but suddenly, at a much earlier age, you're going to be thinking about architectural questions that 30 years ago it would have taken you 15 years to graduate into, because you'd have spent those first 15 years remembering what the syntax or the curly braces were in your programming language. So the question is, and again, we don't know, but the question is, will that similarly translate to law? Will that similarly translate to medicine, where maybe you just don't have to do organic chemistry anymore because, just as you don't need to do long division once calculators come along, maybe you don't have to do organic chemistry once the AI tools are sophisticated enough? Does that make incoming doctors dumber in a certain sense because they don't have to study organic chemistry?
Maybe. But now they can spend their IQ points on more interesting high-level diagnostic questions. I don't know the answer to that question, but certainly in my own practice, such as it is, I'm not a practicing lawyer, I'm a law professor, I'm finding, for example, that I'm using student RAs a lot less than I would have even a few years ago. A lot of the tasks I've had a student RA do, which were, hey, spend 10 hours clicking around, reading 100 law review articles and figuring out which three of them are useful, I can now do with a little script that I wrote that, you know, downloads a bunch of PDFs and sends them all to Gemini. Right? Gemini Flash summarizes them, and then a combination of Gemini Pro and, you know, the Claude API have a little debate about whether or not each law review article is useful for my purposes, and then I get a beautifully formatted markdown document. Again, maybe that'll be solved and I can figure out a different use for my students, but if I can't, that will be a problem, because many professions have an apprenticeship phase. One of the reasons I became a law professor was that when I was in law school I was an RA for a really wonderful law professor, and I did nonsense crap work for him that I'm not even sure added value to his life, but I hung around him for long enough that I learned something about being a law professor. It became something I was interested in doing. If the next generation doesn't have that opportunity, that is a problem. And so that's why I think, even if in the long term I am optimistic, because I do think the Jevons paradox tends to work for intellectual work, in the short term it can get really, really messy, which is why I think the people who are really going to struggle are kind of low-agency people, for lack of a better term.
People who expect that there is a way that you do things, that you, you know, go through the appropriate hoops, that you just grind. I think what AI does is create incredible opportunity for people, but it does require a higher level of agency. You know, if you listen to Tyler Cowen and how he's thought about the implications of AI in the labor market, I think that's one of his main themes, right? The kind of "average is over" theme of a lot of his work, I think, applies. And, you know, in the long term, I think that's great for society. I think you create more value that way by empowering high-agency people. But it sucks for the people who aren't so high agency in the meantime, because they get left behind. And from their perspective, it's a big betrayal, right? Because they did all the right things, and the rug was pulled out from under them. Sorry, I'm rambling at this point, so I'll stop. But I think this is where a lot of the political frictions around this technology are going to come from, and we're going to see that in the next 10 years.
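Alan's literature-triage workflow above (summarize each PDF, have models debate its usefulness, then emit a markdown report) can be sketched in a few lines. To be clear, this is a hypothetical reconstruction, not his actual script: the `triage` function, the role prompts, and the `ask` callable are all assumptions, and a real version would plug Gemini and Claude API calls into `ask`, where the offline stub below just returns canned text.

```python
# Hypothetical sketch of the article-triage pipeline described above.
# `ask(role, prompt)` stands in for real Gemini/Claude API calls, so the
# sketch runs offline; swap in actual model calls to use it for real.

def triage(articles, ask, topic):
    """articles: list of (title, text) pairs. Returns a markdown report."""
    report = [f"# Triage report: {topic}\n"]
    for title, text in articles:
        summary = ask("summarizer", f"Summarize for '{topic}':\n{text}")
        pro = ask("advocate", f"Argue this is useful for '{topic}':\n{summary}")
        con = ask("skeptic", f"Argue this is NOT useful:\n{summary}")
        verdict = ask("judge",
                      f"Summary: {summary}\nPRO: {pro}\nCON: {con}\n"
                      "Answer with one word: USEFUL or SKIP")
        report.append(f"## {title}\n- Verdict: {verdict}\n- Summary: {summary}\n")
    return "\n".join(report)

# Offline stub standing in for the models.
def fake_ask(role, prompt):
    if role == "summarizer":
        return prompt.splitlines()[-1]      # "summary" = last line of the text
    if role == "judge":
        return "USEFUL" if "contract" in prompt.lower() else "SKIP"
    return f"[{role} output]"

doc = triage([("AI and Contract Law", "A paper about AI-written contracts."),
              ("Medieval Water Rights", "A paper about irrigation disputes.")],
             fake_ask, "AI and the law")
print(doc)
```

The interesting design choice is the adversarial judge step: rather than asking one model "is this useful?", the pro and con passes force an explicit argument each way before the verdict, which is roughly the debate structure Alan describes.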

(43:17) Nathan Labenz:

I think that point about the fact that a certain class of people, I mean, I think we've already seen this in the last, I don't know how many years, with sort of the fact that so many kids are coming out of college and can't get a job that really allows them to pay off their student debt in any sort of reasonable way. The general sense that, like, I did what I was told to do. I played by the rules, and somehow I'm still getting screwed. Like, when that hits a certain level.

(43:46) Alan Rozenshtein:

And therefore, we should burn the entire system down. We just like, it's a tough thing.

(43:51) Nathan Labenz:

For people to stomach, and yeah, I mean, that pressure, I don't think burning the whole system down is necessarily the right answer, but I'm at least quite sympathetic to those folks. And I'm also not unmoved by the idea that this is a high-class problem, but increasingly I'm like, yeah, it's not that high-class of a problem. A society has to take care of the big middle class, for lack of a better term, that isn't going to be an outlier relative to the system but is going to do what the system expects them to do. If that can't work anymore, then you've got a big problem, and things do start to come apart potentially pretty quickly. So going back for one second to this legal desert concept and Alan's initial comment that the frontier models are better than the, I think you said, median lawyer or average lawyer practicing today. I think that totally checks out, although I don't have that data from my own personal experience. I can say in the context of pediatric oncology, which I've unfortunately had a major crash course in over the last few months. Fortunately, things are going well. It's been very clear at the hospital on a daily basis that the models are better than the residents, and they really do go toe to toe with the attending oncologists.

(45:11) Alan Rozenshtein:

Can I ask you actually a question about that? Better at what? Because when I said the frontier models are better than the median lawyer, I always hear Ethan Mollick in my mind talking about the jaggedness of it. When I say they're better, I mean they're on average better, but in certain ways they're vastly superior and in certain ways they're incompetent. So when you average that out, you kind of get "better," and I would imagine something similar for medicine too, where on certain diagnostic tasks, right, or certainly explaining things in more layman's terms, they're vastly better. But, and again, I've thankfully never had the experience that you're going through, but I have two small children as well, and I can only imagine that in a situation like that, the bedside manner of the resident and the attending and the nurses, you know, with small children, that's so important. And so in that sense, I think we're a long way from these models being better. I don't know. It's that idea that a job is a bundle of tasks, and only some tasks necessarily get replaced by AI. That's kind of how I think about it.

(46:10) Nathan Labenz:

Yeah. Well, I think in the hospital, I mean, it is a very different domain. In the hospital, the tasks are grouped into multiple bundles, right? So for one thing, I would say the nurses are at much less risk of competition from the language models than the doctors. My poor kid, again, he's doing much better and acting much better now, but in the early days, you know, he was feeling terrible, and all this stuff was happening. It was all very scary, and he could probably tell that we were scared. And, you know, he was not easy to deal with at times. That mostly is the nurses' problem. Getting him to put the blood pressure cuff on or get his temperature taken, there is definitely a bedside manner component to that that the language models are not really touching at all. On the tasks, you know, it's funny. We've got this IV tower that kind of stands there all the time, and when the thing hits the endpoint of a medication it's giving, or the IV drip is about to run out or whatever, it starts beeping. The doctors don't know how to use that thing at all. They literally can't do it.

(47:17) Alan Rozenshtein:

You know, I have had that experience as well.

(47:19) Nathan Labenz:

So it's funny how the lines between these bundles of tasks are pretty sharp in the medical context. As for the residents, I'm not seeing too many weaknesses in the AIs relative to the residents. The one area where I do see the human doctors still having a bit of an edge is the kind of holistic multimodal assessment of the patient, which I as a parent can do. If it were my own self, and I was of at least sound enough mind to do it, I could do this for myself in the same way I can do it for my kid. But if I write a paragraph or so about generally how he's doing and what we've observed over the last however many hours, and put in the test results and whatever, I would say the AIs are clearly better than the residents and, again, pretty much toe to toe with the attendings. There are times when something I say to a language model causes it to come back with a certain concern, and then I become concerned about it. And where the doctors have added value relative to the language model has most of all been saying, I'm just looking at him breathing. I'm looking at his color, and he doesn't seem to be in distress. And I really don't think we need to worry about that right now. That's been the main mode. And my understanding of what's going on in language models is, yes, they're definitely reasoning, though there are also some aspects of stochastic parrotry still on the margin. So I think it's oftentimes just a particular word or phrase that I use that loads in some concept that now is worrying me, and they can put my mind to rest. Anyway, I don't know what the equivalent of that is in the law. And I'm also wondering what the equivalent of prescribing is. Because we do have the general sense that in law, you can represent yourself, right? I can represent myself if I'm accused of a crime.
I think I can pretty much represent myself in anything, right? I can certainly sign contracts for myself without needing to hire anybody. So if I'm thinking about this legal desert scenario, and I'm thinking the model is already better than the median lawyer, and potentially better than the closest lawyer in a legal desert even if I could clone that lawyer, is there a sort of barrier, or is there a place the legal profession can fall back to, like doctors are presumably going to fall back to prescribing? That would be the sort of thing where, yeah, you can talk to ChatGPT all day, but you want the medicines to come through me. Is there a version of that in law that will prevent just every random person from representing themselves with language model backing, or is there not? Or do you think there will be one that will be created?

(50:11) Kevin Frazier:

So I think it's important to flag that every state manages its own practice of law. Every state has a state bar that dictates who's authorized to actually practice law. You know, typically, you have to go to an accredited law school, you have to then pass the bar exam, and then you have to maintain continuing legal education for a series of years in order to represent someone, for example, before a court. Then we have unauthorized practice of law statutes. This is where each and every state basically forecloses someone from saying, hey, I'm on Craigslist, trust me, I've read every law book, let me represent you at half the rate of the attorney down the street, right? It's that unauthorized practice of law statute that forecloses you from being able to do that. And it's those UPL statutes, as we refer to them, that have created hurdles for things like LegalZoom, right? They ran into a ton of hurdles just doing things like wills and some real estate agreements, because you had the guild, the lawyerly guild, defending itself against these new tools. And so there's going to be a lot of friction for a while around tools like, for example, I got to talk to Shlomo Clapper. He started an AI startup called Learned Hand, which, for non-lawyers, was a very famous judge, so the name is meant to be pretty funny. This tool is helping judges, and helping the law clerks who assist judges, write better opinions and write them faster. And to your point, Nathan, the thing we're going to see ultimately, or the thing I hope we see, is that we use these new AI tools to address some of the instances in which justice is effectively denied because justice is so delayed. Most folks don't pay attention to the fact that 95% of all litigation occurs in state courts. And if you've ever had to go before a state court, they are not known for efficiency. You can be waiting months, if not years, trying to get some dispute resolved.
And then when you get it resolved, you may have gotten a judge who's just not good at their job, right? Or maybe they were hangry when they were writing your opinion, or maybe they have something going on personally. And the outcome of that dispute then isn't based on the facts. It isn't necessarily grounded in the law to the extent you hope it is. So we get arbitrary decisions. We get random decisions that, in my opinion, shouldn't be a characteristic of a good legal regime. The idea, in my opinion, is that everyone should be able to enforce and realize their full rights. And yet we rely on an adversarial system in which, to be blunt, whoever can pay the most money wins. That's really messed up, but that's typically how the law gets resolved in a lot of these cases, because whoever can pay their lawyers the longest can survive, more or less, this adversarial approach. If we instead move to a more systematic, consistent approach to handling the lower-level cases, to handling these more basic disputes, the role for lawyers then becomes managing what that legal regime should look like in the first place, right? Trying to set, at a higher level, how we should structure society and structure the incentives such that they align with whatever that community's values are. That's the role our appellate court system plays right now, right? You think of the US Supreme Court or a state supreme court. They get to play the higher-level role of shaping laws more generally. And that's the role I see for lawyers in the future: doing that more hands-on work of thinking through the ultimate ends of the law and making sure the system works in a consistent fashion, rather than the ad hoc, just-hope-you-get-a-good-judge flip of the coin we have right now.

(54:29) Nathan Labenz:

I love that vision. I listened to the episode, by the way, and Learned Hand is definitely first-ballot, all-name-team hall of fame for both a judge and a legal startup. Okay. I definitely want to unpack a little more what this vision of the future of law looks like. But just let me put you on the spot for a prediction. Do you think we're going to see states pass laws saying ChatGPT can't give legal advice, to protect retail lawyers?

(55:01) Kevin Frazier:

I certainly think we're already seeing that some state bar associations have significantly limited the instances in which lawyers can use AI. But on the other hand, we're seeing states like Arizona. Earlier, Alan mentioned that only lawyers can own and manage law firms. Arizona just became the first state to upend that, and it now allows nonlawyers to own and start law firms. And states like Texas and Utah, for example, are leaning into regulatory sandboxes in which AI tools can be deployed with much greater ease. As soon as folks start to see there are cheaper lawyerly tools available in other states, they're going to move their companies to those states. They're going to handle their disputes in those states, and we're going to start to see the law filter there. That's going to be where the pressure emerges from, not from state bar associations waking up one day and saying, you know what? Screw it. Let's just go with the AI. I think it's pretty dang good. It will be that sort of competitive dynamic.

(56:08) Alan Rozenshtein:

Yeah. I would also say I think it's going to be hard, especially in this era, to stop general-purpose chatbots from giving legal advice. From a legal perspective, unauthorized practice of law statutes always raise difficult First Amendment issues, because it's one thing to say, okay, you can't represent yourself as a lawyer who can go into court. Fine. That's one thing. It's another thing to say someone can't talk to you about an interesting legal question. That's core First Amendment speech. And obviously there are blurry lines you have to draw, but I think it's going to be hard to have such a broad limit on the output of AI models, which I think is pretty clearly protected speech. Whose protected speech it is, is an interesting, almost metaphysical question. Models don't really have rights, and the companies, I'm not sure, have First Amendment rights in models that they themselves barely control. I think users and listeners have rights in communicating, but that's an interesting, maybe academic, question. So that's the legal reason why I'm skeptical that you'll have such broad prohibitions. I also think it's just too embarrassing to do that. Enough people have used these models and understand how useful they are. It's going to be such obvious guild-protective self-dealing to go out and say, henceforth we ban the use of ChatGPT to tell you interesting things about the law in the state of Minnesota. Now, what I do think the compromise is going to be is: look, if you want to do certain kinds of legal transactions, you have to go through a lawyer. And this is where, earlier, you asked, can't you always represent yourself? It's an interesting question. I actually don't know the rules about this. Certainly if you're too poor to have a lawyer, you can represent yourself.
It's an interesting question whether, if you're rich enough to have a lawyer, you can nevertheless say, I'd like to go into court and just represent myself in prosecuting this civil lawsuit. I don't know if you can do that. Kevin, are you nodding because you can do that?

(58:02) Kevin Frazier:

I'm fairly certain you can. You can represent yourself pro se and just say, screw it, here we go.

(59:51) Nathan Labenz:

So I asked Claude, by the way. It says that your right to pro se representation is strongest in criminal trials, with exceptions related to mental competency, timeliness, and disruptive conduct, and judges can appoint standby counsel even over your objection. It's weaker in civil cases, as you suggested, and for corporations and other entities; some appellate courts have held there's no constitutional right to pro se representation in criminal appeals, and there are certain specialized proceedings, including immigration courts, etc. So as always, it's complicated. Okay, so the vision for the future. I think the point about whoever has the biggest budget tends to win is a depressing reality. And certainly one of my great hopes for AI broadly is that by making access to expertise far more universal, far more accessible, far more affordable, etc., lots of things could be better and society more just. It is one of the great promises there for sure. How do you see that working in practice? Maybe this is wrong, but when I think about the bigger budget translating to winning, I imagine that being a reflection of, maybe, too much law. Because what are they doing? It seems like there's just so much law out there. There are so many things I could argue, so many precedents I could bring in, that I can spend hours and hours almost indefinitely. And that to me suggests we might need a simpler system in some ways. But that contrasts with your earlier vision of certainly more extensive contracts, which I also projected into maybe more extensive, or more exhaustive, maybe, is the right word, legislation in the first place. So what does that look like in your mind? How do we get to actual justice when, let's say, we all have infinite AI lawyers? How does that translate to justice? What does that look like?

(01:01:57) Alan Rozenshtein:

Yeah, so it depends a lot on what the marginal utility curves of extra legal thinking look like. Right? So my hypothesis, and no one knows the answer to this, take it for what it's worth, which is not a lot, but my intuition, and I'm curious what Kevin's is going to be, is that the reason law has gotten so expensive is that law is a kind of combinatorial search space of arguments and precedents: can I find, in these billions of documents, the one sentence that is going to show that my client should prevail in this contract dispute with your client? If you think of it as having to search this very large combinatorial space, that search largely had to be done by humans. Now, obviously, legal tech long predates legal AI. It's at least 50 years old, back to the dawn of digitizing legal databases. Westlaw and Lexis, the main databases lawyers use, are very old companies. They used to do everything with paper books, and then in the seventies and eighties they digitized everything. That was a huge deal. And more recently, you've had some machine-learning-based discovery tools. But nevertheless, you still need a lot of human beings locked in a conference room to do discovery, and those human beings are extremely expensive. Human labor is extremely expensive. So because the cost of that extra human labor was less than the marginal benefit of exploring a little bit more of that combinatorial search space, the effect was to increase the aggregate cost of litigation, as Kevin mentioned earlier. Okay, so now imagine a world where you have AIs that are three or four orders of magnitude more effective than the current ones, and also three or four orders of magnitude cheaper. You're getting something that's effectively a million times better. Right?
In the next few years, that seems totally plausible if you look at the Epoch AI log curves and things like that. You may get to a point where that actually exhausts the practical combinatorial search space of legal moves that are actually helpful to you. There's just no more precedent to explore. You've just read every single sentence of every single piece of electronic discovery. At that point the arms race ends a little bit, and now there is a natural ceiling on the cost of legal services, because there's just nothing more to spend on. That seems plausible to me. It's also plausible that's not the case, and lawyers will always discover ways to increase the combinatorial search space, and so it'll always be more expensive, etcetera, etcetera. If in 10 years Kevin's very optimistic vision of the democratization of legal services comes true, I suspect it's going to be because we've just exhausted the scope of legal stuff to do. And here I'm actually arguing a little against myself, because now I'm talking myself into Nathan's earlier point that maybe law is a bit more like dentistry, where at some point your teeth are just clean and they can't get cleaner, and so I just don't need more dentistry than that.
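Alan's marginal-cost argument can be made concrete with a toy model. All numbers here are illustrative assumptions, nothing from the episode: suppose the n-th best document reviewed is worth V/n dollars (diminishing returns), and a litigant keeps searching while the next document is worth more than it costs to review.

```python
# Toy model of litigation search costs (illustrative numbers only):
# the n-th document reviewed is worth V/n, so a rational litigant reviews
# documents until V/n falls below the per-document review cost, or until
# the discoverable corpus is exhausted.

def documents_reviewed(best_doc_value, cost_per_doc, corpus_size):
    n = int(round(best_doc_value / cost_per_doc))   # V/n >= c  =>  n <= V/c
    return min(n, corpus_size)

def total_spend(best_doc_value, cost_per_doc, corpus_size):
    return documents_reviewed(best_doc_value, cost_per_doc, corpus_size) * cost_per_doc

V = 10_000        # value of the single best document, in dollars
CORPUS = 500_000  # documents that exist to be searched

human = total_spend(V, cost_per_doc=100.0, corpus_size=CORPUS)  # associate review
ai = total_spend(V, cost_per_doc=0.01, corpus_size=CORPUS)      # 4 OoM cheaper

print(human, ai)  # human review stops early; AI reads the whole corpus
```

With expensive humans, search stops after 100 documents and costs $10,000; the cheap searcher reads the entire corpus for about $5,000 and then has nothing left to buy. That is the ceiling Alan describes: once the search space is exhausted, extra spending stops paying and the arms race caps out.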

(01:05:18) Kevin Frazier:

And it's... I don't know.

(01:05:19) Alan Rozenshtein:

And the problem is we're trying to predict these dynamics, and they're all compounding. Tiny differences in what you think the percentage rate of improvement is, versus the rate of cost reduction, versus how fast the legal search space grows, can lead to massive changes in your predictions over the next 10 years. Which is why I think there's a lot of uncertainty in trying to predict the effect of AI on law or medicine or computer programming or investment or whatever the case is.
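Alan's compounding point is easy to check with made-up numbers. Suppose two forecasters disagree only on how fast AI legal work gets cheaper: one says costs fall 10x per year, the other 5x per year (both rates are illustrative assumptions, not figures from the episode).

```python
# Why modest disagreements about annual rates swamp 10-year forecasts:
# compounding turns a 2x difference in the yearly factor into a 1000x
# difference in where the forecasts land after a decade.

def cost_after(years, annual_factor, initial_cost=1.0):
    """Cost remaining after `years` of dividing by `annual_factor` each year."""
    return initial_cost / annual_factor ** years

fast = cost_after(10, annual_factor=10)  # costs fall 10x per year
slow = cost_after(10, annual_factor=5)   # costs fall 5x per year

print(slow / fast)  # the two forecasts differ by 2**10 = 1024x
```

Both forecasts sound similar stated as annual rates, but one implies near-free legal search years earlier than the other, which is why tiny differences in assumed rates produce massive differences in 10-year predictions.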

(01:05:46) Kevin Frazier:

I'll just add that if you look at a civil procedure textbook, you'll see that the way litigation currently works is basically a series of very complex procedural steps. And everyone always has at their disposal a number of motions they can throw out there to delay the process further. Some of those can be in good faith, right? You may want to challenge whether the litigation should proceed to another step because perhaps the other party hasn't actually made any valid legal claims, or you may want to challenge the source of information behind different legal claims, and so on and so forth. So it's a lot of procedure. It's a lot of process. And what I think can really start to reorient things, as you were teeing up, Nathan, is if we start to move toward outcome-based law, where we change the orientation away from how many steps we can march through to resolve this one very narrow dispute, and toward the outcome both parties actually want to see happen. And now our agents, which have been trained on our incomes, on our preferences, on our aspirations, on our professional goals, and so on, can autonomously act on our behalf to continuously update whatever agreements we've reached with other parties or other corporations to achieve that end. That, to me, is the more optimistic, very sci-fi, but eminently possible outcome that I think we may eventually work toward: let's make sure the law is oriented toward what we actually want to see, and not just assume that more procedure or more process is better. In many ways, this is what Professor Nick Bagley has called the "procedure fetish" of lawyers. Our answer for trying to make everything feel fair to everyone is to give people more opportunities to speak up.
But, usually, it's not a representative sample of folks who actually show up at those opportunities to speak out or to get involved or to throw gum into the cogs of the system. So how do we actually achieve what we wanted to from the outset in passing that law? And that's the sort of outcome orientation that I think we could achieve if we lean into this.

(01:08:10) Nathan Labenz:

So I guess I don't really know what we're trying to accomplish in some of these contexts. For starters, going back to the Learned Hand episode of Scaling Laws, one thing I was struck by there, listening to your description of all this process and the fully exhaustive set of things one might do to represent a client, is that you think, jeez, I feel bad for the judges. It very much struck me that the judges are in a similar position to doctors today, where I think they're just overwhelmed by stuff by and large and welcome help. That's been my sense of how the doctors are typically feeling. They're like, I've got hours of charting to do when I get home, so if something can handle that, that's an easy win. And if you can come prepared to be a better patient, for lack of a better term, in the management of your own health, that's a great win for me too. I've seen some skepticism, but I really have not seen any hostility or sense of threat in my experience in the medical system. I do think a big part of that is just because they're overwhelmed and they know it, so help is welcome. It seemed like that was the vibe the judges have too. But now I'm wondering, okay, we've got one vision here where every corner case of an agreement is articulated in advance.
And this seems to kind of line up, and I'll preface this by saying I don't really have a great command of these terms or a deep understanding, but in prepping for this, I did some research and hit on a study showing that GPT-4, which already tells you the work is dated, that's just how it is in these spaces a lot of the time, was more of a strict formalist, contrasted with the human judges, who were described as more legal realist. Correct me, but I think that basically means GPT is following the letter of the law, and the judges are doing what the Supreme Court is often criticized for doing, which is making the decision it wants to make and then justifying it however it wants to. But I'm torn on which they should be doing, because, at least historically, I don't think we've written laws so well that following them to the bizarre conclusions a true formalist might reach is obviously a great way to go. At the same time, obviously, you've got room for bias and all sorts of problems if you let people exercise their judgment too freely. And that's why we have the whole legal system, so it's not just people getting to dictate how things are going to go with no checks on whatever they want to say. And then we've got Claude's constitution, and I think Amanda Askell has made really interesting points around why they don't want to just give Claude a long series of rules it has to follow, for multiple reasons.
But the one I think is most compelling, as she articulated it, is: if the model knows that it could do something that would be better for the person it's interacting with, but it has to follow these rules, she worries that it might generalize in a problematic way. They've seen this in, like, reward hacking contexts and other experiments: if the model reward hacks and starts to develop some sort of self-conception as the kind of thing that reward hacks, then it becomes more evil in general. And so she thinks a very analogous problem would arise if a model knows that it really could do something better for you, but it follows the rule and doesn't. She's worried that could become a problem of what kind of person does that, and how does that kind of person behave in other situations. And obviously, just following orders doesn't always age well. So I don't know how to tie that all up into a question, but it seems like we have a desire for edge cases to be all spelled out and everything to be in black and white, so that we know in advance what we're getting ourselves into, and maybe we just haven't been able to push that to the extreme where it can actually work. But then we're definitely getting a different signal from Anthropic right now, where they're saying, we don't even wanna try that. What we wanna do is get our AI to have the best possible judgment it can have, so that it knows how to be good even in highly ambiguous situations. So I guess, do you have a sense for which way the law ultimately should go?

(01:12:31) Kevin Frazier:

I want Alan to take the first stab at the Claude constitution answer here because he's got some deep philosophical views. I do wanna briefly hit on the use of AI to precisely, and perhaps perfectly, try to read the law as it's written, right, in the sort of clear formalist mentality you were mentioning, Nathan. I think the issue with that is one of my favorite questions that always gets raised in any good statutory interpretation exercise: imagine you're going to a park, and there's a sign right at the entrance that says no vehicles allowed. Okay. So is a drone a vehicle? Is a stroller a vehicle? Is a scooter a vehicle? Is an ambulance a vehicle? So on and so forth. There's so much ambiguity even when the drafter of that rule may have thought, oh, vehicle, I've nailed it; clearly, I was only referring to a car, and therefore everything is settled. And so that's why we've always had some variance from perfect formalism, or perfect textualism as many lawyers would refer to it, which just says, whatever the law is as written, we're going to apply it. We simply don't have the words for every scenario. Now obviously, AI can theoretically assist with coming up with way more words and way more laws, but that's not the sort of world I think any American wants to live in. We have a common law system here, not a code-based system. If you want to experience a code-based system, go live in the EU, where they attempt to govern and regulate more precisely every kind of behavior. Whereas in the US, we've tolerated some degree of ambiguity, on the reasoning that we need an iterative, emergent approach to discovering how it is we actually want to govern ourselves. 
And so the trick for AI, and the trick for legal adoption of AI into adjudication, is finding out how to use a system that can create more words, and that can resolve textual disputes with greater consistency, while still allowing for that emergent process to continue. Because I think, for Alan and me and for a lot of folks, a world in which you feel like, okay, if you step on this crack, you are automatically going to receive a penalty in the mail, and it will be sent to you within five days and taken out of your bank account, is a scary world that I don't think any of us want to live in. And so maintaining this balance of, as you alluded to in the Claude constitution, higher-level rules that guide us generally, and then enforcement of those rules, is a really tricky issue that could be the subject of a whole legal seminar. Maybe we should just get one on the books, Alan.

(01:15:32) Alan Rozenshtein:

Yeah. I think that'd be fun. So let me say two things: one about the use by judges, and then the broader Claude constitution question. So I was lucky in that I had the opportunity to go and talk to some Minnesota state appellate judges. These are state courts, but they're appellate judges, so they're a little bit removed from the absolute crush of the trial stuff. And one thing that surprised me was how open they actually were to potentially using these tools. There was a lot of skepticism, I think, which was appropriate, and some hesitancy. But, again, there was not the sort of tomato throwing that you might expect. And these are judges, so they tend to be on the older side, frankly, so you could imagine a kind of natural aversion. There wasn't that much of that. And, again, I think if you just spend an hour talking to the $20 version of Gemini or Claude or ChatGPT, you really quickly realize that whatever the long-term societal effects, this thing is pretty useful. And so I do think we're gonna see a lot more of it. How judges use it is tricky, and the kind of research that you mentioned about GPT-4—again, it's unfortunate that these things get out of date pretty quickly. Like, we need a better research pipeline to have these evals come out within a month, not within a year and—

(01:16:40) Kevin Frazier:

A half.

(01:16:41) Alan Rozenshtein:

But I would also say that I did not take that research to say that GPT-4 is textualist, or formalist rather, and therefore models must be formalist. It's just that, for whatever reason, the way that model was trained—and it's GPT-4, so there probably wasn't, like, specific legal RLHF in the way there may very well be with these newer models, and certainly with the legal-specific models—meant that on some corpus of legal questions, it gave a more formalistic answer. But you could have a model that gives a much more functionalist answer, right, which is less concerned about the specific language of the law and asks, what were the legislators trying to do, and how do we apply that to this question of no vehicles in the park? Right? Should a drone be a vehicle? And, to answer that part of your question, I think you're right to view the Claude constitution as taking a position that, in some sense, you want reasoning, whether it's artificial reasoning or human reasoning, to operate more at the level of principles than at the level of rules. But the thing I would say is, I would push against thinking about this as a binary. There are no pure textualists in the world. Right? There is no one so committed to the letter of the law that they would not consider the purposes of the law, or would not deviate if there were an obvious mistake in the law. No one exists like that. Right? Similarly, there's no one who's such a legal functionalist or legal realist that they don't think the legal text binds them at all. Everyone is somewhere in between, and frankly, most people, relative to what the spectrum could actually be, are pretty clustered in the middle. Fifteen years ago, this was reflected on the Supreme Court by Justice Antonin Scalia on the formalist end. 
He literally wrote a law review article once called "The Rule of Law as a Law of Rules." And on the other end by Justice Stephen Breyer, who, you know, would often start with, this is very complicated, here are 17 factors that I'm using to think through this problem. And they actually went on almost a buddy cop tour of lectures around the country where they would debate in a good-natured way, and it was fun to watch. But what you really realized when you saw this was that they were basically both in the middle: Scalia was on one end of the middle and Breyer was on the other end of the middle. So I think the lesson from that, and the way I would read the Claude constitution document, is that any intelligence, whether natural or artificial, needs to be able to operate both at the level of principles and at the level of rules, and that a lot of what we think of as judgment is, to use the fancy phrase from Aristotle, phronesis. And I mention Aristotle because, to Kevin's point about my philosophical interest in Claude's constitution, when you read that document, you really have to appreciate that it was written by someone who has a PhD in moral philosophy from one of the best philosophy departments in the country. Right? Amanda Askell understands academic moral philosophy. She has read the Nicomachean Ethics, and at least as I read Claude's constitution, it is footnotes on that work, which is in no way a criticism. Right? I think all ethics should essentially be footnotes on Aristotle. I read her as saying Aristotle was right in that it's very hard, basically impossible, to derive any comprehensive set of rules of ethics. 
You need to have a real sensitivity to principles, but that doesn't foreclose the use of rules in a particular domain, because sometimes the best principled approach to an ethical domain is to say, it would actually be really helpful to have some rules here. And in fact, when you read Claude's constitution, it toggles between high-level principles and rules. There are, quote unquote, like 17 principles, in no particular order of priority. And then there are a couple of rules where no principles apply. Claude will not create child sexual abuse material. Right? You can have a debate with Claude about the principle; it will not do it. Or at least hopefully, unless it's jailbroken, but then something has gone terribly wrong. By design, Claude will not help you develop airborne Ebola or something like that. It just won't do it. So even there, there is a recognition of that. So I think the question to me is not so much, should we do rules or standards, principles or technical rules. It's always a yes-and. It's how you tune the distribution between the two. And I think what really excites me about AI is what we're now able to do: people sometimes talk about in vitro experiments and in vivo experiments, and then there's this new thing called in silico experiments, where you try to take some part of human life and model it in a machine. And the benefit is that in silico experiments can be done at a speed and scale that are orders of magnitude beyond anything in the real world. So one thing that excites me, as someone who's interested in law for law's sake, is that we can run experiments within machine learning models, about how a well-developed legal system works and exactly what the distribution should be between principles thinking and rules thinking, that you could never run in the real world. 
So I wrote this Lawfare piece recently on Claude's constitution, and I end it with the reflection that we've been debating this question of rules versus standards in ethical reasoning for literally thousands of years. What's cool about these machines is that we can run the experiments now, and I think in the next few years we're gonna learn a lot, not just about machine intelligence but about human intelligence, because we can now simulate it at scale and tune the dials with precision.

(01:22:18) Kevin Frazier:

And just to carry that into a human law context: I think future generations are gonna look back at the sophisticated AI tools we have available right now and be flummoxed that we weren't asking our legislators to run proposed laws through simulations of their intended effects and their likely outputs. Similarly with judges writing opinions and not asking, hey, find all the ambiguities that are latent in this text before I publish it. They're gonna be like, what the hell? You had this ultimate tool at your disposal to catch blatant errors. What were you doing? And so I think this is a great model for folks to follow with respect to that simulation idea.

(01:23:07) Nathan Labenz:

One of my mantras for AI that you're calling to mind is: AI defies all binaries. So your response there, that it can't be all one or the other, fits; I've yet to find a good exception to that general guideline or general expectation. How does this simulation idea work? I get really excited about in silico experiments when it comes to science, too. Can you sketch out what that looks like in law? Do we start with a bunch of scenarios and what we think the right outcomes should be, and turn them into an eval, like we turn everything else into an eval? Or am I living in one of those simulations right now, perhaps?

(01:23:47) Kevin Frazier:

But I think one of the more promising things is forcing legislators to actually do their job, which is difficult, which is saying, what do you actually want to have happen with this law? Look at something like NEPA, the National Environmental Policy Act; everyone just calls it NEPA. This is a law that has famously stymied the building of affordable housing in a lot of communities, because it creates a lot of veto points for individual stakeholders to find a way to gum up the wheels of new development. And my hunch is that we could have forecasted some pressure points that might be exploited by bad actors, or by well-intentioned actors who are just more vocal than others, and identified: is this actually resulting in the sort of pro-environmental, pro-green, pro-climate outcomes that the drafters of that legislation were actually hoping to achieve? So now, if you ask legislators, hey, what are your explicit goals with this legislation? What problem are you actually trying to solve? Then you create evals based off of, okay, have we seen a reduction, for example, in carbon emissions? Have we seen a reduction, with respect to, let's say, a congestion pricing bill, in the number of cars going into the city? Those are all things we can evaluate and map out. And so that's the forcing function to me: saying, hey, if you're gonna propose a law, what is the problem you're actually trying to solve? And then that becomes the core source of information.

(01:25:27) Nathan Labenz:

What should we talk about very briefly in closing? I like the idea of essentially red-teaming bills. I'd never been very involved in red-teaming a bill until SB 1047 last year. There was a lot of red-teaming of that, and it was a pretty interesting process. I've become friends with Dean Ball, who led the initial critique of that bill with his writing online, and even he came out toward the end much happier with it than he was at the beginning. So I think everybody agreed that putting it through its paces and really gaming out how different actors are gonna respond to this, and whether we're really gonna achieve what we want, was a pretty successful process. To think that could be done in general sounds like a very promising enhancement to our legislative process. Good luck talking members of Congress into that. We'll see; I don't know how aligned they are. The first misalignment we may encounter might be between the elected officials and their constituents. But nevertheless, I like the idea. Maybe just in closing, what other kind of big ideas do you think people should be thinking about more? One that I've floated a few times is: what new rights could we introduce in virtue of the fact that we now have scalable intelligence to apply to all sorts of problems? You have the right to remain silent. If you can't afford an attorney, one will be appointed for you. I think you should have a right to ChatGPT or similar, and I imagine that my ideas there are limited by my lack of exposure to the real problems in the system. So I'd be really interested to hear what other rights you think people ought to have in virtue of AI existing, or what other big ideas, on the level of run detailed simulations of your laws before you pass them, you think people should be thinking a lot harder about than we have so far.

(01:27:21) Alan Rozenshtein:

Yeah. I'll go first, and then Kevin can have the last word. So I definitely think you should have a right to use these models, in the sense that I think the First Amendment is probably the right legal home for that. I think you already do. I think this will come up at some point, but I don't think courts are gonna have much difficulty saying that people have the right to access these tools in the same way that they have the right to access libraries to read books. That's the kind of negative right, which is to say you have the right to not have the government forbid you. There's a corresponding positive right, which is the right for someone to give you compute, essentially, and there have been all sorts of interesting arguments about various kinds of public options. They're often discussed as public options to build models, but I think in some sense public options to give people compute credits, or compute budgets, might be interesting. You could write a sci-fi story, or I think I could get Claude to write a pretty interesting sci-fi story, where in the future the currency is compute: the main credit that people pass around is the credit to compute, because that is so valuable. And to your point, Nathan, about how AI dissolves all binaries, I tend to agree, with the exception of one, which is the binary between there being a limit to how much compute is useful in the world and there being no limit. I think AI shows that there is no limit, and so in that sense AI is at an extreme, not in the middle. But to me, and Kevin sometimes rolls his eyes at me because I think he thinks I'm too credulous about this, the question of AI welfare, which is to say the welfare of these models and the legal implications of that, is something that is very easy to dismiss, but it's gonna be an increasingly important issue. 
Either because these models will, as an actual cognitive or metaphysical matter, become increasingly sentient. My brain tends to break when I think about that, but I have trouble ruling it out. But more importantly, actually, more immediately: as these models become more personable, as people develop more relationships with them, as the memory of these models improves, and the more I talk to Claude, there's a point at which Claude knows me better than, like, my wife does, which is totally plausible because I just talk to Claude constantly for everything. Combine that with real-time voice and video, where suddenly your AI chatbot has an avatar that you can interact with. And then certainly once that AI avatar is embodied in robotics, which I think is gonna happen. It'll take a while, maybe longer than we think, but I'd be shocked, really shocked, if in 10 or 15 years we don't have very convincing real-time AI companions that people get extraordinarily attached to. What sorts of rights will people demand for those models? I think it's something that could cause real societal cleavages, because I think you're going to have groups of people who are really committed to the idea that these models are, for many practical purposes, sentient entities that we are enslaving, or at the very least potentially treating very poorly. And then you'll have other people, and I think this may actually be a source of really interesting religious cleavage in the next 20 or 30 years, who think that the very idea of models as sentient is a literal affront to God, a kind of idolatry to which the only correct response is a Dune-style Butlerian Jihad. And there's gonna be a messy middle of people who are just like, I don't know what's going on; I just want a chatbot. I think that's gonna be a very difficult transition at the legal level certainly, but especially at the social level. 
And I think people who say, nah, that's not gonna happen. That's science fiction. I don't know. I think they're fooling themselves.

(01:31:04) Kevin Frazier:

So I'll say the negative right that you all were referring to is, I think, generally encapsulated within the idea of the right to compute. And if this is the first time you're hearing about the right to compute, it's actually been enacted in Montana, and there are bills in Ohio and New Hampshire and, I believe, a couple of other states advocating for it. I believe this is one of those major rights, Nathan, that folks are gonna be clamoring for sooner rather than later: basically saying that we need additional protection against the state infringing your access to computational tools of all kinds, not only AI, whatever is coming down the pike. There should be a higher threshold before the government limits your ability to express yourself or to receive information via these new tools. That's very important in a world in which compute is obviously a scarce resource. The other one that we keep hearing about, but too few people are discussing in my opinion, is data. And I think the right to share, meaning the right to share your data as you see fit, is a really important right. Right now, if you want to share, for example, your kid's educational information with a new AI tool provider, because you wanna train the best AI tutor out there for your kid, who perhaps learns differently, or just because you wanna try a different curriculum, then FERPA, the federal privacy law that applies in that context, is a real burden to sharing as much data as possible, as regularly as possible, without literally signing things on a yearly basis. And I think that if individuals want to share their data and want to make that a frictionless process so that they can train better AI for their own personal uses, that should definitely be a thing. Because we don't all have the ability, for example, to go to that fountain place. 
Is it Fountain something? The Fountain of Youth thing that all the super healthy people are going to, where they're downloading all of their data, getting all these scans, and then sending it to some AI outfit to make personalized health recommendations. That's awesome. But only wealthy folks can go spend a week in Florida, or wherever that is, downloading everything about themselves. The rest of us are just left with whatever Walgreens told us at that last checkup. So let's make it as easy as possible for folks to use their data as they see fit, and that to me is a promising outcome under the right-to-share idea.

(01:33:39) Nathan Labenz:

What about things that we maybe should be thinking about restricting the government from doing? Because I do have the sense that we're probably already in a new age. It's been, whatever, 10 years since Snowden, and I'm wondering, if there were another Snowden, what would they be telling us? I'd have to guess that we've got some sort of LLM dragnet phenomenon going on somewhere. And there's this adage that everybody's committing a felony a week or whatever, and it's just a question of security through obscurity and nobody really targeting you. But that could change very quickly, and we're starting to see, obviously, weaponization of the Justice Department, etc. Should there be new restrictions on what the government can do with AI?

(01:34:30) Alan Rozenshtein:

Yeah. I think that's hugely important. So I wrote a piece for Lawfare a few months ago, which I gave as a speech at a law school, called "The Unitary Artificial Executive," all about the idea that one of the effects of AI, and near-term AI, not speculative AI, is to hugely increase the power of the executive branch and the president in particular. Partly that's because of all the additional abilities AI gives the president: perfect enforcement, surveillance, creation of propaganda at massive scale, all that sort of stuff. And partly it's because the president, him or herself, gets a much greater ability to control the executive branch, which is millions of people and is very hard to control just as a bureaucratic management exercise. But if you have an AI that is trained on the president's preferences, that's injected at all levels of the bureaucracy, reading all the emails, reading all the texts, you can have a situation where the president really controls the executive branch in a much more practical way than he's ever been able to, whatever his legal authorities might be. And that's at the very least complicated. Right? It might have some benefits, because elections should have consequences, and the people voted for person A and not person B, so presumably the executive branch should reflect that. On the other hand, again, I'm calling in from Minnesota. It's not hard to imagine the potential abuses of that. 
And so I think one of the really important issues in the next, let's say, decade, because the government is slow to adopt technology, although it does inevitably get there, is gonna be how, on the one hand, we encourage the government, because I'm fundamentally an AI optimist, to use AI to really improve government services and increase state capacity, which is something our government has not always been good at, and which I think accounts for some fraction of the societal discontent, the kind of burn-it-down mentality: this feeling that we're paying a bunch of taxes and the government's not doing anything useful. AI can really help with that. On the other hand, you don't wanna supercharge the government through the use of AI, and figuring out that balance is very tricky. I suspect it's gonna be the main thing I think about for the next few years as an academic, but it's far more important for the legislators, the bureaucrats, the company executives who are selling these tools to the government, and the politicians and executive branch officials to figure this out as well.

(01:36:47) Kevin Frazier:

And just quickly, I'll add that I think there's some real concern here around updating the Fourth Amendment that we need to pay attention to. Some folks have realized that, hey, in theory, the government now has an incredible ability to tap into basically every system for detecting and picking up audio. If you're speaking publicly, just hanging out, saying whatever, talking to your friend, all of that audio information can now be hoovered up, analyzed, synthesized, and then studied by the government to see who's planning what, who's thinking what, who wants to do what, all without real notification. That sort of pervasive surveillance is tremendously scary to me just to think about, and it's the issue that I'd really flag. On the positive side, I would encourage governments to really lean into regulatory sandboxes when it comes to testing new AI systems, and to err on the side of saying, let's try to deploy this tool, make sure folks have notice that we're doing so and a means to provide feedback, but let's not be afraid of reinventing the wheel and improving our processes and improving our laws.

(01:37:59) Nathan Labenz:

The rule of law, and law generally, have never been more important, and the intersection with AI is obviously ramping up and likely to become one of the big questions of our times in the next couple of years. Timelines are short. Scaling Laws is the podcast where you can find these two and get lots more of their thoughts, plus much deeper dives into everything that's going on at the intersection of AI and law. Kevin Frazier and Alan Rozenshtein, thank you both for being part of the Cognitive Revolution.

(01:38:25) Alan Rozenshtein:

Thanks for having us.

(01:38:26) Kevin Frazier:

Thanks, Nathan.

(01:38:27) Nathan Labenz:

If you're finding value in the show, we'd appreciate it if you'd take a moment to share it with friends, post online, write a review on Apple Podcasts or Spotify, or just leave us a comment on YouTube. Of course, we always welcome your feedback, guest and topic suggestions, and sponsorship inquiries, either via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. The Cognitive Revolution is part of the Turpentine Network, a network of podcasts which is now part of a16z where experts talk technology, business, economics, geopolitics, culture, and more. We're produced by AI Podcasting. If you're looking for podcast production help for everything from the moment you stop recording to the moment your audience starts listening, check them out and see my endorsement at aipodcast.ing. And thank you to everyone who listens for being part of the Cognitive Revolution.
