AI & The Law: Changing Practice, Claude Constitution, & New Rights, w/ Kevin & Alan of Scaling Laws

Kevin Frazier and Alan Rozenshtein discuss how AI is changing legal practice and careers, its role in legislation and governance, proposals like AI-written contracts and new digital rights, and future conflicts over surveillance, AI sentience, and welfare.

AI & The Law: Changing Practice, Claude Constitution, & New Rights, w/ Kevin & Alan of Scaling Laws

Watch Episode Here


Listen to Episode Here


Show Notes

Kevin Frazier and Alan Rozenshtein explore how AI is reshaping the legal profession, from “secret cyborg” lawyers using tools like Harvey to the uncertain future of junior associates and access to legal services. They discuss maximalist legal services, AI-written “complete contingent contracts,” and where AI should fall between strict formalism and legal realism, including Claude’s virtue-ethics-inspired constitution. The conversation then turns to AI’s role in legislation and governance, including outcome-oriented law, the “Unitary Artificial Executive,” and new rights like the Right to Compute and the Right to Share personal data. They close by examining limits on government surveillance and how future debates over AI sentience and welfare could spark social conflict.

LINKS:

Sponsors:

Blitzy:

Blitzy is the autonomous code generation platform that ingests millions of lines of code to accelerate enterprise software development by up to 5x with premium, spec-driven output. Schedule a strategy session with their AI solutions consultants at https://blitzy.com

Framer:

Framer is an enterprise-grade website builder that lets business teams design, launch, and optimize their .com with AI-powered wireframing, real-time collaboration, and built-in analytics. Start building for free and get 30% off a Framer Pro annual plan at https://framer.com/cognitive

Serval:

Serval uses AI-powered automations to cut IT help desk tickets by more than 50%, freeing your team from repetitive tasks like password resets and onboarding. Book your free pilot and guarantee 50% help desk automation by week four at https://serval.com/cognitive

Tasklet:

Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai

CHAPTERS:

(00:00) About the Episode

(03:35) Surveying AI-law landscape

(14:56) Legal deserts and demand (Part 1)

(15:02) Sponsors: Blitzy | Framer

(18:06) Legal deserts and demand (Part 2)

(28:25) Sponsors: Serval | Tasklet

(31:14) Legal deserts and demand (Part 3)

(31:14) AI and legal careers

(45:10) AI counsel and self-representation

(59:50) Maximalist law and outcomes

(01:12:30) Rules, principles, and Claude

(01:25:26) New rights and restraints

(01:38:26) Outro

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Website: https://www.cognitiverevolution.ai

Twitter (Podcast): https://x.com/cogrev_podcast

Twitter (Nathan): https://x.com/labenz

LinkedIn: https://linkedin.com/in/nathanlabenz/

Youtube: https://youtube.com/@CognitiveRevolutionPodcast

Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431

Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.


Introduction

Hello, and welcome back to the Cognitive Revolution!

Today my guests are Kevin Frazier, Senior Fellow at the Abundance Institute and Director of the AI Innovation and Law Program at the University of Texas School of Law, and Alan Rozenshtein, Associate Professor of Law at the University of Minnesota. 

Together, they host the "Scaling Laws" podcast, which has become a go-to resource for tracking the impact that AI technology is beginning to have on our otherwise slowly-evolving legal system.

In the first part of the conversation, we focus on how AI is affecting the legal profession.  

While lawyers are more insulated from change than most professions, thanks to their unique ability to write licensing laws and implement other guild-style protections, Alan is clear-eyed, noting the practice of law is fundamentally a cognitive activity and observing that frontier models are already "better than the median lawyer", at least in terms of raw intellectual horsepower.

And yet, while 70% of top law firms have licensed tools like Harvey, Kevin says that day-to-day usage remains surprisingly low, in part because the billable-hour compensation structure disincentivizes efficiency.  Some "secret cyborgs" are quietly using AI to outperform their peers, and firms are beginning to whisper about hiring fewer junior associates, but the aggregate impact so far is limited.  Whether we'll see large-scale displacement of human lawyers, or a dramatic expansion of legal services provided by human-AI teams, remains highly uncertain: though it's clear that many people are under-served by the legal profession today, it's not clear how much more legal service people would want to buy, even at dramatically reduced prices.

Later on, we consider bigger, more speculative ideas, including:

  • What maximalist legal services might look like, starting with Alan's idea of using AI to develop "complete contingent contracts", which would attempt to address every possible scenario before signing;
  • Where AI should sit, relative to humans, on the spectrum between "strict formalism" and "legal realism", and how the new Claude Constitution represents a virtue-ethics-based approach that prioritizes contextual judgment and high-level principles over detailed rules;
  • How AI could re-shape the legislative process, including Kevin's vision for "outcome-oriented law," where we define what we actually want new laws to do and then use AI to run simulations before passing bills;
  • Alan's concept of the "Unitary Artificial Executive," and the risks associated with the possibility that AI could enable granular, real-time control over the entire federal bureaucracy;
  • What new rights we as individuals should have in light of AI technology, including the "Right to Compute", which has already been enacted in Montana and is being considered in other states, and the "Right to Share" one's personal data, which today is often frustrated by well-intentioned but outdated privacy frameworks;
  • What new restrictions we should place on the government, such as limits on mass surveillance of public spaces; and finally
  • How questions of AI sentience and welfare might become a source of social conflict as people become more and more attached to AI personas.

Kevin and Alan are skilled conversationalists and serious scholars, and I think you'll agree that this episode is simultaneously educational, thought-provoking, and fun.

So, I encourage you to join me in subscribing to Scaling Laws to keep up with everything going on at the intersection of AI and the Law, and I hope you enjoy my conversation with Kevin Frazier and Alan Rozenshtein.


Main Episode

[00:00] Nathan Labenz: Kevin Frazier, Senior Fellow at the Abundance Institute and Director of the AI Innovation and Law Program at the University of Texas School of Law. And Alan Rozenshtein, Associate Professor at the University of Minnesota Law School and Senior Editor at Lawfare. Together, you guys are the creators and co-hosts of the podcast Scaling Laws. Welcome to The Cognitive Revolution.

[00:18] Kevin Frazier: Thanks for having us, Nathan. Glad to be here.

[00:20] Alan Rozenshtein: Thanks for having us.

[00:21] Nathan Labenz: Yeah, I'm really excited for this conversation. We've got a lot of ground to cover. I'm always interested in, you know, trying to patch my blind spots on the AI landscape in my kind of AI scouting mission, so I always appreciate a chance to do that. So given the fact that you guys are both law professors and scholars and, you know, studying AI and law and the intersection of those two so deeply, I want to take the chance to kind of get a survey from you in terms of what is going on at the intersection of AI and law. I listened to your recent episode on the new Claude Constitution, and certainly that's really interesting. There's a paper that you shared with me on automated compliance, which is a phrase I had not heard before, and I think that's really a fascinating concept. And who knows what other new social contracts we might imagine and explore together as well. So maybe for starters, what's going on at the intersection of AI and law?

[01:18] Kevin Frazier: I mean, I'd say it's a big traffic jam at this point or a huge crash because we have systems that were largely constructed in the 1960s, if not before. In the 1970s, a lot of the core privacy principles, for example, emerged from the fair information privacy principles. I always get them wrong because we just refer to them as the FIPS. But you've got FIPS from the 1970s, you've got case law from well before that, all trying to spell out what rights and obligations we have in an analog world. And we already saw those being pressure tested during the internet era. And as we all know, AI is kind of just putting all of that on steroids. And so when it comes to trying to see how prior legal regimes fit into this new world of AI, it makes for a lot of rich scholarship. So thankfully, Alan and I have plenty of excuses to continue to write law review articles, although his are always way better than mine.

[02:22] Alan Rozenshtein: That's not true, but I'm not sure anyone wants to read any law review articles, even if they're good. Yeah, I would say, I might back up a little bit, though I agree with everything Kevin said. So I think there are two different intersections of law and AI, and if you're in a law school, those law schools that have AI classes, and an increasing number of them do, and I think within a year or two, all of them will, they're actually two different classes, because there's the law of AI and then there's AI and the law. And those are actually very different things because on the one hand, there's all the stuff Kevin was talking about, which is AI is a new social economic technology movement, maybe the most important thing since fire. But even if you don't think that, probably, I think at this point, everyone agrees, at least at the level of the internet. Right? And so there are all these legal questions that come up and how do you regulate it and how do you promote it and how do you control it, et cetera, et cetera. At the same time, there's a whole separate set of conversations that have some overlap, but are actually pretty orthogonal to that, which is law is just a cognitive discipline. It's not quite as pure of a cognitive discipline as let's say computer programming is. Because there are still areas in which the law expects there to be actual human beings, whereas if tomorrow all computer programmers uploaded their consciousness into the cloud, you could imagine a world where computer programming would just do fine. With AI, with law rather, you still need people to go into courtrooms. But a huge amount of law is purely cognitive. And so there's no reason to think that the same revolution that AI is currently having in computer programming, which is the manipulation of certain kinds of symbols, will not also apply and is not already applying to the law, which is also the manipulation of certain kinds of symbols. It's true that I think the law is somewhat behind where, let's say, computer programming is, but it's like a year behind, or maybe two years behind. It's not 30 years behind. Just as software engineering has been completely transformed in the last year, and obviously I've listened to a bunch of your podcasts, you go into this much more than we do, but we talk about it somewhat, and as a really crappy hobbyist programmer myself for many years, just because it's fun, and I think of it as the sort of adult-approved way of playing video games. As a 39-year-old father of two, it's hard for me to justify playing video games, but if I'm vibe coding, I can convince my wife that's a good use of an evening for me, although it totally scratches the exact same itch in my brain. Just as AI is totally revolutionizing computer programming, it is in the process of totally revolutionizing the law. I think it's going to take longer, and we can talk about it if you want, because the law is a kind of professional guild, and lawyers are the one guild that, because they're lawyers, control the rules about who can be a lawyer, right? And so it'll all take longer, but that's like another whole vector, right? And I think we should all care about that because all jokes about lawyers aside, law is still one of the fundamental technologies of modern society. If you want to think of it that way, it's one of the main infrastructures.

[05:14] Nathan Labenz: Okay, so two big areas that you outlined there, one being basically policy with respect to AI and the other being the impact that AI is making on the practice of law as it's happening today. In just preparing for this, I was looking at what measures do we have to try to get a handle on how good the AIs are getting. And I guess in general, I've been surprised across the board by how far the AIs have made it up the sort of value or performance ladder as measured by something like GDPval, where I went and saw that currently, in the lawyers category, there's not that many prompts, at least in the public data set, but Claude Opus 4.5 is currently the top performer. It is winning one in three head-to-head comparisons versus human lawyers, and it's winning or tying 70% of the time. That's obviously made it pretty far. You guys can probably unpack that more qualitatively and tell me what it's good at, what it's bad at, where people are having success and not. But it's been striking to me, and this I would say is true in medicine too, that there hasn't been nearly as much guild closing of ranks as I would have expected two and a half years ago, and I don't understand why. Maybe it's because people are ignorant about how far things have come and they are living in denial as opposed to making the moves that they might one day wish they had made if they had properly appreciated the phenomenon. But I guess how would you characterize just like how good at law frontier models have become, and how much do most lawyers today appreciate that, and why isn't there more of a response so far?

[06:53] Alan Rozenshtein: Yeah, I think I'm curious what Kevin thinks. I think they're extremely good. Obviously, they're still held back by mistakes, hallucinations. They don't necessarily have access to sort of all the databases that you would need to give a full legal answer, especially if the questions are obscure and require you to have read that one random SEC regulation that's buried in the Federal Register. These are obviously all fairly trivially solvable problems, and they will be solved in the next few years. But in terms of pure horsepower, they're quite good. Some are better than others. In my kind of testing (the amount of money I spend on all of these models a month is horrifying, but I feel like it is kind of part of my professional obligation to get a sense), I rank them differently. I think right now I have found that although Claude is my daily driver and I mostly live within Claude Code, I find that calling out to 5.2, to ChatGPT 5.2, and then especially using the pro extended thinking model, which is, these names are so confusing, which I think you can only get on the web interface, because in Codex CLI there's the x-high, the whole naming is a mess. I think that whatever special... I think all of the labs are spending a lot of money on their custom RLHF environments, and they're obviously focusing on different things. I think OpenAI, my sense is, has focused the most on law. And so I, from like a vibes perspective, I think its legal taste is the best. But right now all three will give you pretty good answers. And I, in my scholarship and my writing, am constantly talking to these models, having them pressure test my legal analysis. So I'd say already these models are certainly better than the median lawyer. There's no question about that, at least in terms of whatever the raw intellectual horsepower equivalent would be. I see no reason to think that in a few years they won't be vastly superior. There will still always probably be the question of bespoke taste. If you're a super experienced Supreme Court advocate who has done 50 presentations before the justices, that's hard to RLHF. But the vast majority of the legal work, just as the vast majority of programming work and the vast majority of medical work, is pattern matching across fairly standardized contexts. Now, so I think we're like, it's over, right? There's no question about this anymore. And I will agree with you that there's actually been a lot less pushback on this than I would have thought. A piece that Kevin and I are currently writing, a law review article, is actually about the use of AI in legal scholarship. And again, I'm curious, Kevin, your experience, but as I've presented that piece to faculties across the country, I was expecting a lot of tomatoes being thrown and a lot of people saying, oh, but they're just fancy autocompletes and they can't be creative. There's honestly a lot less of that than I would have thought. And I think because if you spend an hour talking to any of these models at the $20-a-month plan, you just realize if all they're doing is fancy autocomplete, then all I'm doing is fancy autocomplete. Why there's not been as much resistance, first of all, I think there will be. The vast majority of lawyers are still, they're not tech-savvy, they're interested in this, they haven't really experienced this. And so I think that there will be a lot of resistance.
But for those lawyers that have experienced this, I think they're making a bet, and this is certainly the bet that I'm making, that there will be a kind of Jevons paradox where, as legal services get cheaper, we will want more of them and lawyers will move up the value chain. And so although it will be messy and although some lawyers will do very badly if they can't react in time, in 10 or 15 or 20 years there's gonna be at the very least as many lawyers as there are today, at least as much demand for legal services, and frankly, probably much more. Whether that's true is like the question, right? That is, whether Jevons paradox is going to hold, and across which economic domains, is like the question about AI and the economy. But I think given how important law is and given how much less law there is, I think, than there could be and probably should be in a very sophisticated rule of law country, my money is on Jevons paradox holding.

[11:06] Nathan Labenz: You're kind to call our country a sophisticated rule of law country.

[11:10] Alan Rozenshtein: Dude, I'm trying, dude, I'm calling you from Minnesota.

[11:12] Nathan Labenz: I'm trying so hard to think.

[11:14] Alan Rozenshtein: I'm trying so hard to stay optimistic right now.

[11:17] Nathan Labenz: I'm taking the long view.

[11:18] Alan Rozenshtein: This will all be over at some point.

[11:21] Nathan Labenz: Yeah, let's hope so. Let's unpack that latent demand concept. I have no idea for law, but the way I think about this, and you can tell me if you think about it a different way than how you apply it to law specifically, is on the spectrum from dentistry on the one hand to possibly like software creation on the other hand, which is certainly, if not the most extreme, one that's being tested maybe in the most extreme way right now. Dentistry, I want zero dentistry services for the rest of my life if I can possibly maintain that, and whatever I have to have, I'll get, but I won't be opting into any dentistry just for fun, right? I'm going to buy the minimum that's required for me to have a good life. Accounting, I put on that end of the spectrum too, where I'm like, and accountants maybe will have a different argument, but I will buy the minimum accounting that I need to buy to be compliant and to know what's going on. And then beyond that, I'm not really looking for more. If you could give me 10 times the accounting for the same price versus the same amount of accounting at a tenth the price, I know which one I would pick, and I would pick the savings. Computer programming, on the other hand, there's a lot of optimism that, hey, maybe we do have latent demand for 10 times or 100 times as much software, and everything will be bespoke and whatever. And we can imagine a whole new software abundance paradigm. I guess for me, as like somebody who's a relatively simple person and has a relatively uncomplicated life, my intuition is that law would fall more on the accounting side. Like I do find so often that AI is a GDP destroyer in the sense that when I, for example, last went through a little contract negotiation, it wasn't anything super complicated, but I just took what I got to a couple language models, asked what I should be concerned about, shared my take, and we iterated through it, and I didn't have to hire an attorney, obviously. If there's going to be 10 times more legal services provided at the same cost, what are we not doing today that you would imagine us doing in the future?

[13:29] Kevin Frazier: It's important for non-lawyers to understand that we have a whole concept in the field of law referred to as legal deserts, which are areas in the country that have about one lawyer for every 1,000 residents. And so there's a whole lot of folks who just have no one to turn to when it comes to signing that lease, forming that small business, starting a nonprofit, getting out of that marriage, so on and so forth. There's maybe one person with a single shingle, waiting for any clients that walk down Main Street, trying their best to help them out with a legal dispute, but they're often not a specialist or they often charge too high of fees. And so I think there's a tremendous amount of latent demand just for better, higher quality, faster lawyerly services that suddenly we're going to see a lot of lawyers be able to provide across the US. And that to me is incredibly optimistic because if you look, for example, at landlord-tenant disputes, there have been some trials where if you just provide a little bit of legal counsel, for example, to a tenant, they have a much higher rate of doing well in that dispute than they would absent some degree of legal counsel. So I would say there's a tremendous amount of latent demand. The other thing I'll add is that lawyers often like to refer to themselves as counselors, and not in the way of being like a therapist or something like that, but with some degree of, we want to provide wisdom and judgment and foresight about how you're going to operate your business in this new legal domain, or how you should begin to think about legal architectures more broadly. And that's where I think we'll have a kind of new track of legal education. I see a sort of bifurcation happening in the legal industry where we're going to have the folks who do hang up that single shingle, and they go represent folks in landlord-tenant disputes and take care of the rote tasks that lawyers need to do, but that AI will take a big chunk of work from. And then I see a track that I like to refer to, and I didn't coin this, but legal architects. And they're operating at a bit of a higher, more abstract level, trying to analyze how should systems of law, how should our regulatory structure even begin to work and operate? And that's where I see a huge room for creativity and new training and a new sort of lawyering, where, for example, we have folks like Gillian Hadfield, who's done work with Fathom and... Pascas. Yes, yeah, and Andrew Friedman there, thinking about novel approaches to regulatory design. I am so excited about that sort of work and really think that's going to be a new frontier of legal education that we should embrace and try to foster. So I'm not worried about my students having job opportunities, for example, but I will say for the schools that are falling behind AI adoption, that's tremendously concerning to me because, to just touch briefly on the last question, there's still a number. I think Alan just gets invited to better law schools than I do when he talks about our paper. I've had to dodge a tomato or two, figurative tomatoes, from faculty who just don't want to hear about AI or want to make sure that it's not a part of certain courses or that it's not introduced until their later years. The reality, though, is that kids in high school are using AI, if not well before that. And so by the time they come to law school, this is something we just have to adjust to and acclimate to so that they can succeed when they go into a law firm.
So we have a huge obligation as a legal education industry to make sure we're thinking about that future of law and preparing students for being successful in that domain.

[17:22] Alan Rozenshtein: Yeah, I agree with everything Kevin said. What I would add to that is, on the point of latent demand, in addition to the fact that there are actually a lot of people who are not getting legal services. I think, again, there's a popular sense that there's too much law and it's too litigious of a society, and in some domains, that's absolutely true, but that's not an across-the-board thing. There are so many people that can't get wills or divorces or whatever the case is. And even for so many of us, how many interactions have you had, for example, sort of in your business dealings, that you kind of did as an e-mail because actually writing a contract was just too much of a pain in the ***? I mean, I certainly have done so. In the law, when you take contracts, which is like your standard kind of 1L course, and I think it's actually in some ways maybe the most foundational legal course there is, because it's fundamentally about the question of being precise in agreements, which is kind of ultimately what the law is meant to facilitate, there's this concept of, I think it's called, the complete contingent contract. And that's the idea that if you and your business counterparty had infinite time and infinite energy and zero opportunity costs, your contract would be not infinitely long, but almost infinitely long. Because you would go through and you would figure out, what do I think about every single possible eventuality? And how do I negotiate that to make a win-win situation with my counterparty across every possible contingency? And you can put in some economic theory and determine that if you could do that, that would be socially optimal, et cetera, et cetera, that'd be great. But of course, no one does that because you can't do that. And so the law has all these default rules, which are, you know, fine, but they're default rules, which means they misfire a bunch. Now imagine a world in which we each have our own very sophisticated agent. And when I want to engage with someone in any kind of transaction, my agents can go and have a conversation, right, at the speed of whatever inference speeds are, you know, at the speed of 400 tokens a second. And they can come to an agreement. You can have orders of magnitude more legal demand there in a way that could actually be quite beneficial to society. Now, I don't know if you get more lawyers in the end, but it's not obvious you get fewer lawyers. The other thing I would add is it's true that you don't want any more dentistry than you need. But law is a little different because law is a more competitive activity. You have a counterparty on the other side who is looking out for their own interests in a way that you and your teeth are fundamentally on the same side. So once you get enough dental care, you got enough dental care. Law doesn't quite work that way because no matter how good your legal services are, if the other guy thinks that they can get better legal services, then they'll do that. So there are these arms race dynamics, which is again why I'm not saying that there's an infinite demand for legal services, but I think it's a pretty big one.

[20:11] Kevin Frazier: And just to build on that really quickly, because in addition to thinking about improving the basics of law, like contracts, Nathan, in our kind of pre-recording session, when we were all just hanging out, we were talking about what does AI and the future of governance look like? And one thing that I've been shocked by is the more you dig into laws and the more you realize what technology is capable of and what AI is capable of, you realize our laws really suck. We're writing laws in the same way, with the same degree of expectations, and in the same format as we would have seen centuries ago, right? And yet, to Alan's point, and as we were discussing, Nathan, we can use AI, for example, to create new triggers for, hey, if, for example, the unemployment rate goes to 7% in this field, then we want to see this new economic policy. Or if we see tariffs are imposed by this country, then we want to automatically see this response. There's so much room for smarter legislation that we're not even scraping the surface of. And that to me is another exciting field that lawyers haven't really, in earnest, begun to explore. Professor Hadfield is obviously leading the way in that regard, but we need a lot of little Gillians hanging around and going and emulating that study of what does the future of law look like?
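(To make the trigger idea concrete, here is a toy sketch, not anything Kevin proposes verbatim, of how an outcome-oriented statutory trigger might be encoded declaratively and checked against published indicators. The indicator names, thresholds, and responses below are hypothetical placeholders.)

```python
# Toy illustration of an "outcome-oriented" statutory trigger: a rule fires when a
# published indicator crosses a threshold and names the policy response to activate.
# All indicator names, thresholds, and responses are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Trigger:
    name: str
    indicator: str                       # the published statistic the statute cites
    condition: Callable[[float], bool]   # when does the provision fire?
    response: str                        # what the statute directs once it fires


TRIGGERS = [
    Trigger(
        name="Sectoral unemployment relief",
        indicator="manufacturing_unemployment_rate",
        condition=lambda rate: rate >= 7.0,
        response="Activate the retraining-grant program",
    ),
    Trigger(
        name="Reciprocal tariff response",
        indicator="new_tariff_rate_country_x",
        condition=lambda rate: rate > 0.0,
        response="Direct the trade agency to prepare a proportional response",
    ),
]


def evaluate(indicators: dict[str, float]) -> list[str]:
    """Return the responses whose statutory conditions are met by the latest data."""
    return [
        t.response
        for t in TRIGGERS
        if t.indicator in indicators and t.condition(indicators[t.indicator])
    ]


if __name__ == "__main__":
    latest = {"manufacturing_unemployment_rate": 7.3, "new_tariff_rate_country_x": 0.0}
    for action in evaluate(latest):
        print(action)  # prints only the retraining-grant response
```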

[21:36] Nathan Labenz: So maybe let's work our way up the value levels there. I guess for starters, one that I skipped over, and I wonder if there, I don't know if there's like data around this yet, or maybe anecdata at this point. But again, in the programming field, you do have companies starting to say, like Anthropic, I think is being most vocal about this, arguably being most forthright about this right now, when they're saying, we're not really looking to hire junior employees in really any department anymore. And I think in the broad space of software, it's, man, I don't know. If I had a senior architect and I could have them mentor a junior programmer or get another $200 a month Claude Max plan, which is going to give me better ROI narrowly for the purpose of like my project. Obviously, there's broader questions of generalizing that strategy and what happens to society broadly, which I'm not ignoring, but locally, it seems like it's pretty clear that you're going to get more from another Claude Code than you would from like a kid who came out of an undergrad CS program that was like all in Java anyway or whatever. It's just there's so many disconnects there that you're like trying to bridge that Claude Code doesn't have, doesn't bring those problems to the table. Is that true at like the paralegal level? As a kid, I read John Grisham books, and I remember so much of the stories were like these sort of heroic, sort of Herculean labors of, especially, these underdog individual lawyers fighting one versus these large teams, just reading till their eyes bled these repositories of documents. And that seems like probably the first thing that would be like dramatically disrupted by AI. Are we seeing that? Or is there already like a revolution in like discovery or, I don't know, I don't even know what the full list of what paralegals do would be, but are we seeing that like majorly changed already?

[23:31] Kevin Frazier: Yeah, I would say that we're already seeing some industry shifts occur. Fortunately, I get to bring a lot of practicing lawyers to campus here in Austin and probe them about how they're using AI. And I'm not gonna name firms, but I've asked, hey, if I came to you with the number one graduating student from Harvard, but they had no AI experience, and then I came to you with an AI whiz from a middle-ranked law school, who would you hire? And now I hear more and more, I would take that middle-tier person who's savvy with AI tools because I want them to be on the frontier of finding new tools and teaching everyone else how to use it. One of the unfortunate things about the legal industry is that we love a good symbolic technological adoption. I think 70% of major top 100 US law firms, for example, are using Harvey, according to Harvey's own stats. Harvey, for folks who aren't in the lawyerly weeds, is basically a souped-up version of ChatGPT that's meant to assist specifically with litigation workflows. Yet when I go talk to folks who work at firms with Harvey and I ask, Okay, what training have you received? And they say, Oh, there was some email we got when it was initially introduced, but I haven't checked it out since. And then, Okay, are you expected to use it at all? No, there's really no obligation for us to check it out or to use it in any new fashion. The underlying incentive of practicing attorneys is to spend as much time as possible on any given task within the band that's acceptable to your client, because we have the billable hour. If you get paid by the hour, then your incentive as an attorney is to bill as many hours as possible. And so I think there are a lot of firms who are just used to that model and scared about bucking that trend, bucking what they know has worked. And so a lot of firms are not necessarily leaning into AI. So I will say that, in terms of the rate of, let's say, entry-level lawyerly jobs disappearing, I haven't seen a huge amount of shrinkage, but I do start to hear whispers now of firms saying, we're just not sure we're going to bring on as many summer associates this year, or perhaps we don't need to hire as many junior associates going into the future. And we're also hearing reports of, to borrow Ethan Mollick's phrase, a lot of secret cyborgs in law firms these days; the ones who actually are AI savvy aren't telling their superiors about how sophisticated AI is and how many use cases it can actually address. So it's a really dynamic time in the space.

[26:24] Alan Rozenshtein: Yeah, so I'm less plugged in, I think, than Kevin is to legal practice. So if he's hearing that there are the whispers around this, then I believe him. I guess I'm a little skeptical that this is happening already. I think the data about whether this is happening in the software engineering field is actually still quite unsettled. And there's kind of a lot of debates over, are these big companies actually using AI to not hire people, or are they using AI as an excuse for downsizing they've already wanted to do? Again, law is several years behind on the capability scale. And it's actually several years behind even that on implementing it throughout. Both because, again, law firms, and this is part of the guild rules of law, can only be owned and operated by lawyers. And lawyers, God bless them, are not like brilliant business managers generally. And so driving like managerial change is a hard thing to do. Also, again, there are these legal practice rules around, well, you need a human being showing up in court, and that human being has to attest that they checked everything, and if, God forbid, your AI hallucinated, it's going to be very bad for you in front of the judge. So I think there are a lot of reasons to not be worried right now in the next couple of years. In the longer term, the question is, of course, how strong is Jevons's paradox? It all comes back to this question of induced demand, and we're just not sure what the answer is. I think the more interesting question, or I think the question where it's clearer what's going to happen, is that a lot of the entry-level jobs will just have to go away. And if there are entry-level people, they'll be having very different jobs. Because again, to your point, Nathan, a lot of entry-level lawyering is very rote work, right? It's doing a ton of discovery. It's finding needles in haystacks. It's, you know, writing a contract based on the thousand contracts your firm has done before in this practice domain. And that's just stuff that already today's technology is gonna be so good at. And so the question is, and we just don't know the answer to this question, is that work necessary on the way to becoming a really good lawyer? And the answer is we don't know the answer to that question. And actually, let me give an example from software engineering that I think about all the time when I try to think through this question about cognitive de-skilling, which is a fancy way of saying getting dumber, and which to me is actually, much more than job loss, the big concern with these AI tools in knowledge fields. And that's actually what happened in computer programming. If you've seen The Imitation Game, the movie about Alan Turing at Bletchley Park, there was no computer programming per se. There were machines, and then you would literally, with the hardware switches, that's how you could, quote unquote, program the machine. And then someone decided, well, it actually would be really nice if we did it in zeros and ones. And then someone invented assembly language, which at the time was basically considered cheating. Now it's insane to think that assembly language was the easy option.

[30:00] Alan Rozenshtein: And then at some point, someone decided to invent the early programming languages. And those were really considered cheating. And people thought, oh my God, if you can't program in assembly language, you're just not a real programmer. You're just a moron. Every 10 or 15 or 20 years in computer programming, there's a new level of abstraction that is developed, right? Because after that, people decided, well, it'd be really nice to have something that does garbage collection and worries about memory management. And it'd be really nice, maybe we should have something like the Java virtual machine so that you could write once and compile on all the systems. And then let's just have Python so you can write in pseudocode. Every once in a while, you have this level of abstraction that, in some sense, makes the task of programming less cognitively demanding in certain respects. And so you could worry, well, that leads to cognitive de-skilling. It turns out that the scope of programming problems is essentially infinite, and for most people, programming doesn't become easier exactly, it's just that they operate at a different level of abstraction, and you still have to be pretty smart to do it. We're having this current debate here about whether this new programming language, which is natural language prompting of Claude, is gonna have that same effect. My sense is that you're still gonna need to be really smart to do this, you're gonna have to remember less syntax, but suddenly at a much earlier age, you're gonna be thinking about architectural questions that 30 years ago it would have taken you 15 years to graduate into, because you'd have spent those first 15 years like remembering what the syntax or the curly braces were in your programming language. So the question is, and again, we don't know, but the question is, will that similarly translate to law? Will that similarly translate to medicine, where maybe you just don't have to do organic chemistry anymore? Because, I don't know, just as you don't need to do long division once calculators come along, maybe you don't have to do organic chemistry once the AI tools are sophisticated enough to do that. Does that make incoming doctors dumber in a certain sense because they don't have to study organic chemistry? Maybe, but now they can spend their IQ points on more interesting, higher level diagnostic questions. I don't know the answer to that question, but certainly in my own practice, such as it is, and I'm not a practicing lawyer, but I'm a law professor, I'm finding, for example, that I'm using student RAs a lot less than I would have even a few years ago. Because a lot of the tasks that I've had a student RA do, which was, hey, spend 10 hours clicking around, reading 100 law review articles and figuring out which three of them are useful. I have a little script that I wrote that downloads a bunch of PDFs, sends them all to Gemini, Gemini Flash summarizes them, and then a combination of Gemini Pro and the Claude API will have a little debate about whether or not the law review article is useful for my purposes, and then I get a beautifully formatted markdown document. Again, maybe that'll be solved and I can figure out a different use for my students, but if I can't, that will be a problem because many professions have an apprenticeship phase.
One of the reasons I became a law professor was that when I was in law school, I was an RA for just a really wonderful law professor, and I did nonsense crap work for him that I'm not even sure added value to his life, but I just hung around him for long enough that I learned something about being a law professor, and it became something that I was interested in doing. If the next generation doesn't have that opportunity, that is a problem. That's why I think even if in the long term I am optimistic, because I do think Jevons's paradox tends to work for intellectual work, in the short term it can get really, really messy, which is why I think the people who are really gonna struggle are kind of low agency people, for lack of a better term, people who expect that there is a way that you do things, that you go through the appropriate hoops, that you just grind. I think what AI does is it is an incredible opportunity for people, but it does require a higher level of agency. I think if you listen to Tyler Cowen and how he's thought about the implications of AI in the labor market, I think that's one of his main themes; the kind of "average is over" theme of a lot of his work, I think, applies. And in the long term, I think that's great for society. I think that you make more value that way by empowering high agency people, but it sucks for the people who aren't so high agency in the meantime, because they get left behind. And from their perspective, it's a big betrayal, right? Because they did all the right things and the rug was pulled out from under them, which is where I think a lot of the... Sorry, I'm rambling at this point, so I'll stop, but I think that's where a lot of the political frictions around this technology, we're going to see that in the next 10 years.
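(For readers curious what that kind of RA-replacing script can look like, here is a minimal sketch of the triage loop Alan describes: extract text from each PDF, have a cheap model summarize it, have a model argue for and against its relevance, and write everything to a markdown report. The model names, prompts, and the pypdf extraction step are illustrative assumptions; Alan's actual script uses Gemini Flash for summaries and a Gemini Pro/Claude debate for the relevance call.)

```python
# Rough sketch of a literature-triage workflow: summarize a folder of PDFs, have a
# model weigh relevance to a research question, and emit a markdown report.
# Model choice, prompts, and PDF extraction are assumptions, not Alan's actual code.

from pathlib import Path

import anthropic              # pip install anthropic
from pypdf import PdfReader   # pip install pypdf

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

RESEARCH_QUESTION = "Is this article useful for a paper on AI and legal scholarship?"


def pdf_text(path: Path, max_chars: int = 40_000) -> str:
    """Extract raw text from a PDF, truncated to keep prompts small."""
    reader = PdfReader(str(path))
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    return text[:max_chars]


def ask(prompt: str, model: str = "claude-3-5-haiku-latest") -> str:
    """Single-turn call to the Anthropic API; a Gemini call could be swapped in here."""
    msg = client.messages.create(
        model=model,
        max_tokens=800,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text


def triage(pdf_dir: str, out_file: str = "triage.md") -> None:
    """Summarize each PDF, judge its relevance, and write a markdown report."""
    lines = ["# Literature triage\n"]
    for pdf in sorted(Path(pdf_dir).glob("*.pdf")):
        summary = ask(f"Summarize this law review article in about 150 words:\n\n{pdf_text(pdf)}")
        verdict = ask(
            f"Research question: {RESEARCH_QUESTION}\n\n"
            f"Article summary:\n{summary}\n\n"
            "Argue briefly for and against relevance, then end with USEFUL or NOT USEFUL."
        )
        lines += [f"## {pdf.name}\n", summary, "", verdict, ""]
    Path(out_file).write_text("\n".join(lines))


if __name__ == "__main__":
    triage("papers/")  # hypothetical folder of downloaded PDFs
```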

[33:38] Nathan Labenz: The fact that a certain class of people... I mean, I think we've already seen this in the last, I don't know how many years, with sort of the fact that so many kids are coming out of college and can't get a job that really allows them to pay off their student debt in any sort of reasonable way. The general sense that like, I did what I was told to do, I played by the rules, and somehow I'm still getting screwed. Like when that hits a certain level.

[34:08] Alan Rozenshtein: And therefore we should burn the entire system down.

[34:11] Nathan Labenz: It's a tough thing for people to stomach. And yeah, I mean, that pressure... I don't think burning the whole system down is necessarily the right answer, but I'm at least like quite sympathetic to those folks. And I'm also like not unmoved by the idea that like this is a high-class problem, but increasingly I'm also like, yeah, it's not that high-class of a problem. And a society has to take care of the big middle class, for lack of a better term, that isn't going to be an outlier relative to the system, but is going to do what the system expects them to do. If that can't work anymore, then, you know, you've got like a big problem. Things do start to come apart potentially pretty quickly. So going back for one second to this kind of legal desert concept and Alan's initial comment that the frontier models are better than the, I think you said, like median lawyer or average lawyer practicing today. And I think that totally checks out. Although I don't have that data from my own personal experience, I can say in the context of pediatric oncology, which I've unfortunately had a major crash course in over the last few months. Fortunately, things are going well. It's been very clear at the hospital on a daily basis that the models are better than the residents. And they really do go toe to toe with the attending oncologists.

[35:32] Alan Rozenshtein: Can I ask you actually a question about that? Better at what? Because like when you said, when I said the frontier models are better than the median lawyer, I always hear Ethan Mollick in my mind when I talk about this, about the jaggedness of it. Because when I say they're better, I mean they're on average better. But in certain ways, they're vastly superior. And then in certain ways, they're completely incompetent. So when you average that out, you kind of get a better. And I would imagine something similar for medicine, too, where on certain diagnostic tasks, or certainly explaining things in more layman's terms, they're vastly better. And again, I've thankfully never had this experience that you're going through, but I have two small children as well. And I can only imagine that in a situation like that, the bedside manner of the resident and the attending and the nurses with small children, that's so important. And so in that sense, I think we're a long way from these models being better. It's the idea that a job is a bundle of tasks, and only some tasks necessarily get replaced by AI. I guess it's kind of how I think about it.

[36:31] Nathan Labenz: Yeah. Well, I think in the hospital, I mean, it is a very different domain. In the hospital, the tasks are, they're grouped into multiple bundles, right? So for one thing, I would say the nurses are at much less risk of competition from the language models than the doctors. You know, the person who comes along and, my poor kid, again, he's doing much better, and he's acting much better. In the early days, he was feeling terrible and all this stuff was happening. It was all very scary. And he could probably tell that we were scared. And he was not easy to deal with at times. So that mostly is like a nurse's problem. And getting him to put the blood pressure cuff on or get his temperature taken, there is definitely a bedside manner component to that, that certainly like the language models are not really touching at all. With the task, you know, it's funny, we've got this like IV tower that kind of stands there all the time. And when the thing hits an endpoint of a medication it's giving or, you know, the IV drip is about to run out or whatever, it starts beeping, the doctors don't know how to use that thing at all. Like they literally can't do it.

[37:38] Alan Rozenshtein: I have had that experience as well.

[37:40] Nathan Labenz: So it's funny how really like the lines between these bundles of tasks are like pretty sharp in the medical context. The things that I've seen for the residents, the AIs are, I'm not seeing too many weaknesses relative to the residents. The one area that I do see the human doctors still having a bit of an edge on is the kind of holistic multimodal assessment of the patient, which I as a parent can do. And if it was my own self and I was of at least sound mind enough to do it, I could do this for myself in the same way I could do it for a kid. But if I write a paragraph or so about generally how he's doing and what we've observed over the last however many hours and put in the test results and whatever, I would say that AIs are clearly better than the residents. And again, pretty much toe for toe with the attendings. There are sometimes when something I say to a language model might cause it to come back with a certain concern, and then I become concerned about it. And where I think the doctors have added value relative to the language model has most of all been saying, I'm just looking at him breathing. I'm looking at his color, and he doesn't seem to be in distress, and I really don't think we need to worry about that right now. That's been the main mode where... And I think it's usually, my understanding of what's going on in language models is yes, they're definitely reasoning, though there are also some aspects of stochastic parroting still on the margin. So I think it's oftentimes like just a particular word or phrase that I use that kind of loads in some concept that now is worrying me, and they can put my mind to rest. Anyway, I don't know what the equivalent of that is in the law. And I'm also, though, wondering what is the equivalent of prescribing? Because we do have the general sense that in law you can represent yourself, right? I can represent myself if I'm accused of a crime. I think I can pretty much represent myself in anything, right? I can certainly like sign contracts for myself without needing to hire anybody. So if I'm thinking about this sort of legal desert scenario, and I'm thinking like, the model is already better than the median lawyer, whatever, and potentially better than that, if I were to clone the lawyer, the closest lawyer in a legal desert, still the model might be better, right? Like, is there a sort of barrier, or is there a place that the legal profession can fall back to, like doctors are presumably going to fall back to prescribing? That would be sort of the thing that, like, yeah, you can, you know, talk to ChatGPT all day, but if you want the medicines, those have to come through me. Is there a version of that in law that will prevent like just every random person from representing themselves with language model backing, or is there not? Or do you think there will be one that will be created?

[40:32] Kevin Frazier: So I think it's important to flag that every state manages its practice of law. So every state has a state bar that dictates who's authorized to actually practice law. You know, typically you have to go to an accredited law school, you have to then pass the bar exam, and then you have to maintain for a series of years continuing legal education in order to represent someone, for example, before court. Then we have unauthorized practice of law statutes. And so this is where each and every state basically forecloses someone from saying, hey, I'm on Craigslist, trust me, I've read every law book, let me represent you at half the rate of the attorney down the street, right? It's that unauthorized practice of law statute that forecloses you from being able to do that. And it's those UPL statutes, as we refer to them, that have prevented things like LegalZoom, right? Like they ran into a ton of hurdles in terms of just doing things like wills and some real estate agreements because you had the guild, the lawyerly guild, defending itself against these new tools. And so there's going to be a lot of friction for a while in terms of tools like, for example, I got to talk to Shlomo Clapper. He started an AI startup called Learned Hand, which, for non-lawyers, is the name of a very famous judge, so it's meant to be pretty funny. But this tool is helping judges, for example, and helping the law clerks who assist judges, write better opinions and write them in a faster fashion. And to your point, Nathan, I think the thing we're going to see ultimately, or the thing I hope we see, is that we use these new AI tools to address some of the instances in which we see justice effectively be denied because justice is so delayed. Most folks don't pay attention to the fact that 95% of all litigation occurs in state courts. And if you've ever had to go before a state court, they are not known for efficiency. You can be waiting months, if not years, trying to get some dispute resolved. And then when you get it resolved, you may have gotten a judge who's just not good at their job, right? Or maybe they were hangry when they were writing your opinion, or maybe they have something going on personally. And the outcome of that dispute then isn't based on the facts. It isn't necessarily grounded in the law to the extent you hope it is. And so we get arbitrary decisions, we get random decisions that, in my opinion, shouldn't be a characteristic of a good legal regime, right? The idea, in my opinion, is that everyone should be able to enforce their full rights and realize their rights. And yet we rely on an adversarial system in which basically, to be blunt, whoever can pay the most money wins. That's really messed up, but that's typically how the law is resolved in a lot of these cases, because whoever can pay their lawyers for the longest can survive, more or less, this adversarial approach. If we instead move to a more systematic, consistent approach to handling the lower level cases, to handling these more basic disputes, the role for lawyers then becomes managing what that legal regime should look like in the first place. Right? Trying to set at a higher level, how should we structure society and structure the incentives such that they align with whatever that community's values are? And so that's the role that I would say our appellate court system plays right now, right?
You think of the US Supreme Court or a state Supreme Court, they get to play the sort of higher level role of how should we shape laws more generally? And that's the role that I see for lawyers in the future, doing that more hands-on approach of thinking through the ultimate ends of the law and making sure that the system is working in a consistent fashion rather than the sort of ad hoc, just hope you get a good judge, flip of the coin scenario right now.

[44:50] Nathan Labenz: I love that vision. I listened to the episode, which is... definitely a Hall of Fame, first-ballot, all-name-team Hall of Fame name for both a judge and a legal startup. Okay, I definitely want to unpack a little bit more what this vision of the future of law looks like. But just let me put you on the spot for a prediction. Do you think we're going to see states pass laws saying ChatGPT can't give legal advice, to protect retail lawyers?

[45:21] Kevin Frazier: I certainly think we're already seeing that some state bar associations have significantly limited the instances in which lawyers can use AI. But on the other hand, we're seeing states like Arizona. Earlier, Alan mentioned that only lawyers can own and manage law firms. Arizona just became the first state that upended that and now allows for non-practicing attorneys, or non-lawyers generally, to own and start law firms. And we've seen states like Texas and Utah, for example, leaning into regulatory sandboxes in which AI tools can be deployed with much greater ease. And as soon as folks start to see there's cheaper lawyerly tools available in other states, they're gonna move their companies to those states. They're gonna handle their disputes in those states, and we're gonna start to see the law filter there. That's gonna be where the pressure emerges from, not from state bar associations waking up one day and saying, you know what? Screw it. Let's just go with the AI. I think it's pretty dang good. But it will be that sort of competitive dynamic.

[46:29] Alan Rozenshtein: Yeah, I would also say, I think it's gonna be hard, especially in this era, to try to stop general purpose chatbots from giving legal advice. I think both from a legal perspective, unauthorized practice of law statutes always raise difficult First Amendment issues, because it's one thing to say, okay, you can't represent yourself as a lawyer who can go into court, and okay, fine, that's one thing. It's another thing to say, you can't talk to someone, someone can't talk to you, about an interesting legal question. That's core First Amendment speech. And obviously there are these like blurry lines you have to draw, but I think that it's gonna be hard to have such a broad limit on the output of AI models, which I think is pretty clearly protected speech. Whose protected speech it is, is an interesting, kind of almost metaphysical question. Models don't really have rights, and the companies, I'm not sure, have First Amendment rights in models that they themselves barely control. I think users and listeners have rights in communicating, but that's kind of an interesting, maybe academic question. And so that's the legal reason why I'm skeptical that you'll have such broad prohibitions. I think also it's just, it's too embarrassing to do that. I think enough people have used these models and understand how useful they are. It's just gonna be such obvious guild-protective self-dealing to go out and say, henceforth, we ban the use of ChatGPT to tell you interesting things about the law in the state of Minnesota. Now, what I do think the compromise is going to be is, look, if you want to do certain kinds of legal transactions, you have to go through a lawyer. And I think this is where earlier you asked about, can't you always represent yourself? It's an interesting question. I actually don't know the rules about this. Certainly if you're too poor to have a lawyer, you certainly can represent yourself. It's an interesting question whether if you're rich enough to have a lawyer, you can nevertheless say, I'd like to go into court and just represent myself in prosecuting this civil lawsuit. I don't know if you can do that. Kevin, are you nodding because you can do that?

[48:23] Kevin Frazier: I'm fairly certain you can. You can represent yourself pro se and just say, screw it, here we go.

[48:31] Alan Rozenshtein: But my question is, and I just want an answer to this: if you're in a civil context and you say, hey judge, I'm gonna represent myself pro se, can the judge say, no, you're not? Because I don't wanna deal with you pro se, and you're not a poor person, so you can afford a lawyer, so I'm gonna make you have a lawyer. I just don't know the answer to that question. It's not something people have really had to think about, because if you were rich, or let's put it this way, if you were not poor, the chances of you getting a good outcome representing yourself were so low that you just paid for a lawyer. The thing about AI is that it changes that equation, right? Even if you're rich, the marginal benefit of a real lawyer is not always necessarily gonna be that high. Maybe you just pay for your $20 a month ChatGPT subscription, or if you wanna be really fancy, your $200 a month subscription so that you can have the pro model and get really good legal advice. So maybe the compromise is going to be that there's a lot more free-floating chat legal advice out there, but the bar associations and the state courts get a little more restrictive on: yes, but at some point in the process, you need a human lawyer, either because they think that actually adds value and provides consumer protection or improves the legal system, or as pure guild protectionism, or, as is usually the case with these things, a mixture of the two. I think you're seeing something similar with medicine and mental health treatment, where it's very hard, I think, to say ChatGPT can't give you medical advice, or we're not going to let you upload your test results or your kids' test results to ChatGPT so you can get a second or third opinion. But we are going to hold the line on: yes, but if you want the morphine, there has to be a human doctor who writes a prescription for that.

[50:12] Nathan Labenz: So I asked Claude, by the way. It says that your right to pro se representation is strongest in criminal trials. There are exceptions related to mental competency, timeliness, and disruptive conduct, and as for standby counsel, judges can appoint advisory counsel over your objection. It's weaker in civil cases, as you suggested, and for corporations and other entities; some circuits have held there's no constitutional right to pro se representation in criminal appeals and certain specialized proceedings, including immigration courts, et cetera. So, as always, it's complicated. Okay, so the vision for the future. I think the point about whoever has the biggest budget tending to win is a depressing reality. And certainly one of my great hopes for AI broadly is that by making access to expertise far more universal, far more accessible, far more affordable, et cetera, lots of things could be better, and a more just society is one of the great promises there, for sure. How do you see that working in practice? I guess one thing that... maybe this is wrong, but when I think about the bigger budget translating to winning, I imagine that being a reflection of maybe too much law. Because what are they doing? It seems like there's just so much law out there. There are so many things I could argue, so many precedents I could bring in, that I could spend hours and hours almost indefinitely. And that to me suggests we might need a simpler system in some ways. But that contrasts with your earlier vision of certainly more extensive contracts, which I also projected into maybe more extensive, or more exhaustive, maybe, is the right word, legislation in the first place. So what does that look like in your mind? Let's say we all have infinite AI lawyers. How does that translate to justice? What does that look like?

[52:17] Alan Rozenshtein: Yeah, so it depends a lot on what the marginal utility curves of extra legal thinking look like. My hypothesis, and no one knows the answer, so take this for what it's worth, which is not a lot, but my intuition, and I'm curious what Kevin's is going to be, is that the reason law has gotten so expensive is this: if you think of law as a kind of combinatorial search space of arguments and precedents, where the question is whether I can find, in these billions of documents, the one sentence that is going to show that my client should prevail in this contract dispute with your client, then largely that search had to be done by humans. Now, obviously, legal tech long predates legal AI. It's at least 50 years old, going back to the dawn of digitizing legal databases. Westlaw and Lexis, which are the main databases lawyers use, are very old companies that used to do everything with paper books, and then in the seventies and eighties they digitized everything. That was a huge deal. And more recently you've had some machine-learning-based discovery tools. But nevertheless, you still needed to lock a lot of human beings in a conference room to do discovery, and those human beings are extremely expensive. Human labor is just extremely expensive. Because the cost of that extra human labor was still less than the marginal benefit of exploring a little bit more of that combinatorial search space, the effect was to increase the aggregate cost of litigation, as Kevin mentioned earlier. Okay, so now imagine a world where you have AIs that are 10,000 times, three or four orders of magnitude, more effective than the current ones, and also four orders of magnitude cheaper, so you're getting something that is, in effect, a million times or more better. That seems totally plausible in the next few years if you look at the Epoch AI log curves and stuff like that. You may get to a point where that actually exhausts the practical combinatorial search space of legal moves that are actually helpful to you. There's just no more precedent to explore. You've read every single sentence of every single piece of electronic discovery. At that point, the arms race ends a little bit, and now there is a natural ceiling on the cost of legal services, because there's just nothing more to spend on. That seems plausible to me. It's also plausible that's not the case, and lawyers will always discover ways to increase the combinatorial search space, and so it will always be more expensive, et cetera. If in 10 years Kevin's very optimistic vision of the democratization of legal services comes true, I suspect it's going to be because we've just exhausted the scope of legal stuff to do. And here I'm arguing a little bit against myself, because now I'm talking myself into your point earlier, Nathan, that maybe law is a bit more like dentistry, where at some point your teeth are just clean and they can't get cleaner, and so I don't need more dentistry than that. I don't know. And the problem is we're trying to predict these dynamics, and these dynamics are all compounding.
Tiny differences in what you think the rate of improvement is, versus the rate of cost reduction, versus how much the legal search space grows, can lead to massive changes in your predictions over the next 10 years. Which is why I think there's a lot of uncertainty in trying to predict the effect of AI on law or medicine or computer programming or investment or whatever the case is.
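For readers who want the shape of that argument pinned down, here is a minimal toy sketch, not anything from the conversation and with every number invented: parties keep buying more "search" of the legal space while the marginal benefit of one more unit exceeds its marginal cost, and once the per-unit cost collapses, the binding constraint becomes the size of the useful search space, which puts a natural ceiling on total spend.

```python
# Toy model (all numbers invented) of the dynamic described above: parties keep
# buying "units" of legal search while the marginal benefit of one more unit
# exceeds its marginal cost, up to the point where the useful search space runs out.

def units_of_search_purchased(cost_per_unit: float,
                              total_useful_units: int,
                              base_benefit: float = 10_000.0,
                              decay: float = 0.95) -> int:
    """Buy search units while the (decaying) marginal benefit covers the cost,
    stopping early if the useful search space is exhausted."""
    units = 0
    marginal_benefit = base_benefit
    while units < total_useful_units and marginal_benefit >= cost_per_unit:
        units += 1
        marginal_benefit *= decay  # each extra argument or precedent helps a bit less
    return units

# Human associates: expensive per unit, so parties stop when the next unit of review
# stops being worth $500, but the aggregate spend is still large.
human_units = units_of_search_purchased(cost_per_unit=500.0, total_useful_units=200)

# An AI that is four orders of magnitude cheaper per unit: the binding constraint
# becomes the size of the useful search space itself, which caps total spend.
ai_units = units_of_search_purchased(cost_per_unit=0.05, total_useful_units=200)

print(human_units, human_units * 500.0)  # ~59 units, ~$29,500 of spend
print(ai_units, ai_units * 0.05)         # hits the 200-unit ceiling, ~$10 of spend
```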

[56:07] Kevin Frazier: I'll just add that if you look at a civil procedure textbook, you'll see that the way litigation currently works is basically a series of very complex procedural steps. And everyone always has at their disposal a number of motions they can throw out there to delay the process further. Some of those can be in good faith, right? You want to challenge whether the litigation should proceed to another step because perhaps the other party hasn't actually made any valid legal claims, or perhaps you want to challenge the source of information for different legal claims, and so on and so forth. So it's a lot of procedure, a lot of process. What I think can really start to reorient things, as you were teeing up, Nathan, is if we move toward outcome-based law, where we change the orientation away from how many steps we can march through to resolve this one very narrow dispute and toward: both parties want to see X happen. And now our agents, who have been trained on our incomes, our preferences, our aspirations, our professional goals, and so on, can autonomously act on our behalf to continuously update whatever agreements we've reached with other parties or other corporations to achieve that end. That, to me, is the very optimistic and very sort of sci-fi version, but something I see as eminently possible. That's the outcome I think we may eventually work toward, which is to say, let's make sure the law is oriented toward what we actually want to see, and not just assume that more procedure or more process is better. In many ways, this is what Professor Nick Bagley has called the "procedure fetish" of lawyers: our answer for trying to make everything feel fair is to give people more opportunities to speak up. But usually it's not a representative sample of folks who actually show up at those opportunities to speak out, or get involved, or throw gum into the cogs of the system. So how do we actually achieve what we wanted from the outset in passing that law? That's the sort of outcome orientation I think we could achieve if we lean into this.

[58:31] Nathan Labenz: So I guess I don't really know what we're trying to accomplish in some of these contexts. For starters, going back to the Learned Hand episode of Scaling Laws, one thing I was struck by there, in your description of all this process and the fully exhaustive set of things one might do to represent a client, is: geez, I feel bad for the judges. Listening to that episode, I was very much struck that the judges are in a similar position to doctors today, where I think they're just overwhelmed by stuff, by and large, and welcome the help. That's been my sense of how doctors typically feel. They're like, I've got hours of charting to do when I get home, so if something can handle that, that's an easy win. And if you can come prepared to be a better patient, for lack of a better term, in the management of your own health, that's a great win for me too. I've seen some skepticism, but I really have not seen any hostility or sense of threat in my experience in the medical system, and I do think a big part of that is just because they're overwhelmed and they know it, so help is welcome. It seemed like that was the vibe the judges have too. But now I'm wondering, okay, we've got one vision here where every corner case of an agreement is articulated in advance. And this seems to line up with, and I'll preface this by saying I don't really have a great command of these terms or a deep understanding, but in prepping for this I did some research and hit on a study showing that GPT-4, which already shows that the work is dated, that's just how it is in these spaces a lot of the time, was more of a strict formalist, which was contrasted with the human judges, who were described as more legal realist. Correct me, but I think that basically means GPT is following the letter of the law, and the judges are doing what the Supreme Court is often criticized for doing, which is making the decision it wants to make and then justifying it however it wants to justify it. But I'm torn on which they should be doing, because at least historically, I don't think we've written laws so well that following them to the bizarre conclusions one might reach, if you were going to be truly formalist about it, is obviously a great way to go. At the same time, you've obviously got room for bias and all sorts of problems if you just let people exercise their judgment too freely, and that's why we have a whole legal system, so it's not just people dictating how things are going to go with no checks on whatever they want to say. And then we've got the Claude constitution. I think Amanda Askell has made really interesting points about why they don't want to just give Claude a long series of rules it has to follow, for multiple reasons, but the most compelling one she articulated is something like: if the model knows it could do something that would be better for the person it's interacting with, but it has to follow these rules, she worries it might generalize in a problematic way. They've seen this in reward hacking contexts and other experiments, where if the model reward hacks and starts to develop some sort of self-conception as the kind of thing that reward hacks, then it becomes more evil in general.
And so she thinks a very analogous problem would be if a model knows it really could do something better for you, but it follows the rule and doesn't. She's worried that could become a problem of, what kind of person does that? And how does that kind of person behave in other situations? And obviously, just following orders doesn't always age well. So I don't know how to tie that all up into a question, but it seems like we have a desire for edge cases to be all spelled out and everything to be in black and white, so that we know in advance what we're getting ourselves into, and maybe we just haven't been able to push that to the extreme where it can actually work. But we're definitely getting a different signal from Anthropic right now, where they're saying, we don't even want to try that. What we want to do is get our AI to have the best possible judgment it can have, so that it knows how to be good even in highly ambiguous situations. So I guess, do you have a sense for which way the law ultimately goes?

[1:02:52] Kevin Frazier: I want Alan to take the first stab at the Claude constitution answer here because he's got some deep philosophical views. I do want to briefly hit on the use of AI to precisely, and perhaps perfectly, try to read the law as it's written, in a sort of clear formalist mentality, like you were mentioning, Nathan. The issue with that is one of my favorite questions that always gets raised in any good statutory interpretation exercise: imagine you're going to a park, and there's a sign right at the entrance that says no vehicles allowed. Okay, so is a drone a vehicle? Is a stroller a vehicle? Is a scooter a vehicle? Is an ambulance a vehicle? So on and so forth. There's so much ambiguity, even when the drafter of that rule may have thought, oh, vehicle, I've nailed it; clearly I was only referring to a car, and therefore everything is settled. That's why we've always had some variance from perfect formalism, or perfect textualism as many lawyers would refer to it, which just says: whatever the law is as written, we're going to apply it. We just don't have the words for every scenario. Now, obviously, AI can assist with coming up with many more words and many more laws, theoretically. But that's not the sort of world I think any American wants to live in. We have a common law system here, not a code-based system. If you want to experience a code-based system, go live in the EU, where they attempt to govern and regulate every kind of behavior more precisely. Whereas in the US, we've tolerated some degree of ambiguity, based on the view that we need an iterative, emergent approach to discovering how it is we actually want to govern ourselves. And so the trick for AI, and the trick for legal adoption of AI into adjudication, is figuring out how to use a system that can create more words and resolve textual disputes with greater consistency, while still allowing for that emergent process to continue. Because I think for Alan and me, and for a lot of folks, a world in which you feel like, okay, if you step on this crack, you are automatically going to receive a penalty in the mail within five days and have it taken out of your bank account, that's a scary world that I don't think any of us want to live in. So maintaining this balance, as you alluded to with the Claude constitution, of higher-level rules that guide us generally, and then enforcement of those rules, is a really tricky issue that could be the subject of a whole legal seminar. Maybe we should just get one on the books, Alan.

[1:05:52] Alan Rozenshtein: Yeah, I think that'd be fun. So let me say two things: one about the use by judges, and then the broader Claude constitution question. I was lucky in that I had the opportunity to go and talk to some Minnesota state appellate judges. These are state courts, but they're appellate judges, so they're a little bit removed from the absolute crush of the trial docket. And one thing that surprised me was how open they actually were to potentially using these tools. There's a lot of skepticism, which I think is appropriate, and some hesitancy, but there's not the sort of tomato throwing you might expect. And these are judges, so they tend to be on the older side, frankly, so you can imagine a kind of natural aversion. There wasn't that much of that. Again, I think if you just spend an hour talking to the $20 version of Gemini or Claude or ChatGPT, you really quickly realize that, whatever the long-term societal effects, this thing is pretty useful. So I do think we're going to see a lot more of it. How judges use it is tricky. And the kind of research you mentioned about GPT-4, again, it's unfortunate that these things get out of date pretty quickly. We need a better research pipeline so these evals come out within a month, not within a year and a half. But I would also say that I did not take that research to say that GPT-4 is formalist and therefore models must be formalist. It's just that, for whatever reason, that model, in the way it was trained, and it's GPT-4, so there probably wasn't specific legal RLHF in the way there may very well be with these newer models, and certainly with the legal-specific models, gave a kind of more formalistic answer on some corpus of legal questions. But you could have a model that gives a much more functionalist answer, which is less concerned about the specific language of the law and asks, what were the legislators trying to do, and how do we apply that to this question of no vehicles in the park, and should a drone be a vehicle? And I think you're right to view the Claude constitution, to answer that part of your question, as taking a position that in some sense you want reasoning, whether it's artificial reasoning or human reasoning, to operate more at the level of principles than at the level of rules. But the thing I would say is, I would push against thinking about this as a binary. There are no pure textualists in the world. There is no one who is so committed to the letter of the law that they would not consider the purposes of the law, or would not deviate if there was an obvious mistake in the law. No one exists like that. Similarly, there's no one who's such a legal functionalist or legal realist that they don't think the legal text binds them at all. Everyone is somewhere in between, and frankly, relative to what the spectrum could actually be, most people are pretty clustered in the middle. Fifteen years ago, this was reflected on the Supreme Court by Justice Antonin Scalia on the formalist end. He literally wrote a law review article called "The Rule of Law as a Law of Rules."

[1:09:15] Alan Rozenshtein: And then on the other end by Justice Stephen Breyer, who would often start with, this is very complicated, here are 17 factors that I'm using to think through this problem. And they actually went on almost a buddy cop tour of lectures around the country where they would debate in a good-natured way, and it was fun to watch. But what you really realized when you saw this was that they were basically both in the middle: Scalia was on one end of the middle, and Breyer was on the other end of the middle. So I think the lesson from that, and the way I would read the Claude constitution document, is that an intelligence, any intelligence, whether natural or artificial, needs to be able to operate both at the level of principles and at the level of rules, and that a lot of what we think of as judgment, or to use the fancy phrase from Aristotle, phronesis... And I mention Aristotle because, to Kevin's point about my philosophical interest in Claude's constitution, when you read that document, you really have to appreciate that it was written by someone who has a PhD in moral philosophy from one of the best philosophy departments in the country. Amanda Askell understands academic moral philosophy. She has read the Nicomachean Ethics, and at least as I read Claude's constitution, it is footnotes on that document, which is in no way a criticism; I think all ethics should essentially be footnotes on Aristotle. I read her as saying Aristotle was right that it's very hard, basically impossible, to derive any comprehensive set of rules of ethics. You need a real sensitivity to principles, but that doesn't foreclose the use of rules in a particular domain, because sometimes the best principled approach to an ethical domain is to say, it would actually be really helpful to have some rules in this specific domain. And in fact, when you read Claude's constitution, it toggles between high-level principles, there are like 17 of them, quote unquote, in no particular order of priority, and then a couple of rules where no principles are applied. Claude will not create child sexual abuse material. You can have a debate with Claude about the principle; it will not do it. By design, Claude will not help you develop airborne Ebola or something like that, or at least hopefully not, unless it's jailbroken, but then something has gone terribly wrong. It just won't do it. So even there, there is a recognition. So the question to me is not so much, should we do rules or standards, should we do principles or technical rules. It's always a yes-and. It's how do you tune the distribution between those two? And what really excites me about AI is what we're now able to do. People sometimes talk about in vitro experiments and in vivo experiments, and then there's this newer thing called in silico experiments, where you take some part of human life and try to model it in a machine. The benefit is that in silico experiments can be done at a speed and scale that is many orders of magnitude beyond doing anything in the real world.
So one thing that excites me, as someone who's interested in law for law's sake, is that we can run experiments within machine learning models about how a well-developed legal system works, and exactly what the distribution should be between principles thinking and rules thinking, experiments you could never run in the real world. I wrote this Lawfare piece recently about Claude's constitution, and I end it with the reflection that we've been debating this question of rules versus standards in ethical reasoning for literally thousands of years. What's cool about these machines is that we can run the experiments now. And I think we're going to learn a lot in the next few years, not just about machine intelligence, but about human intelligence, because we can now simulate it at scale and tune the dials with precision.
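To make the in silico idea concrete, here is a hypothetical sketch, not anything Alan or Anthropic has built: run the same borderline scenarios under a rules-heavy framing and a purpose-heavy framing of Hart's "no vehicles in the park" ordinance, and measure how often the judgments diverge. The query_model function is a toy stand-in; in a real experiment it would be a call to whatever model you are studying.

```python
# Hypothetical sketch of a tiny "in silico" legal experiment: run the same
# borderline scenarios under a rules-heavy framing and a purpose-heavy framing
# and measure how often the judgments diverge.

RULES_FRAMING = (
    "Decide strictly by the text of the ordinance, 'No vehicles in the park.' "
    "Answer ALLOWED or FORBIDDEN."
)
PURPOSE_FRAMING = (
    "Decide by the apparent purpose of the ordinance (safety and quiet in the park). "
    "Answer ALLOWED or FORBIDDEN."
)

SCENARIOS = [
    "A child rides a tricycle on the path.",
    "An ambulance enters to reach a collapsed jogger.",
    "A hobbyist flies a noisy drone over the pond.",
    "A veterans' group installs a decommissioned tank as a memorial.",
]

def query_model(framing: str, scenario: str) -> str:
    """Toy stand-in for a real model call (replace with your provider's client).
    The rules framing flags anything motorized; the purpose framing flags only
    things that threaten safety or quiet."""
    text = scenario.lower()
    if "strictly by the text" in framing:
        motorized = any(w in text for w in ("ambulance", "drone", "tank"))
        return "FORBIDDEN" if motorized else "ALLOWED"
    return "FORBIDDEN" if "drone" in text else "ALLOWED"

def divergence_rate() -> float:
    """Fraction of scenarios where the two framings reach different outcomes."""
    diverged = sum(
        query_model(RULES_FRAMING, s) != query_model(PURPOSE_FRAMING, s)
        for s in SCENARIOS
    )
    return diverged / len(SCENARIOS)

if __name__ == "__main__":
    print(divergence_rate())  # 0.5 with this toy stand-in
```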

[1:12:39] Kevin Frazier: And just to bring that back to a human law context: I think future generations are going to look back at the level of sophisticated AI tools we have available right now and be flummoxed that we weren't asking our legislators to run proposed laws through simulations of their intended effects and likely outputs. Similarly with judges writing opinions and not asking, hey, find all the ambiguities that are latent in this text before I publish it. They're going to be like, what the hell? You had this ultimate tool at your disposal to catch blatant errors. What were you doing? And so I think this is a great model for folks to follow with respect to that simulation idea.

[1:13:28] Nathan Labenz: One of my mantras for AI that you're calling to mind is: AI defies all binaries. So your response there, that it can't be all one or the other, I'm yet to find a good exception to that general expectation. How does this simulation idea work? I get really excited about in silico experiments when it comes to science. Can you sketch out what that looks like in law? Do we start with a bunch of scenarios and what we think the right outcomes should be, and turn them into an eval, like we turn everything else into an eval? Or am I living in one of those simulations right now, perhaps?

[1:14:09] Kevin Frazier: I think one of the more promising things is forcing legislators to actually do their job, which is difficult, and which means saying: what do you actually want to have happen with this law? Take something like NEPA, the National Environmental Policy Act, I may get the exact name wrong, everyone just calls it NEPA. This is the law that has famously flummoxed the ability to build affordable housing in a lot of communities, because it creates a lot of veto points for individual stakeholders to find a way to gum up the wheels of new development. And my hunch is that we could have forecasted some pressure points that might be exploited by bad actors, or perhaps by well-intentioned actors who are just more expressive than others, and asked, huh, is this actually resulting in the sort of pro-environmental, pro-climate outcomes that the drafters of that legislation were hoping to achieve? So now, if you ask legislators, hey, what are your explicit goals with this legislation? What problem are you actually trying to solve? Then you create evals based on that: okay, have we seen a reduction, for example, in carbon emissions? Have we seen a reduction, with respect to, let's say, a congestion pricing bill, in the number of cars going into the city? Those are all things we can evaluate and map out. So that's the forcing function to me: hey, if you're going to propose a law, what is the problem you're actually trying to solve? And then that becomes the core source of information.
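As a concrete illustration of what such a legislative eval could look like, here is a minimal sketch with invented names and numbers: the legislature states a target for a measurable outcome, and the eval simply checks whether the observed value moved at least as far as promised.

```python
# Hypothetical sketch (names and numbers invented) of turning a bill's stated
# goal into a checkable eval, along the lines described above.
from dataclasses import dataclass

@dataclass
class PolicyEval:
    name: str             # e.g. "congestion pricing pilot"
    metric: str           # the outcome the legislature says it cares about
    baseline: float       # measured value before the law takes effect
    target_change: float  # promised relative change, e.g. -0.15 for a 15% cut

    def passed(self, observed: float) -> bool:
        """Did the observed value move at least as far as promised?"""
        required = self.baseline * (1 + self.target_change)
        # For reduction targets lower is better; for growth targets higher is better.
        return observed <= required if self.target_change < 0 else observed >= required

# Example: a made-up congestion pricing bill promising a 15% drop in vehicles
# entering the city center.
congestion = PolicyEval(
    name="congestion pricing pilot",
    metric="daily vehicles entering the cordon",
    baseline=700_000,
    target_change=-0.15,
)
print(congestion.passed(observed=580_000))  # True: roughly a 17% reduction
print(congestion.passed(observed=660_000))  # False: only about a 6% reduction
```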

[1:15:48] Nathan Labenz: What should we talk about very briefly in closing? I like the idea of essentially red teaming a bill. I'd never been very involved in a red-teaming-the-bill process until SB 1047 last year; there was a lot of red teaming of that, and it was a pretty interesting process. And I do think everybody ended up agreeing. I've become friends with Dean Ball, who led the initial critique of that bill with his writing online, and even he came out toward the end much happier with it than he was at the beginning. So I think everybody agreed that putting it through its paces, and really gaming out how different actors are going to respond to it and whether we're really going to achieve what we want, was a pretty successful process. To think that could be done in general sounds like a very promising enhancement to our legislative process. Good luck talking members of Congress into that; we'll see. I don't know how aligned they are. The first misalignment we may encounter might be between the elected officials and their constituents. But nevertheless, I like the idea. Maybe just in closing, what other kinds of big ideas do you think people should be thinking about more? One that I've floated a few times is: what new rights could we introduce in virtue of the fact that we now have scalable intelligence to apply to all sorts of problems? You have the right to remain silent; if you can't afford an attorney, one will be appointed for you. I think you should have a right to ChatGPT or similar. And I imagine my ideas there are limited by my lack of exposure to the real problems in the system. So I'd be really interested to hear what other rights you think people ought to have in virtue of AI existing, or what other big ideas, on the level of run detailed simulations of your laws before you pass them, you think people should be thinking a lot harder about than we have so far.

[1:17:42] Alan Rozenshtein: Yeah, I'll go first, and then Kevin, you have the last word. On the right to use these models, I think the First Amendment is probably the right kind of legal home for that, and I think you already have it. I think this will come up at some point, but I don't think courts are going to have much difficulty saying that people have the right to access these tools, in the same way they have the right to access libraries and read books. That's the negative right, which is to say you have the right not to have the government forbid you. Then there's the corresponding positive right, which is the right for someone to give you compute, essentially. And there are all sorts of interesting arguments about various kinds of public options. They're often discussed as public options to build models, but in some sense public options to give people compute credits, compute budgets, might be interesting. You could write a sci-fi story, or I think I could get Claude to write a pretty interesting sci-fi story, where in the future the currency is compute, where the main credit people pass around is the credit to compute, because that is so valuable. And to your point, Nathan, about how AI dissolves all binaries, I tend to agree, with the exception of one: the binary between there is a limit to how much compute is useful in the world and there is no limit. I think AI shows that there is no limit, and so in that sense AI is at an extreme, not in the middle. To me, though, and Kevin sometimes rolls his eyes at me because I think he thinks I'm too credulous about this, the question of AI welfare, which is to say the welfare of these models and the legal implications of that, is something that is very easy to dismiss but is going to be an increasingly important issue. Either because these models, as an actual cognitive or metaphysical matter, will become increasingly sentient, and my brain tends to break when I think about that, but I have trouble ruling it out. Or, more importantly and more immediately, because as these models become more personable, as people develop more relationships with them, as the memory of these models improves... The more I talk to Claude, there's a point at which Claude knows me better than my wife does, which is totally plausible because I'm just talking to Claude constantly about everything. Combine that with real-time voice and video, so that suddenly your AI chatbot has an avatar you can interact with, and then certainly once that AI avatar is embodied in robotics, which I think is going to happen, it'll take a while, it may take longer than we think, but I'd be shocked, really shocked, if in 10 or 15 years we don't have very convincing real-time AI companions that people get extraordinarily attached to. What sorts of rights will people demand for those models? I think it's something that could cause real societal cleavages, because you're going to have groups of people who are really committed to the idea that these models are, for many practical purposes, sentient entities that we are enslaving, or at the very least potentially treating very poorly. And then you'll have other people, and I think this may actually be a source of really interesting religious cleavage in the next 20 or 30 years, who think that the very idea of models as sentient is a literal affront to God.
It's a kind of idolatry to which the only correct response is a Dune-style Butlerian Jihad. And there's going to be this messy middle of people who are just like, I don't know what's going on, I just want a chatbot. I think that's going to be a very difficult transition at the legal level, certainly, but especially at the social level. And I think people who say, nah, that's not going to happen, that's science fiction... I don't know. I think they're fooling themselves.

[1:21:25] Kevin Frazier: So I'd say the negative right you were referring to is generally encapsulated within the idea of the right to compute. And if this is the first time you're hearing about the right to compute, it's actually been enacted in Montana, and there are bills in Ohio and New Hampshire and, I believe, a couple of other states advocating for it. I believe this is one of those major rights, Nathan, that folks are going to be clamoring for sooner rather than later: basically saying that we need additional protection against the state infringing your access to computational tools of all kinds, not only AI, but whatever's coming down the pike. There should be a higher threshold before the government limits your ability to express yourself or to receive information via these new tools. That one is obviously very important in a world where compute is a scarce resource. The other one that we keep hearing about, but too few people are discussing, in my opinion, is data. I think the right to share, meaning the right to share your data as you see fit, is a really important right. Because right now, if you want to share, for example, your kid's educational information with a new AI tool provider, because you want to train the best AI tutor out there so that your kid, who perhaps learns differently, or who you just want on a different curriculum, can make use of that AI tool, FERPA, the federal privacy law that applies in that context, is a real burden to being able to share as much data as possible, as regularly as possible, without literally signing things on a yearly basis. And I think that if individuals want to share their data, and want to make that a frictionless process so that they can train better AI for their own personal uses, that should definitely be a thing. Because we don't all have the ability, for example, to go to that fountain of youth place, whatever it's called, that all the super healthy people are going to, where they download all of their data, get all these scans, and then send it to some AI outfit to recommend personalized health outcomes. That's awesome, but only wealthy folks can go spend a week in Florida, or wherever that is, downloading everything about themselves. The rest of us are just left with whatever Walgreens told us at that last checkup. So let's make it as easy as possible for folks to use their data as they see fit. That, to me, is a promising outcome under the right to share idea.

[1:24:00] Nathan Labenz: What about things that we maybe should be thinking about restricting the government from doing? Because I do have the sense that we're probably already in that age. It's been, whatever, 10 years since Snowden, and I'm wondering, if there were another Snowden, what would they be telling us? I would have to guess that we've got some sort of LLM dragnet phenomenon going on somewhere. And there's this adage that everybody's committing a felony a week or whatever, and it's just a question of security through obscurity: nobody's really targeting you. But that could change very quickly. And we're starting to see, obviously, weaponization of the Justice Department, et cetera, et cetera. Should there be new restrictions on what the government can do with AI?

[1:24:50] Alan Rozenshtein: Yeah, I think that's hugely important. I actually wrote a piece for Lawfare a few months ago, and gave a speech at a law school, called "The Unitary Artificial Executive," all about this idea that one of the effects of AI, and near-term AI, not speculative AI, is to hugely increase the power of the executive branch and the president in particular. Both because of all the additional abilities AI gives the president, like perfect enforcement, surveillance, creation of propaganda at massive scale, all that sort of stuff; and also because it gives the president, him or herself, a much greater ability to control the executive branch, which is millions of people and very hard to control just as a bureaucratic management exercise. But if you have an AI that is trained on the president's preferences, that's injected at all levels of the bureaucracy, reading all the emails, reading all the texts, you can have a situation where the president really controls the executive branch in a much more practical way than he's ever been able to, whatever his legal authorities might be. And that's at the very least complicated. It might have some benefits, because elections should have consequences: the people voted for person A and not person B, and presumably the executive branch should reflect that. On the other hand, again, I'm calling in from Minnesota; it's not hard to imagine the potential abuses of that. So I think one of the really important issues in the next, let's say, decade, because the government is slow to adopt technology, although it does inevitably get there, is going to be how we, on the one hand, encourage the government, because I'm fundamentally an AI optimist, to use AI to really improve government services and increase state capacity, which is something our government has not always been good at, and which I think is part of the reason we're seeing some fraction of the societal discontent, the kind of burn-it-down mentality, this feeling that we're paying a bunch of taxes and the government's not doing anything useful. AI can really help with that. On the other hand, you don't want to supercharge the government through the use of AI, and figuring out that balance is very tricky. For me, I suspect it's going to be the main thing I think about for the next few years as an academic. But it's far more important for the legislators, the bureaucrats, the company executives who are selling these tools to the government, and the executive branch officials to figure this out as well.

[1:27:08] Kevin Frazier: And just quickly, I'll add that I think there's some real concern here around updating the Fourth Amendment that we need to pay attention to. Some folks have realized that, hey, in theory, the government now has an incredible ability to tap into basically every system for detecting and picking up audio. If you're speaking publicly, just hanging out, saying whatever, talking to your friend, the idea that all of that audio information can now be hoovered up, analyzed, synthesized, and then studied by the government to see who's planning what, who's thinking what, who wants to do what, all without real notification: that's tremendously scary to me, just thinking about that sort of pervasive surveillance. That is the issue I'd really flag. And on the positive side, I would encourage governments to really lean into regulatory sandboxes when it comes to testing new AI systems, and to err on the side of saying, let's try to deploy this tool, and make sure folks have notice that we're doing so and a means to provide feedback. But let's not be afraid of literally reinventing the wheel and improving our processes and improving our laws.

[1:28:19] Nathan Labenz: The rule of law, and law generally, has never been more important, and the intersection with AI is obviously ramping up and likely to become one of the big questions of our time. Timelines are short. Scaling Laws is the podcast where you can find these two, get lots more of their thoughts, and go much deeper into everything that's going on at the intersection of AI and law. Kevin Frazier and Alan Rozenshtein, thank you both for being part of the Cognitive Revolution.

[1:28:45] Kevin Frazier: Thanks for having us. Thanks, Nathan.

