The Consumer Rights Revolution with Joshua Browder of DoNotPay
Exploring AI in law with Joshua Browder, discussing AI regulation, ethics, and the future of robo-lawyers in consumer advocacy on The Cognitive Revolution.
Video Description
Nathan sits down with Joshua Browder of DoNotPay, the world’s first robot lawyer. They chat about the current state of AI use in law, what policymakers should consider in regulating AI, and the ethics of robo-lawyers for consumer use. Please note we are rereleasing this episode after catching a technical issue in the version released earlier.
The Cognitive Revolution is a part of the Turpentine podcast network. Learn more: Turpentine.co
TIMESTAMPS:
(00:00) Episode Preview
(03:22) Joshua Browder and the story of DoNotPay
(04:50) The value delivered by an AI lawyer
(07:17) What are the legal and financial injustices that motivate Joshua’s work?
(12:46) How does the AI negotiate with a Comcast chat agent?
(16:28) Consumer security
(17:12) Sponsor: Omneky
(21:08) The ethics of DoNotPay
(22:27) Should AI need to disclose that it’s AI?
(24:47) How much human intervention is necessary in AI tooling?
(27:15) Where does the burden of proof lie for AI regulation and ethics?
(31:47) Advice to policymakers
(35:45) AI Liability
(38:05) AI Arms Race
(43:46) Productive bot interactions
(45:00) AI-powered arbitration
(49:28) How the consumer experience might change two years from now
(51:45) AI Superapps
(55:25) Moats in AI
(57:11) How much money does DoNotPay save people today?
(59:30) Consumer and enterprise timelines given AI
(01:06:30) What will AI replace?
(01:09:34) Impact of AI in different industries
(01:11:10) The legal field’s view of AI
(01:15:50) Deflationary factors in running DoNotPay
(01:25:00) What are non-obvious ways to save money?
(01:27:55) Joshua’s favorite AI tools
(01:28:17) Would Joshua get a Neuralink implant?
LINKS:
https://donotpay.com/
TWITTER:
@jbrowder1 (Joshua)
@DoNotPay (DoNotPay)
@CogRev_Podcast
@labenz (Nathan)
@eriktorenberg (Erik)
SPONSOR:
Thank you Omneky (www.omneky.com) for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.
NEWSLETTER:
More show notes and reading material released in our Substack: https://cognitiverevolution.substack.com
Full Transcript
Joshua Browder: 0:00 My mission is if the big companies are using AI, consumers should have access to it too. I think the whole financial system is stacked against the average person, where if you get a wire transfer, it charges a fee unless you have a lot of money in your account. To cash a check, they charge a fee, all of this stuff. And so what angers me the most is these large banks and financial institutions not paying people any interest, but also at the same time charging huge amounts of fees to ordinary people. And that's also a good job for AI, where you can link your bank account, and AI goes in and finds all the areas where people are being ripped off. Our AI said, I'm not an AI. I'm just a consumer that knows my rights. And so the AI is lying to keep it going. And so it really is an arms race to keep getting these successes for consumers.
Nathan Labenz: 0:48 Hello, and welcome to the Cognitive Revolution, where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week, we'll explore their revolutionary ideas and together we'll build a picture of how AI technology will transform work, life, and society in the coming years. I'm Nathan Labenz, joined by my cohost, Erik Torenberg. Hello, and welcome back to the Cognitive Revolution. Today, I'm talking with Joshua Browder, founder and CEO of DoNotPay, the world's first robot lawyer. Josh has been called by the BBC the Robin Hood of the Internet. He and the DoNotPay team have been helping consumers fight for what's rightfully theirs for a number of years already and have been aggressive early adopters of LLMs as they've recognized that nobody has time to spend filling out forms, waiting on hold, processing paperwork, or negotiating for refunds and discounts, making all of this activity the perfect job for AI. In this conversation, we cover a range of topics, including how DoNotPay works today, how that's changing with the implementation of advanced AI systems, how larger companies are responding and using AI in their own right, what an AI powered consumer rights arms race dynamic might look like in the medium term, why Chinese consumers have so much higher standards than Americans, what sorts of regulations and liability standards might make sense for AI, and the future of high cost sectors like education and medicine, as well as the future of special interest protectionism and employment in general. We also touch on whether we may soon have a right to an AI lawyer, the case for universal basic income, how DoNotPay is stretching its own AI budget, and lots more along the way. I think this conversation is a great opportunity to begin to game out and plan for possible midterm future scenarios. 
And while we definitely shouldn't become overconfident or attached to any particular possibility, as the saying goes, plans may be useless, but planning is essential. Sharpening up my own midterm worldview and helping you develop yours, while hopefully improving AI discourse in general, is the mission of the Cognitive Revolution. So if you're finding this valuable, I would ask you to take a moment to share the show with a friend who you think might also benefit from a deeper, more multifaceted understanding of AI. With that, I hope you enjoy this conversation with Josh Browder of DoNotPay.
Nathan Labenz: 3:20 Joshua Browder, welcome to the Cognitive Revolution.
Joshua Browder: 3:25 Thank you for having me.
Nathan Labenz: 3:26 Very excited to have you. I have been a follower of your company, DoNotPay, the world's first robot lawyer, for a number of years now. And I think you're one of these entrepreneurs who's in an extremely interesting position, where you were building this business before the modern AI moment really took off. But obviously, you've seen the potential of it and really rushed to embrace it and already gotten yourself out to the frontier of what is possible. So maybe for starters, can you just introduce the company, your original vision for it, and how that has grown over time as AI has come onto the scene?
Joshua Browder: 4:04 Yeah. So I started the company DoNotPay in 2015, almost 8 years ago, so I'm getting old. And I started it because I got a bunch of parking tickets. I have a British accent. I moved from England to study at Stanford. I will use the excuse that everyone drives on the other side of the road, but in reality, I was a terrible driver. But I learned something remarkable, which is if you know the right things to say, you can save a lot of money. And I created the first version just with templates to help my friends, and I could never have imagined that it would be so popular, appealing hundreds of thousands of tickets. And that's what made me realize that this idea is bigger than just tickets, and I should expand to all of consumer rights. So I spent the past few years working on these templates to help people with consumer rights. But now what's really exciting in the AI era is we're going through AI, and that's increasing the value of the disputes we can fight and also our success rate.
Nathan Labenz: 4:58 Give us a little bit of a sense for how that worked in the early days, because I think the contrast is super interesting. And I've been in a similar spot with my company, Waymark. We help people create video content. And the hornet's nest of rules based approaches that we had created up until a few years ago has now been washed away entirely by language models that we fine tune for the purpose. It's sometimes hard to communicate to people just how much of a nightmare it was before and what a delight it is now. So I'd love to hear your take on that transition.
Joshua Browder: 5:28 Yeah. So rules based approaches can have some success. Otherwise, no one would do them at all. With consumer rights and very simple legal disputes, it makes sense because the law is very formulaic. So imagine with a ticket, you can pick a defense, for example, the signage was not correct, and it would pick a template letter that pulled the San Francisco signage law, insert that into the letter, and send it off to the right place. Similarly, if the in-flight Wi-Fi doesn't work, we had a great template that would send the most aggressive letter ever to United Airlines to get your money back, and the letter was scary enough that the big companies like United would pay up. The problem with templates is that they get old quickly. If you submit 10,000 templates to a single source, they'll start ignoring them. And so what we started to do was randomize the templates. But even then, you could kind of tell that it was a template. And, also, it wouldn't be versatile. Users might not pick the right template. They might pick the wrong template, all of that sort of stuff. Whereas with AI, you can just match natural conversation to what should actually be done on the back end.
Nathan Labenz: 6:37 Your comment there about the companies beginning to react to the template letters as they start to detect that they're getting a lot of these that look the same is, I think, incredible foreshadowing of a lot of the stuff that I really want to get into with you, which is how the world evolves as AI comes online, and not just as people start to use it, but then as people start to react to that. And there are going to be all these escalations and dynamics that I find hard to predict, but I think you probably have one of the best viewpoints on trying to anticipate where that's all going to go. People say sometimes, and I think it's definitely true, that life is paradoxically a lot more expensive when you're poor. And there are all these petty injustices that people have a hard time fighting. What are the ones that you are just super outraged about and super focused on as a result?
Joshua Browder: 7:31 Yeah. So at a high level, DoNotPay is a robot lawyer that helps consumers fight for their rights, and we have over 200 use cases since we started with parking tickets many years ago. Things like getting refunds, canceling subscriptions, negotiating bills, getting out of bank fees. I think the whole financial system is stacked against the average person, where if you get a wire transfer, it charges a fee unless you have a lot of money in your account. To cash a check, they charge a fee, all of this stuff. And so what angers me the most is these large banks and financial institutions not paying people any interest, but also at the same time charging huge amounts of fees to ordinary people. And that's also a good job for AI, where you can link your bank account, and AI goes in and finds all the areas where people are being ripped off. I think we have a big problem in society of concentrated benefit but spread out harm. So what I mean by that is Wells Fargo can charge 1,000,000 people a $10 fee. They make $10,000,000, but the people who are being charged $10 can't afford to fight back because the money amount is so small, and everyone is so busy with their jobs and lives that they don't have time to do so.
Nathan Labenz: 8:39 Yeah. Interesting. So I kind of think of you as the AI Robin Hood. I don't know if I am the first to use that label for you, but how do you feel about that? Is that a badge you would wear with pride, or would you revise the notion somehow?
Joshua Browder: 8:56 That's my goal. It's something hard to live up to. But I think the great tragedy with AI and any new technology is that it typically gets in the hands of the most powerful first. You see that large corporations are using AI. You see it actually in sentencing guidelines for judges, where the reports are generated using expert systems and some AI already, literally putting people in prison. But for consumers, there's not really much being done. And so my goal is to give power to the people and fight back. And interestingly, we have AI products where it negotiates bills with utility companies like Comcast. I know you submitted a Comcast dispute, and our bot will go on Comcast chat and negotiate with them to get a bill reduced. And it's obvious that on their end, they're using AI. So the 2 AIs are talking to each other. And so my mission is if the big companies are using AI, consumers should have access to it too.
Nathan Labenz: 9:53 I think it is really maybe emblematic of the future that we're all headed toward. So I go into DoNotPay as a customer. And I'm thinking to myself, Okay, what are the things that are annoying me most financially right now that I might otherwise just live with? And shout out to Xfinity for topping that list at the moment. I've had the same plan for whatever, however long, the couple years that I've lived in this house. I bought the best internet package that they offered at the time. I was pretty sure it was unlimited. And all of a sudden, somehow I've started to get these $15 incremental overages when I use more data. I guess maybe they changed the terms, maybe I didn't understand the terms, whatever. So now I'm, Alright, it's time to renegotiate. A couple of things stood out to me about the beginning of the process. One is that you're using a chat like experience to guide me through that process. It's not a traditional, here's a web form where you're going to fill everything out, but it's a little bit more of an interactive, dialogue type experience just to even get the ticket submitted. So I'd love to hear a little bit more about that. And then I'm really curious to hear, okay, now what's going to happen behind the scenes as you guys go engage Comcast for me?
Joshua Browder: 11:10 Another transition with AI is we're looking to go from reactive to proactive. So what I mean by that is you submitted your bill to Comcast. We'll get a refund because Comcast, we're very successful with. But you have to submit the bill. In the future, what we want to do is you just wake up one day and the AI says, hey, I saved you $50 because I noticed you were overcharged. So we're going more in that proactive direction. In terms of how it works, it will go to Comcast and chat with them, and a GPT-4 bot will go in and negotiate a bill for you. And I think we have your account because you sent it to us. So what I'll do is I'll actually send you the video of what goes on behind the scenes with Comcast on the back end. We've been doing this since GPT-3, and interestingly, GPT-4 pushes back a lot harder during these bill negotiations. So with GPT-3, Comcast would give some nominal offer. They would say, okay, I'll give you $20 off. I'm sorry you didn't know about the overage charges. And GPT-3 would say, great. Thank you so much. And GPT-4 now says, no. That's not enough. I want more. And we managed to get a few hundred dollars back, a 10 times improvement in success rate from GPT-3 to GPT-4. Interestingly, we had another utility company we were doing this with, and they said, we don't accept AI disputes. And our AI said, I'm not an AI. I'm just a consumer that knows my rights. And so the AI is lying to keep it going. And so it really is an arms race to keep getting these successes for consumers.
Nathan Labenz: 12:53 Okay. A lot to unpack there for sure. And the ethical question's probably the most interesting, but I'm also a big fan of the practical. So starting there, when this happens, are you running an in browser thing where you sort of set up your agent to click the buttons and type into the inputs?
Joshua Browder: 13:15 Yeah. So we have a Selenium bot, and this is another rules based versus AI thing. In the past, we would manually tie the bot to DOM elements on a web page and things like that. But even the Selenium bot is now using AI to improve. So if Comcast changes their website layout slightly, it still works and things like that.
Nathan Labenz: 13:36 The breadth and the challenge of that product alone seems pretty substantial, as I'm sure you're well aware. Right? There are a number of companies that are trying to make your browser co pilot, or your agent that you maybe kick off in the browser that runs sort of in the background. We've talked to a couple of different AI agent companies on the show and have a couple more, I think, coming up as well. You're in a sense in that space, although with more tailored use cases, it seems. How do you think about where you want to draw the line on your product there? Because presumably you're not trying to be the everything agent, but it sounds like you're also headed in that direction.
Joshua Browder: 14:21 I've worked on my company long enough to know one has to make it idiot proof. And the problem with these agents is that ordinary people in Middle America and Ohio aren't going to download a Chrome extension and have an AI agent. And if they are, it will just be their phone or the operating system, and it'll be so simple. And so what we think about DoNotPay is the problem rather than some fancy AI agent. The problem is that once a consumer downloads a Chrome extension and logs in and all of this stuff, they might as well just chat with Comcast themselves. The whole point of them using this technology is that they don't have time to do it, and so it should just work for them in the background. And that's what we're trying to get to with the proactive approach. I'm very skeptical of the AI agent companies because I think that eventually it will just be built into the phone or the Apple Glasses coming out soon and things like that. So we do want to do a lot. We want to do all of consumer rights, which is a huge challenge because America is a very broken country. But at the same time, we're not going to help people order pizzas.
Nathan Labenz: 15:27 Prior to AI, I've done a bunch of things with Selenium bots and web scraping. Again, at my company, Waymark, we have this challenge where business owners typically show up at our product. And I don't know why nobody else really seems to do this, but a huge challenge is just the friction of creating your account, setting up your profile. People are going to create videos. They need to import their color palettes and their imagery and whatever. So we've, for a long time, tried to automate that process, and it's also been transformed by AI from rules based to much more AI based. So I definitely understand that stuff. But one of the challenges I always find so hard is authentication. When you want to do something in the Selenium bot, how do you log in? I noticed when I went through the Comcast thing, I gave a screenshot of the bill, which was interesting. And I assume you're OCRing that and parsing, structuring that data out, but I then also gave my account ID and even my password. And I was kind of like, well, I've followed this guy for a long time. I trust him. But I wonder how you think about that security layer. First of all, is that something that generally people are willing to do, or do you get hesitation there? And then is there a better way than this kind of handoff of the password and account controls to the bot?
Joshua Browder: 16:55 DoNotPay would not have been possible 10 years ago. 10 years ago, we didn't have Plaid. Consumers weren't really comfortable sharing any of their passwords. But now I think the culture has changed, and so a lot of people are comfortable sharing their passwords because it gets results for them and saves them money. In terms of forces that we have against us, you're right. We have a lot. These companies do not want to give up the refunds, and so they have all these authentications, taps, and things like that. One tailwind we do have, though, is we're focusing on legal rights. So there were a lot of companies in the past era that focused on automating customer service disputes. One company that comes to mind is something called Paribus, which the Ramp founders previously founded. And what it did was it helped automate credit card price protection policies. And it was hugely successful. Consumers would log in, provide their bank info, Amazon info, and it would scrape all of their purchases and initiate price protection with their Chase card. The problem is Chase only offered price protection because they thought no one would use it. Then AI comes and automates it, and they decide to shut down all of price protection on all of their cards, and Paribus had to sell because of that. What we focus on in contrast is US law. And because we focus on the law, once we have a certain amount of information, we can almost guarantee we submit a correct dispute. So one example would be credit report disputes. There's a great law. It's called the Fair Credit Reporting Act, which says that if you submit a dispute and it's signed and it has the address and it's correctly formatted, the company has to accept the dispute. So our only issue is how can we get that data with as little friction as possible, and that's what we're working on. Maybe it can scrape your email. 
Maybe it can get public records so you don't even have to give us your last 4 of your social, and so there's not even that trust element there. But that's our superpower where we have the legal system behind us, which has existed for hundreds of years, and laws aren't going to change overnight just because the company is unhappy.
Nathan Labenz: 18:59 So is that also the case in the Comcast example specifically? Do I have legal rights there, or am I just in a test of wills?
Joshua Browder: 19:07 No. So the legal rights would be FTC statutes. It would say, I'm not sure if you said why you're upset with Comcast. If you did, it will convert that into FTC laws. And FTC laws are so broad. It's like any sort of misleading stuff, it is against the law. And so we'll say, you've broken this misleading FTC law, and Comcast received this, and it's just some agent or maybe even some AI. They don't want to have to deal with this, so they just press the refund button.
Nathan Labenz: 19:35 How do you think about it today, when your GPT-4 bot is going in and talking to Comcast? As you said, the laws don't exist for this yet. Right? And so people are exploring all these different scenarios. Right? You've got, I think, ChatGPT as the anchor example, where while it will chat with you about anything largely, they have branded it and even named it, and certainly sculpted its behavior in such a way where you get pretty frequent reminders that you're dealing with an AI, and it's not trying to be your buddy. And I think the interaction there is pretty clear. Then we see all these other things where we're like, that could be kind of weird or problematic. Right? We had, for example, the CEO of Replika on an earlier episode. And I think they also take a very responsible approach to making clear to users what the deal is here. But it's definitely more fraught territory, because what is the future of AI friends going to look like? That's anybody's guess. You're doing something that I would normally think might be outside of what I would consider appropriate. However, given the power dynamics, I'm a lot more inclined to consider it, where you're not telling Comcast you're an AI. As you said, you're even willing to lie about it if it comes to that. So how do you think about the ethics of that, first of all, just today?
Joshua Browder: 20:58 I think that these disputes are so simple. It really saves everyone time. When DoNotPay first launched, we launched city by city, and when we launched in Los Angeles, NPR asked Los Angeles, what do you think of a service like DoNotPay helping to appeal parking tickets? And the head of the parking ticket division of Los Angeles said, in the best spin I could have imagined, we actually like it because people write such gibberish in their parking ticket appeals. At least when it comes from a templated service, it's standardized and easy to process. And I think that ultimately it's positive sum. As I mentioned, these companies are already using AI. I don't think they're telling consumers if there's AI on their end, and so consumers should have the option to use the same, and it can save everyone time. No one has time to argue over $20 or $100, not even the big companies, and so it can make society more efficient. For consumers, though, one really does have to disclose that AI is being used, and there's actually a new legal theory. There are all these class actions being filed in California against chatbots because they're saying that AI is secretly reading the messages in customer support chats and things like that. And so it'll be interesting to see where that goes.
Nathan Labenz: 22:13 Yeah. One of the very simple candidate regulations that I've thought generally positively about is one championed by Yuval Noah Harari, who has said the first regulation should be that AI must disclose that it's AI. And I guess I wonder, first of all, would you support something like that, even if you then had to abide by it? And how do you think that would start to shape the landscape if a rule like that were dropped into place?
Joshua Browder: 22:43 I think it's surprising that it's coming from someone so smart, because I think that it's impossible to do that. If you use autocomplete on iMessage, that's coming from AI, but should that have, in brackets, this was written by AI? I think AI will be so ubiquitous in our lives that any sort of disclaimer is just holding back the technology. If I were to support it, I would say that it should only apply to communication with consumers, and communication to businesses shouldn't require disclosure. Any sort of regulation, I've learned the hard way, will be abused by lawyers who come up with these theories, and they'll just go crazy, and it will just hold back innovation, because regulation is easy to comply with for the big companies and the Apples and Googles of the world and OpenAI. But smaller companies, not just DoNotPay, but other customer support startups and things like that, will have a hard time. So I'm not a big fan of regulation in general. I don't think there's a problem where consumers are being tricked by AIs. And to the extent that they are, with scammers impersonating their relatives calling them up, the criminals are not going to comply with the laws, and it will really just hurt businesses.
Nathan Labenz: 23:54 So I think your autocomplete point is a good one. I definitely take that. And the fuzzy line of what's autocomplete versus what you actually write is getting fuzzier all the time as the autocomplete suggestions get longer and more robust. So I think that's apt. I wonder if maybe there could be a line still drawn around whether or not there's a human in the loop to push the send button. We have these systems, for example, where a human agent sits there and gets candidate messages, but still is operating the controls, versus something where there's just no human in the loop at all, and you're just totally interacting with a bot. Anything there where you think you could delineate something meaningful?
Joshua Browder: 24:41 Yeah. So I think for serious issues, that definitely has to be the case. There's a lawyer right now in New York federal court who's gotten in huge trouble because he used ChatGPT, I'm not sure if you heard about this, to generate one of his briefs. And ChatGPT made up cases, and he just trusted the AI blindly and filed the case. And it turned out all the cases were fake, and the judge is not pleased, and so he might go to prison or lose his law license. And so I think blindly trusting AI and signing off on it in serious issues like federal court is a bad idea. But I think for minor issues, it can help us a lot, and every time someone has to disclose something, it almost reduces the benefit of AI. I also think that no one really cares. They just care about the results. I'm sure Replika now has disclaimers, and ChatGPT also has disclaimers. This is an AI. Don't trust anything it's saying. But people still just jump over the disclaimers and stuff. And DoNotPay even has disclaimers as well when you sign up. It says we're not a lawyer, things are generated automatically, things like that. But people just want results. People in Middle America just want to save on their utility bill. And so all of this overintellectualization is kind of an academic problem versus a problem that real people care about, in my opinion.
Nathan Labenz: 26:03 Yeah. I think that's certainly for today, I think that's fair. And I can imagine a sort of even more dystopian future of the web where it's like, now I have to accept all cookies and accept that I'm going to talk to AI, it's on every website. And it's just sort of like, I'm numb to those things already. And I still, though, do feel like as I extrapolate this farther out, the dynamics become very hard to predict. This is a really interesting, I wouldn't say we're in this conversation pattern quite yet, but a conversation pattern that I often observe in the world is people are like, hey, shit, we just created AIs that are better than the average human on most tasks, and they're getting close to expert performance on a lot of tasks. And that seems like a highly unpredictable technology that could play out in a lot of ways, good and bad. And then other people are like, well, unless you can give me a specific way that it's going to be bad, then I don't want to hear anything about it. Where do you think the burden of proof ultimately should be on questions of AI regulation and AI ethics? Who owns the ball or who should own the ball?
Joshua Browder: 27:12 I completely agree with you. It's not enough to just rest on our laurels and say everything will be great, and I think we should ban AI for evil use cases. So one example, lots of debt collectors are using AI right now. They just have these prerecorded AI voice messages, and instead of being able to call a thousand people a day, they can call a million people a day powered by AI. I think we should ban AI in debt collection. I think AI should be completely banned in sentencing people. In the UK, the country where I'm from, expert systems are being used to give people speeding tickets. So now in London, it's not just that you drive past a speeding camera. If you go between two cities too quickly, the AI says, wait a minute. You got to the second city too quickly. So it's not even just driving past a camera anymore. It's the entire road network with AI giving people tickets. I think that's inappropriate. So I think we need to come up with all of the evil things that happen in society and create regulations to stop AI being used for evil. And this is not a new thing. In 1991, they passed a law against robocalls, and at that time, prerecorded voices were a big problem, where someone would record a human, and that recording would be dialed a million times. But now we need to update that law to say AI phone calls from debt collectors, and sales calls and things like that, are also a problem. So I think just ban for evil.
Nathan Labenz: 28:39 Could you just use OpenAI to do debt collection interaction, or would they shut you off, you think?
Joshua Browder: 28:46 So there are tons of services. They've thought about a lot of these use cases. I don't think they explicitly ban debt collection at the moment. I know they ban lobbying, and that's another evil use case. Imagine someone pretending to be a politician phoning you up with Hillary Clinton's voice or something like that. So I know that is banned, but I'm not sure about debt collection.
Nathan Labenz: 29:07 But the stack there. So maybe they can use OpenAI, maybe they can't. They certainly could go, especially if they're not too concerned about a commercial license, which I suspect most are not, grab a Llama model and fine-tune it and power their dialogue that way. And then use just a commercial text-to-speech generator, spin up a little bot, unleash it, and see what happens. How far do you think they are in terms of their sophistication right now?
Joshua Browder: 29:45 I think a lot of them... there are 10 different ways to get access to GPT-4. You can go through Microsoft. You can go through OpenAI. You can go through lots of third-party aggregators of APIs. So they're probably using the best stuff out there. The open source stuff is catching up. I don't think we can rely on gatekeepers and platforms to stop this, because the technology is improving so quickly. I think we have to rely on the law to stop it. I think the political thing is a perfect example of where you definitely want it saying it's AI. So I do agree with you. I don't want to be too relaxed about this issue. There are definitely areas where it has to say it's AI. Otherwise, it's not good.
Nathan Labenz: 30:28 You're in this such a fascinating place because you're fighting against all of the sort of regulatory state bullshit and unfairness that falls on the general public. But then when you imagine relying on the law, relying on government, it seems pretty clear that at best, we're going to get an every so often update to the actual law, which then means we're almost certainly going to end up in this kind of agency run regime where there will be an authority established, or maybe that authority gets divided up across some existing agencies or whatever. But ultimately, there's going to be some bureaucrat that's going to have to kind of create and implement rules and sort of prosecute the law, I guess. Do you have any sense for how you think that could go such that it could actually be kind of responsive and serve the public? I mean, sounds extremely hard. Not that that means we have any way around it, but what would your advice be to the policymakers as they try to think about getting that right?
Joshua Browder: 31:37 I think they should regulate the extremely large businesses so that they do the hard work for us. And if we go back to robocall legislation, that's what they do. Robocalls have been such a big problem over the past few years that Congress has said, okay, it's the responsibility of the telephone companies to solve this issue. And so now they have something called SHAKEN/STIR, which means that AT&T is responsible for identifying the caller. So when you see on your iPhone, it says spam likely because it's probably a robocaller. That's the law that's behind that, that forces the company, the big telephone company, to take responsibility and solve this issue. So I think the same could be true for the Microsofts and OpenAIs of the world, where they're the ones being regulated to stop this bad action happening. But I think for startups and consumers and businesses, they should be allowed to use this technology to benefit them without having to get a license because it's just going to be regulatory capture otherwise. No one should have to hire a lawyer or a lobbyist to innovate.
Nathan Labenz: 32:39 Yeah. I think I agree with that. And I would even say it seems OpenAI largely agrees with that. There have been a lot of cynical reads of their calls for high-level regulation, but it seems they're not really trying to squash the consumer or the indie hacker use case either, despite accusations to the contrary. In practice, do you think that looks like a monitoring system that sits on top of your usage? If I'm an OpenAI user in the future, maybe I use it knowing that there's some additional AI layer that assesses my account at some scale and monitors for any number of harmful uses, where if it detects that I'm generating a lot of a certain kind of content, I get flagged. Is that kind of what you expect the future experience of that to be?
Joshua Browder: 33:37 I think the way America works, it's the best country in the world, is it's all about liability and lawsuits. So if you shift some of the liability onto the platforms, they will self regulate. So if you say that if they know or should have known about something going on with their big AI models, then they will be much stricter in stopping the debt collections and things. And I imagine the lawyers will keep them honest with endless lawsuits and class actions to make sure it happens. And then beyond that, I think some sort of licensing for very large, very sophisticated models combined with some liability shift would probably stop 95% of the bad use on their platform.
Nathan Labenz: 34:20 I think one of the common patterns right now which we see even among academics in unpublished results, but I imagine must be going on on a more kind of invisible level as well, is people will go use ChatGPT or even GPT-4 to create their training dataset that they'll then use to fine tune their own model, which they can host and run privately. And I don't want to give anybody bad ideas here, but increasingly, it doesn't take that many examples to do a reasonably good fine tuning. If you have a model that's already kind of chat or instruction tuned, a hundred examples, a thousand examples, it's probably in that range where you can fine tune for a lot of kind of more specific tasks. So if I go to OpenAI, and let's say I do, I come up with some scenario, maybe it's a jailbreak, maybe it's just kind of not super obvious what I'm doing, and I create a hundred examples, and then I go fine tune my own model and prosecute some scam campaign or whatever. Would you extend that liability back to OpenAI for kind of having enabled me in the first place, or how would you think about drawing that line?
Joshua Browder: 35:39 I think that it's only effective if AI is being used at every step of the implementation. So to do the voice calls, you need a synthetic voice company that has really good synthetic voices. So they are also liable. You have to have a telephone dialing system with AI, so the Twilio-style dialing company is liable. And if you create gatekeeping and liability at every step, it becomes a lot harder, and you'll have to sign up. There'll be know-your-business or know-your-customer checks to find out what you're doing and things like that. It won't stop the truly evil people, because they can just fine-tune their own models. But one thing I'll say about scammers is they're not very sophisticated. That's why they're scammers. If they were so smart, they could make money in the economy the usual way. So they just want the low-hanging fruit to get things done very quickly, and that's why I think it would stop 95%. Because anyone who knows how to fine-tune their own model and train it and do all of this engineering work is probably doing well building their AI company or working at an AI company. They don't have to scrape the bottom of the barrel with these scams. So just making it harder for dumb people to scam has a surprisingly big impact.
Nathan Labenz: 36:56 Yeah. Interesting. Okay. Yeah. Certainly no doubt that if you have demonstrated any aptitude with AI today, there's plenty of honest work out there for you. You don't necessarily need to be scamming people over the phone.
Joshua Browder: 37:12 I think big companies need to create tools to protect us from AI. So in the same way it says spam call likely, if you pick up the phone and there's an AI on the other end and it's a fake AI voice of your grandparent, maybe the phone should vibrate and AT&T should protect you, saying, this is not who you think it is. And they need to build their own AI model to do that. And so there's a lot of things they're going to have to work through the system, but I think putting the responsibility on these big companies is always the way to go.
Nathan Labenz: 37:40 So how does this kind of shape up in the next couple of years as you mentioned this term arms race. For many people, that is the most scary thing, certainly at the level of US China military rivalry. An AI arms race sounds like maybe the worst thing ever. An AI arms race between even just the biggest companies, a lot of people think is a real disaster. And then there's also this AI arms race potentially between kind of consumers and the companies that are overcharging us, nickel and diming us, so on and so forth. What do you think that arms race ends up looking like? I've got DoNotPay, and I've got your agent, and they've got their agents. How does this kind of settle into some stable equilibrium?
Joshua Browder: 38:28 I know we've spoken about a lot of the dangers of AI, but I think it'll be hugely positive and hugely deflationary. So if you look at customer service spend, it's estimated to be around 10% of total corporate costs. And so that's the connection between the price of a hamburger and ChatGPT, where McDonald's can take orders fully with AI. United Airlines can lower their plane tickets, in theory, by 10%, even though planes have nothing to do with AI, because they don't have to hire all these customer service agents. So things will be cheaper and more efficient even without AI agents negotiating with each other. And that's great, because no one likes to pay high prices. Lower prices are good. And so I think that's the largest positive benefit, where things are just more efficient and simple. You compare it to China, and China is using a lot more AI right now in customer service, and it's just much, much better. You call an agent in the US and they're like, hold on, wait, let me just take down your account number, and it takes an hour to get something done. In China, you don't really have that. Things just get done more quickly. People live simpler, cheaper lives because of AI. So I think that's the equilibrium, where things are just a lot cheaper.
Nathan Labenz: 39:43 Can you give me a little bit more on the China reality today? Because I'm contrasting that in my head against the recent regulatory statement that they came out with, which seemed like a sort of chill from a large language model perspective. There were some provisions in the, it wasn't the final rule, as I understand it, but the guidance from the CCP was: as a large language model provider, you need to have a good handle on even just the training data that you're using, to ensure that it is reliable, good-quality content, not violating people's copyrights, etcetera. And the general response to that seemed to be, China is not racing into this LLM future. But you're describing an LLM reality that is seemingly quite different from that.
Joshua Browder: 40:30 Yeah. Well, this is an interesting thing, which I'm sure you guys know. This AI stuff has existed for up to 2 years now, kind of good LLMs, GPT-3, GPT-3.5. It was only when ChatGPT came out that it captured the whole world's imagination, and it became the AI hype. But there have been really great machine learning and AI models out for a while, and now everyone is focused on these ChatGPT-style LLMs, where there are other use cases of AI that can be beneficial. I think that China is concerned with LLMs from a content perspective, where it tells people the government in China is not good or communism is not good and things like that, and that's what they're concerned about. But they're certainly not that concerned about corporate efficiency where disputes are being processed. And as a result, Chinese consumers have much higher standards than US consumers. In the US, we're very angry and vocal, but ultimately, we still accept companies like Planet Fitness, where the only way to cancel is a signed letter or going into the store. That would be unacceptable in China. And so because of how efficient everything is with technology, I think they have a lot higher standards.
Nathan Labenz: 41:46 Coming back then to the US, or the West, but the US specifically. As our equilibrium potentially starts to emerge, you could imagine a lot of different scenarios. You could imagine a new liability regime, but still everybody interacting with each other one to one, my bot talks to your bot. You could imagine companies escalating to, as you said toward the top, a stance of: we don't accept AI disputes. And you could imagine a law that says, well, that's not legal, you must accept AI disputes. You could imagine new sorts of consumer standards that might be voluntary, that companies could opt into to say, we will accept AI disputes because we know that people want that. I think in some sense, DoNotPay is maybe a little bit too Robin Hood-branded to be the one that brings all the companies on board. But you have kind of an interesting position from which, I wonder, you could even start to create something new like that. Can there be some sort of standard? Or can there be some sort of shared expectation of how these things are going to happen, so that it's not just bots bumping into each other until they both just stonewall? I mean, in some limit, one version of events would be: my bot refuses to accept their offer, and their bot refuses to give me anything, and so they just talk to each other until one ends the conversation. How do we get to a scenario where all these bot interactions are actually productive?
Joshua Browder: 43:25 I think we need open AI regulations in the same way we have open banking in Europe and even in the US, where you can have AI transact with other AIs, and there's a technical standard where they can communicate. Imagine: all the banks rip us off because they give such low interest rates. Imagine if AI was shifting your money from one low-interest-rate account to a higher-interest-rate account in a safe way, and it had a power of attorney to do that on your behalf. So I'm definitely for technical standards to allow different AIs to communicate. Maybe your AI could say, I want to move my money or close down this credit card because the fees are too high. And the bank has an authentication AI to make sure it has the authority to do that, and there's a way for them to communicate. So I think that more open systems and technical standards would stop the problem of AIs butting heads. Rather than going through this janky web conversation that was designed for humans, they can go through a much more efficient, maybe API-based, framework to get it done.
Nathan Labenz: 44:34 How about a possibility of an AI powered arbitration? I wonder if there's another model. Today, right, you either, in the limit, you end up in a court, or if there's some provision, you may end up in human powered arbitration. Do you think there's a role for a third party, or obviously it could be multiple third parties that you could choose from, that could sort of say, look, this arms race has to stop somewhere. We are going to be the provider that the companies and the consumers can come to agree on in advance, then come to when there's a problem. And our AI will render judgment, and that's the service that we'll provide.
Joshua Browder: 45:17 Yeah. In the US, landlords hold people's security deposits, and then it's almost a negotiation after someone moves out whether they'll even return it. And that's one of our biggest use cases at DoNotPay. One could imagine AI arbitration for security deposits, where at the beginning of a lease, someone deposits the money with the AI, and then when the lease ends, everyone can submit their evidence and the AI decides. And that's a great, simple use case that's probably within the realm of current technology: deciding where the security deposit should go. That's another area where we might need pro-AI laws, because the market is not going to shift to that pro-consumer stance without some sort of law. But that's a great example of where AI could be helpful.
Nathan Labenz: 46:02 It seems like there could be some hope for a market driven equilibrium shift there. You go to the grocery store, right, and food in general is legal. There are certain labeling standards that are maybe required, but the default assumption is if I want to start making... My dad makes pickles, right? And he's always assumed that if he makes enough of them, he can go sell them to a grocery store with relatively minimal hassle. But then there are these certifying bodies if he wants to be organic, if he wants to be this or that. There's these stamps that you can go out and try to earn. But it sounds like you don't really see a path to a future where Comcast might say, we support the do not pay AI arbitration standard so that you can sign up for us with peace of mind.
Joshua Browder: 46:55 I'm perhaps more cynical than you guys, but I don't think so. California had to pass a law that said you have to have the option of canceling subscriptions online. The fact that they had to pass that law shows how backward these companies are. If they could get away with it, they would say you have to fax a cancellation in. In fact, up until a few years ago, that was the case with some subscriptions, where you could only cancel via fax or mail. AppleCare was actually one example, ironically. Unless there are some laws, the companies will try anything to get out of their responsibilities. Even going back a long time, you see on every mattress, it's like, do not tear off this label if you're a retailer, because the retailers would stuff the mattresses with rags to save money. So I think these big companies will do anything to save money and cheat people, and there has to be some sort of law. The market is good in some areas. Ultimately, the best products win. But on the lower end, there is a lot of shady stuff going on without the law.
Nathan Labenz: 47:58 You certainly know a lot more about that than I do, and your cynicism might in fact be quite correct. I want to get to the deflationary part too, because I think that's a really key dynamic, and you mentioned it. But maybe before going into that, I also am really interested in how you think the consumer reality or experience is going to change, even beyond the dispute moment. I think about, you go to the grocery store, you get everything on your list, and then at the end, you've got the candies in the checkout aisle. And it seems like the AI layer on everything can really supercharge that for businesses where we're all going to have these highly personalized and probably often reasonably compelling, do you want to add this on or impulse buy this opportunity, just coming at us all the time. And I wonder, do you think that that's just something we learn to live with or learn to tune out? Or do we have an AI layer of defense for that, that sort of filters that stuff for us? I don't know what it's like to be a consumer even two years from now. So I'm wondering if you can help me get a little window into that.
Joshua Browder: 49:14 I think that we'll have our own AI inside of Apple AR glasses, and it'll be able to filter out stuff that we don't want. But a lot of personalization, people love. I love consumer rights, but I do think some of this privacy legislation is actually anti-consumer. I think consumers love personalized ads. People love seeing stuff. They love it so much that there are always these conspiracies that say, oh, I was talking about buying something and it just appeared as a Facebook ad. In reality, Facebook is not recording you. It's just that it knows what you want so well that it shows you what you want. And I think giving people what they want is a good outcome, but I do think that they will also have AIs to filter out spam, and maybe some guardrails around the edges to stop the trash. But I think it'll be a positive. If I see stuff in the checkout line that I like... I love sparkling water. If there's a sparkling water can, that would make me happy.
Nathan Labenz: 50:14 How do you think people will interact with this? You mentioned the upcoming Apple glasses device a couple of times. That could be potentially a huge part of it, I can imagine. Do you guys have... I know you have the website. You have the app. I looked for a plugin. I didn't see a plugin. I wonder if you imagine a future where people continue to come to the DoNotPay website or app directly, or if there's this AI super app layer that you plug into... going back to the China example, right? They have WeChat as a super app for everything. Do you think ChatGPT or similar has a chance of becoming that, and then having all the things, DoNotPay included, plug into it? Or do you not really buy that unification theory?
Joshua Browder: 51:03 We're a big believer in being platform agnostic, in part because these companies like Apple are also evil, and they're very monopolistic. And so if you're too reliant on them as an app, then they can shake you down and steal all your money by canceling your app. I think it's about being idiot-proof. The best AI will just be everywhere in everyday life, or just work in the background. When the Apple stuff comes out, we are ready to go. I said to our engineers, we have something. Day one, we're going to launch using AR. It's going to be able to scan everything at the supermarket and have cheaper prices at nearby retailers hovering over the products. So we're already thinking about new platforms and things like that.
Nathan Labenz: 51:41 How do you try to get ahead of the dynamics that seem inevitable to shift pretty quickly? Right? If everybody all of a sudden starts walking into the grocery store with your app running on their Apple AR device, then every price mismatch is subject to the price match policy. Presumably, they shut the match policy down, but maybe not. Maybe everybody just gets matches all the time. And that's maybe one of the ways it drives a lot of deflationary pressure. But I just don't know how to start to get a handle on that. How do you try to project what's likely to happen and get a read on where things are going?
Joshua Browder: 52:26 Yeah. There's the drop-in-the-bucket argument. I think Gartner or one of these research companies did a survey of how many people have actually tried ChatGPT in America, and I think it was 12%. 54% of people have heard of it, but only 12% have tried it. And so the people that try these technologies will have an unfair advantage, which is sad for the people not trying it, but good for the people trying it, because it means that things in the world won't change against them so quickly. And so I think the cynical view is that those that use technology, the median consumer who's quite good at using some technology but obviously not creating it or knowing how it works, they'll do very well. The people that don't use smartphones or don't know how to use and embrace these things will be left behind for a few years until the rest of the world catches up.
Nathan Labenz: 53:16 That's not that long of a time though, right? I mean, it seems like... when you say that, I often think back to the Anthropic fundraising deck that apparently was real and got leaked and written about where they say that they think that the companies that fall behind, or basically that would be almost all companies, right? Companies not on the frontier in the '25, '26, that's 2025, 2026, as in two to three years from now timeframe, they say, if you fall behind in that window of time, you may never catch up, which sounds insane, but also may be realistic in the scenario that models start to train their successors and there could be... the moats really might get deep at that point perhaps. That sounds like a very different world. It doesn't sound like to me we're in a decades long transition. Do you think this is actually going to take a lot longer than that Anthropic deck suggests that it would?
Joshua Browder: 54:16 I think so. The more things change, the more things stay the same. There are a lot of old people in America just demographically, and we're a geriatric society, and it's increasingly so. And old people, they don't want to embrace new technology. They're like, I've seen it all. My vision doesn't really work. And there's still a lot of people in America who use checks for everything. There's restaurants that accept checks and things like that. And so I think there's going to be a huge contingent of society that don't see the benefits because they don't embrace the technology. And when they touch government services and things that do use AI, they'll benefit from that, but they might not benefit in their personal lives.
Nathan Labenz: 54:55 How much money does DoNotPay save people today, and how do you think that's going to evolve? At some point, presumably, the median per capita income in the US is whatever, thirty-some thousand dollars, maybe up to forty. If you can start to save people a couple thousand dollars a year consistently, you're talking about saving people 5%, 10% of per capita income, and it doesn't seem like people leave dollar bills of that size on the sidewalk for very long. So my picture is, you probably can get there. You're probably going to be able to save me a couple hundred dollars on Comcast. You probably can do that across a number of different things. Why wouldn't everybody pick up that free money? Do you think people are just that slow, or the friction's just that high? What am I missing?
Joshua Browder: 55:47 I'm not sure what US GDP is. I think it's in the trillions, maybe 3 trillion. I could be wrong. It could be 1 trillion, but it's certainly over 1 trillion. So 10% of 1 trillion is 100 billion. And I think 100 billion a year of savings, at a minimum, is hugely ambitious. And you're right. Things would change a lot. I'd probably be in jail if I was saving people 100 billion. They would find a way to put me in jail. Once again, it's the drop-in-the-bucket argument. Right now, DoNotPay saves people mid nine figures a year. But to scale that exponentially, we would face some serious pushback. So it's really about the people knowing how to use us and getting an unfair advantage. The advantage that DoNotPay has, though, is we're very horizontal. If we were just doing parking tickets, the impact is so concentrated that the people we're against have a big incentive to stop us more quickly. But if you divide what we're doing by 200 use cases and divide that by thousands of companies we interact with, the impact is still strong, but it's more of a poke at the moment. The benefits of AI, in terms of the companies, will accrue to the capital holders. So inequality will be exacerbated by AI, because these companies will be able to build huge outcomes with very few people. And that's great for the 1,000 people that work in Menlo Park, but not so great for the tens of millions of people being put out of a job. On the other hand, I think AI will make the average person more powerful. We live in a pay-to-play society from a legal standpoint. Those who have more money tend to win their legal cases more. Those that have access to expensive professionals like doctors and therapists... that takes money. And so AI can provide those services for free or much more cheaply. So I think the solution would be...
actually, I'm a big capitalist, but I think you have to redistribute some of the gains from these 1,000-person AI companies, or maybe even 10-person AI companies, that get all of the value from putting much of society out of work. And I do think we will see some sort of basic income, where someone could just have a decent standard of living paid for by the government.
Nathan Labenz: 58:07 Give me your expected timeline. In the AI safety world, people always talk about your timelines. I'd love to hear not your doom timelines, but your consumer trajectory timelines. Because it sounds like, in your mind, even a couple of years from now, not everybody is going to be using it.
Joshua Browder: 58:25 Well, on the consumer side, enterprise will use it and lay off people. I think that within five years, most white collar jobs... I would say maybe the ballpark, 30% of white collar jobs will be eliminated by AI. Everyone thought it would be the blue collar jobs first, but actually, those ones are protected, I think, for five years, and then we'll start to see more blue collar jobs with self driving cars, stock shelf AI robots, and things in the physical world catch up. So maybe within 15 years, maybe 60% of jobs will be eliminated. That's my ballpark. I'm curious what you guys think, though.
Nathan Labenz: 59:07 Yeah. My crystal ball gets pretty foggy beyond a few months out. I do try to venture predictions, because I feel that in this line of work I can't really shirk that responsibility, but I always caveat them with the fact that I find a lot of these dynamics hard to predict. But I think you're basically right. We talked about how Wendy's just made this announcement around their Fresh AI, which is going to take your order at the Wendy's drive-through. I think it's a great example of how, today, the job description as it's traditionally been done, it's always just been humans, right, that are applying for the jobs. People like variety, and there are some resilience benefits perhaps in having these roles span different kinds of tasks. From what I see, I've never actually worked a fast food job myself, but from what I see, you typically have somebody who's taking orders and filling sodas and probably takes a shift on the grill, and they kind of rotate through and share duties in whatever sort of way. It does seem to me that you're right that the order taking seems like it's ready soon, if not now. Some of the statements from the Wendy's and the Google execs powering this, I thought, were pretty remarkable. One of the Wendy's guys says that the Fresh AI bot, I believe this is a direct quote, is better than any of our human order takers. They also say you won't even know you're talking to an AI. Both of those things really stood out to me. So that's however much percentage of the work at a Wendy's, and it seems like it's pretty clearly going to be taken on by AI. They can't yet man the grill, so the jobs will change in nature, but it seems almost certain that there will be fewer of them. I really don't know how that could play out any other way.
And it seems like 30%, if I had to just to take your number, are there 30% fewer people working at Wendy's in the next couple of years as all the order taking is automated?
Joshua Browder: 1:01:16 My positive outlook is there are a lot of jobs that will be created as well. So that's why it's 30% and not 60%. Even a podcast engineer, maybe 20 years ago, that didn't exist. Prompt engineers, although that will be a very small part of the economy. I think the entertainment sector will explode. If everyone is getting basic income and is out of work, they need to be entertained. YouTube and OnlyFans and these things will just explode even further, especially in the AR era, gaming and things like that. So maybe the future will look like being a professional gamer, and there'll be more people doing fake work inside of a game, mining rocks in RuneScape, the AR version of RuneScape. AI will create fake work in games for people. Who knows? But it's an exciting area.
Nathan Labenz: 1:02:08 Yeah. Although even in Minecraft, for example, there have been some interesting papers just in the last couple weeks showing that even your Minecraft playing is not immune from AI competition. And in fact, the best Minecraft AIs probably outperform the average player, I would guess, pretty decisively on the vast majority of tasks. So it is going to be hard, I think, to find areas where this stuff doesn't encroach. One vision I do think is a plausible part of the future is an almost reactionary sort of offline experience. Murder mystery dinner party type things that are bespoke, in person, highly customized, local, and almost peer to peer in terms of their provision. Something that is just a total reaction against everything being online, everything being AI. It seems like there's gonna be some kind of shift away from that. But again, that's kind of like prompt engineering in the sense that we can't all prompt engineer our way to prosperity, and we also probably can't all provide murder mystery dinner parties to each other. There's only so much of that. So maybe my imagination is limited. And even then, the leap is so quick: if I was going to do that murder mystery immersive experience business, I'd probably use GPT-4 to help me write the stories. So what's the refuge from it? I see 2 possibilities. One, there's a stall out basically now, and we find that this stuff just never works. I don't see how that would really happen, but it's at least still somewhat of a possibility. Or the other is it becomes so ubiquitous that I don't really find that many things that seem untouched by it.
Joshua Browder: 1:04:10 I think to your point, people really value human to human interaction. People appreciate Magnus Carlsen, the chess player, because they know he's not using AI. And I think there was another chess player who was caught using AI or some expert system, and he was completely shunned. So there's a lot of things in life that are only valuable because of the human to human connection. I think a lot of people tune in to this podcast because it's great human beings versus some AI. And so I think there'll be AI certification, "no AI was used," and maybe the AI will be doing the certifying so it can be sure, and things like that. People underestimate how much stuff is in the real world. If the toilet breaks, there's not going to be an AI toilet repair service anytime soon. And we look around us at the buildings, and that's all created by human beings. So there's still a lot of stuff that will definitely be human powered, and I don't think we have to worry that much. There's gonna be a huge shift for these people charging hundreds of dollars to copy and paste documents, like lawyers, who are pushing back against being replaced by AI. But there will also be a lot of jobs that are safe, and also some exciting new jobs.
Nathan Labenz: 1:05:21 So then that's maybe a good transition to this deflationary trend that you're predicting, which I also totally agree with. The deflation starts in AI itself, with huge price drops: the state-of-the-art model from last year is now 97% cheaper, and improved in terms of both capabilities and speed. And now we've got new higher-end models that basically maintain that top-level price point from last year. But when a certain capability level is dropping 20 to 1 within a year, that sets a good stage for further deflation. Do you think this is the thing that finally solves the cost disease in areas like education, access to law, access to health care? How big of an impact do you expect in those areas where prices are classically out of control over the next few years?
Joshua Browder: 1:06:18 Any services business will be touched by AI and experience deflation. Right now, you can go on Khan Academy, which has an AI tutor now, and get a great maths education, and that's really great. At the same time, though, the areas you listed are highly regulated by the government, and that's the reason why they're expensive, not necessarily because they don't have any AI. TVs have dropped drastically in price over the past 30 years, but education and housing have remained expensive because the government is regulating them. And I think that the one way these professionals are gonna save themselves is through huge regulation and licensing. Lawyers especially. DoNotPay has experienced a lot of challenges where lawyers are saying, oh, it's unauthorized practice of law to give people the power to fight for themselves. There's a federal judge in Texas today who has just said that for every filing made, you now have to certify either that AI was not used at all, or that if AI was used, a human has manually checked everything the AI has done. I think that was a response to the guy in New York who's being put in the doghouse for making up the fake cases with ChatGPT. So all of the professionals are fighting back. They're not going easy. They're going to create AI regulations to save their jobs. And the EU is going even further than the US. They'll probably say you need a license for any use of AI, and may basically make ChatGPT unusable in the EU because of all these rules. So that's another reason things will slow down as well.
Nathan Labenz: 1:07:58 I've been very amazed, honestly, by how positive the medical establishment's response has been. I haven't got as good of a read on the legal side. On the medical side, it seems like, so far, everybody is reasonably happy. Honestly, it's kind of weird. The technology providers are all being very cautious and respectful. We had one of the lead authors from Med-PaLM 2 out of Google on the show, and they're very buttoned up. We've got to really prove this out. We're not trying to bypass doctors, all that kind of stuff. And the doctors, meanwhile, are like, this is gonna be a great aid to us. We're gonna use it. That's by and large the message that I've got. How would you contrast the legal side there? And then I really am curious to understand how you think about fighting that over time. Do you do it in a provocative way? You've put some of these demos online where you're like, we're gonna have an earpiece in a courtroom. Do you think that's the way, these highly visible examples? Or do you have hope still to work through the system? What's your strategy gonna be to fight through all that stuff?
Joshua Browder: 1:09:13 Our strategy right now is to not compete directly. There's not a lawyer in the world who will get out of bed to save someone $100 on their Comcast bill, but that still creates value for people, so we're really serving an underserved area of the law. And that's largely kept a target off our backs for the past few years. When we started doing this earpiece stuff in court, the lawyers started paying attention and thinking, maybe we should go after these people a bit, and that's what we're dealing with now. At the same time, though, because we're focusing on this noncompetitive consumer rights area, I think everything will be okay. But if we were to start truly doing high level business disputes or going into federal court like this lawyer, then we would get into a lot of trouble. So going into an adjacent space that is less served, that's one strategy. Another strategy is changing the law yourself. We've engaged a lot of members of Congress. We're looking to pass an AI Assistance Act type thing that explicitly allows this sort of technology in some proceedings. We have a congressman here in New York called Ritchie Torres who we're hoping will sponsor this, and that's going pretty well. So changing the law, not competing directly, and then finally, just throwing a firebomb and asking for forgiveness rather than permission. Because if you do that, you can show consumers what's possible, and that's what drives things. I would say those are our 3 strategies.
Nathan Labenz: 1:10:47 Yeah. I recently had a friend of the family, and this was a mind blowing episode in a couple different ways. She's a young woman, 19 years old, and as far as I understand, as I've heard the story, she didn't do anything wrong at all. She was asleep in her own car in a parking lot somewhere, had been drinking, and was arrested for a DUI. And she was like, I wasn't even driving. I got into the car when it was already parked and just fell asleep there, and nobody saw me driving, and this is ridiculous. She doesn't have a lot of money, so I was kind of advising her a little bit remotely. I'm not a lawyer. She has a public defender that's assigned, and she's like, the public defender doesn't seem like he's really on my side. He just tells me to plead. I don't really wanna plead. I didn't do anything. Why should I be pleading guilty to something when I definitely didn't do it? And she ended up with a hung jury, and now they're doing a retrial of this incredibly minuscule problem, so she's gonna have to do it twice. And I was really thinking throughout that process, and I was honestly using GPT-4 a little bit to ground my guidance for her. So when you think about that layer, right? We have this notion that there is this constitutional guarantee of some right to representation. But in practice, that often falls down. Is that an area that you're interested in? I mean, it's certainly, in some sense, the greatest consumer right, you could say.
Joshua Browder: 1:12:23 For low level traffic offenses, certainly. I think that for serious criminal cases, people deserve a public defender, and it's more a question of getting rid of the bad public defenders. So it's less of a production line and more of a well-funded constitutional right to a defense. I think these issues are too complicated because it's often about the cops playing dirty, or individual judges. Some judges will throw out the case immediately. Some judges will allow a retrial. Some won't. So it's so human and so emotional. In courtrooms, for criminal cases, there are police officers and bailiffs in every courtroom because they have to stop people punching each other, and AI can't do that. And it's so emotional, also with divorce cases, that I think it will take a while for AI to really seep in. Certainly, a lot of lawyers are making their work more efficient, using tools like Casetext to find cases. But I don't think we'll have robot lawyers in criminal cases anytime soon. And that's a humbling thing for me to say as the biggest robot lawyer fan out there.
Nathan Labenz: 1:13:28 Do you think there's a version of it where, this was an interesting synthesis that came out of some of our medical episodes. We interviewed professor Zak Kohane, who leads a department at Harvard Medical School. And he said that he thinks it will very soon be considered substandard and ultimately unacceptable to have human provided medical advice without the backing of a GPT-4 like system. And he's not saying we should bypass the doctors at all. But he is saying, you wouldn't wanna go, in the near future, you won't wanna go to a doctor, and it may be so standardized that it's just expected or even required that the doctors or the medical system as a whole must bake this stuff in for your benefit. Is there a legal version of that you could see too?
Joshua Browder: 1:14:20 Yeah. So every year, a lawyer has to keep their license going, and there are certain rules one has to follow to do that. A few years ago, a lot of states added a requirement that you have to be able to use the Internet. On the one hand, it's great that they have that, and I'm sure they'll add AI eventually. But on the other hand, only requiring Internet competence a few years ago seems a bit slow. So it will happen, but the lawyers are very slow, because they want to protect themselves, and they're the people writing the rules to benefit themselves. There are lawyers who steal from their clients. There's a famous lawyer in California who stole from their clients for 30 years and wasn't prosecuted. Those are the issues they're dealing with because the rules are so relaxed, so it will take a while for them to require AI. On the medical side, I think doctors are a lot more ethical than lawyers. They have the Hippocratic Oath to save people. I don't think there's a similar oath for lawyers, except maybe do the best thing you can for your client, but they're not saving lives. And so doctors are a lot more responsible about embracing technology than lawyers are.
Nathan Labenz: 1:15:26 So how about the deflationary elements as it applies to your own business at DoNotPay? Are you guys cost sensitive at all when it comes to the AI services that you use?
Joshua Browder: 1:15:39 Yeah. So we're a team of 7 people. We have hundreds of thousands of subscribers, and it's in our company name and mission to be very frugal and lean. These AI costs are racking up. It's probably one of our biggest expenses right now, with the OpenAI API. And I think there'll be entire companies created that shift costs to different models. A lot of my friends have been asking for advice on how to lower API costs, and I say, you don't need to use GPT-4 for everything. If it's a simple request, you can send it to GPT-3.5, which is 10 times cheaper, or maybe even an open source model like GPT-J if it's very simple. You don't need the highest-level models for everything; shifting to the simple ones for simple requests will be good. And the positive outlook is that simple open source models are getting much more sophisticated every day. I think even the open source models are maybe 60% of where GPT-4 is, so you don't need to use these expensive proprietary models to get a big outcome.
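Joshua's routing idea can be sketched in a few lines of Python. This is an editor's illustration, not DoNotPay's actual code: the prices, the word-count threshold, and the `needs_reasoning` flag are all assumptions made up for the example.

```python
# Illustrative per-1K-token prices (USD), roughly reflecting the 10x gap
# Joshua mentions between GPT-4 and GPT-3.5; a local open source model
# has no per-call API cost.
MODEL_PRICES = {
    "gpt-4": 0.03,
    "gpt-3.5-turbo": 0.002,
    "open-source": 0.0,
}

def pick_model(prompt: str, needs_reasoning: bool) -> str:
    """Route each request to the cheapest model likely to handle it."""
    if needs_reasoning:
        return "gpt-4"           # complex, multi-step requests only
    if len(prompt.split()) < 50:
        return "open-source"     # very simple requests: free local model
    return "gpt-3.5-turbo"       # everything else: ~10x cheaper than GPT-4

# A short template-filling request never touches the expensive model.
model = pick_model("Summarize this one-line complaint.", needs_reasoning=False)
print(model, MODEL_PRICES[model])
```

The heuristic here (word count plus a caller-supplied flag) is deliberately crude; in practice the routing signal could be anything from request type to a cheap classifier.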
Nathan Labenz: 1:16:42 So how would you unpack that a little more for somebody? To give you a little bit of my perspective: I'm not a cost sensitive consumer of AI in the projects that I've taken on so far. I keep referencing my company, Waymark. We use OpenAI fine tuned models, which is basically the highest per token price out there on the market today, I think. But it works well for us, and we have a pretty high hit rate. I've come to focus on this as potentially an important driver of how people decide what language models to use. For us, people do a prompt, and then we show them a fully formed video, which they can then edit. We really try to deliver something that is ready to watch, as if you could publish it immediately. And something like 1 in 3, maybe 1 in 5, of the generations people do, they ultimately download, with some editing, and publish somewhere. So it's a pretty high hit rate. If it were 100 to 1 and we were throwing out 99 out of 100 generations, our cost profile could be quite different. But where it's only 3 or 5 to 1, whatever that cashes out to is basically fine for us. Then I do another thing with this company Athena, which is in the executive assistant space, where we're largely just doing task automation for now. And there, the goal is ultimately, for most tasks, to use all of the output. You hope for reliability, though you're still gonna have some human in the loop depending on the process. But we hope that 99-plus percent of the generations, once we really have it dialed in, will be usable. In those contexts, I find the easiest path is just use the best model and don't really worry about the cost. I'm already usually taking 90-plus percent of the cost out, so let's not try to save another couple percent; let's get the best quality we can with the 90% cost reduction.
Where are you seeing different conclusions based on maybe different inputs in that analysis?
Joshua Browder: 1:18:59 For enterprise businesses and even consumer subscriptions, I don't think it's a concern, because the revenue far outweighs any sort of API costs. One area that was hugely expensive for us: there's a law in the US called the No Surprises Act. It helps people negotiate their medical bills, and it requires hospitals to publish their bills and prices online. Every hospital in the US has their prices in some PDF somewhere. So what we did is we had a bot scrape all of it, and then we used a custom-trained model to standardize the pricing data. I think that cost us only 80k to actually get done. That's expensive, but we'll make it up with lowering people's medical bills and the subscriber numbers we'll get from that. There are a lot of use cases I see on Twitter of free consumer products where it's a demo, and I don't think it will actually work because of the API costs. A browsing assistant is a great example: imagine you have one that's analyzing the terms of service on every web page you visit. Consumers don't want to pay for that because it seems so simple, so it would have to be a free product, but you wouldn't really get a good response unless it's GPT-4. An AI version of Honey, I don't think that could exist, because the costs wouldn't pay for themselves. So a lot of business models that are freemium or free with ads, a Credit Karma or Honey style business model, won't work with GPT-4, I feel, because the API costs outweigh the revenue per consumer. But for subscription models, and certainly for enterprise use cases, it does work. I was speaking with the founder of a popular podcast recording software, which we may or may not be using right now, and even he's having these issues with cost and AI. So I think everyone has to be concerned about it.
Nathan Labenz: 1:20:57 Well, you are certainly the most cost sensitive person I think I know at this point, and maybe that's not surprising. But I still feel, even in those use cases you cited, there are certainly some fixed costs. When you said the $80,000, I guess that's the cost of training a model, but it's done now, right? In some of these other things too, if you were going to go process the terms of service on every website, largely that would be a one time deal. Wouldn't you cache it, or have some embedding-backed database, so you wouldn't have to do it in real time every time? It seems like there are a lot of ways to take cost out where you could still use the best model, but be clever about applying it opportunistically or only updating so often. You don't want to run it every time I load a web page.
Joshua Browder: 1:21:52 Yeah. And that's, I guess, where it comes down to shifting models, as we spoke about, and training to reduce inference costs and things like that. The more custom-trained a model, the cheaper it will be. But at the same time, the most powerful use cases of AI are where it's used in totality, every single time. Imagine you have an AI coupon finder that scrapes the web looking for coupons and applying them. You would want that to run every single time, because if the coupon doesn't work, it upsets people, and you want the most reliable product. That's why, in these free, less transactional consumer use cases, you haven't really seen it as much: because of the costs. And certainly for smaller businesses and individual developers. Honey didn't start as Honey; it started as a team of startup engineers building something, and they had to get funding. Especially in this high interest rate environment, it's much more difficult to justify a money-losing revenue model before you build great technology that pays for itself.
Nathan Labenz: 1:22:57 There is a certain tier of company right now that is just taking the subsidized approach. We talked to Flo, the CEO of Lindy AI, and he said that his costs were dozens of dollars per user per month. But in his case, because he'd raised a bunch of money, and his view is that the first one to make a great product is gonna get huge gains, he's currently just subsidizing that and saying, I don't care, the cost will come down, and we'll figure that out later. Do you think he's making a mistake in that analysis? Do you think the cost doesn't come down as fast as he's projecting, or what's the disagreement?
Joshua Browder: 1:23:37 I don't know enough about his business to criticize any aspect of his strategy personally, but I would say that the 2021 mindset in general is wrong. You saw a lot of companies spend $100 in customer acquisition costs to get a $70 LTV, and the same is true with AI, and that's certainly not the approach we take at DoNotPay, partly because of the name, but also because of our mission to save people money. It would be hypocritical if we weren't doing that ourselves. One counterargument is that the technology is improving so much that if you get the user base and engagement today, you can shift the technology on the back end when it improves. GPT-3.5 was 10 times more expensive until OpenAI changed the pricing. So there's that argument as well. But you can't count on that. I read an article that says we're almost at the limits of Moore's Law and things like that. So you can't run a business on hope. It should be based on fundamentals.
Nathan Labenz: 1:24:33 With all the hundreds of use cases you have, what are some of the most commonly applicable ones? I've poked around, but I've probably seen 10% of what you have. What should I do, and what should our listeners do, to save themselves some money in ways that maybe aren't obvious?
Joshua Browder: 1:24:54 My all-time favorite use case out of all the things we have is going after robocallers. We have a user who has made it their full-time job to sue every robocaller they get, and they've made tens of thousands of dollars. They bought a new roof for their house. There's a great law that says you can get $1,500 per call, but the problem is these robocallers hide behind spoofed numbers. They don't tell you who they are. So the question I'm sure you have is, who do you actually sue? And the way we've got around this is actually with fintech, not with AI: we built a trap credit card. The way it works is, it's a DoNotPay credit card, and when they try and sell you something, you give them the card number, they run the transaction, it declines, and it captures their business name, phone number, address, all the details you need to sue them, through the payment network. So I think this is a great conclusion to this conversation: it's never about AI or fintech. It's just about using the best technology to fight for people, and that's a great creative use of the card network to find these robocallers and sue them.
Nathan Labenz: 1:26:01 If I wanted to do this, I would start answering those calls. I'd have your credit card on hand, and I would then just say, yeah. I want to buy whatever you're selling. And then as soon as that ping hits, now I sue you for the robocall.
Joshua Browder: 1:26:16 Yeah. And then it generates a demand letter, then a lawsuit in small claims court, and gets all of that going. And what's interesting is they settle a lot, because these robocallers are terrified of a class action. So what they do is they'll say, we're not gonna give you $1,500. We'll give you $500. If you sign here, confidentiality, we'll give you a check. We actually see robocaller disputes settle at a higher rate than security deposit disputes, which I find interesting, because you would think you're more entitled to the money as a security deposit, but the robocallers are terrified of the big law firms finding out. So it's almost an advantage to be a consumer fighting for your rights versus a big law firm, because they know that once the lawsuit is filed, it becomes public record, and the big guys start to target that.
Nathan Labenz: 1:27:04 Are there any things that you think are coming at us that we didn't think to ask you about?
Joshua Browder: 1:27:10 I've been working on my business 8 years. I like to think it's an overnight success 8 years in the making, and it's sad to see funds change their name from crypto to AI. I think all of that is very disappointing. But that's the only hot take I have for you.
Nathan Labenz: 1:27:28 Other AI products and services that you find valuable and would recommend people check out?
Joshua Browder: 1:27:33 Everyone has seen the viral influencer Caryn AI, where the influencer created an AI version of herself that lots of people pay to talk to. Not a lot of people know there's actually a startup called Forever Voices that powers it, bringing AI to influencers. So I think that's a really cool company.
Nathan Labenz: 1:27:50 Let's imagine a perhaps not too distant future where Neuralink, which recently got its FDA trial approved, it is soon, I think, registering clinical trial patients. Let's imagine that goes through, and now we're in a world where 1 million people have the Neuralink implant. And it's broadly found to be safe. Let's say vaccine level safety. Right? People generally acknowledge that it's safe, but you may have some noise out there. If you get one, you can control your devices with your brain. Your computers can ingest your thoughts and take actions just based on your thoughts, can record your thoughts. Would you be interested in getting such a device?
Joshua Browder: 1:28:39 100%. I think you've gotta stay on top of the arms race. Whatever tools help you, I would get. It's the same as the difference between the people using ChatGPT and those that aren't. You have to embrace technology and not be scared of it. I wouldn't be in the holdout group for that. I'd be in the beta.
Nathan Labenz: 1:28:57 A million, yeah. 1 million people, I think, is hopefully enough to demonstrate basic safety at least. So okay. Last one then. You've covered this from a lot of different angles, but just trying to zoom out as far as possible: what are your biggest hopes for, and also fears for, society as AI begins to touch everything?
Joshua Browder: 1:29:18 My biggest hopes are that it levels the playing field, that it's available to everyone, and that it makes goods and services much cheaper. My biggest fears are that, just like everything else in society, it gets captured, from a regulatory standpoint, but also from a technology standpoint, where it's used as a tool to repress people. I would be very sad if that happened.
Nathan Labenz: 1:29:39 Joshua Browder, thank you for being part of the Cognitive Revolution.
Joshua Browder: 1:29:43 Thank you for having me.