The AI Revolution in Hollywood? With WGA Writers Trey Kollmer, Sophia Lear, and Garrett Schabb
Exploring AI's impact on entertainment and the WGA strike, with insights from TV writers Trey Kollmer, Sophia Lear, and Garrett Schabb.
Watch Episode Here
Video Description
Today’s episode is a deep dive into the collision of AI and the future of entertainment, against the backdrop of the still-raging Writers Guild of America (WGA) strike. Nathan Labenz sits down with Trey Kollmer, Sophia Lear, and Garrett Schabb – all seasoned television writers and Guild members – to discuss the labor dispute. While the strike encompasses many dynamics, the timely intersection with a rapidly changing AI landscape has the writers entwined in the wild possibilities and existential threats of paradigm-shifting technology. Other entertainment guilds and labor unions are watching closely.
Note: Nathan kicks off this episode with a longer-than-usual introduction and analysis as the subject matter of this episode requires some added context. If you would prefer to dive straight into the interviews, follow the timecodes below to skip ahead (and maybe give the intro a listen when you finish!)
More about our guests Trey Kollmer, Sophia Lear, and Garrett Schabb and their work:
- Trey Kollmer, currently co-executive producer for the TV show Ghosts, was described to me as perhaps the single most knowledgeable guild member on the topic of AI. Follow him at @treyko on Twitter.
- Sophia Lear has been a writer for TV shows including Ghosts, The Unicorn, and New Girl, and was also previously an assistant literary editor for the New Republic.
- Garrett Schabb has written for shows including Tosh.0 and Suits (of Meghan Markle fame) and has also written for Crooked Media. Follow him at @garrettschabb on Twitter.
PODCAST RECOMMENDATION:
The AI Breakdown: @TheAIBreakdown
As anyone in AI knows, the pace of progress of new releases is relentless. The AI Breakdown is a daily podcast (10-20min long) that helps us ensure we don't miss anything important by curating news and analysis.
TIMESTAMPS:
(00:00) Preview
(01:54) Introduction and analysis for this episode
(09:02) Trey Kollmer breaks down the dynamics of compensation, ownership and crediting for WGA writers, issues at the heart of the strike
(14:52) Recommendation: The AI Breakdown
(16:11) Sponsor: Omneky
(17:16) What are the attitudes and implementation of AI in the writer’s room?
(22:42) Usefulness of AI for script writing (and will better training out of jailbreaks make it less useful?)
(30:59) The main two AI-related demands of the labor dispute
(34:44) Is there any way to police or standardize use?
(42:48) Writers’ diverse reactions to AI, from hostility to experimentation
(50:00) Picturing a long term Utopian future
(57:46) Sophia Lear breaks down sentiments around GPT-4, AI-generated scripts and the previous eras of crappy sitcoms and bad streaming series (written by humans)
(1:08:19) Dissecting quality
(1:11:27) Whose jobs are safe?
(1:17:03) Garrett Schabb breaks down dystopian/realistic view of studio’s incentives
(1:20:33) How Garrett uses ChatGPT
(1:27:37) Ethical boundaries of using the model to write like your favorite writers
(1;32:16) The two types of storytelling in the future, UBI and decoupling writing from a profession
TWITTER:
@CogRev_Podcast
@treyko (Trey)
@garrettschabb (Garrett)
@labenz (Nathan)
@eriktorenberg (Erik)
Thank you Omneky for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.
Music Credit: MusicLM
More show notes and reading material released in our Substack: https://cognitiverevolution.substack.com
Full Transcript
Transcript
Trey Kollmer: (0:00) The big fear is that if these models are a force multiplier, a show might have 1 or 2 showrunners, which are the creators or head writers who have the big idea for the show. And then you have the models generate the first drafts or decent episode scripts. And then most of the writers become these gig economy cheap writers who come in and polish it and punch it up.
Nathan Labenz: (0:22) It's 12 people, and you're just what could an episode be about? You just have no ideas. And it's sort of, oh, does the robot have any ideas? I mean...
Garrett Schabb: (0:37) Almost without exception, it reverts to pretty cliche stuff. Even when I spent a lot of time trying to explain how novel concepts are arrived at. If we enter this near future where most written content for TV or films are generated by a model, I'm not sure we're going to see new voices come. I think we're going to see sort of this endless regurgitation of what's already out there.
Sophia Lear: (1:01) The best scripts or movies or TV shows are pretty polarizing. It's probably on the specifics of how they choose certain outputs over others matters whether you get a risky output that will be beloved by 10% of users and hated by 90% versus 1 that's fine with everyone.
Nathan Labenz: (1:20) Hello and welcome to the Cognitive Revolution where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week, we'll explore their revolutionary ideas and together we'll build a picture of how AI technology will transform work, life, and society in the coming years. I'm Nathan Labenz joined by my cohost, Erik Torenberg.
Nathan Labenz: (1:20) Hello, and welcome back to the Cognitive Revolution. Today, we're taking a bit of a different approach to understanding AI and its impact on the future with a series of 3 interviews with members of the currently striking Writers Guild of America. ChatGPT has been a part of daily life for just 6 months still, GPT-4 for only 2 and a half, and we're now beginning to see the first power struggles taking shape. The Italian government temporarily banned ChatGPT for alleged data privacy violations, and EU regulators have proposed various rules, including 1 that would require disclosure of copyrighted material used in training, which caused Sam Altman to say OpenAI would, in the worst case, have to leave Europe before later saying that, quote, we are excited to continue to operate here, and, of course, we have no plans to leave. And then there's the subject of today's episode, the Writers Guild of America, currently on strike, listing among its strike demands some assurances about the way that AI may and may not be used in the future of Hollywood writing. I had expected that this might happen first with doctors or lawyers, so when I heard about the WGA's AI related strike demands, and especially after seeing some online reactions that caricatured the writers as reactionary Luddites, I knew that I wanted to get the writers' perspective directly. Was this really a core part of the dispute or just good dramatic writing on the part of the guild? I managed to get an introduction to a WGA board member, but he told me that he couldn't speak publicly while still engaged in active negotiations. Fortunately, as you'll hear, I tapped into a network of friends, Trey Kollmer, Sophia Lear, and Garrett Schabb, all guild members and all Ivy League grads who have had very different levels of interest in and experience with AI to date. Trey, currently co executive producer for the TV show Ghosts, was described to me as perhaps the single most knowledgeable guild member on the topic of AI, and he did not disappoint. I did not expect to find a Hollywood writer who would cite and heal Nanda's grokking paper, but in fact, I connected with Trey via a college dorm mate. Sophia has been a writer for TV shows including ghosts, the unicorn, and new girl, and was also previously an assistant literary editor for the New Republic. She has the least hands on experience with AI of these 3, but expressed high curiosity and quite radical uncertainty about what the future might hold. Garrett has written for shows including Tosh 0 and Suits and also written for Crooked Media. He's experimented with ChatGPT assisted writing quite extensively and has even developed his own personal ethical code to define the level of creative contribution that he will allow AI to make to his writing process. My takeaways from these conversations were several. First, the writers are not Luddites, and in fact, they are all really curious about the AI technology itself. They all get that AI can be both an amazing tool and a potential threat. They're not trying to ban ChatGPT, but rather ensure that the TV shows and movies that viewers watch remain fundamentally human creations. And, yes, also, that they continue to get paid. Second, the importance of collective action and collective bargaining are likely to increase in the AI era. While traditional questions of revenue share remain largely central for now, the question of which roles AI will and won't be allowed to play will loom much larger going forward, if only because the range of possible outcomes is so wide. AI can't yet write a professional quality script with any reliability, but it certainly seems plausible that it might before too long. And at that point, any number of economic structures become possible. Third, if AI progress doesn't stall out for another 1 to 2 years, it seems likely that we will want to consider more holistic social contract reforms. While the writers have enough social clout to have at least some chance of a successful strike, fast food workers like those at Wendy's, which recently announced that its new fresh AI drive through order taking service had outperformed human order takers, don't even have that. Allowing AI to freely enter the labor market without some structural reform seems more likely to provoke the backlash and perhaps overregulation that the accelerationists fear. It's no coincidence that Sam Altman is also running UBI experiments. Fourth, there are no simple answers. On the contrary, the right answer seems to vary dramatically by the type of activity under consideration as well as the status quo and relevant alternatives. When it comes to medicine, for example, to me, it seems clear that the individual's right to quality health information should outweigh the doctor's rights to high income. And to their credit, the medical establishment has responded very positively to AI, recognizing how much it can help them elevate their level of care and also correctly recognizing that for people who truly lack access, some risk of AI imperfections must indeed be accepted in the name of the greater good. When it comes to culture on the other hand, whether it's movies and TV or whatever else, to me it's much less clear what makes sense. Many in technology might be tempted to say, it's just entertainment. Who cares if an AI or a human wrote it? But I honestly think this would be shortsighted. Yuval Noah Harari recently gave an outstanding lecture about the risks of AIs hacking the operating system of human civilization, that is language, and creating new risks of AI powered cultural transformation, including the potential even for new religions. After all, as he notes, religions throughout time have claimed nonhuman origin for their sacred texts. With this in mind, a certain French style cultural protectionism against AI might indeed be prudent as we seek to maintain humanity's place at the cultural helm. And finally, there's always a big risk that we're all still thinking too small. At a couple points in these interviews, we discussed the potential for totally new forms of entertainment that could only be created with AI, such as dynamically generated 3D virtual worlds with AI generated and animated characters. Considering the progress across so many domains of generative AI, it's quite possible that such multimodal AI systems, far more integrated and advanced than ChatGPT, are ultimately the things that change the game in ways that force us to ask entirely new sets of questions. That's a lot to digest, but I hope you enjoyed this set of conversations with Trey Kollmer, Sophia Lear, and Garrett Schabb.
Nathan Labenz: (8:41) Trey Kollmer, welcome to the Cognitive Revolution.
Trey Kollmer: (8:43) Thanks. Thanks for having me.
Nathan Labenz: (8:45) I have so many questions. For starters, maybe can you just tell me a little bit about who you are and what you do in the crazy world that is Hollywood?
Trey Kollmer: (8:54) Yes, I'm a writer. I mostly work on broadcast sitcoms. Right now, I'm a writer on a show called Ghosts on CBS. I guess I've been doing it since about 2010, and I've mostly worked on shows that I love, but that were canceled very quickly, and I'm finally on something that's sticking around.
Nathan Labenz: (9:19) So let's start with just how it works, because I have a sense almost everything in TV and movie production has a gig economy sort of flair to it. And I find it fascinating how these productions come together and disperse, and it's very network driven, as I understand. So I think that would probably be good to just set up so that folks have a background.
Trey Kollmer: (9:40) Yes. So for television, right, there's 2 main ways that you're getting paid. 1 is you're developing new shows, and 1 is you're joining a staff of a show that's been greenlit to be on the air. So the development process is, it's kind of like you picture in the movies, you go in, you pitch an idea, you try to sell it to the network. And 10 years ago, usually each network would buy, say, 70 comedy pilot scripts. It would buy a ton of them. When they buy your if they buy your idea, what they're doing is hiring you to write the pilot script. That's the work you're hired to do when you sell your idea. They buy your services to write the script, and they buy all these options on the idea for the future, they could turn into a show. You have some guarantees of whether you stay have to stay working on the show and some profit participation. But it's really writing that pilot script that gives you the created by credit and ownership of the show, and usually some rights to some of the back end and profit participation in success. And once you sell the script, in January they take those 70 back in the day, now all these numbers are much smaller. I'm not even sure what they are the past few years. They pick 10 to 12 of them to actually film. They cast, they build the sets, you film the pilot. Those pilot episodes get tested to death. All the top execs from marketing and scheduling come together to advise on what should become real shows. And then, in the old days, 4 or 5 of them get greenlit to be on the next year's schedule to get produced. And then, usually 1 creator or a writing team does all that themselves. And then once it's greenlit to become a show, you get to hire a staff. On my current show, there are 14 writers on staff. On streaming shows, the rooms are much smaller. That's 1 of the other points of the labor dispute is trying to maintain the size of the streaming rooms. That's the other way you make money. You interview for a job on a show that's going to be produced and you join the room for that. And then the process, once you're producing a show, usually you're all around the table generating ideas for arcs for the season, for episode ideas. And then once you have an episode idea, you all together, or maybe you split into somewhat smaller groups, come up with the general beats of what happens in an episode. And you divvy up responsibility for captaining an episode. So if you see written by flashed on the screen at the beginning of a show, what it really means is they wrote the first actual draft of it. But in the process everyone was brainstorming everything that happens in the episode. Once they write the first draft, the whole room usually rewrites it together. So you can see there are many different steps that these models could come in and try to do some of the work.
Nathan Labenz: (12:40) Yeah, that's fascinating. I'm starting to imagine if I was in your spotlight where I might start to call on GPT-4 a little bit. So let me come back to that in just a second. There's 1 more term that I've seen flying around in the coverage of this so far, and that is the MBA agreement and things that are MBA covered versus not MBA covered. And it seems like that's a pretty key distinction in terms of certain different types of projects. So can you unpack that for us from your perspective?
Trey Kollmer: (13:10) Yes. So the MBA is basically our union contract. It guarantees a bunch of minimums. So it doesn't negotiate your final working terms, but it's the minimum you can be paid for different types of work, the types of options they can have on you, the contributions to your health insurance and pension. And then once you have an offer for a job or a writing assignment, you're frequently negotiating some terms that are better than the minimum. But it sort of sets the rules of the game. And for the major studios, I think once they sign on to the MBA, that means they're allowed to use union writers. But I believe they have to use, for the types of content that are covered by the MBA, they can only use union writers. So for a while some forms of animation weren't covered, so they could use non union writers for that. I think a lot of our last strike was making sure that streaming was covered by the MBA, which seems like it was pretty prescient.
Nathan Labenz: (14:15) Gotcha. So it's, I think I was a little bit confusing myself because when I read this term MBA covered, I was thinking that maybe smaller shows or smaller budgets would be covered, but then bigger things would be not covered. But it sounds like it's really more a standard, high level agreement between a union and a studio. And once that's locked into, basically everything that they're going to do together is going to be covered.
Trey Kollmer: (14:41) Yes, that's true. But I think 1 of the big points in this debate is in the current negotiation trying to get more residuals from international streaming. It used to be you have a show on network, it would rerun everywhere, or it would rerun and you get money for every rerun, and you get money when they rerun it, start showing in other countries. And it's been tough translating the money we used to get for replays of shows over the years on network to translate to streaming. And I believe in our last negotiation, we made progress on domestic streaming residuals. And I think a big push on this is for the international, which some of it is just even getting transparency on the number of subscribers, which I think a lot of the studios and platforms don't want to share.
Nathan Labenz: (15:27) Yeah. Okay, cool. So that's really good context. And obviously it's a complicated world. But now here we are and GPT-4 just dropped and ChatGPT just dropped a few months ago. So maybe for starters, before we even get to the union position, what have you seen in writers' rooms to date when it comes to different attitudes, different practices, different using or not using? Is it stigmatized? What have you seen among writers when it comes to shunning or embracing or some perhaps complicated mix of how they approach GPT? Nathan Labenz: (15:27) Yeah. Okay, cool. So that's really good context. And obviously it's a complicated world. But now here we are and GPT-4 just dropped and ChatGPT just dropped a few months ago. So maybe for starters, before we even get to the union position, what have you seen in writers' rooms to date when it comes to different attitudes, different practices, different using or not using? Is it stigmatized? What have you seen among writers when it comes to shunning or embracing or some perhaps complicated mix of how they approach GPT?
Trey Kollmer: (16:03) So we were only back for 3 weeks this season since ChatGPT came out before the strike hit. It didn't seem like anyone was using it for work. Among my friends who are writers, it's a mix. A few people are really experimenting with it and having fun and brainstorming with it, and a lot of people aren't using it at all right now. I've been playing around with it. I have a premise for a show of a Google type company that has some realistic robot who's an artificially intelligent AI, and is tasking one of its employees to have him move in with him, to observe it and teach him about the world. And I just would use that as a placeholder and give that description and just ask ChatGPT, write the cold open of the pilot episode. And I tried with GPT-3.5, with Bing, with ChatGPT with GPT-4. It's really good at structuring the scene and getting out everything for the premise and the themes of the show that you have to get out. The big picture, what needs to be accomplished by the scene, it's very good at doing that. It can be hard to get it to get specific. It ends up being very vague and it won't drill down. For a joke you want very specific, unique language. And I found it a little bit hard to get it to do that. Although it's not not funny. It had this bit about how the boss was very proud of this robot. And he's like, and the best thing is we decided to look exactly like one of our most attractive famous celebrities. And the guy can't recognize it. And he's like, Oh, what do you mean? Which one? He's like, Ryan Reynolds. It doesn't guess it looks exactly like Ryan Reynolds. And the guy's like, I guess so. Which I thought was really funny and scary how, I mean, I can send you at some point the transcript of the scene it wrote, but I was sort of impressed by that. But yeah, it's good at the medium level execution and not as polished. It's not as polished and as perfect and as specific and unique as you want. And then I've tried using it on more brainstorming and big idea generation, and it's very hard to get it to get specific. Younger writers or writers who aren't great at, or who are newer and don't quite have the skills down of taking, you get an outline for the script, you all work through an outline first, and then you write your draft off of a paragraph of prose for each scene. But a younger writer doesn't know just how a scene should be structured and laid out and making sure they're accomplishing all of the important plot points and jokes and things you have to thread for later in the outline. It might be really helpful for them to put the paragraph into ChatGPT, get the bones of their scene, and then go in and do what they're good at, which is adding jokes and making it fit the characters better and polishing it. I do think for a lot of writers that could be very helpful. I do think once you have enough experience, you should be at least better than the current versions at laying out scenes and stuff.
Nathan Labenz: (19:23) It's amazing how quickly we adapt to this technology. That's one of the constant lessons I learned. It's like 4 months ago, none of this existed at all. And now it's like, yeah, it's not quite as good as I am at laying out scenes, but it can still be helpful in these other ways.
Trey Kollmer: (19:39) It was funny how quickly, at least on Twitter, people seem to go, This thing can't put coherent sentences together to I mean, it wrote a paper, but it's a low level grad student paper.
Nathan Labenz: (19:51) Yeah. I think we probably should have really focused on that for a little longer than we did as a society. It sounds like today, you called said earlier, some people are really embracing, others are experimenting, maybe even more than embracing. Others are not doing it at all. It sounds like you are on more of the forefront of curious, trying to figure out what it can do for you. So far it's been
Trey Kollmer: (20:17) less helpful actual work wise. It hasn't done anything that's actually made me better at my work or made anything usable. I'm more just very curious at just how the models are getting better and what they can do for almost for its own sake. I sort of got very into AI a while ago. I went to that Asilomar conference back in 2016, went to NIPS the following year and have been trying to follow these things. It's really interesting, but it's not quite good enough. Or maybe I'm not good enough at using it yet, which I think is another part of the equation to get tangible work benefits from it.
Nathan Labenz: (20:58) Yeah, there probably are especially narrowly defined tasks that I would expect you would get real value from. For you, I wonder, have you gotten to the point yet where you're like, here's my script. In scene 2, I don't really like my second joke. Can you give me 10 alternative ideas for that joke? Or things just general critique. Like, pretend you're the head of the studio and tell me what you're going to say about this draft. Oftentimes those kind of reflexive additional layers, I think, are where people are sometimes finding the most value right now.
Trey Kollmer: (21:36) Yeah, I mean, I think I haven't tried it for, but I think what would be useful, right now less so for jokes, but the couple needs to figure out a way to get money from the bank. What are 5 ways they can do it? Or I would imagine for a heist movie, you're just asking it, what are the requirements for bank withdrawals? What type of ID would you have to fake in order to accomplish this crime or something? I'm sure for twists or for character strategies, it might just be fun to ask it to brainstorm 10 possibilities. And maybe one of those is usable, maybe it sparks something in you, or you learn some law or trick, or it probably knows just a bunch of the history of frauds or something. I'm reading this book called Lying for Money, and it's like, oh, so many of these things would be so useful for many plots of just a strategy a character could do to accomplish something or to fool people. And that actually might be a helpful way to use it.
Nathan Labenz: (22:40) Yeah, there's something fascinating there too, that is a rhyme or another angle on one of the big ChatGPT jailbreaks that first came out within 24 hours of them initially releasing it. People had found that it would refuse to help you with certain things. If you said, tell me how to hotwire a car, it would say to you, whatever, as a large language model, I can't help you with that, essentially. Right? But then people figured out if they framed it in the context of fiction, then they could get the AI to give them the thing. So the setup would be, Trey and Nathan are 2 characters in a story, and it's a hard science fiction story or whatever. So we need absolute detail down to the concrete steps of how they're going to hotwire this car go. And then you'd get all that stuff because you sort of bypass, it's not exactly a filter, but sort of get around the mitigations that they put into the model.
Trey Kollmer: (23:42) Yeah. I do wonder as they get better at training out certain jailbreaks, does that make it less useful for some creative tasks?
Nathan Labenz: (23:51) Yeah, people are reporting that. I think that this stuff is extremely hard to quantify and there's a certain amount of skepticism, I think, on these claims that is warranted. I mean, for context, I often cite this stat, GPT-4, which is so much better than 3.5 to the degree that the difference is bottom 10% to top 10% on the bar exam. So that's a pretty big difference. It's still only preferred by users by a ratio of 70 to 30, which is just over 2 to 1. So evaluation and which model did a better job on any given thing, especially the more creative eye of the beholder it becomes, is a real challenge. But I am hearing somewhat consistently the report that, yeah, the earlier version was better for creative tasks. It was a little bit more freewheeling. It was a little bit more whatever. And now it seems to be just always pruned back toward normal and toward less offensive. And obviously, comedy can often be about finding the line that's approaching, making people uncomfortable, but hopefully landing on the right side of it. So I would imagine that there are, just based on all those reports, that there are some performance losses in some of the more creative or edgy type of queries that you might want to give it.
Trey Kollmer: (25:13) Yeah, I think two things that makes me think of is the hot wire car example. You may want someone doing a detailed hot wiring of a car in some action movie, and you may not want your model to give that information to people regardless of whether it's fiction or not. So there might just be a class of things that it would never put into a script because it looks bad if it's giving out that information at all. And then I think, I mean, a lot of just the best scripts or movies or TV shows are pretty polarizing. And I don't know how, I guess in the when they're reinforcement learning it, they're having people rank different outputs or something. And I just do wonder whether, it's probably, on the specifics of how they choose certain outputs over others and how they penalize others matters whether you get a risky output that will be beloved by 10% of users and hated by 90% versus one that's fine with everyone.
Nathan Labenz: (26:16) Yeah, it's very fraught. I mean, they've put a ton of work into that. They're now hiring, and this is often done with partner companies, so as I understand it, it's not so much OpenAI doing the direct hiring. But for example, a company called Scale AI has a ton of PhD level evaluator positions open right now. And it's for chemistry and accounting and just all these deep fields. They are finding that they basically have no choice but to go up and up and up the expertise stack. I don't know if writers, comedy writers is one of those positions. I'll have to go back and look at all the job postings to see if there's one like that.
Trey Kollmer: (26:55) I actually don't know how big it is right now, but traditionally there's an army of assistants that read scripts and write coverage on it. So I don't, I mean, don't want to give them any ideas, but I don't know at some point, are they going to appropriate all these people who just read scripts and evaluate them for some massive fine tuning on creating a Hollywood TV movie scripts project.
Nathan Labenz: (27:21) Yeah, I always come back to, I think we're going to see everything everywhere all at once. It just seems like everything is working. Everything is somewhat viable. Some things are weird, but even so they can still be interesting. And one of the big things I also am expecting is community driven model customization over time. You can imagine a, and obviously these communities are vast and they're just extremely diverse. There's tons of them, right? And they all have their little, whether it's anime or Fast and Furious or whatever. There's all these communities of people that care passionately about something. And it really seems like a Discord server with not even a GPT-4 language model, but one of the ones that's recently been open sourced potentially could be shaped into an infinite writer specific to a particular genre. So where I sort of imagine this going, I'm a little lost, I guess, in terms of the strike demands. The studio may do one thing or maybe negotiated with to not do some things or whatever, but then I'm also just like, but the broader world, people are going to do their Harry Potter fanfic activity wherever they do it. And it seems quite plausible that with a reinforcement cycle in place, you could get those to the point where they're really good and they could just exist in a space that's totally divorced from typical production.
Nathan Labenz: (29:01) Now, how do
Nathan Labenz: (29:01) you turn those scripts into full assets? That's obviously a whole other question, but we're seeing a lot of progress there too. Progress or threat, depending on probably your perspective, but certainly effects and image creation and video, text to video type of things are coming up extremely hot as well. Nathan Labenz: 29:01 You turn those scripts into full assets? That's obviously a whole other question, but we're seeing a lot of progress there too. Progress or threat, depending on probably your perspective, but certainly effects and image creation and video, text to video type of things are coming up extremely hot as well.
Trey Kollmer: 29:20 All right. Well, I guess first thing, I can give you the 2 big worries of the labor dispute. Just to run down the 4 things that they're asking for are that the WGA covered companies, the studios, agree that AI can't write scripts, that AI can't rewrite scripts, that AI can't generate source material that shows or movies are based on. A lot of times you'll option an article or a short story or a comic book or a book to kind of base your show or movie off of. That's very common. And that the WGA covered material can't be used to train AI models. Because I guess most of the probably high quality scripts that have been produced or written and not produced, which is maybe an order of magnitude more than the ones that are produced, are all WGA covered material. But they're also, I think currently, I assume, owned by all the studios. And they probably, if they just, without this agreement, could just fine tune models on them. But I think the 2, the big fear is that if these models are kind of a force multiplier, you could have a show might have 1 or 2 showrunners, which are the creators or head writers who have the big idea for the show. Maybe 1 writer could do 3 shows if it's enough of a force multiplier. And then you have the models generate the first drafts or decent episode scripts, and then most of the writers become these kind of gig economy cheap writers who come in and polish it and punch it up. And I think that's the main worry that they see. That seems most relevant if in the world where there's a medium pace of the capabilities increasing. It gets a little bit better, can do a little bit more, but not so much better that it's just like, we're all screwed anyway. But a little bit better than it is now. I think there's another, the other, the second major fear in this realm is that you can almost use the current versions for producers or people on the studio side to kind of cheat some of the designations. It's really writing a pilot script that gets you to created by credit that entitles you to some of the back end. I think to some of the other things you were talking about in terms of stuff outside the studio system, I think in terms of video production and stuff, right now it seems obvious that these things are beginning to be able to write. But at some point, the models will start making our compliments a lot cheaper too. So that if you're a writer, you might be able to go off, write your own script and not need the studios or need the infrastructure of actors and directors and composers, and just have a model generate your finished product. Which is, I think it's all scary for everyone. Stuff's changing very quickly. And I think some of the WGA and the other union's ideas, we need to stick together on this. But I also totally understand that there's a world in a world where these get better much faster at all parts of production, the studio sign on to this agreement and agree not to use any AI in their projects, and then just get their lunch eaten by some outside company or just fans making stuff for themselves. I mean, it gets really cheap to produce a finished product, a community or at some point 1 person could just say, here's the type of show I want and get a show generated just for them. Yeah, I think I kind of get why it's complicated. Also the WGA is only for 3 years. So to some extent, whatever promises you get, if things change rapidly, you might not be guaranteed to still have those protections in 3 years.
Nathan Labenz: 33:06 1 very practical question is like, is there any
Sophia Lear: 33:09 way to police this? I mean, it feels like how's anyone, to a certain degree,
Nathan Labenz: 33:16 you guys are maybe working around a single table together in real time. You could spot somebody consulting GPT on the side, but
Sophia Lear: 33:25 right now we don't
Nathan Labenz: 33:26 really have great detectors of GPT output or there's always the possibility that people have tried that. Best I can tell, they're pretty easily broken by either they just don't work that well in the first place, or you can make a sort of superficial edit to what was generated and then it's no longer detected. Or sometimes you get false positives, right? I've seen examples where people are like, I swear to God, I wrote this and this thing thinks it's generated by GPT. So these standards seem like everybody has some incentive to cheat and nobody has any good enforcement mechanism anyway.
Trey Kollmer: 34:04 If someone who hasn't been writing at all and is a producer is suddenly just pumping out content now, it might be pretty obvious as people come up or people are breaking in and as the technology gets better, it's probably really hard. It's probably really hard to tell. If the models are very effective and really start generating value, how cheap is it going to be for you to use something that gives you a multimillion dollar movie idea? And right, you know what I mean? Will these models start cost, will the fine tuned versions of these that are that helpful start costing more? Or is it going to continue to be so cheap for some producer to just be somehow pulling millions of dollars worth of value out of it?
Nathan Labenz: 34:56 Yeah, I don't know. I mean, I generally think everything's going to be cheap in the future. The models are getting cheaper dramatically quickly, like 97% price reduction from 8 months ago to today for kind of equivalent models. And now of course we've added a new high end as well, where basically it's the same price as before, but you get a lot more for that money. And for both medicine and law, I was able to sit down and have pretty much like 45 minute robust, fully coherent consult that I would say rivaled the actual human that I worked with in each case. And so 1 of my big takeaways from that is just, I think the price of everything is kind of going down. Cheap expertise seems to be a big part of the future. And I see this moment with the Writers Guild is so interesting because I'm honestly kind of surprised that the lawyers haven't come out with a strong stance on this yet. Doctors have actually been much more positive on it than I would have expected. On the art side, I kind of think there might be a flip. I'll try this 1 on you. We do video creation for small businesses and it's all AI driven process. We have a great creative team and their whole role and way of working is kind of changing. But it does seem like there's huge value in curation still. You can generate a ton of ideas. I don't see a great prospect for us getting to the point where you ask GPT-4 for 1 hit sitcom idea and it just delivers every time. It's like, yeah, boom, bonafide hit. It seems more likely that it's, you can get a lot of pretty good ideas, but what's really going to resonate, what's really worth developing? What seems like it might be plausible is this kind of tastemaker role. Yeah, curation. It seems like it becomes where a lot of the value is. What do you think about that?
Trey Kollmer: 36:54 This sounds so standard, but being a differentiated brand, I think will be a way to keep capturing value, whether you come up with some great ideas yourself or you're just the person who just knows what people are going to like and can recommend it to them. Yeah, I do think there will be a lot of value in people that are very good at curating, whether it's a smaller number of people that are massively popular and well known as good curators, or it's more like on Instagram how people have their influencers who they trust and you kind of get this more decentralized, a lot of different people recommending things and you stumble upon the people whose tastes you trust. But yeah, curation, the thing is there is a pretty, when it comes to curating scripts, either it's very hard or just humans also aren't that high of a bar. Until recently, the networks would combine, buy 250 ideas every year to put 15, 20 shows on the air. If it becomes much cheaper to make the content overall, then you almost get to like in the limit, it's like a TikTok thing where you can generate so much content and just see what sticks with people and iterate on it.
Nathan Labenz: 38:12 Yeah. And the algorithm is certainly very good at surfacing new little bits of interest in the feed. That's for sure.
Trey Kollmer: 38:18 And I do, it does seem like there will be a period where, as you were saying, this symbiosis where the humans and the models work very well together, it feels like we're almost not quite at the level where the models are generating a ton of help. And then I wonder how long that window will last where the symbiosis is really that helpful. Or if it's like chess where there was a few years where everyone's like, man, it's just human plus machine. That's the way to get the best results. Then a few years later, the human can't really add much.
Nathan Labenz: 38:52 I cite that fairly often myself, because I do think that is likely to prove a mirage in a lot of domains. It feels like a cope right now in a lot of places that people are kind of like, oh, but this is really, it's really together is where we're really going to, and I don't know. It doesn't seem like a safe assumption anyway.
Trey Kollmer: 39:12 Yeah, it feels, it's just very attractive. It's almost just like a storytelling device. It's always like this versus this and then the end, oh, they work together and that's the best result.
Nathan Labenz: 39:23 Yeah. Nature doesn't always work that way. Whatever might make a good happy ending. Yeah, nature is full of non happy endings, unfortunately.
Trey Kollmer: 39:31 When you were saying that, you're not just going to ask it for a hit and it'll give you a hit. So a few weeks ago, I wrote this prompt to try to see if it could just come up with a hit TV idea. And it was, I wanted something kind of timeless. And so I started with, it's the year 2044. But 20 years ago, this show came out that just changed the game and had rich characters, and it's the most beloved show of the past 20 years. Can you please write the magazine review of it from when it came out in the 2024? And it didn't do the best job, but what I thought was funny was because I set the vibe up of the year's 2044, all the suggestions were sci fi. Anyway, that's just a random example of how the details of how you use it and, oh, it's very vibe focused. It takes a lot of cues from the vibe you're putting out there. And if you're trying to get 1 thing specific from it and give it this timeless perspective, but you use the phrase, it's the future, and then it's in this mode where everything's sci fi.
Nathan Labenz: 40:32 What I take away from that is I think just writers are really well positioned probably to be effective users of language models. Maybe it's too early to even know, but do you think that writers are not even really aware of this yet or hostile to it because they feel like it's sort of an assault on the craft? Or is it more of a, hey, this could be cool, but it's definitely not going to work out to our economic advantage. So we got to fight it. Even if it might be cool, we kind of have to fight it out of self interest. What do you think is the vibe that is kind of driving this demand as it exists today?
Trey Kollmer: 41:10 Yeah, I think there's a real diversity of reactions. I think, I mean, on the picket signs, at least the things that make it to the picket signs seem to be dismissive that it's just really bad at writing, but the studios don't care about the quality of the content. So they're just going to use it because it's so much cheaper, it's worth it. Then I think there's some writers who see it as a very real threat and just think we need to fight, it's existential, and you need to really fight tooth or nail for the survival of all of our livelihoods and careers. And then I think there are some other writers who are, my 1 good friend, maybe it's because he's a drama writer. So people who work more on sci fi and got into writing because they're fascinated by ideas like this, I think are, I mean, I'm sure they're all afraid that it gets so much better, we can't contribute as much, but are excited to play with it and see how it works and see what it's good at and what it's not good at. Yeah, and I do think you're playing with it, sometimes you see a little quirk or it surprises you with something like that Ryan Reynolds bit was, oh my gosh, that's really funny. It's kind of cool seeing those sparks where it exceeds your expectations.
Nathan Labenz: 42:22 Yeah. Sparks of AGI, you've probably seen this, but that was the title of the Microsoft paper that basically reported the same. Of all those, it sounds to me like the first 1 sounds ultimately least tenable. I mean, I'm sure that this is kind of the standard thing, right? Whether it's the lawyers, the doctors, the writers, anybody can sort of, for now, take this position that, oh, it's not very good, it's substandard. And for the patient or the client, the defendant, the viewing audience, we have to protect the integrity of the product. That to me seems like it's ultimately going to be least defensible, because even if we're not going to see show writer GPT that can do it all, there's no way it can't help some in various contexts and speed things up or give ideas or what have you. So I guess if I'm advising the strikers, I would probably focus on the latter concerns of just, I think there's also a good question of, do we want to cede control of culture? Even if it's good, there's a question of, is this wise? Even it's funny at first, are we wise to let such an unknown alien force have such a big influence on our collective thoughts and shared meme space and all that sort of thing. I can see an argument that it would not be a good idea, certainly. Nathan Labenz: 42:22 Yeah. Sparks of AGI, you've probably seen this, but that was the title of the Microsoft paper that basically reported the same. Of all those, it sounds to me like the first one sounds ultimately least tenable. I mean, I'm sure that this is kind of the standard thing, right? Whether it's the lawyers, the doctors, the writers, anybody can sort of, for now, take this position that, oh, it's not very good, it's substandard. And for the patient or the client, the defendant, the viewing audience, we have to protect the integrity of the product. That to me seems like it's ultimately going to be least defensible, because even if we're not going to see show writer GPT that can do it all, there's no way it can't help some in various contexts and speed things up or give ideas or what have you. So I guess if I'm advising the strikers, I would probably focus on the latter concerns of just, I think there's also a good question of, do we want to cede control of culture? Even if it's good, there's a question of, is this wise? Even it's funny at first, are we wise to let such an unknown alien force have such a big influence on our collective thoughts and shared meme space and all that sort of thing. I can see an argument that it would not be a good idea, certainly.
Trey Kollmer: 43:45 Yeah, well, just one thing I should say on the first thing you were saying is I'm pretty sure the union is in favor of writers being allowed to use it as a tool. So they don't want to block writers themselves from reaping some benefits from bouncing ideas off of it or using it. But on the second thing, yeah, I think it's, I mean, we've seen it to some extent with social media where you do a bit lose control of the culture generating process or whatever you call it. And it's not really a democratic thing. It's just sort of the accidents of the algorithm or, in TikTok, it's literally, you don't know what's happening. It's just all this content is being generated everywhere in the world and things are being bubbled up to you. Yeah, it might not be a good thing. It seems like a very hard thing to coordinate, to take deliberate control over how the culture will progress and evolve. I mean, in one way, union covered studios are sort of, they have one of the only walls to try and maintain some control in their garden. It's because they can work with all the actors and all the directors and writers that people like. But yeah, if these models get much better, I'm not sure how much longer anyone would be able to. If anyone in the world can be generating the stories they want, or as you said, the different communities, I think it'll be difficult to have any deliberate macro control over where that's going. And then it's an interesting question. If all these communities are using these models, how much is it the humans directing it? And how much is it biased based on how the models they're using and
Nathan Labenz: 45:29 Do you see any possibility for an agreement that could align studio and writers, at least on this AI point? I mean, obviously there's going to be negotiations about percentages and those are, I assume, going to continue to be contentious.
Nathan Labenz: 45:48 But could you envision a sort of
Nathan Labenz: 45:49 AI standard or guidelines or something that might make everybody happy?
Trey Kollmer: 45:57 Well, I first should say that beyond the things I sent you and those 4 points I mentioned that the guild is arguing for, as the general membership, we don't have a ton of insight into the specifics. I guess it makes sense. They can't just let everyone know which points of it they really care about and which they're willing to trade for other things. So, I really don't speak for the negotiation committee or the union as a whole. And that being said, I'm really not sure. I think maybe there's a world where it's some protections against the cheapest versions of studios or producers trying to steal some of the ownership when the writers are putting in the bulk of the work to develop things. One of the other big demands is for minimum staffing on shows so that a lot of the streaming, the writer's rooms have gotten very small because it's an 8 episode order. You have a writer's room open for a shorter amount of time. They brainstorm all the outlines and the ideas, and then the head writers or the show writers just write all the episodes themselves. And there's kind of a synergy between the 2 demands because if you have minimum number of writers per show, we're less worried that the AI is going to kind of destroy the middle of writing jobs. And yeah, and AI becomes a bigger part of it, but the writers have ownership over some of the product from it and kind of we all benefit.
Nathan Labenz: 47:34 Here's one other theory for you. This is very speculative, but I'm very curious as to your take on it. Obviously a lot of people jump to, okay, everything's going to be cheap, but it's also going be hard to make a living. We need some sort of new social contract at the highest level, right? We need a new universal basic income or something like that. So everybody can kind of relax knowing that you don't have, you can eat even if you're not gainfully employed, right? That would be seemingly a nice enhancement for the future. And then I guess people then start to worry, well, what am I going to do? How am I going to use my time? Are people going to find fulfillment and all that? And it seems like to some degree, maybe this is romanticizing, but it seems like to some degree, the writers kind of have one of those jobs that people actually would want to have or would imagine themselves doing, even if they didn't have to work to get paid. So is there a framing of this that's, once this all happens, then we can all be writers and maybe we won't make a ton of money from it, but we could sort of have needs taken care of and then everybody can kind of explore their own creativity.
Trey Kollmer: 48:39 Yeah, I was just talking to a friend about if you picture the long term utopian future where artificial intelligence could on its own just make something, but then there's room for you to kind of just do whatever part of that you want to do. If you're a writer, you can write a script and have the rest generated by the AI. If you're an actor, you can give your performances and have everything else, have the scripts filled in. If you're a director, you can get the script and give feedback on the performances. And I mean, there's a world where, I'm very fortunate. I get a very fun job. I just get to be creative and have fun discussions and make jokes with really smart, talented people. And it would be nice in a world where everyone can kind of have an experience like that. People could form groups and work on scripts together and use the AI to fill in the stuff they can't do. Because right now it's just very expensive to make something. And so it makes a high bar for, it really limits the number of people that get to be creative every day. And in the long term, it's tough to see where, in terms of livelihood, who knows where the value gets captured in the sea where everyone is contributing something different, but everything's getting so cheap. But yeah, when you say that, when you paint that picture, to me, it seems very nice.
Nathan Labenz: 50:08 There's obviously been much attention focused to the importance of diversity and different viewpoints in any sort of collaborative work setting. I've never felt that as viscerally as I have when we're trying to get the AI to do something. It's so often the case that I'm, well, I kind of got it to here. I don't really like the outputs. And then somebody who has a different educational background or any kind of background, different perspective, can often tweak my prompt to make it a little sharper in this particular way. And I've seen it go both directions where somebody will come and be, what we need is a defined rubric and that will make this work and that can work. And then other times, who's much more on the creative side compared to me will be, what we need to do is inspire this to take on a particular style. And that can also really unlock a lot of value. So there is almost a, where I'm leaving this is I almost feel like there is a writer's room with your minimum staffing requirements, perhaps negotiated by the union that becomes a sufficiently kind of diverse team to take maybe full advantage of some of the AI tools, which at least I really do struggle to do totally solo. I very consistently do find better results if I workshop prompts and workflows with others.
Trey Kollmer: 51:34 Yeah. I mean, a lot of times a script comes in, we all read the same script, and we all give our notes to it. Oh, this should be more in this style, and this character should be talking more like this, and this part doesn't make sense. And it really helps having a bunch of different eyes on the same script to throw their thoughts in. I mean, a lot of using the prompting feels like giving notes in a writer's room. I'd probably again go back
Nathan Labenz: 51:58 to the domains where there is a standard of care or something equivalent as compared to domains where there's just not, and everything is kind of its own unique snowflake output. I think it is very plausible that even GPT-4, I don't know that it needs to be a next generation model even, but just maybe refinements to this one, can get you to the point where, you should be able to approach it naively with just an earnest concern, whether it's medical or whatever it may be, and have it kind of, I think it makes sense to aspire if you're the model creator to say, we don't want people to have to get all nuanced in that moment. We just want them to be able to ask their question in their own words, whatever. But if you are trying to engage it as a creative partner, then yeah, I don't really see how that's going to go away anytime soon, right? I mean, even if you were trying to delegate to a human or your human writing partner, there's going to be, you have to give something to get something back that's of any relevance or interest. So I do think there is at least a sub domain of prompt engineering that I don't really see how it goes away.
Trey Kollmer: 53:13 It is not that different sometimes from prompting humans who you're collaborating with.
Nathan Labenz: 53:18 It's getting more and more similar all the time, that's for sure.
Trey Kollmer: 53:22 When you say the standard of care, how much for medical and law do you think it's so much more effective because the success is such a clear thing you're hitting for? And how much do you think it's just areas where they depend on a vast literature, it's just going to over perform on? Because medicine, there's just so much and no human's keeping it all in their head at once. Whereas if you're trying to solve a story problem or find a twist, you almost need no outside literature. It's just pure thinking.
Nathan Labenz: 53:53 I think both definitely matter. We did, I think a prior guest put this really well, the founders of Assistent, really emphasize that for certain tasks, there just is no success criteria. And if you can't engineer one, then you're kind of in a tough spot. Now, you can ultimately measure whether people laugh or not. And we're so early in that. I mean, I think OpenAI, they obviously haven't disclosed it, but they have done deals with data providers. We don't know who, we don't know what they paid, we don't what they got. But it's pretty clear, I think, that they have done deals with data providers. Seeing, for probably multiple reasons, but seeing that the writing's on the wall that there's going to be legal challenges, they want to be able to kind of lift the curtain and be, guess what? We're all legal. Those deals are going to be a huge factor as well. Stability AI is kind of trying to position themselves as, we'll make the model that's just for you using all your content, the Disney model, right? They're going go to Disney and I don't know if this has or hasn't happened, but somebody's definitely going to go to Disney and be, you have all this stuff and it's probably enough to make something that could really work for you, but only you can allow us to do, first of all, even give us access and legally allow us to do that. And then of course you're going to own it. So there's kind of maybe more of a service business model there in model creation. Mosaic is doing a great business in this type of thing as well. But I've really enjoyed this conversation. It's a great surprise to connect with someone who's both in the writing world and very clearly paying a lot of attention to AI. And you've got a great read on the current situation. So I really appreciate the time.
Trey Kollmer: 55:35 This was really fun. I will say on just the last thing you were saying, I do think one of the things the union is fighting for is we have some rights to how our writing is reused. Right now, they're demanding to not be able to train stuff on our material. But to what you're saying, I wonder if there is the writers enjoying some of the upside.
Nathan Labenz: 55:55 Yeah, potentially so.
Trey Kollmer: 55:56 But yeah, this has been so fun. And yeah, definitely going to be listening to more of this podcast.
Nathan Labenz: 56:04 Well, that's flattering. Trey Kollmer, thank you for being part of the Cognitive Revolution. Sophia Lear, welcome to the Cognitive Revolution.
Sophia Lear: 56:12 Thank you for having me. I'm happy to be here.
Nathan Labenz: 56:14 Yeah. So I'm excited to just kind of learn about you and how AI is starting to present itself in your life and work. You're a Hollywood writer. Maybe just give us a little bit of introduction into what that looks like. Then obviously we can get into the WGA strike and the AI related demands and all that good stuff.
Sophia Lear: 56:34 I was currently a co executive producer on Ghosts, which is a sitcom on CBS. And I've been doing mostly network TV shows for a while, which is sort of outside of the experience of a lot of the guild and is a little bit sort of separated from a lot of the issues that the guild is striking about right now. But, yeah, I've been, I worked on the show New Girl and then a lot of other shows that less people have seen. And being in the writers room is just one of the things that I really love and is just so fun to be a part of a comedy writers room. So, yeah, I usually have been these sort of 22 episode network shows. They're 9 months of being in a writer's room from coming up with the episodes to writing them to punching them up and seeing them through production and editing. Yeah, it's been a blast.
Nathan Labenz: 57:38 Bring us then to the present day. I know that there are multiple different issues that are important to the strike. How important do you think these AI issues are, as compared to things that would be, presumably considered kind of more core, like how certain revenues are going to be divvied up?
Sophia Lear: 57:57 I think we don't know, really. I mean, as with AI in so many ways, it just feels so nascent in terms of what the implications are exactly. For me, personally, I feel like if AI can really do a writer's job as well or better than humans, I don't think that there's anything that we can sort of put into contractual language or kind of negotiate to change. So in terms of the strike, I don't know. I personally feel that humans will always have something to offer in terms of writing scripts and telling stories and jokes also, I think, is something that is particularly difficult for AI to understand or to do. But obviously, I could be 100% incorrect about that. Nathan Labenz: 57:57 I think we don't know, really. I mean, as with AI in so many ways, it just feels so nascent in terms of what the implications are exactly. For me, personally, I feel if AI can really do a writer's job as well or better than humans, I don't think that there's anything that we can sort of put into contractual language or negotiate to change. So in terms of the strike, I don't know. I personally feel that humans will always have something to offer in terms of writing scripts and telling stories, and jokes also, I think, is something that is particularly difficult for AI to understand or to do. But obviously, I could be 100% incorrect about that.
Sophia Lear: 59:02 Yeah. I was pretty deep in it as of last summer, doing all kinds of different task automation type things and building apps. And then GPT-4 came online and it was like, woah, this is really next level. It kind of changed my framing because I guess implicitly had been thinking it's probably going to level off, or if I even had that fully consciously in my head. But it clearly not rivaling me in anything. It was very much a tool that with some elbow grease, I could get to do stuff for me. And then GPT-4 is like, man, it's a lot closer. So people are still, I think, pretty divided on is that going to kind of level out just under, and I think this would be in some ways an ideal place for us to hang out for a while, where the AIs kind of max out at just under human level. If we could get something like that, it would be like we could have AI-powered doctor type experiences that could be extremely valuable, extremely good, apply standard practice. But yeah, we wouldn't lose track of, lose control of the future to AI scientists. And probably similarly in culture, there could be maybe a happy place where you get a great writing assistant, AI writing assistant, but it's never truly breaking through to this kind of brilliant next level, deeply resonant storytelling that the best people in their best moments are able to create. I don't know where we're going to be on that. Honestly, if I had to guess, I would say it does seem likely just given how far it's come already that it probably goes even further, but it may not. The argument would be, you're limited by the training data. How are you ever going to get smarter than the smartest stuff in your training data? Maybe that just can't happen, and we kind of max out.
Nathan Labenz: 1:00:58 To your point, I mean, it seems sort of silly to think that we'll always be slightly better. But I mean, just imagining if you had a script that's written by AI and it's computer generated set and computer generated actors, let's say, could that be as moving or impactful if you sort of know that it's all a simulation? I don't know. But I think there's something about it being art. Does that change the question at all in terms of is it good, or does it have the same impact? Like, versus just other tasks that are sort of they are done, and that's the only thing that is necessary for them to be complete.
Sophia Lear: 1:01:59 Yeah. We're very early in trying to sort all that out. There've been some of these stories where somebody will enter an AI-generated piece of visual art into a contest, and there have been a couple instances of the AI art winning the contest. And then you've got this big reveal after the fact and kind of a lot of gnashing of teeth because it's kind of hard to argue that it must have been competitive. When blinded, people accepted it and seem to enjoy it even to the point of awarding it prizes. But then when the reveal happens, they feel often quite upset about that, partly maybe just because they feel like they've been used. But I think it also runs a little bit deeper than that, and it's kind of like it's not just that you kept that secret from us. It's like the whole thing is kind of bothersome to people for, I think, pretty understandable reasons. How much of that kind of discussion or awareness would you say is kind of the norm now in a Hollywood writers room? Is this something that you guys are talking about in the writers room a lot over the last few months?
Nathan Labenz: 1:03:07 I mean, not on a deep level. I think there's been sort of some playing around with it. I mean, it's some sort of naughty but very intriguing thing. And yeah, I first was made aware of ChatGPT from a writer in the writer's room being like, my brother asked the ChatGPT for movie prompts and gave it little ideas, and then it fleshed out the whole story, and they're pretty good. And I mean, currently, I think it's very good at sort of logline level story ideas. And so I don't know. There's been some messing around with it, seeing what it can do, seeing what it's good at, seeing what it's not good at. There's always so much time in the writer's room where it's 12 people and you're just like, what could an episode be about? What could happen? And it just feels like you just have no ideas. And so I think anything where it's sort of like, oh, does the robot have any ideas? I mean, I think that's always just gonna—
Sophia Lear: 1:04:28 I'm kind of really curious as to how people are feeling. Is this a strategic defense of an economic position? Is it a kind of defense of a certain sanctity or purity of the craft?
Nathan Labenz: 1:04:45 You have to have a showrunner that's human. They often in writers' rooms, you just sort of run out. You easily feel just sort of burned out of just raw material, and you'll send off writers, usually lower level writers, of just generate some story areas. Just generate some just raw material that then we can kind of dig into. And I don't know. I could sort of imagine at least it seeming very helpful to writers that you could have AI sort of like, okay, what are story areas that you could do? But to me, you're always gonna need that, especially comedy writers' rooms, just the conversation and the alchemy and the feeding off of each other and the sort of unexpected things that happen when you're working with other people. So I think that to me, sort of fundamentally having some tool that kind of can just generate material that then you can sort of work with sounds at least nominally helpful, but that also I believe in that it really makes a difference when people are talking to each other and generating things together. And what comes out of that is always more exciting and interesting than just someone working on their own. That's sort of predicated on some idea I have of quality or what's good. And if it doesn't matter that something reach that level of quality, it's possible that companies just—and I think that's a little bit what we've seen leading up to the strike with just the conditions for writers not being that great. It's like, well, what is the bar in terms of quality? And maybe just I personally feel like you get better stuff out of people talking and working together, but I could be wrong, and the companies could easily just not care. And it's just sort of like, okay, it's a slightly better cookie that you're making than the cookie I think is a better quality, but people are gonna buy it. So what do they care about it? I think there's a lot of sort of existential fear in the Writers Guild right now. And so I think AI is very much a part of that. So I think there is worry about that. I just tend to feel like if this was a phase of time where we got to be writers, I'm glad to have been a part of it, and I need to get some more skills I can market. I don't know. It just feels so too large to be something to sort of fret about or be concerned about.
Sophia Lear: 1:07:53 People have this notion that there's the studios don't really care. They'll feed crap to the audience. And if they can save a dollar on the writers, either by just having fewer of them or screwing them whichever way or replacing with AI, they don't really care about any of that. They'll just do it and continue to churn stuff out, and the audience will accept it. That's a little tough for me to reconcile with my just general sense that content is fiercely competitive. There's infinite alternatives. It's one of the most liquid markets that we're in. It's I have a hard time imagining how the studios could end up in a spot where they don't care about the quality of the content. But—
Nathan Labenz: 1:08:36 And writers are very prone to feel this way, but one part is that I think sometimes writers at least often feel that the writing aspect of something being good is sort of overlooked. And there's the Lord of the Rings show that's out now that was sort of a real dud, and I think part of that is just not given enough time to write, not just being well thought out as a story. And sometimes I think there's, oh, it's Lord of the Rings, and it's gonna look really cool, and underestimating the fact that if the story isn't good or fleshed out, it's really gonna—people are gonna register it as bad and not even quite know why.
Sophia Lear: 1:09:22 So I wonder with the rise of avatars and deep fake type stuff, are we gonna see an actor guild strike coming up soon? There's all this post-production amazingness. Adobe is just rolling out insane updates left and right where you can just—saw a demo yesterday. You can segment an image and replace the floor, replace the wall, replace the ceiling, replace the guy, add a car. I mean, it's like, woah. And this is taking seconds to minutes and comes out, to my eye, looking very good. So I guess is there sort of a similar fear going on? Is there enough kind of cross-pollination? Do you think that there's ultimately gonna be solidarity across all these different groups that contribute to the creation of content? How do you think that kind of plays out? Because it's happening, as you I'm sure are aware, seemingly at every stack of the content and every layer of the content creation stack.
Nathan Labenz: 1:10:25 Yeah. I think the deepfake—I'm sure that you can have fake actors pretty soon, if not already. I'll be curious to see if the actors guild of SAG strikes. They haven't been on strike in a long time. Directors seem pretty set. They—I might have to get into directing because that seems pretty—they there's just very strict—I mean, the guild is very strong. They have very strict rules about that you have to have a director, and it does seem something that sort of—I mean, I think directing is a managerial role in a lot of ways. You're just sort of head of the production. So that seems—I don't know if the Directors Guild has similar alignment with the other guilds. Yeah. And in terms of—I mean, obviously, CGI was a huge thing and just the technology of how things look. It'll be fascinating to see what happens.
Nathan Labenz: 1:11:31 These disruptions have happened before.
Sophia Lear: 1:11:33 Right? At one point in time, you wanted to shoot Ben-Hur, you had to set up actual horses, actual chariots. There's no other way to do it. Now you don't really have to. You don't have to necessarily visit the Hippodrome in person to create a similar scene. And so we sort of survived that. And a lot of people are kind of like, yeah, it'll happen again. People talk about bank tellers like, oh, when the ATM came out, people were sort of, there goes the bank teller. And what has instead happened is there's actually more bank tellers because it became more profitable to run a bank, and so people opened up a lot more banks. There's way more bank branches than there used to be, and so there's more jobs, and they don't count as much money, but they sell you mortgages or whatever. It's a pretty pat story at this point. Big question, obviously, is does that pattern hold this time? And it's really hard to say. Do we just get another 10x or a 100x more content that all has kind of awesome special effects and is made much more cheaply with smaller teams, but nevertheless, those are jobs, and there's just more content creation happening? Or is there some sort of genuine kind of displacement this time around because it's like people can only watch so many movies or shows? Their time, obviously, remains the one resource that's not changing. And maybe there is—what could we even do with a 100x kind? And people are already watching 8, 9 hours of video a day. They can't really move that much. So what can we really do with 100 times more content? Is it gonna be that people just have that much more choice or they get that much more tailored stuff? You could even imagine individual personalization, obviously, at some point. But yeah, I don't know. The crystal ball gets pretty foggy there.
Nathan Labenz: 1:13:34 You could imagine that everybody is just sort of watching the things that are tailored to them, and this idea that part of TV or part of movies is a sharedness to it is just an idea that will go away. I personally think that something would be lost, but it already sort of has been. I mean, there's just the sort of show that everybody—but it feels nice to me when it's like Succession and you get to talk about it.
Sophia Lear: 1:14:03 Any other thoughts on your mind or topics you want to cover?
Nathan Labenz: 1:14:06 Anything that can make jobs done cheaper at the same quality, business is always going to reward that. AI should just cure cancer and do stuff like that and let us keep writing stories. That's my pitch to AI if it's listening.
Sophia Lear: 1:14:27 I'll pass that along. Sophia Lear, thank you for being part of the Cognitive Revolution. Garrett Schabb, welcome to the Cognitive Revolution.
Garrett Schabb: 1:14:36 Cool. Thanks for having me.
Sophia Lear: 1:14:38 We are talking in the midst of a Hollywood writers' strike, and you are party to that. So I guess maybe just for starters, can you kind of give us a little bit of context on who you are and how this whole thing has come about in recent weeks? Nathan Labenz: 1:14:38 We are talking in the midst of a Hollywood writers' strike, and you are party to that. So I guess maybe just for starters, can you give us a little bit of context on who you are and how this whole thing has come about in recent weeks?
Garrett Schabb: 1:14:52 Of course. So I'm a TV and film writer. I've been in the writer's field since 2012. I've worked in comedy and drama, and the last show I wrote for was a show called Suits, a legal show with famously a Meghan Markle fame. And in the last, this is going into our third week of striking. And I think for listeners, one thing that I find super interesting and lucky is that we're talking about how ChatGPT and other large language models could be used to sort of take our jobs. I think we're very lucky that this negotiation cycle happened when it did. If ChatGPT or these large language models had been released a year ago or 2 years ago, I think the studios would have already implemented them and we wouldn't have had this moment to get out ahead of the conversation. I think we would be way behind it. So just pure luck. I think we're talking about these issues now, we have some voice in the matter rather than being immediately phased out.
Nathan Labenz: 1:15:54 How would you articulate to somebody outside the industry what you view to be really at the heart of the issue?
Garrett Schabb: 1:16:03 The Writers Guild position is that AI should not be used to generate literary material of any kind. So a studio couldn't ask a large language model for a short story about time travel and then bring that short story to a writer and ask them to adapt that piece of intellectual property as a television show or a movie. We also don't want it to generate drafts of scripts that then a writer could be employed or engaged at a discounted rate to sort of edit it or punch it up, as we say. It seems like from, this is our dystopian, but I think also realistic view, is that the studios see a future in Hollywood that involves as little human contribution as possible. And I think for, this is complex obviously, but I think for professionals working in a field that is centered around telling human stories, I think we find that fundamentally, I don't think any writer is opposed to change, but we think that the approach that the studios have obviously is geared towards profit, less towards telling and enriching human stories than phasing out an entire workforce. It's obviously something we want to prevent. We've seen a trend and we've connected that with the emergence of these large language models, and we can very easily see how these studios would use that technology to continue that trend to a pretty logical endpoint.
Nathan Labenz: 1:17:52 Is this something that is happening now? Are there any examples of shows that people have tried to assign the creator role to an AI or just not have one?
Garrett Schabb: 1:18:03 So there's been some rumors, and I'm not sure if they've been corroborated or not, that right now, many of these studios are, because obviously, there is some copyright law that I just don't think has been worked out at all yet on this. It's so cutting edge, but what the studios sounds like they're doing is feeding works, literary works in the public domain into, I think they may ask their own proprietary models at this point, and asking these models to generate feature length scripts for books that are in the public domain. And they might be saying, make The Count of Monte Cristo into a modern day female driven action thriller. Go.
Nathan Labenz: 1:18:53 Have you gone in and played around with this stuff? What has your ChatGPT personal exploration looked like?
Garrett Schabb: 1:19:01 Yeah, I've used it extensively. First, just as a layperson, I want to understand the technology as much as I can for non working purposes, but I have started to use it a fair amount in my own personal writing. I would say I use it almost as a writer's assistant because I'm especially now during the strike, I'm working alone from home. I find it's great at helping me organize, keep track of my ideas, and use as a sounding board that I sort of consider it to be if I was employing somebody just out of college and I wasn't asking them for pitches on ideas. I was just asking them to hear me, recite back with clarity what they heard me say, hold on to certain ideas so that I can bring them back at a moment's notice. I'm outlining a new pilot episode for a new concept of the television show right now, and I will just talk to it about the characters, their relationship, their dynamic, or say the outline of the episode, and I'll just sort of say to the model, keep this outline at hand so if I want to make changes to it, you can just spit it back out to me. And if I say, oh, actually, I don't want that thing to happen in act 3 that then rippled into act 4, can you remove those 2 scenes and replace them with this and this? That's sort of how I'm using it right now, and I would say it's been pretty, it's useful and also more unreliable in some ways that I wasn't expecting, but I've sort of drawn a moral line in that I'm not asking it for pitches or ideas. That's me personally. I've sort of developed a code with myself, but I've also asked it more on an experimental level to generate ideas just to see what it's capable of, and I've found that almost without exception, it reverts to pretty cliche stuff. Even when I spent a lot of time trying to explain how novel concepts are arrived at and trying to coax it to come up with new stuff, it still feeds me super cliche ideas. So I don't know if I'm missing out on anything novel by maintaining that personal code.
Nathan Labenz: 1:21:37 Yeah, interesting. It sounds like ChatGPT is kind of the main.
Garrett Schabb: 1:21:42 Yeah, GPT-4. I will generate an outline for this pilot episode that I have, broken down into acts, and then under each act, I'll have maybe 4 or 5 major scenes, and I'll type 2 or 3 sentences for each scene, and this will be maybe under 1,000 words. And put this into the chat and I'll say, keep this outline handy. And anytime I type outline, spit this back exactly as I wrote it to me so I can make changes, and then make changes as I dictate. What I've been noticing, at first it was subtle and then it sort of started to get this weird feedback loop, it would start spitting me back the outline, and then it would make subtle changes or omissions. So it would subtly change the profession of my main character or forget a scene that I had put in there. I mean, I understand that, for example, these things aren't great at doing math, at least right now. That's something that's lagging behind. It's interesting to me that you're asking it simply to just regurgitate exactly what I had put in maybe 3 exchanges earlier, and it's failing at that. And it actually added an extra headache for me because if you go through 9 or 10 or 12 iterations of an outline and then you realize, wait a second, where's that scene that I had imagined 5 hours ago that I relied on this thing to keep track of, and now it's gone, and now I have to scroll back up thousands and thousands of words, it diminishes how useful the tool is for me. I'm still using it in this way, but it was just an interesting thing that I noticed that I was not expecting.
Nathan Labenz: 1:23:31 Yeah, I wonder if there might be some techniques that could help you with that. One of the really common ones is known as the format trick. And you basically sort of say, use this format. There's some other, again, they're very flexible, right? So you can experiment and find your own format rhythm. But I tend to do things like use this format, and then I'll have deliberately sort of odd tags sometimes. I'll use an XML, HTML like tag to be, script scene 1 within brackets, and then an end tag below that. And then dressing that up a little bit more, oddly structured, but telling it use this format. At a minimum, I would think you should be able to get consistent structure back. I would not expect that you should see scenes getting dropped. What I would guess might be happening there is also, you could just be a little bit more explicit about some things. I always also recommend positive framing instead of negative framing. So sometimes people will say, do not change the structure of the outline or whatever. And sometimes I think the later models are getting better with this too, but definitely have seen in the past problem with negation where, it's kind of the, I forget, don't think of the white rhino or whatever. And then it's all you can think of is the white rhino. There's a little bit of that effect with the model. So sometimes I recommend to people, instead of saying, don't do X, say in a positive way, what you do want it to do. You must always return exactly the structure and may modify these aspects. I bet you can get over that hump to at least get the consistency of structure that you want back. Simple things too might be interpreted a little differently than you mean them. So if you say rewrite this script or whatever, exactly what it's, you may be thinking in your head a certain definition of rewrite, and it may have a little more expansive definition of rewrite. So, we can workshop this separately. I think maybe folks will find this interesting, but I bet there is at least to the point where you'd get the right number of scenes back. I think you could get there.
Garrett Schabb: 1:25:52 What's interesting is that all of us as writers have voices of other writers in our heads. So when I'm writing something, I'm trying to, whether it's consciously or not, I'm trying to write like Jesse Armstrong, the creator of Succession, or Vince Gilligan, the creator of Breaking Bad. Those people's voices are definitely in my head. And obviously, it's an interesting question. Do by invoking David Ogilvy or by invoking Vince Gilligan and using it in the ChatGPT prompt, what does that artist owe, if anything, for their contribution to the voice that the model has in its head? And we haven't litigated that yet. I'm sure you've talked to some people about that, but it's another big question that I think a lot of writers, especially the high profile ones, are worried about, which is they've worked their entire careers, they've been extremely lucky, and they're extremely talented just to have their voice, their unique voice, automated. But I think the other scary thing is that for the idea of new voices coming along, I'm not sure if we enter this near future where most written content for TV or films are generated by a model, I'm not sure we're going to see new voices come. I think we're going to see this endless whirlpool of regurgitation of what's already out there. So I think it's a fear and a unique question about who is owed what when you say write like David Ogilvy. I don't know if you have a take on that.
Nathan Labenz: 1:27:40 I don't feel like I have the final answer. I do think it is definitely a fair question. What definitely doesn't cut it in my mind is a notion that is, when you do hear this stuff, you'll hear people say, well, people really do the same thing. You said a version of that, right? Everything's a remix, and we're all taking inspiration from everywhere. And so therefore, it's no big deal if the AI does the same. I don't buy that argument at all, because I think one of the most important things to keep in mind about these technologies is that they are fundamentally alien. They are not human. They do, I mean, people go the other error on the other side of this too and say they don't understand anything. And I would push back on that and say, well, I think they actually do understand things. They may not understand them in quite the same way that we do, but we are starting to elucidate the mechanisms of understanding, some of which turn out to be quite weird in comparison to how we feel like we understand, but there is some understanding there. But it doesn't follow from, in my mind at all, from this fact that humans take inspiration from other humans, that we therefore have to allow AIs to run wild and do whatever they want to do. They're fundamentally different beasts, and I don't think it is at all obvious what the rights, decisions around that should be. I do think it's really, it does get into, in my mind, definitely much more gray area when you start to name specific people. Because you're talking in an agreement with studios, but then there's also the broader societal question of, all the TikTok creators can go to ChatGPT and ask for David Ogilvie, and they can just put stuff out there and nobody's even negotiating over that right now. Nathan Labenz: 1:27:40 I don't feel like I have the final answer. I do think it is definitely a fair question. What definitely doesn't cut it in my mind is a notion that is, when you do hear this stuff, you'll hear people say, well, people really do the same thing. You kind of said a version of that, right? Everything's a remix, and we're all taking inspiration from everywhere. And so therefore, it's no big deal if the AI does the same. I don't buy that argument at all, because I think one of the most important things to keep in mind about these technologies is that they are fundamentally alien. They are not human. They do, I mean, people go the other error on the other side of this too and say, they don't understand anything. And I would push back on that and say, well, I think they actually do understand things. They may not understand them in quite the same way that we do, but we are starting to elucidate the mechanisms of understanding, some of which turn out to be quite weird in comparison to how we feel like we understand, but there is some understanding there. But it doesn't follow from, in my mind at all, from this fact that humans take inspiration from other humans, that we therefore have to allow AIs to kind of run wild and do whatever they want to do. They're fundamentally different beasts, and I don't think it is at all obvious what the rights, decisions around that should be. I do think it's really, it does get into, in my mind, definitely much more gray area when you start to name specific people. Because you're talking in an agreement with studios, but then there's also the broader societal question of, all the TikTok creators can go to ChatGPT and ask for David Ogilvie, and they can just put stuff out there and nobody's even negotiating over that right now.
Garrett Schabb: 1:29:26 Yeah. And I think there are probably different thresholds for, and maybe these are arbitrary but different thresholds for different types of media. I would say, it's interesting. You plugged in, give me some ideas in voice of this person, but you sort of made a contract with yourself that nothing that you get back is going to see the outside world. Even if it gives you some good ideas, you're going to then filter those ideas through your own human prism. I think I still think that there's something morally okay with that. But the idea, for me at least, of taking the ideas that come straight from the model, especially if you're asking for ideas based on other famous writers and then just using them whole cloth. There just feels something wrong to me about that. I do wonder, obviously, we don't know how these things are going to, how they're going to get better in terms of storytelling. And I wonder, is there going to be a world where they get amazing but in different ways than human storytelling is? I wonder if there's going to be sort of two types of storytelling in the future, right? And maybe we find that it tells stories that are just different and weird in a way that we never saw coming, and maybe there's a place for human stories and AI stories. But we're not asking the AI to sort of create or masquerade as human storytelling, but tell us stories that we never would have thought of. And as somebody, as a creative at heart, I would love to see those stories. I don't necessarily want it to take my job away, so I'm sort of talking through both sides of my mouth. But at that point, you're almost verging into gaming, which is a completely legitimate and amazing form of storytelling in itself. So I think that there are delineations between different types of storytelling. I do believe that there's something about singular, fixed stories that are going to remain powerful. Who read the same book or watch the same movie or television series or listen to the same album and are able to discuss and bond over what that means to them individually and collectively, I think that that type of fixed storytelling should remain. And I think there's a real cultural power to stuff like that. But of course, I think gaming is awesome, and I'm curious about it. There's a thread by one of the former director of the Screen Actors Guild, and she wrote this long thread about the studios wanting to create these basically choose your own adventure movies where everyone has their own action movie built for them, where they are placed into, if they're in the Fast and Furious, they can be put into the driver's seat next to Vin Diesel, and the whole movie can be generated, bespoke, in a moment's notice for them, and they get to see themselves or whatever, their son as the main character of Fast and the Furious. I can see that, just seeing where the technology is going, I can see that. I'm not sure that type of storytelling has a ton of power behind it culturally. For my personal taste, it feels like a novelty that I'm not sure people will, it feels like a bauble to me. I could be totally wrong, but I don't think that that has as much cultural power as, wow, we all watched the same movie. We have different thoughts, but we can bond as humans over it. And don't we tell stories to find solidarity as humans? Stories resonate. A single story like Lord of the Rings, that resonates with millions, billions of people over generations. It doesn't change. It doesn't make adjustments for whoever's reading or experiencing it. It is this fixed story that we all see something that we recognize in. So it's trying to, it's an antidote to the sort of loneliness of life, and I worry that if we're all just getting this sugar rush from these personalized stories, it loses the core quality of art or storytelling. If you look at the response to shows like Succession, which is written with a full writer's room over the course of a year, right? A traditional writer's room. The response to the depth and the texture and the unpredictable nature and the humanity of a show like that that's written by multiple people over a significant period of time doesn't compete with these shorter, faster shows that you're seeing on other platforms. And I can see that gulf growing. So people saying, what happened to the stories that we used to like that felt real and not flat? So I think laypeople assume that studios are incentivized to just find the best writing and create conditions that will tell the best stories. It's just not the case. They don't, especially now, as they're all publicly traded, most publicly traded companies, that's not paramount in what they're looking for. And I think just the shift to AI could accelerate that, where it's just about getting out content. And if we land on something that's good or three-dimensional or moving, that's an afterthought. I think that's the real fear, is that we, writers, we have this financial incentive, but there's also a big moral thing here that maybe a lot of other unions don't have. We're trying to defend the sanctity of really good storytelling, and that sounds a little woo woo, right, to say that we have this moral element that we're fighting for. And but I think we, we accept that that's, we accept that mantle. At least I do.
Nathan Labenz: 1:36:23 If there were a universal basic income, and we had decoupled the right to eat and to have basic needs met from jobs, would that change your thinking on this, if at all?
Garrett Schabb: 1:36:40 I'm all for decoupling those things. So you told me that in 10 years we'll be at that place, I might still be telling stories and making films with my friends, and it'd probably be a lot easier to do that. I think the question is, are we going to make it through to that point? So I'm a big fan of the book Fully Automated Luxury Communism that I'm sure you've read. Aaron Bastani, I think is the name of the author. And the central question of that book is we can imagine that future. It's easy to imagine a future where those things are decoupled. What's a little harder to imagine is making it through late stage capitalism, making it through these technologies reaching the point where they can automate lots of jobs, but we don't reach that real post scarcity where the decoupling happens because of this manipulated or artificial scarcity so that the people who control the technology can be enriched by it. I'm all for making it through that gauntlet. I just don't know if we will. And so I think to be conservative or cautious as a guild, we have to react as if that world is not going to exist. But yes, if that world does exist, I would happily, happily give up being paid to write stories and the idea of having a job and making money from writing a story. I could trade that for everyone having a basic income and a high standard of living. Yeah, give me that over my stupid writing job for sure.
Nathan Labenz: 1:38:33 Yeah, that's a great utopian vision perhaps for us to end on. They're in short supply. Maybe that's one other thing I could leave you with is, I think a positive vision for the future is maybe the thing that is most needed today. The technology is coming online fast. The upside potential of that in my mind is not just clear, but clearly transformative. And we've got dystopian stories and scenarios in abundance, but very few kind of compelling articulations of what a genuine positive future might be. Arguably no better group to try to solve that problem of scarcity for us than the writers. So if you want to take that challenge on, I'm looking for inspiring visions of the future right now.
Garrett Schabb: 1:39:32 I will say we are a cynical bunch, and we've only become a little more cynical during these last three weeks, but we'll try to accept that challenge.
Nathan Labenz: 1:39:40 Garrett Schabb, thank you for being part of the Cognitive Revolution.
Garrett Schabb: 1:39:43 Thank you so much for having me.