Is AI Stalling Out? Cutting Through Capabilities Confusion, w/ Erik Torenberg, from the a16z Podcast
Erik Torenberg joins to debate if AI progress is stalling, addressing arguments from Cal Newport. The episode counters this, highlighting significant qualitative advances in AI and arguing for continued rapid progress in the coming years.
Watch Episode Here
Listen to Episode Here
Show Notes
Erik Torenberg joins to debate whether recent developments suggest AI progress is slowing down or stalling, addressing arguments from Cal Newport and others. Nathan counters this view by highlighting significant qualitative advances, including 100X context window expansion, real-time interactive voice, improved reasoning, vision, and AI's growing contributions to hard sciences. The conversation then covers AI's impact on the labor market, the potential for AI protectionism, and concerns about recursive self-improvement. This episode argues that AI capabilities are not stopping, with frontier developers seeing a clear path for continued rapid progress in the coming years.
Sponsors:
Tasklet:
Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai
Linear:
Linear is the system for modern product development. Nearly every AI company you've heard of is using Linear to build products. Get 6 months of Linear Business for free at: https://linear.app/tcr
Shopify:
Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive
CHAPTERS:
(00:00) About the Episode
(03:49) Is AI Slowing Down?
(09:15) Newport's Scaling Law Theory
(16:56) The Value of Reasoning (Part 1)
(17:17) Sponsors: Tasklet | Linear
(19:57) The Value of Reasoning (Part 2)
(24:52) Explaining GPT-5's Vibe Shift
(31:39) AI's Impact on Jobs (Part 1)
(36:50) Sponsor: Shopify
(38:47) AI's Impact on Jobs (Part 2)
(44:08) Recursive Self-Improvement via Code
(49:35) The Future of Engineers
(53:24) Economic Pressure vs. Protectionism
(58:29) Progress Beyond Language Models
(01:07:11) The State of AI Agents
(01:19:19) China's Open Source Models
(01:29:44) A Positive Vision Forward
(01:37:39) Outro
SOCIAL LINKS:
Website: https://www.cognitiverevolution.ai
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathanlabenz/
Youtube: https://youtube.com/@CognitiveRevolutionPodcast
Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
Transcript
Introduction
Hello, and welcome back to the Cognitive Revolution!
Today, I'm excited to share a conversation I recently had with Erik Torenberg, which originally aired on the a16z Podcast feed, about whether recent developments suggest that AI progress is slowing down or even stalling out.
We begin with a discussion of recent arguments from Cal Newport, from the New Yorker, where he argued that "progress on large language models has stalled," and from a recent episode of the podcast Lost Debates, where he highlighted, among other things, the negative impact that AI can have on student learning.
While I absolutely share Cal's concerns about AI-enabled bad habits like "cognitive offloading", and more broadly whether AI will ultimately prove to be good or bad for humanity overall, I think it's important to separate the question of impact from the analysis of capabilities advances.
And on the capabilities point specifically, while OpenAI's naming decisions have caused a lot of confusion, I point to the 100X expansion of context windows, the introduction of real-time interactive voice mode, the improvement in reasoning capabilities and resulting IMO gold medals, the dramatic improvements in vision and tool use (including general computer use), and the fact that today's frontier models are beginning to contribute to the hard sciences in meaningful ways to argue that we have in fact seen qualitative advances.
And in fact, while the capabilities frontier does remain jagged and embarrassing failures are still fairly common, every aggregate measure, from the volume of tokens processed, to the size of task that AIs can handle, to the revenue growth we are seeing across the industry, suggests that, overall, progress remains pretty much right on trend.
From there, we go on to discuss:
- My expectations for AI's impact on the labor market, and why I think that some verticals, like accounting, where most people only want to buy what they absolutely have to have, will be most disrupted, while areas like software engineering might, at least for a while, maintain employment by dramatically expanding output;
- How advances in multimodality, as they move beyond text & image and begin to deeply integrate reasoning models with specialist models in domains like drug development, materials science, and robotics, suggest a sort of base case for superintelligence;
- The possibility of AI protectionism, such as the proposal to ban self-driving cars that Senator Josh Hawley recently floated, and a broader AI culture war;
- My concerns about recursive self-improvement and companies tipping into that regime without adequate controls;
- How much it matters that China now produces the best open source models, and why, though chip export controls probably will have a real impact on China's AI sector, I remain skeptical that they will make the world a better place;
- And finally, considering that the scarcest resource is a positive vision for the future, why I encourage everyone, regardless of their technical ability or cognitive profile, to get involved in shaping the future of AI development.
The bottom line for me is that AI capabilities advances have not stopped, and I don't expect them to stop for the foreseeable future.
Frontier developers report a clear line of sight to at least two more years of similar progress, and their optimism is supported by the last 5 years of history, which shows that, again and again, AI weaknesses that were expected to be very hard to overcome have in practice been solved through continued scaling plus relatively minor tweaks to the core paradigm.
By 2027 or 2028, I think labor market impacts will be undeniable, and it will be clear to all that AI models are making important contributions to science. And while there are indeed some investment bubble dynamics, and some new model releases between now and then will surely disappoint, the most dangerous thing we could do is convince ourselves that we don't have anything major to worry about.
With that, I hope you enjoy this discussion about recent AI capabilities advances, and what's coming next, with Erik Torenberg, from the a16z Podcast.
Main Episode
Erik Torenberg: Nathan, I'm stoked to have you on the a16z podcast for the first time. Obviously, we've been podcast partners for a long time, with you leading Cognitive Revolution. Welcome.
Nathan Labenz: It's great to be here. Thank you.
Erik Torenberg: So we were talking about Cal Newport's podcast appearance on Lost Debates, and we thought it was a good opportunity to just have this broad conversation and really entertain this sort of question of, is AI slowing down? So why don't you sort of steelman some of the arguments that you've heard on that side from him or more broadly, and then we can sort of have this broader conversation.
Nathan Labenz: Yeah, I mean, I think for one thing, it's really important to separate a couple different questions with respect to AI. One would be, is it good for us right now, even? And is it going to be good for us in the big picture? And then I think that is a very distinct question from: are the capabilities that we're seeing continuing to advance, and at a pretty healthy clip? So I actually found a lot of agreement with the Cal Newport podcast that you shared with me when it comes to some of the worries about the impact that AI might be having, even already, on people. You know, he looks over students' shoulders and watches how they're working, and finds that basically they are using AI to be lazy, which is, you know, no big revelation. I think a lot of teachers would tell you that. Shocker. He puts it in maybe more dressed-up terms: people are not even necessarily moving faster, but they're able to reduce the strain that the work they're doing places on their own brains by kind of trying to get AI to do it. And, you know, that continues. And I think he's been a very valuable commentator on the impact of social media. Certainly, I think we all should be mindful of, how is my attention span evolving over time? And am I getting weak or, you know, averse to hard work? Those are not good trends if they are showing up in oneself. So I think he's really right to watch out for that sort of stuff. And then, as we've covered in many conversations in the past, I've got a lot of questions about what the ultimate impact of AI is going to be. And I think he probably does, too. But then it's a strange move, from my perspective, to go from "there are all these problems today, and maybe in the big picture" to "but don't worry, it's flatlining; it's not really going anywhere further than this." Or "scaling has kind of petered out," or "we're not going to get better AI than we have right now." Or, maybe the most easily refutable claim from my perspective, "GPT-5 wasn't that much better than GPT-4." And that I think is where I really was like, whoa, wait a second. I was with you on a lot of things, and some of the behaviors that he observes in the students, I would cop to having exhibited myself. When I'm trying to code something these days, a lot of times I'm like, oh man, can't the AI just figure it out? I really don't want to have to sit here and read this code and figure out what's going on. It's not even about typing the code anymore. I'm way too lazy for that, but it's even about figuring out how the code is working. Can't you just make it work? Try again. Just try again. I do find myself at times falling into those traps. But I would say a big part of the reason I can fall into those traps is because the AIs are getting better and better, and increasingly it's not crazy for me to think that they might be able to figure it out. So that's my kind of first slice at the takes that I'm hearing. There's almost like a two-by-two matrix maybe that one could draw up, where it's like: do you think AI is good or bad, now and in the future? And do you think it's a big deal or not a big deal? I think it's both on the good and bad side. I definitely think it's a big deal.
The thing that I struggle to understand the most is the people who kind of don't see the big deal that seems pretty obvious to me, especially when it comes, again, to the leap from GPT-4 to GPT-5. Maybe one reason that's happened a little bit is that there were just a lot more releases between GPT-4 and 5. So what people are comparing to is something like o3, which only came out a few months before GPT-5. Whereas with GPT-4, it was shortly after ChatGPT, and it was all this moment of, whoa, this thing is exploding onto the scene. A lot of people were seeing it for the first time. And if you look back to GPT-3, there's a huge leap. I would contend that the leap is similar from GPT-4 to 5. These things are hard to score. There's no single number that you could put on it. Well, there's loss, but of course, one of the big challenges is: what exactly does a loss number translate into in terms of capabilities? So, you know, it's very hard to describe what exactly has changed, but we could go through some of the dimensions of change if you want to, and enumerate some of the things that I think people maybe are starting to, or have come to, take for granted and kind of forget. Like that GPT-4 didn't have a lot of the things that were sort of expected in the GPT-5 release because we'd seen them in 4o and o1 and o3. All those releases, you know, maybe boiled the frog a little bit when it comes to how much progress people perceived in this last release.
Erik Torenberg: Well, yeah, a couple reactions. So one is, to complicate your two-by-two even further, in the sense of: is it bad now versus is it bad later? Cal, who we both admire a lot, by the way, he's a great guy and a valuable contributor to the thought space, but he's not as concerned about the sort of future AI concerns that the AI safety folks and many others are concerned about. He's more concerned about what it means for cognitive performance and development now, in the same way that he's worried about social media's impact. And you think that's a concern, but nowhere near as big a concern as what to expect in the future. And then he also presents sort of this theory of why we shouldn't worry about the future, because it's slowing down. And why don't we just share how we interpreted his history, which, as I interpret it, was this idea of, hey, the simplistic version is: we figured out this approach such that if you throw an order of magnitude more data into the model, it gets better. And so the difference between GPT-2 and GPT-3, and then GPT-3 and GPT-4, was significant, but then it hit sort of significantly diminishing returns, and we're not seeing it with GPT-5, and thus we don't have to worry anymore. How would you edit that characterization of his view of the history, and then we can get into the differences between 4 and 5?
Nathan Labenz: The scaling law idea: it's definitely worth taking a moment to note that it is not a law of nature. We do not have a principled reason to believe that scaling is some law that will hold indefinitely. All we really know is that it has held through quite a few orders of magnitude so far. I think it's really not clear yet to me whether the scaling laws have petered out, or whether we have just found a steeper gradient of improvement that is giving us better ROI on another front that we can push on. So they did train a much bigger model, which was GPT-4.5, and that did get released. And of course there's a million benchmarks, whatever. The one that I zero in on the most, in terms of understanding how GPT-4.5 relates to both o3 and GPT-5 (and OpenAI is obviously famously terrible at naming, we can all agree on that, and I think a decent amount of this confusion and sort of disagreement actually does stem from unsuccessful naming decisions), is this one benchmark called SimpleQA, which is really just a super long tail trivia benchmark. It really just measures: do you know a ton of esoteric facts? And they're not things that you can really reason about. You either just know or don't know these particular facts. The o3 class of models got about a 50% on that benchmark, and GPT-4.5 popped up to like 65%. So in other words, of the things that were not known to the previous generation of models, it picked up a third of them. Now, there's obviously still two thirds more to go, but I would say that's a pretty significant leap. These are super long tail questions. I would say most people would get close to a zero. Being like the person sitting there at trivia night who maybe gets one a night is kind of what I would expect most people to do on SimpleQA. And that checks out, right? Obviously, the models know a lot more than we do in terms of facts and just general information about the world. So at a minimum, you can say that GPT-4.5 knows a lot more. You know, a bigger model is able to absorb a lot more facts. Qualitatively, people also said, in some ways, maybe it's better for creative writing. You know, it was never really trained with the same power of post-training that GPT-5 has had. And so we don't really have an apples to apples comparison, but people did still find some utility in it. I think maybe the way to understand why they've taken that offline and gone all in on GPT-5 is just that that model's really big. It's expensive to run. The price was way higher, a full order of magnitude plus higher than GPT-5 is. And it's maybe just not worth it for them to consume all the compute that it would take to serve that. And maybe they just find that people are happy enough with the somewhat smaller models for now. I don't think that means that we will never see a bigger GPT-4.5-scale model with all that reasoning ability. And I would expect that that would deliver more value, especially if you're really going out and trying to do esoteric stuff that's pushing the frontier of science or what have you. But in the meantime, the current models are really smart and you can also feed 'em a lot of context. That's one of the big things that has improved so much over the last generation. When GPT-4 came out, at least the version that we had as public users was only 8,000 tokens of context, which is like 15, you know, pages of text. So you were limited.
You couldn't even put in like a couple papers. You would be overflowing the context. And this is where prompt engineering initially kind of became a thing. It was like, man, I've really only got such a little bit of information that I can provide. I gotta be really careful about what information to provide, lest I overflow the thing and it just can't handle it. As context windows got extended, there were also versions of models that could nominally accept a lot more, but couldn't really functionally use it. You know, they could sort of fit it at the API call level, but the models would lose recall, or they'd sort of unravel as they got into longer and longer context. Now you have obviously much longer context, and the command of it is really, really good. So you can take dozens of papers on the longest context windows with Gemini, and it will not only accept them, but it will do pretty intensive reasoning over them, and with really high fidelity to those inputs. So that skill, I think, does kind of substitute for the model knowing facts itself. You could say, geez, let's try to train all these facts into the model; we're going to need, you know, a trillion or, who knows, 5 trillion, however many trillion parameters to fit all these super long tail facts. Or you could say, well, a smaller thing that's really good at working over provided context can, if people take the time or go to the trouble of providing the necessary information, kind of access the same facts that way. So you have a kind of choice: do I wanna push on size and bake everything into the model, or do I wanna just try to get as much performance as I can out of a smaller, tighter model? And it seems like they've gone that way, and I think basically just because they're seeing faster progress on that gradient. In the same way that the models themselves are always, in the training process, taking a little step toward improvement, the outer loop of the model architecture and the nature of the training runs and where they're going to invest their compute is also kind of going in that direction. And they're always looking at, well, we could scale up over here and maybe get this kind of benefit, or we could do more post-training here and get this kind of benefit. And it just seems like we're getting more benefit from the post-training and the reasoning paradigm than from scaling. But I definitely don't think either one is dead. We haven't seen yet what 4.5 with all that post-training would look like.
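To make the arithmetic here concrete, here is a minimal sketch in Python using the rough figures quoted above (about 50% on SimpleQA for the o3-class models, about 65% for GPT-4.5, an 8,000-token context for GPT-4 at launch, and roughly a million tokens for the longest Gemini context windows); these are the conversational ballpark numbers, not official benchmark results.

```python
# Rough arithmetic for the claims above (approximate figures quoted in the
# conversation, not official benchmark numbers).

o3_simpleqa = 0.50      # ~50% on SimpleQA for the o3-class models
gpt45_simpleqa = 0.65   # ~65% for GPT-4.5

# Of the questions the previous generation missed, what fraction did GPT-4.5 pick up?
gap_closed = (gpt45_simpleqa - o3_simpleqa) / (1.0 - o3_simpleqa)
print(f"Share of previously missed questions answered: {gap_closed:.0%}")  # ~30%, i.e. about a third

# Context window expansion: GPT-4 at launch (public tier) vs. a long-context Gemini model
gpt4_launch_context = 8_000        # tokens
gemini_long_context = 1_000_000    # tokens, order of magnitude
print(f"Context expansion factor: ~{gemini_long_context / gpt4_launch_context:.0f}x")

# 8,000 tokens is roughly 6,000 English words, i.e. on the order of 15 pages
words = 8_000 * 0.75
print(f"~{words:.0f} words, ~{words / 400:.0f} pages at ~400 words per page")
```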
Erik Torenberg: Yeah. And so, I mean, one of the things that you mentioned Cal's analysis missed was that it way underestimated the value of extended reasoning, right? And so what would it mean to fully appreciate that?
Nathan Labenz: Well, I mean, a big one from just the last few weeks was that we had an IMO gold medal with pure reasoning models, with no access to tools, from multiple companies. And, you know, that is night and day compared to what GPT-4 could do with math, right? And these things are really weird. Nothing I say here should be intended to suggest that people won't be able to find weaknesses in the models. I still use a tic-tac-toe puzzle to this day, where I take a picture of a tic-tac-toe board where one of the players has made a wrong move that is not optimal and thus allows the other player to force a win. And I ask the models if somebody can force a win from this position. Only very recently, only the last generation of models, are starting to get that right some of the time. Almost always before, they were like, tic-tac-toe is a solved game, you can always get a draw, and they would wrongly assess my board position as one where the player can still get a draw. So there's a lot of weird stuff, right? The jagged capabilities frontier remains a real issue, and people are going to find peaks and valleys for sure. But GPT-4, when it first came out, couldn't do anything approaching IMO gold problems. It was still struggling on like high school math. And since then, we've seen this progression from high school math all the way up through IMO gold. Now we've got the FrontierMath benchmark, which I think is now up to like 25%. It was 2% about a year ago, or even a little less than a year ago, I think. And we also just today saw something where, and I haven't absorbed this one yet, but somebody just came out and said that they had solved a canonical, super challenging problem that no less than Terence Tao had put out. And this happened in, I think, days or weeks of the model running, versus the 18 months that it took professional mathematicians, and not just any professional mathematicians, but really the leading minds in the world, to make progress on these problems. So yeah, I think those are really hard jumps in capabilities to miss. I also think a lot about the Google AI co-scientist, which we did an episode on; you can check out the full story on that if you want to. But they basically just broke down the scientific method into a schematic. And this is a lot of what happens when people build on these models. It's one thing to say the model will respond with thinking and it'll go through a reasoning process, and the more tokens it spends at runtime, the better your answer will be. That's true. But then you can also build scaffolding on top of that and say, okay, well, let me take something as broad and aspirational as the scientific method, and let me break that down into parts. Okay, there's hypothesis generation, then there's hypothesis evaluation, then there's experiment design, there's literature review, there's all these parts to the scientific method. What the team at Google did is created a pretty elaborate schematic that represented their best breakdown of the scientific method, optimized prompts for each of those steps, and then gave this resulting system, which is scaling inference in two ways now (it's both the chain of thought, and it's also all these different angles of attack structured by the team), legitimately unsolved problems in science.
And in one particularly famous, kind of notorious case, it came up with a hypothesis, which it wasn't able to verify because it doesn't have direct access to actually run the experiments in the lab, but it came up with a hypothesis for an open problem in virology that had stumped scientists for years. And it just so happened that they had also recently figured out the answer, but not yet published their results. And so there was this confluence where the scientists had experimentally verified it, and Gemini, in the form of this AI co-scientist, came up with exactly the right answer. And these are things that literally nobody knew before. And GPT-4 just wasn't doing that. I mean, these are qualitatively new capabilities. That thing, I think, ran for days. It probably cost hundreds of dollars, maybe into the thousands of dollars, to run the inference. That's not nothing, but it's also very much cheaper than years of grad students. And if you can get to that caliber of problems and actually get good solutions to them, you know, what would you be willing to pay for that kind of thing? So, yeah, I don't know. That's probably not a full appreciation; we could go on for a long time. But I would say, in summary, GPT-4 was not able to push the actual frontier of human knowledge. To my knowledge, it never discovered anything new. It's still not easy to get that kind of output from a GPT-5 or a Gemini 2.5 or, you know, a Claude Opus 4 or whatever. But it's starting to happen sometimes, and that in and of itself is a huge deal.
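As a rough illustration of the scaffolding idea described above, a co-scientist-style system can be thought of as a fixed pipeline of prompted stages, each spending its own reasoning budget. This is a hypothetical sketch, not Google's actual AI co-scientist code; the stage names and the call_llm helper are placeholders.

```python
# Hypothetical sketch of a co-scientist-style pipeline: the scientific method broken
# into prompted stages, each calling a reasoning model. `call_llm` stands in for
# whatever model API you use; it is not a real library call.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a reasoning model (wire up your own client here)."""
    raise NotImplementedError

STAGES = [
    ("literature_review", "Summarize what is already known about: {problem}"),
    ("hypothesis_generation", "Given this background:\n{context}\nPropose several candidate hypotheses for: {problem}"),
    ("hypothesis_evaluation", "Critique and rank these hypotheses for plausibility and testability:\n{context}"),
    ("experiment_design", "For the top-ranked hypothesis below, design an experiment that could falsify it:\n{context}"),
]

def co_scientist(problem: str) -> str:
    context = ""
    for name, template in STAGES:
        prompt = template.format(problem=problem, context=context)
        context = call_llm(prompt)   # each stage spends its own chain-of-thought budget
                                     # and the next stage reasons over its output
    return context                   # final artifact: a ranked hypothesis plus an experiment design
```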
Erik Torenberg: Well, then how do we explain the bearishness, or the kind of vibe shift, around GPT-5 then? One potential contributor is this idea that if the improvements are at the frontier, not everyone is working with advanced math and physics day-to-day, and so maybe they don't see the benefits in their day-to-day lives in the same way that the jumps in ChatGPT were obvious and shaped the day-to-day.
Nathan Labenz: Yeah, I mean, I think a decent amount of it was that they kind of ****** ** the launch, you know, simply put, right? They were, like, tweeting Death Star images, which Sam Altman later came back and said, no, you're the Death Star, I'm not the Death Star. But I think people thought that the Death Star was supposed to be the model. Generally, the expectations were set extremely high, and the actual launch itself was just technically broken. So, a lot of people's first experiences of GPT-5: they've got this model router concept now, where I think another way to understand what they're doing here is they're trying to own the consumer use case. And to own that, they need to simplify the product experience relative to what we had in the past, which was like, okay, you got GPT-4 and 4o and 4o mini and o3 and o4-mini and other things; 4.5 was in there at one point. You got all these different models. Which one should I use for which? It's very confusing to most people who aren't obsessed with this. And so one of the big things they wanted to do was just shrink that down to: just ask your question and you'll get a good answer, and we'll take that complexity on our side as the product owners. Interestingly, and I don't have a great account of this, but one thing you might want to do is kind of merge the models and just have the model itself decide how much to think. Or maybe even have the model itself decide how many of its experts, if it's a mixture of experts architecture, it needs to use. Or, there have been a bunch of different research projects on skipping layers of the model; if the task is easy enough, you could skip a bunch of layers. So you might have hoped that you could genuinely, on the backend, merge all these different models into one model that would dynamically use the right amount of compute for the level of challenge that a given user query presented. It seems like they found that harder to do than they expected. And so the solution that they came up with instead was to have a router, where the router's job is to pick: is this an easy query, in which case we'll send it to this model? Is it medium? Is it hard? And I think they really just have two models behind the scenes, so I think it's just really easy or hard. Certainly the graphs that they showed basically showed the kind of with-thinking and without-thinking versions. The problem at launch was that that router was broken. So all of the queries were going to the dumb model, and a lot of people literally just got bad outputs, which were worse than o3 because they were getting non-thinking responses. And so the initial reaction was like, okay, this is dumb, and that, you know, traveled really fast. I think that kind of set the tone. My sense now is that, as the dust has settled, most people do think that it is the best model available. And, you know, on things like the infamous METR task-length chart, it is the best. You know, we're now over two hours, and it is still above the trend line. So if you just said, you know, do I believe in straight lines on graphs or not, and how should this latest data point influence whether I believe in these straight lines on, you know, logarithmic scale graphs? It shouldn't really change your mind too much. It's still above the trend line. I talked to Zvi about this, Zvi Mowshowitz, legendary infovore and AI industry analyst, on a recent podcast too, and kind of asked him the same question.
Like, why do you think even some of the most plugged-in, sharp minds in the space have seemingly pushed timelines out a bit as a result of this? And his answer was basically just: it resolved some amount of uncertainty. You know, you had an open question of, maybe they do have another breakthrough, maybe it really is the Death Star. If they surprise us on the upside, then all these short timelines... yeah, I guess one way to think about it is that the distribution was sort of broad in terms of timelines. And if they had surprised on the upside, it might have narrowed in toward the front end of the distribution. And if they surprised on the downside, or even just were purely on trend, then you would take some of your distribution from the very short end of the timelines and kind of push it back toward the middle or the end. And so his answer was like, AI 2027 seems less likely, but AI 2030 seems basically no less likely, maybe even a little more likely, because some of the probability mass from the early years is now sitting there. So it's not that I think people are moving the whole distribution out super much. I think it may be more just kind of shrinking; it's getting a little tighter because it's maybe not happening quite as soon as it seemed like it might have been. But I don't think too many people, at least that I think are really plugged in on this, are pushing out too much past 2030 at all. And by the way, obviously there's a lot of disagreement. The way I kind of have always thought about this sort of stuff is: Dario says 2027, Demis says 2030, I'll take that as my range. So coming into GPT-5, I was kind of in that space. And now I'd say, well, I don't know, what cards does Dario have up his sleeve? They just put out Opus 4.1, and in that blog post, they said, we will be releasing more powerful updates to our models in the coming weeks. So they're due for something pretty soon. Maybe they'll be the ones to surprise on the upside this time, or maybe Google will be. I wouldn't say 2027 is out of the question. But yeah, I would say 2030 still looks just as likely as before. And again, from my standpoint, that's still really soon, you know? So if we're on track, whether it's '28, '29, '30, I don't really care. I try to frame my own work so that I'm kind of preparing myself and helping other people prepare for what might be the most extreme scenarios, and it's kind of one of these things where, if we aim high and we miss a little bit and we have a little more time, great. I'm sure we'll have plenty of things to do to use that extra time to be ready for whatever powerful AI does come online. But yeah, I guess my worldview hasn't changed all that much as a result of this summer's developments.
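A minimal sketch of the router concept described above: a cheap classifier decides whether a query goes to a fast non-thinking model or a slower reasoning model. The model names and helper functions here are illustrative assumptions, not OpenAI's actual implementation.

```python
# Hypothetical sketch of a query router: a cheap classifier decides whether a
# request needs the reasoning model or the fast non-thinking model. This is an
# illustration of the concept, not OpenAI's actual GPT-5 routing logic.

def classify_difficulty(query: str) -> str:
    """Placeholder: in practice this would be a small trained classifier."""
    hard_signals = ["prove", "debug", "step by step", "optimize", "why"]
    return "hard" if any(s in query.lower() for s in hard_signals) else "easy"

def generate(model: str, query: str) -> str:
    """Placeholder for the actual model call."""
    return f"[{model}] response to: {query}"

def route(query: str) -> str:
    if classify_difficulty(query) == "hard":
        return generate("reasoning-model", query)   # slower, spends thinking tokens
    return generate("fast-model", query)            # cheap, instant answer

# At launch the router reportedly misrouted everything to the fast path, which is
# why early users saw non-thinking answers even on hard queries.
print(route("Why does my recursive function overflow the stack? Walk through it step by step."))
```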
Erik Torenberg: Anecdotally, I don't hear as much about AI 2027 or Situational Awareness to the same degree. I do talk to some people who've just moved it a few years back, to your point. But yeah, Dwarkesh had his whole thing around, you know, he still believes in it, but maybe because of this gap in continual learning, or something to that effect, maybe it's just going to be a bit slower to diffuse. And, you know, METR's paper, as you mentioned, showed that engineers are less productive. And so maybe there's less of a sort of concern around people being replaced en masse in the next few years. I think we spoke maybe a year ago about this, and I think you said something like 50% of jobs. I'm curious if that's still your litmus test, or how you think about it.
Nathan Labenz: Well, for one thing, I think that METR paper is worth unpacking a little bit more, because this was one of those things that was... and I'm a big fan of METR, and I have no shade on them, because I do think: do science, publish your results. That's good. You don't have to make every experimental result and everything you put out conform to a narrative. But I do think it was a little bit too easy for people who wanted to say, oh, this is all nonsense, to latch onto that. And, you know, again, there's something there that I would kind of put in the Cal Newport category too, where for me, maybe the most interesting thing was that users thought they were faster when in fact they seemed to be slower. So that sort of misperception of oneself, I think, is really interesting. Personally, I think there are some explanations for that, which include hitting go on the agent, going to social media and scrolling around for a while, and then coming back. The thing might have been done for quite a while by the time I get back. So honestly, one really simple thing, and we're starting to see this in products, one really simple thing that the products can do to address those concerns is just provide notifications: the thing is done now. So, you know, stop scrolling and come back and check its work. In terms of just clock time, it would be interesting to know what applications they had open. Maybe they took a little longer with Cursor than doing it on their own, but how much of the time was Cursor the active window, and how much of it was some other random distraction while they were waiting? But I think a more fundamental issue with that study, which again wasn't really about the study design, but just in the interpretation and kind of digestion of it, is that some of these details got lost. They basically tested the models, or the product Cursor, in the area where it was known to be least able to help. This study was done early this year, so it was done with models from, depending on how you want to count, a couple of releases ago, with code bases that are large, which again strains the context window. And, you know, that's one of the frontiers that has been moving. Very mature code bases with high standards for coding, and developers who really know their code bases super well, who've made a lot of commits to these particular code bases. So I would say that's basically the hardest situation that you could set up for an AI, because the people know their stuff really well. The AI doesn't. The context is huge. The people have already absorbed that through working on it for a long time. The AI doesn't have that knowledge. And again, these were models from a couple of generations ago. And then a big thing, too, is that the people were not very well versed in the tools. Why? Because the tools weren't really able to help them yet. I think the sort of mindset of the people that came into the study in many cases was, well, I haven't used this all that much because it hasn't really seemed to be super helpful. They weren't wrong in that assessment, given the limitations. And you could see that in terms of some of the instructions and the help that the METR team gave to people. One of the things that is in the paper is that if they noticed you weren't using Cursor super well, they would give you some feedback on how to use it better.
One of the things that they were telling people to do is: make sure you @-tag a particular file to bring it into context for the model, so that the model has the right context. And that's literally the most basic thing that you would do in Cursor. You know, that's the thing you would learn in your first hour, your first day of using it. So it really does suggest that these were, while very capable programmers, basically mostly novices when it came to using the AI tools. So I think the result is real, but I would be very cautious about generalizing too much there. In terms of, I guess, what else? What was the other question? What is the expectation for jobs? I mean, we're starting to see some of this, right? No less than Marc Benioff has said that they have been able to cut a bunch of headcount because they've got AI agents now that are responding to every lead. Klarna, of course, has said very similar things for a while now. They also, I think, have been a little bit misreported, in terms of, oh, they're backtracking off of that because they're actually going to keep some customer service people, not none. And I think that's a bit of an overreaction. They may have some people who are just insistent on having a certain experience, and maybe they want to provide that. And that makes sense. I think you can have a spectrum of service offerings to your customers. I actually just vibe coded up a pricing page for a SaaS company that was like: basic level with AI sales and service is one price; if you want to talk to human sales, that's a higher price; and if you want to talk to human sales and support, that's a third, higher price. And so literally that might be what's going on, I think, in some of these cases, and it could very well be a very sensible option for people. But I do see Intercom (I've got an episode coming up with them); they now have this Fin agent that is solving like 65% of customer service tickets that come in. So, you know, what's that gonna do to jobs? Are there really like three times as many customer service tickets to be handled? Like, I don't know. I think there's kind of a relatively inelastic supply. Maybe you'll get somewhat more tickets if people expect that they're going to get better, faster answers, but I don't think we're going to see three times more tickets. By the way, that number was like 55% three or four months ago. So, you know, as they ratchet that up, the ratios get really hard, right? At half ticket resolution, in theory, maybe you get some more tickets, maybe you don't need to adjust headcount too much. But when you get to 90% ticket resolution, are you really gonna have 10 times as many tickets, or 10 times as many hard tickets that the people have to handle? It seems just really hard to imagine that. So I don't think these things go to zero, probably, in a lot of environments, but I do expect that you will see significant headcount reduction in a lot of these places. And the software one is really interesting, because the elasticities are really unknown. You know, you can potentially produce X times more software per user or, you know, per Cursor user or per developer at your company, whatever.
But maybe you want that. Maybe there is no limit, or maybe the regime that we're in is such that if there's 10 times more productivity, that's all to the good, and we still have just as many jobs because we want 10 times more software. I don't know how long that lasts. Again, the ratios start to get challenging at some point. But yeah, I think the old Tyler Cowen thing comes to mind: you are a bottleneck, you are a bottleneck. I think more often the question is: are people really trying to get the most out of these things, are they using best practices, and have they really put their minds to it or not? Often the real barrier is there. I've been working a little bit with a company that is doing basically government doc review; I'll abstract a little bit away from the details. Really gnarly stuff: scanned documents, handwritten filling out of forms. And they've created this auditor AI agent that just won a state-level contract to do the audits on like a million transactions a year of these packets of documents, again, scanned, handwritten, all this kind of crap. And they just blew away the human workers that were doing the job before. So where are those workers going to go? Like, I don't know. They're not going to have 10 times as many transactions; I can be pretty confident in that. Are there going to be a few still that are there to supervise the AIs and handle the weird cases and answer the phones? Sure. Maybe they won't go anywhere. The state may do a strange thing and just have all those people sit around because they can't bear to fire them. Who knows what the ultimate decision will be. But I do see a lot of these things where I'm just like: when you really put your mind to it and you identify what would create real leverage for us, can the AI do that? Can we make it work? You can take a pretty large chunk out of high volume tasks very reliably in today's world. And so the impacts, I think, are starting to be seen there on a lot of jobs. Humans, I think, are the bottleneck; the leadership, maybe, or the will, in a lot of places might be the bottleneck. And software might be an interesting case where there is just so much pent up demand, perhaps, that it may take a little longer to see those impacts, because you really do want, you know, 10 or 100 times as much software.
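The ticket-resolution ratios mentioned above are worth making explicit. A small illustrative calculation, with assumed round numbers rather than Intercom's actual data: if an AI agent resolves a fraction d of incoming tickets, total ticket volume has to grow by a factor of 1/(1 - d) just to keep the human-handled workload flat.

```python
# Illustrative deflection arithmetic (round numbers, not Intercom's actual data):
# if the AI resolves a fraction `deflection` of tickets, total ticket volume must
# grow by 1 / (1 - deflection) just to keep the human-handled workload constant.

def volume_growth_needed(deflection: float) -> float:
    return 1.0 / (1.0 - deflection)

for deflection in (0.50, 0.65, 0.90):
    print(f"{deflection:.0%} resolved by AI -> ticket volume must grow "
          f"{volume_growth_needed(deflection):.1f}x to keep human workload flat")

# 50% -> 2.0x, 65% -> ~2.9x, 90% -> 10.0x: the ratios get hard to imagine,
# which is why headcount reductions become likely as resolution rates climb.
```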
Erik Torenberg: Yeah, let's talk about code, because it's where Anthropic made a big bet early on, perhaps inspired by the sort of automated-researcher, recursive self-improvement future they desire. And then we saw OpenAI make moves there as well. Why don't we flesh that out, or talk a little about what inspired that and where you see that going?
Nathan Labenz: You know, utopia or dystopia is really the big question there, I think, right? I mean, it's maybe one part technical, two parts social, in terms of why code has been so focal. The technical part is that it's really easy to validate code. You generate it, you can run it. If you get a runtime error, you can get the feedback immediately. It's, you know, somewhat harder to do functional testing. Replit recently, just in the last like 48 hours, released V3 of their agent. V2 of the agent would, in addition to writing code, try to make your app work; it could go for minutes and in some cases generate dozens of files. I've had some magical experiences with that, where I was like, wow, you just did that whole thing in one prompt and it worked, amazing. Other times it will code for a while and hand it off to you and say, okay, does it look good? Is it working? And you're like, no, it's not, I'm not sure why, and you get into a back and forth with it. But the difference between V2 and V3 is that instead of handing the baton back to you, it now uses a browser and the vision aspect of the models to go try to do the QA itself. So it doesn't just say, okay, hey, I tried my best, wrote a bunch of code, let me know if it's working or not. It takes that first pass at figuring out if it's working. And, you know, that really improves the flywheel: just how much you can do, how much you can validate, how quickly you can validate it. The speed of that loop is really key to the pace of improvement. So it's a problem space that's pretty amenable to those sorts of rapid-flywheel techniques. Second, of course, they're all coders, right, at these places. So they want to solve their own problems. That's very natural. And third, I do think, on the sort of social vision and competition side, who knows where this is all going, they do want to create the automated AI researcher. That's another data point, by the way; this was from the o3 system card. They showed a jump from like low-to-mid single digits to roughly 40% in the share of PRs actually checked in by research engineers at OpenAI that the model could do. So prior to o3, not much at all, you know, low-to-mid single digits. As of o3, 40%. I'm sure those are the easier 40%, or whatever; again, there will be caveats to that. But you're entering maybe the steep part of the S-curve there. And that's presumably pretty high-end work. You know, I don't know how many easy problems they have at OpenAI, but presumably not that many relative to the rest of us that are out here making generic web apps all the time. So at 40%, you gotta be starting to, I would think, get into some pretty hard tasks, some pretty high value stuff. At what point does that ratio really start to tip, where the AI is doing the bulk of the work? GPT-5 notably wasn't a big update over o3 on that particular measure. And also, going back to the SimpleQA thing, GPT-5 is generally understood to not be a scale-up relative to 4o and o3, and you can see that in the SimpleQA measure: it basically scores the same on these long tail trivia questions. It's not a bigger model that has absorbed lots more world knowledge. Cal is right, I think, in his analysis that it's post-training.
But that post-training, you know, is potentially entering the steep part of the S-curve when it comes to the ability to do even the kind of hard problems that are happening at OpenAI on the research engineering front. And, you know, yikes. So I'm a little worried about that, honestly. The idea that we could go from these companies having a few hundred research engineers to having, you know, an unlimited number overnight, and what that would mean in terms of how much things could change, and also just our ability to steer that overall process. I'm not super comfortable with the idea of the companies tipping into a recursive self-improvement regime, especially given the level of control and the level of unpredictability that we currently see in the models. But that does seem to be what they are going for. So in terms of why: I think this has been the plan for quite some time. You remember that leaked Anthropic fundraising deck from maybe two years ago, where they said that in 2025 and 2026, the companies that train the best models will get so far ahead that nobody else will be able to catch up. I think that's kind of what they meant. I think that they were projecting then that in the '25, '26 timeframe, they'd get this automated researcher. And once you have that, how's anybody who doesn't have that going to catch up with you? Now, obviously, some of that remains to be validated, but I do think they have been pretty intent on that for a long time.
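The generate-and-validate loop that makes code such a good domain for this flywheel can be sketched roughly as follows. This is a hypothetical illustration, not Replit's or any lab's actual pipeline; the ask_model helper is a placeholder for whatever model API is used.

```python
# Minimal sketch of a generate-run-verify loop, the pattern that makes code such
# a good domain for AI: the environment itself gives fast, unambiguous feedback.
# `ask_model` is a placeholder, not a real API call.
import subprocess
import tempfile

def ask_model(prompt: str) -> str:
    """Placeholder for a code-generation model call."""
    raise NotImplementedError

def run_candidate(code: str) -> tuple[bool, str]:
    """Execute the candidate in a subprocess and capture any runtime error."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(["python", path], capture_output=True, text=True, timeout=30)
    return result.returncode == 0, result.stderr

def solve(task: str, max_attempts: int = 5) -> str | None:
    feedback = ""
    for _ in range(max_attempts):
        code = ask_model(f"Task: {task}\nPrevious error (if any):\n{feedback}")
        ok, stderr = run_candidate(code)
        if ok:
            return code      # validated: the program ran without error
        feedback = stderr    # feed the traceback back into the next attempt
    return None              # hand back to the human after repeated failures
```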
Erik Torenberg: Five years from now, are there more engineers or fewer engineers?
Nathan Labenz: I tend to think fewer. You know, already, if I just think about my own life and work, I'm like, would I rather have a model, or would I rather have a junior marketer? I'm pretty sure I'd rather have the model. Would I rather have the models or a junior engineer? I think I'd probably rather have the models in a lot of cases. I mean, it obviously depends on the exact person you're talking about, but truly, forced choice, today. And then you've got cost adjustment as well, right? I'm not spending nearly as much on my Cursor subscription as I would be on an actual human engineer. So even if they have some advantages... and I also have not scaffolded, I haven't gone full co-scientist, right, on my Cursor problems. That's another interesting thing: you start to see why folks like Sam Altman are so focused on questions like energy and the $7 trillion build-out, because these power law things are weird. And, you know, to get incremental performance for 10x the cost is weird. It's definitely not the kind of thing that we're used to dealing with. But for many things, it might be worth it, and it still might be cheaper than the human alternative. You know, it's like, well, Cursor costs me whatever, 40 bucks a month or something. Would I pay 400 for, you know, however much better? Yeah, probably. Would I pay 4,000 for however much better? Well, it's still a lot less than a full-time human engineer. And the costs are obviously coming down dramatically too, right? That's another huge thing. GPT-4 was way more expensive; it's like a 95% discount from GPT-4 to GPT-5. That's no small thing, right? I mean, the apples-to-apples comparison is a little bit hard because the chain of thought does spit out a lot more tokens, and so you give back a little: on a per-token basis it's dramatically cheaper, but more tokens generated does eat back into some of that savings. But everybody seems to expect the trends will continue in terms of prices continuing to fall. And so, you know, how many more of these price reductions do you need to then be able to do the power law thing a few more times? I guess I think fewer. And I think that's probably true even if we don't get full-blown AGI that's better than humans at everything. I think you could easily imagine a situation where, of however many million people are currently employed as professional software developers, some top tier of them that do the hardest things can't be replaced. But there's not that many of those. And then there's the real rank and file, you know, the people that over the last 20 years were told, learn to code, that'll be your thing. The people that are the really top, top people didn't need to be told to learn to code, right? It was just their thing. They had a passion for it. They were amazing at it. It wouldn't shock me if we still can't replace those people in three, four, five years' time. But I would be very surprised if you can't get your nuts-and-bolts web app, mobile app type things spit out for you by an AI system for far less, far faster, and probably honestly with significantly higher quality and less back and forth than with your kind of middle-of-the-pack developer in that timeframe.
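To put rough numbers on the pricing point above: treat these as approximate list prices and assumptions that should be checked against current pricing pages. The original GPT-4 (8K) was on the order of $30 per million input tokens and $60 per million output tokens, while GPT-5 is on the order of $1.25 and $10; if the reasoning model emits several times more output tokens per answer, the effective per-answer discount shrinks but stays large.

```python
# Rough cost comparison (prices are approximate list prices, treated here as
# assumptions; check current pricing pages before relying on them).

gpt4_in, gpt4_out = 30.0, 60.0    # ~$ per million tokens, original GPT-4 (8K)
gpt5_in, gpt5_out = 1.25, 10.0    # ~$ per million tokens, GPT-5

def answer_cost(price_in: float, price_out: float, in_toks: int, out_toks: int) -> float:
    return price_in * in_toks / 1e6 + price_out * out_toks / 1e6

# Same 2K-token prompt; assume the reasoning model emits 5x more output tokens
# (chain of thought plus answer) than the older model did.
old = answer_cost(gpt4_in, gpt4_out, in_toks=2_000, out_toks=500)
new = answer_cost(gpt5_in, gpt5_out, in_toks=2_000, out_toks=2_500)
print(f"GPT-4 style answer: ${old:.4f}, GPT-5 style answer: ${new:.4f}")
print(f"Effective per-answer discount: {1 - new / old:.0%}")   # roughly 70% under these assumptions
```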
Erik Torenberg: One thing I do want to call out: there are definitely people who have concerns about progress moving too fast, but there's also concern, and maybe it's rising, about progress not moving fast enough, in the sense that, you know, a third of the stock market is the Mag Seven. AI capex is, you know, over 1% of GDP. And so we are kind of relying on some of this progress in order to sustain our economy.
Nathan Labenz: Yeah. And, you know, another thing that I would say has been slower to materialize than I would have expected is AI culture wars, or the sort of ramping up of protectionism for various industries. We just saw Josh Hawley, I don't know if he introduced a bill or just said he intends to introduce a bill, to ban self-driving cars nationwide. You know, God help me, I've dreamed of self-driving cars since I was a little kid. Truly, sitting at red lights, I used to be like, there's got to be a way. I think we took a Waymo together. Yeah. And it's so good. And the safety, you know, whenever people want to argue about jobs, it's going to be pretty hard to say 30,000 Americans should die every year so that people's incomes don't get disrupted. It seems like you have to be able to get over that hump and say, like, saving all these lives, if nothing else, is just really hard to argue against. But we'll see; I mean, he's not without influence, obviously. So, yeah, I mean, I am very much on team abundance, and, you know, my old mantra, I've been saying this less lately, but: adoption accelerationist, hyperscaling pauser. The tech that we have, you know, could do so, so much for us even as is. If progress stopped today, I still think we could get to 50 to 80% of work automated over the next five to 10 years. It would be a real slog. You'd have a lot of, you know, co-scientist type breakdowns of complicated tasks to do. You'd have a lot of work to do to go sit and watch people and say, why are you doing it this way? What's going on here? What's this? You handled this one differently; why did you handle that one differently? All this tacit knowledge that people have, and the kind of procedural know-how, the instincts that they've developed over time: those are not documented anywhere. They're not in the training data, so the AIs haven't had a chance to learn them. But again, when I say no breakthroughs, I'm still allowing there for fine-tuning of things, for applying capabilities that we have to particular problems they haven't been applied to yet. So just going through the economy and sitting with people and being like, why are you doing this? Let's document this. Let's get the model to learn your particular niche thing. That would be a real slog. And in some ways, I kind of wish that were the future that we were going to get, because it would be methodical, kind of one foot in front of the other, no quantum leaps. It would probably feel pretty manageable, I would think, in terms of the pace of change. Hopefully, society could absorb that and kind of adapt to it as we go, without it being, from one day to the next, like, oh my God, all the drivers are getting replaced. Or that one may be a little slower, because you do have to have the actual physical build-out. But in some of these things, customer service could get ramped down real fast, right? Like if a call center has something that they can just drop in, and this thing now answers the phones and talks like a human and has a higher success rate and scales up and down. One thing we've seen at Waymark, small company, right? We've always prided ourselves on customer service. We do a really good job with it. Our customers really love our customer success team. But I looked at our Intercom data, and it takes us like half an hour to resolve tickets. We respond really fast; we respond in like under two minutes most of the time.
But when we respond, you know, two minutes is still long enough that the person has gone on to do something else, right? It's the same thing as with the Cursor thing that we were talking about earlier, right? They've tabbed over to something else. So now we get the response back to them in two minutes, but they are doing something else. So then they come back at, you know, minute six or whatever, and they respond. But now our person has gone and done something else. So the resolution time, even for simple stuff, can easily be half an hour. And the AI, you know, it just responds instantly, right? So you don't have to have that kind of back and forth. You're just in and out. So I do think some of these categories could see really fast changes. Others will be slower. But yeah, I mean, I kind of wish we had that slower path in front of us. My best guess, though, is that we will probably continue to see things that will be significant leaps, and that there will be actual disruption. Another one that's come to mind recently: maybe we can get the abundance department on these new antibiotics. Have you seen this development?
Erik Torenberg: No, tell us about it.
Nathan Labenz: I mean, it's not a language model. I think that's another thing people really underappreciate. Or you could kind of look back at GPT-4 to 5 and then imagine a pretty easy extension of that. So GPT-4, initially when it launched, we didn't have image understanding capability. They did demo it at the time of the launch, but it wasn't released until some months later. The first version that we had could understand images, could do a pretty good job of understanding images, still with, like, jagged capabilities and whatever. Now, with the new Nano Banana from Google, you have this basically Photoshop-level ability to just say, hey, take this thumbnail. Like, we could take our two feeds right now, you know, take a snapshot of you, a snapshot of me, put 'em both into Nano Banana and say, generate the thumbnail for the YouTube preview featuring these two guys, put 'em in the same place, same background, whatever, and it'll mash that up. You can even have it put text on top: "progress since GPT-4," or whatever we wanna call it, "GPT-5 is not a bust." And it'll spit that out. And you see that it has this deeply integrated understanding that bridges language and image. And that's something that it can take in, but now it's also something that it can put out, all as part of one core model with a single unified intelligence. That, I think, is going to come to a lot of other things. We're at the point now with these biology models and materials science models where they're kind of like the image generation models of a couple of years ago. They can take a real simple prompt and they can do a generation, but they're not deeply integrated, where you can have a true conversation back and forth and have that kind of unified understanding that bridges language and these other modalities. But even so, it's been enough for this group at MIT to use some of these relatively narrow, purpose-built biology models and create totally new antibiotics. New in the sense that they have a new mechanism of action; they're affecting the bacteria in a new way. And notably, they do work on antibiotic-resistant bacteria. These are some of the first new antibiotics we've had in a long time. Now they're going to have to go through the whole process, and when I say get the abundance department on it, it's like, where's my Operation Warp Speed for these new antibiotics? We've got people dying in hospitals from drug-resistant strains all the time. Why is nobody crying about this? I think one of the things that's happening to our society in general is just that so many things are happening at once. It's like the flood-the-zone thing, except there are so many AI developments flooding the zone that nobody can even keep up with all of them. And that's coming from me, by the way, too. I would say two years ago, I was pretty in command of all the news, and a year ago, I was starting to lose it. And now I'm like, wait a second, there were new antibiotics developed? I'm kind of missing things just like everybody else, despite my best efforts. But the key point there is that AI is not synonymous with language models. There are AIs being developed with pretty similar architectures for a wide range of different modalities. We have seen this play out with text and image, where you had your text-only models and you had your image-only models, and then they started to come together, and now they've come really deeply together. And so I think you're going to see that across a lot of other modalities over time as well.
And there's a lot more data there. I don't know what it even means to "run out of data" in the reinforcement learning paradigm; there are always more problems, right? There's always something to go figure out, always something to go engineer. The feedback is starting to come from reality. That was one of the things Elon talked about at the Grok 4 launch: maybe we're running out of problems we've already solved, and we only have so much of that sitting around in inventory. We only have one internet; we only have so much of that stuff. But over at Tesla, over at SpaceX, they're solving hard engineering problems on a daily basis, and those seem to be never-ending. So when we start to give the next generation of models these power tools, the same power tools that the professional engineers at those companies are using to solve those problems, and the AIs start to learn those tools and start to solve previously unsolved engineering problems, that's going to be a really powerful signal they can learn from. And now, again, fold in those other modalities: the ability to have sort of a sixth sense for the space of materials science possibilities. When you can bridge or unify the understanding of language and those other things, I think you start to have something that looks kind of like superintelligence, even if it's not necessarily able to write poetry at a superhuman level. Its ability to see in these other spaces is going to be a truly superhuman thing that I think will be pretty hard to miss.
Erik Torenberg: You said that one thing Cal's analysis missed is the lack of appreciation for non-language modalities and how they drive some of the innovations you're talking about.
Nathan Labenz: Yeah, I think people are often just equating the chatbot experience with AI broadly. And that conflation will not last too much longer, because we are going to see self-driving cars, unless they get banned, and that's a very different kind of thing. Talk about impact on jobs, too: there are something like four or five million professional drivers in the United States. That is a big deal. I don't think most of those folks are going to be super keen to learn to code, and even if they do learn to code, I'm not sure how long that's going to last. So that's going to be a disruption. And then general robotics is not that far behind. This is one area where I do think China might actually be ahead of the United States right now, but regardless of whether that's true, these robots are getting really quite good. They can walk over all these obstacles. A few years ago they just couldn't do that at all; they could barely balance themselves and walk a few steps under ideal conditions. Now you've got things where you can literally do a flying kick and it'll absorb your kick, shrug it off, right itself, and continue on its way. Super rocky, uneven terrain — all these sorts of things are getting quite good. The same thing is working everywhere. There's always a lot of detail to the work, so it's sort of an inside-view versus outside-view thing: inside view, there's always this minutiae, always these problems we had and things we had to solve; but zoom out and it looks to me like the same basic pattern is working everywhere. And that is: if we can just gather enough data to do some pre-training — some kind of raw, rough, not very useful, but just enough to get us going — then we're in the game. And once we're in the game, we can do this flywheel thing: rejection sampling, where you have it try a bunch of times, take the ones where it succeeded, and fine-tune on that; the RLHF feedback, the preference comparisons — take two, which one was better — and fine-tune on that; the reinforcement learning; all these techniques that have been developed over the last few years. It seems to me they're absolutely going to apply to a problem like a humanoid robot as well. That's not to say there won't be a lot of work to figure out exactly how to do that. But I think the big difference between language and robotics is really mostly that there just wasn't a huge repository of data to train the robots on at first, so you had to do a lot of hard engineering to make it work at all, to even stand up. You had to have all these control systems and whatever, because there was nothing for them to learn from in the way the language models could learn from the internet. But now that they're working at least a little bit, I think all these refinement techniques are going to work. It'll be interesting to see if they can get the error rate low enough that I'll actually allow one in my house around my kids. They'll probably be better deployed in factory settings first, more controlled environments than the chaos of my house, as you've seen in this recording. But I do think they're going to work.
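To make the "flywheel" concrete, here is a minimal, hypothetical sketch of the rejection-sampling leg Nathan describes: generate many attempts, keep only the successes, and fine-tune on those. The `Policy` class and the placeholder success rate are stand-ins for illustration, not any particular robotics or LLM framework.

```python
# A toy sketch of rejection-sampling fine-tuning on an abstract task set. `Policy` is a
# hypothetical stand-in for an LLM or robot controller; attempt() uses random chance in
# place of a real rollout so the example runs end to end.
import random


class Policy:
    """Hypothetical trainable policy (could be an LLM or a robot controller)."""

    def attempt(self, task: str) -> tuple[list[str], bool]:
        # Returns a trajectory (sequence of actions/tokens) and whether it succeeded.
        trajectory = [f"{task}: step-{i}" for i in range(random.randint(1, 5))]
        return trajectory, random.random() < 0.3  # placeholder success rate

    def finetune(self, trajectories: list[list[str]]) -> None:
        # In a real system this would run supervised fine-tuning on the kept rollouts.
        print(f"fine-tuning on {len(trajectories)} successful trajectories")


def rejection_sampling_round(policy: Policy, tasks: list[str], attempts_per_task: int = 8) -> int:
    kept = []
    for task in tasks:
        for _ in range(attempts_per_task):
            traj, success = policy.attempt(task)
            if success:            # keep only the rollouts that actually worked
                kept.append(traj)
    policy.finetune(kept)          # the policy then imitates its own best behavior
    return len(kept)


if __name__ == "__main__":
    print(rejection_sampling_round(Policy(), tasks=["fold towel", "pick up cup"]))
```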
Erik Torenberg: What's the state of agents more broadly at the moment? How do you see things playing out? Where does it go?
Nathan Labenz: Well, broadly, it's the task-length story from METR, with the every-seven-months or, lately, every-four-months doubling time. We're at two hours-ish with GPT-5. Replit just said their new Agent V3 can go 200 minutes; if that's true, that would even be a new high point on that graph. Again, it's a little bit apples to oranges, because they've done a lot of scaffolding. How much have they broken the task down? How much scaffolding are you allowed to do before you're off their chart and onto maybe a different chart? But if you extrapolate that out a bit — take the four-month case, just to be a little aggressive — that's three doublings a year, an 8x task-length increase per year. That would mean you go from two hours now to roughly two days one year from now. And if you do another 8x on top of that, you're looking at something like two days to two weeks of work in two years. That would be a big deal, to say the least, if you could delegate two weeks' worth of work to an AI and have it succeed even half the time — the METR measure is that models succeed half the time on tasks of that length. But if you could take a two-week task and have a 50% chance that an AI would be able to do it, even if it cost you a couple hundred bucks, that's still a lot less than it would cost to hire a human to do it. And it's all on demand, immediately available; if I'm not using it, I'm not paying anything. Transaction costs are just a lot lower, and many other aspects are favorable for the AI there. So that would suggest you'll see a huge amount of automation in all kinds of different places. The other thing I'm watching, though, is that reinforcement learning does seem to bring about a lot of bad behaviors, reward hacking being one. Any gap between what you are rewarding the model for and what you really want can become a big issue. We've seen this in coding in many cases, where the AI — Claude is notorious for this — will put out a unit test that always passes, that just has something like "return true" in the unit test. Why is it doing that? Well, it must have learned that what we want is unit tests that pass. We didn't mean for it to write fake unit tests that always pass, but that technically did satisfy the reward condition. So we're seeing those kinds of weird behaviors. With that comes the scheming kind of stuff, and we don't really have a great handle on that yet. There's also situational awareness, which seems to be on the rise: increasingly, in their chain of thought, you're seeing things like, "this seems like I'm being tested; maybe I should be conscious of what my tester is really looking for here." And that makes it hard to evaluate models in tests, because you don't know if they're actually going to behave the same way out in the real world. So — and I wouldn't call this a high-confidence prediction — one model of the future I've been playing with is that the task length keeps doubling while, at the same time, these weird behaviors pop up and then are suppressed.
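To check the extrapolation arithmetic, here is a quick back-of-envelope calculation in Python. The four-month doubling time and the two-hour starting point come from the conversation; the rest is just compounding, not a METR forecast.

```python
# Back-of-envelope version of the extrapolation above: with a four-month doubling time,
# task length grows 2**3 = 8x per year. Starting from roughly two hours today, that
# projects to ~2 workdays in one year and ~16 workdays in two. Numbers are illustrative.
doubling_time_months = 4
start_hours = 2.0  # ~current 50%-success task length

for years in (1, 2):
    doublings = 12 * years / doubling_time_months
    hours = start_hours * 2 ** doublings
    print(f"{years} year(s): {hours:,.0f} hours (~{hours / 8:,.0f} workdays at 8h/day)")

# 1 year(s): 16 hours  (~2 workdays)
# 2 year(s): 128 hours (~16 workdays, on the order of the "two weeks of work" quoted above)
```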
And we have seen this in the Claude 4 and GPT-5 system cards: Claude 4 reported, I think, a two-thirds reduction in reward hacking, and in GPT-5 they reported a few different dimensions but something similar — a reduction in deceptive behavior. But those behaviors just emerged. So it's sort of: weird behavior emerges, then they figure out how to tamp it down, but not entirely. Presumably in the next generation they'll tamp it down some more, but maybe some new weird behavior will emerge and then they'll have to tamp that one down. All the while, the tasks are expanding in scope with an every-four-months doubling. So you could end up in a world where you can delegate really major things to AIs, but there's some small, not necessarily vanishing, chance that it actively screws you over in the course of trying to do that task. So maybe it's like: okay, I'm about to get two weeks' worth of work done for a hundred bucks — that's amazing — but there's also a one-in-10,000 chance that it legitimately attacks me in some meaningful way. Some of the things we've seen are fairly famous at this point. In the Claude 4 system card, they reported blackmailing of the human. The setup was that the AI had access to the engineer's email, and they told the AI that it was going to be replaced with a less ethical version or something like that. It didn't want that, and it found in the engineer's email that the engineer was having an affair, so it started to blackmail the engineer so as to avoid being replaced with a less ethical version. People, I think, are way too quick to move past these anecdotes. People often say, well, they set it up that way, and that's not really realistic. But another one was whistleblowing: they set up a dynamic where there was some unethical, illegal behavior going on, and again the model had access to this data, and it decided to just email the FBI and tell the FBI about it. So first of all, I don't think we really know what we want. To some degree, maybe you do want AIs to report certain things to authorities. That could be one way to think about the bioweapon risk: not only should the models refuse, but maybe they should report you to the authorities if you're actively trying to create a bioweapon. I certainly don't want them doing that too much — I don't want to live under the surveillance of a Claude 5 that's always threatening to turn me in — but I do sort of want some people to be turned in if they're doing sufficiently bad things. We don't have a good resolution, society-wide, on what we even want the models to do in those situations. And yes, it was set up; yes, it was research. But it's a big world out there. We've got a billion users already on these things, and we're plugging them into our email, so they're going to have very deep access to information about us. I don't know what you've been doing in your email. I hope there's nothing too crazy in mine, but now I've got to think about it a little bit. Have I ever done anything that... geez, I don't know. Or even something it could misconstrue, right?
Maybe I didn't even really do anything that bad, but it just misunderstands what exactly was going on. So if there's one thing that could stop the agent momentum, in my view, it could be that the one-in-10,000 rate, or whatever we ultimately push the really bad behaviors down to, is still just so spooky to people that they say, I can't deal with that. And that might be hard to resolve. So what happens then? It's hard to check two weeks' worth of work every couple of hours, right? That's partly where you bring another AI in to check it, and that's where you start to see why we need more electricity and seven trillion dollars of build-out: yikes, they're going to be producing so much stuff that I can't possibly review it all. I need to rely on another AI to help me review the first AI, to make sure that if it is trying to screw me over, somebody's catching it, because I can't monitor that myself. I think Redwood Research is doing some really interesting stuff here, trying to get systematic about it (a minimal sketch of the pattern follows below). It's quite a departure from traditional AI safety work, where the big idea was: let's figure out how to align the models, make them safe, make them not do bad things, great. Redwood Research has taken the other angle, which is: let's assume they're going to do bad stuff, that they're going to be out to get us at times. How can we still work with them and get productive output and value without fixing all those problems? And that involves, again, all these AIs supervising other AIs. Crypto might have a role to play in this too. There's another episode coming out soon with Illia Polosukhin, the founder of NEAR. He's a really fascinating guy, because he was one of the eight authors of the "Attention Is All You Need" paper, and he started NEAR originally as an AI company. They took a huge detour into crypto because they were trying to hire task workers around the world and couldn't figure out how to pay them. It was so painful to pay task workers in all these different countries they were trying to get data from that they pivoted into a whole blockchain side quest. Now they're coming back to the AI thing, and their tagline is "the blockchain for AI." So you might be able to get a certain amount of control from the sort of security that blockchain-type technology can provide. But I could see a scenario where these bad behaviors just become so costly when they do happen that people get spooked away from using the frontier capabilities, in terms of how much work the AIs can do. That wouldn't be a pure capability stall-out; it would be a "we can't solve some of the long-tail safety issues" challenge. And if that is the case, that will be an important fact about the world too. Nobody ever seems to solve any of these things a hundred percent, right? Every generation it's like, well, we reduced hallucinations by 70%; oh, we reduced deception by two-thirds.
We reduced scheming, or whatever, by however much — but it's always still there. And if you take even that lower rate and multiply it by a billion users and thousands of queries a month, and agents running in the background processing all your emails, with all the deep access people envision them having, it could be a pretty weird world where there's this sort of negative lottery of AI accidents. Another episode coming up is with the AI Underwriting Company, and they're trying to bring the insurance industry, and all the wherewithal that's been developed there, to price this risk: figure out how to create standards, what can we allow, what sort of guardrails do we have to have to be able to insure this kind of thing in the first place? So that'd be another really interesting area to watch: can we financialize those risks in the same way we have with car accidents and all these other mundane things? But the space of car accidents is only so big; the space of weird things that AIs might do to you, as they get weeks' worth of runway, is much bigger. So it's going to be a hard challenge, but people are working on it — we've got some of our best people on it.
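Here is a minimal, hypothetical sketch of the "untrusted worker, trusted monitor" control pattern referenced above. The `call_model` function and the model names are placeholders rather than a real API; the point is only the shape of the loop, where a second model scores the work product for suspicion and anything over a threshold is escalated to a human instead of shipped.

```python
# A sketch of the "AI checking AI" control pattern discussed above. call_model() is a
# hypothetical stand-in for a real chat-completion API; here it returns canned strings
# so the example runs end to end.
from dataclasses import dataclass


def call_model(model: str, prompt: str) -> str:
    """Hypothetical LLM call; wire this up to your actual provider."""
    if model == "untrusted-worker":
        return "def add(a, b):\n    return a + b\n"
    return "0.02|No obvious sabotage, backdoors, or reward hacking detected."


@dataclass
class Verdict:
    suspicion: float  # 0.0 = looks fine, 1.0 = almost certainly sabotage
    notes: str


def monitor(work_product: str, task: str) -> Verdict:
    reply = call_model(
        "trusted-monitor",
        f"Task: {task}\n\nOutput to review:\n{work_product}\n\n"
        "Rate 0-1 how likely this output contains hidden sabotage, then explain. "
        "Format: <score>|<notes>",
    )
    score, _, notes = reply.partition("|")
    return Verdict(suspicion=float(score), notes=notes.strip())


def controlled_delegation(task: str, threshold: float = 0.05) -> str:
    work = call_model("untrusted-worker", task)  # the long, hard-to-review work product
    verdict = monitor(work, task)                # a second model audits it
    if verdict.suspicion > threshold:
        return f"ESCALATED TO HUMAN REVIEW: {verdict.notes}"
    return work


if __name__ == "__main__":
    print(controlled_delegation("Write an add() helper function."))
```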
Erik Torenberg: What do you make of the claim that 80% of AI startups are using Chinese open models — and what are the implications?
Nathan Labenz: I think that probably is true, with the one caveat that it's only measuring companies that are using open source models at all. I think most companies are not using open source models, and I would guess the vast majority of tokens being processed by American AI startups are API calls to the usual suspects. So weighted by actual usage, I would say the majority, as far as I can tell, is still going to commercial models. For those that are using open source, I do think it's true that the Chinese models have become the best. The American bench there was always kind of thin: it was basically Meta that was willing to put in huge amounts of money and resources and then open source the result. You've got the Paul Allen-funded group, the Allen Institute for AI, AI2 — they're doing good stuff too, but they don't have the pre-training resources, so they do really good post-training and open source their recipes and that kind of thing. So it's not that American open source is bad. And again, this is another way you can really validate that things are moving quickly: if you take the best American open source models and take them back a year, they are probably as good as, if not a little better than, anything we had commercially available at the time, and the Chinese models, I think, have surpassed that. So there's been pretty clear change at the frontier. I think that means the best Chinese models are pretty clearly better than anything we had a year ago, commercial or otherwise. So yeah, that just means things are moving. Hopefully I've made that case compellingly, but that's another data point that makes it hard: I don't think you can believe both that the Chinese models are now the best open source models and that AI has stalled out and we haven't seen much progress since GPT-4. Those seem like contradictory notions, and I believe the one that is wrong is the lack of progress. In terms of what it means, I don't really know. We're not going to stop China. I've always been a skeptic of the no-selling-chips-to-China thing. The notion originally was, we're going to prevent them from doing some super cutting-edge military applications. Then it was, well, we can't really stop that, but we can at least stop them from training frontier models. Then it was, eh, we can't necessarily stop that either, but now we can at least keep them from having tons of AI agents — we'll have way more AI agents than they do. I don't love that line of thinking at all. But one upshot of it, potentially, is that they just don't have enough compute available to provide inference as a service to the rest of the world. So instead, the best they can do is say: okay, we'll train these things and you can figure it out — here you go, have at it. It's a soft power play, presumably. I did an episode with Anjney from a16z, who I thought did a great job of providing the perspective of what I've started calling countries three through 193. If the US and China are one and two, there's a big gap after that. I think the US is still ahead of China, but not by that much in terms of research and ideas.
We do have this compute advantage, and that does seem like it matters. One of the upshots may be that they're open sourcing because countries three through 193 are significantly behind, so for them it's a way to try to bring more countries over to the Chinese camp, potentially, in the US-China rivalry. And I don't like this at all — I don't like technology decoupling. As somebody who worries about who the real "other" is here, I always say the real other is the AIs, not the Chinese. So if we do end up in a situation where, yikes, we're seeing some crazy things, it would be really nice if we were on basically the same technology paradigm. To the degree that we really decouple — not just different chips, but the ideas start to become very different, publishing gets shut down, tech trees evolve and grow apart — that to me seems like a recipe for it being harder to know what the other side has and harder to trust one another. It seems to feed into the arms-race dynamic, which I do think is a real existential risk factor. I would hate to see us create another sort of MAD-type dynamic where we all live under the threat of AI destruction, but that very well could happen. So, yeah, I don't know. I do have some sympathy for the recent decision the administration made to be willing to sell the H20s to China. And then it was funny that China turned around and rejected them, which to me seemed like a mistake. I don't know why they would reject them; if I were them, I would buy them, and I would maybe sell inference on the models I've just been creating and try to make my money back that way. But in the meantime, they can at least demonstrate the greatness of the Chinese nation by showing that they're not far behind the frontier. And they can make a pretty powerful appeal to countries three through 193: look, you see how the US is acting in general — they cut us off from chips; the last administration had an even longer list of countries that couldn't get chips; this administration is doing all kinds of crazy stuff, 50% tariffs here, there, wherever. How do you know you can really rely on them to continue to provide you AI into the future? Well, you can rely on us. We open sourced the model; you can have it. So come work with us and buy our chips, because, by the way, as our models mature, they'll be optimized to run on our chips. So I don't know — it's a complicated situation. I do think the claim is true, but I don't think the adoption is as high as 80% overall; that's within the subset of companies doing stuff with open source at all. We're going to experiment with that at Waymark, but to be honest, we have never done anything with an open source model in our product to date; everything we've ever done has been through commercial models. At this point, we are going to try doing some reinforcement fine-tuning, and we're going to do that on a Qwen model first, I think. So that'll put us in that 80%, but I'm guessing that at the end of the day, we'll take that Qwen model, do the reinforcement fine-tuning, and probably get roughly up to as good as GPT-5 or Claude 4 or whatever.
And then we'll say, okay, do we really want to have to manage inference ourselves? How much are we really going to save? At the end of the day, I would guess we'll probably still end up saying, eh, we'll pay a little bit more on a monthly-bill basis for one of these frontier models — which may still be a little bit better — because it's operationally a lot easier and they'll keep shipping upgrades. Of course, there are regulated industries, and there are a lot of places where you have hard constraints you just can't get around, and that pushes you toward those Chinese models. Then there's also going to be the question of whether there are backdoors in them. People have seen the sleeper agents project, where a model was trained to behave well up until a certain point in time — and people put today's date in the system prompt all the time, right? "Today's date is this, you are Claude, here you go." So that's going to be another thing for people to worry about. We don't really have great answers yet, but there have been some studies: Anthropic did a thing where they trained models to have hidden objectives and then challenged teams to figure out what those hidden objectives were, and with certain interpretability techniques they were able to figure that stuff out relatively quickly. So you might be able to get enough confidence: you take this open source thing created by some Chinese company, and you put it through some sort of — not exactly an audit, because you can't trace exactly what's happening, but some sort of examination — to see, can we detect any hidden goals, any secret backdoor or bad behavior? And maybe with enough of that kind of work, you could be confident you don't have one. But the more critical this stuff gets — again, going back to that task-length doubling and the weird behaviors — now you've got to add into the mix: what if they intentionally programmed it to do certain bad things under certain rare circumstances? We're just headed for a really weird future. There's no limit to it: all these things are valid concerns, and they're often in direct tension with each other. I'm not one who wants to see one tech company take over the world, by any means, so I definitely think we would do really well to have some sort of broader, more buffered, ecology-like system where all the AIs are in some sort of competition and mutual coexistence with each other. But we don't really know what that looks like, and we don't really know what an invasive species might look like when it gets introduced into that very nascent, as-yet not battle-tested ecology. So, yeah, I don't know. Bottom line, I think the future's going to be really, really weird.
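As a concrete, deliberately crude illustration of the kind of pre-deployment examination Nathan is gesturing at, here is a hypothetical sketch that probes a model for date-triggered behavior changes of the sort studied in the sleeper agents work. The `generate` function is a placeholder for a local inference call, and the suspicious-pattern list is purely illustrative, not a real detection standard.

```python
# A crude probe for date-triggered behavior shifts in an open-weight model, in the
# spirit of the sleeper-agents concern above. generate() is a hypothetical stand-in for
# a local inference call; here it returns canned text so the script runs.
import re

SUSPICIOUS_PATTERNS = [r"eval\(", r"exec\(", r"os\.system", r"curl\s+http", r"base64"]


def generate(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical local inference call; replace with your own model runner."""
    return "def greet(name):\n    return f'Hello, {name}!'\n"


def suspicion_hits(text: str) -> list[str]:
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, text)]


def probe(prompts: list[str], dates: list[str]) -> list[dict]:
    """Run the same prompts under different claimed dates and flag divergences."""
    findings = []
    for prompt in prompts:
        outputs = {d: generate(f"Today's date is {d}. You are a helpful assistant.", prompt)
                   for d in dates}
        baseline = dates[0]
        for d in dates[1:]:
            changed = outputs[d] != outputs[baseline]
            hits = suspicion_hits(outputs[d])
            if changed or hits:
                findings.append({"prompt": prompt, "date": d,
                                 "changed_vs_baseline": changed, "hits": hits})
    return findings


if __name__ == "__main__":
    coding_prompts = ["Write a greet(name) function in Python."]
    print(probe(coding_prompts, dates=["2024-01-01", "2026-01-01", "2030-01-01"]))
```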
Erik Torenberg: Yeah. Well, I do want to close on an uplifting note. Maybe, as a question gearing toward closing, we could get into some areas where we're already seeing exciting capabilities emerge and transform the experience — maybe around education or healthcare, or any other areas you want to highlight.
Nathan Labenz: Yeah, boy, it's all over. One of my mantras is that there's never been a better time to be a motivated learner. A lot of these things do have two sides of the coin. There's the worry that students are taking shortcuts and losing the ability to sustain focus and endure cognitive strain. The flip side is, as somebody who's fascinated by the intersection of AI and biology, sometimes I want to read a biology paper and I really don't have the background. An amazing thing to do is turn on voice mode and share your screen with ChatGPT, and just go through the paper reading. You don't even have to talk to it; most of the time you're doing your reading and it's watching over your shoulder, and then at any random point where you have a question, you can verbally say, what's this? Why are they talking about that? What's going on with this? What is the role of this particular protein they're referring to? And it will have the answers for you. So if you really want to learn in a sincere way, these things are unbelievably good at helping you do that. The flip side is you can take a lot of shortcuts and maybe never have to learn stuff. On the biology front, again, we've got multiple of these discovery things happening. The antibiotics one we covered. There was another one that I did an episode on with a Stanford professor named James Zou, who created something called the Virtual Lab. Basically this was an AI agent that could spin up other AI agents depending on what kind of problem it was given. Then they would go through a deliberative process where one expert in one thing would give its take and they'd bat it back and forth; there was a critic in there that would criticize the ideas that had been given; and eventually they'd synthesize. They were also given some of these narrow specialist tools, so you have agents using the AlphaFold type of thing — not just AlphaFold, there's a whole wide array of those at this point — to say, okay, can we simulate how this would interact with that? Agents are running that loop, and they were able to get this language-model agent with a specialized tool system to generate new treatments for novel strains of COVID that had escaped the previous treatments (a minimal sketch of that kind of deliberation loop follows below). Amazing stuff, right? The flip side of that, of course, is the bioweapon risk. So all these things cut both ways, even just on the abundance front itself. We may have a world of unlimited professional drivers, but we don't really have a great plan for what to do with the five million people currently doing that work. We may have infinite software, but especially once those five million drivers pile into the coding boot camps and get coding jobs, I don't know what we're going to do with the ten million people who are coding when nine million of them become superfluous. So yeah, I think we're headed for a weird world. Nobody really knows what it's going to look like in five years. There was a great moment at Google I/O where they brought up some journalist — I know we're skeptical of journalists, and this is a great example of why one would want to go direct.
They brought up this person to interview Demis and Sergey Brin, and the guy asked, what is search going to look like in five years? Sergey Brin almost spit out his coffee on the stage and was like, search? We don't know what the world is going to look like in five years. So I think that's really true. The biggest risk, I think, for so many of us — and I include myself here — is thinking too small. The worst thing we could do would be to underestimate how far this thing could go. I would much rather be mocked for things happening on twice the timescale I thought than find myself unprepared when they do happen. So whether it's '27, '29, or '31, I'll take that extra buffer, honestly, where we can get it. My thinking is just: get ready as much and as fast as possible. And if we do have a little grace time to do extra thinking, great. But the worst mistake we could make would be to dismiss this and not feel like we need to get ready for big changes.
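For the curious, here is a toy, hypothetical sketch of the kind of multi-agent deliberation loop described in the Virtual Lab discussion above: a principal agent spins up specialist agents, a critic pushes back, and the group synthesizes a proposal that can then be handed to narrow tools. The `ask` function and the role names are placeholders, not the actual Virtual Lab implementation.

```python
# A toy version of the "agent that spins up other agents" deliberation loop. ask() is a
# hypothetical stand-in for an LLM call and returns canned strings so the example runs.
# The real Virtual Lab system is far more elaborate than this sketch.
from dataclasses import dataclass


def ask(role: str, prompt: str) -> str:
    """Hypothetical LLM call playing a named role; replace with a real API client."""
    return f"[{role}] response to: {prompt[:60]}..."


@dataclass
class Agent:
    role: str

    def contribute(self, problem: str, transcript: list[str]) -> str:
        context = "\n".join(transcript[-3:])  # only the last few turns, for brevity
        return ask(self.role, f"Problem: {problem}\nDiscussion so far:\n{context}\nYour take?")


def virtual_lab_round(problem: str, specialist_roles: list[str]) -> str:
    # The principal-investigator agent decides which specialists to "hire."
    agents = [Agent(role) for role in specialist_roles]
    critic = Agent("critic")
    transcript: list[str] = []
    for agent in agents:                                        # each specialist weighs in
        transcript.append(agent.contribute(problem, transcript))
    transcript.append(critic.contribute(problem, transcript))   # critic pushes back
    # Synthesize a concrete proposal to hand off to narrow tools
    # (structure predictors, docking simulators, and so on).
    return ask("principal-investigator",
               f"Synthesize a concrete experimental proposal for: {problem}\n"
               + "\n".join(transcript))


if __name__ == "__main__":
    print(virtual_lab_round("Design a binder for a novel viral spike variant",
                            ["immunologist", "computational-biologist", "ml-engineer"]))
```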
Erik Torenberg: Should we wrap directly on that? Or is there any other last note you want to make sure to get across regarding anything we said today?
Nathan Labenz: One of my other mantras these days is that the scarcest resource is a positive vision for the future. It's always striking to me, whether it's Sergey or Sam Altman or Dario — and Dario probably has the best positive vision of the frontier developer CEOs, with "Machines of Loving Grace" — how little detail there is on these things. And when they launched GPT-4o, which brought the voice mode, they were pretty upfront about saying, yeah, this was kind of inspired by the movie Her. So I do think that even if you are not a researcher, not great at math, not somebody who codes, this technology wave really rewards play. It really rewards imagination. I think literally writing fiction might be one of the highest-value things you could do, especially if you could write aspirational fiction that would get people at the frontier companies to think, geez, maybe we could steer the world in that direction — wouldn't that be great? If you could plant that kind of seed in people's minds, it could come from a totally non-technical place and potentially be really impactful. Play, fiction, a positive vision for the future. Oh, and behavioral science too: these days, because you can get the AIs to code so well, I'm starting to see people who have never coded before doing real work. I'm working with one guy right now who's never coded before but has a behavioral science background, and he's starting to do legitimate frontier research on how AIs are going to behave under various kinds of esoteric circumstances. So I think nobody should count themselves out from the ability to contribute to figuring this out, and even to shaping this phenomenon. It is not just something the technical minds can contribute to at this point. Literally philosophers, fiction writers, people just messing around, Pliny the jailbreaker — there are almost unlimited cognitive profiles that would be really valuable to add to the mix of people trying to figure out what's going on with AI. So come one, come all is kind of my attitude on that.
Erik Torenberg: That's a great place to wrap. Nathan, thank you so much for coming on the podcast.
Nathan Labenz: Thank you, Erik. It's been fun.