Confronting the Intelligence Curse, w/ Luke Drago of Workshop Labs, from the FLI Podcast

Luke Drago joins the Future of Life Institute podcast to discuss his Intelligence Curse thesis, examining how AI that replaces human economic actors could concentrate power and exploring societal, corporate, and personal strategies to keep humans in control.


Watch Episode Here


Listen to Episode Here


Show Notes

This cross-post episode from the Future of Life Institute podcast features Luke Drago, co-author of The Intelligence Curse and co-founder of Workshop Labs, in conversation with Gus Docker. They explore whether it’s wise to build AI systems that directly compete with and potentially replace humans as economic actors, and how this could create an “Intelligence Curse” where those who control AI gain extreme power. Luke outlines societal strategies like open-source AI, company-level design principles that keep users in control of their data, and personal tactics such as N-of-1 careers and pursuing moonshot projects early. PSA for AI builders: Interested in alignment, governance, or AI safety? Learn more about the MATS Summer 2026 Fellowship and submit your name to be notified when applications open: https://matsprogram.org/s26-tcr.

LINKS:

Sponsors:

MATS:

MATS is a fully funded 12-week research program pairing rising talent with top mentors in AI alignment, interpretability, security, and governance. Apply for the next cohort at https://matsprogram.org/s26-tcr

Tasklet:

Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai

Agents of Scale:

Agents of Scale is a podcast from Zapier CEO Wade Foster, featuring conversations with C-suite leaders who are leading AI transformation. Subscribe to the show wherever you get your podcasts

Shopify:

Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive

CHAPTERS:

(00:00) About the Episode

(03:07) Defining The Intelligence Curse

(09:34) Pyramid Replacement And Work (Part 1)

(15:55) Sponsors: MATS | Tasklet

(18:55) Pyramid Replacement And Work (Part 2)

(25:29) Local Knowledge, Data Control

(33:58) Dystopian Intelligence Curse Future (Part 1)

(34:04) Sponsors: Agents of Scale | Shopify

(36:52) Dystopian Intelligence Curse Future (Part 2)

(48:07) Open, Safe AI Futures

(01:02:47) Loyal Agents Versus Ads

(01:14:06) Moonshots Over Safe Paths

(01:17:49) Outro

PRODUCED BY:

https://aipodcast.ing

SOCIAL LINKS:

Website: https://www.cognitiverevolution.ai

Twitter (Podcast): https://x.com/cogrev_podcast

Twitter (Nathan): https://x.com/labenz

LinkedIn: https://linkedin.com/in/nathanlabenz/

Youtube: https://youtube.com/@CognitiveRevolutionPodcast

Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431

Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk


Transcript

This transcript is automatically generated; we strive for accuracy, but errors in wording or speaker identification may occur. Please verify key details when needed.


Introduction

Hello, and welcome back to the Cognitive Revolution!

Today, I’m sharing a special cross-post episode from the Future of Life Institute podcast, hosted by Gus Docker, and featuring Luke Drago, co-author of The Intelligence Curse and co-founder of Workshop Labs. 

I wanted to bring this conversation to your feed because it highlights a critical question that I think society should be grappling with much more than we currently are: is it wise to design AI systems to compete directly with, and potentially replace, humans as economic actors?

Personally, while I'm relatively optimistic about humanity's ability to adapt to the social and economic changes associated with AI, and tend to worry more about catastrophic scenarios where we lose control of AI systems entirely, this conversation did force me to confront the possibility that things might go seriously wrong even if we do manage to solve the alignment problem.

Luke focuses on a particular failure mode, which he calls the "Intelligence Curse."  This concept echoes the "Resource Curse" phenomenon that we see in some resource-rich but underdeveloped countries today, where an extractive elite maintains power, without democratic legitimacy, or even cultivating much in the way of productivity from the broader population, simply by controlling key resources.

By analogy, in a future where AI systems power the economy and human labor is no longer much of a bargaining chip, whoever controls the AI could have a dangerous level of power.

I have to say, for as much as I'm hopeful that the AI Revolution can finally free people from doing work they don't enjoy, this dystopian vision is a pretty natural extrapolation from what happens in today's world when human workers are rendered economically uncompetitive for whatever reason — and as we’ve seen in many parts of the US, the results are not pretty, nor without consequences for the rest of the country and the world.

Listen to this episode, and I think you'll have to agree that, at a minimum, if AI is going to get anywhere near as powerful as I, and all the frontier lab leaders seem to think it will... we are going to face a massive challenge.

Luke, as you'll hear, has some very interesting ideas about what we can and should do to solve this.

At the societal level, he recommends investments in open source AI to commoditize the intelligence layer and prevent excessive economic and political rents from flowing to model owners.

For companies, he emphasizes the need to design AI systems that empower individual users, while allowing them to retain control over their economically valuable data. 

And for individuals, he suggests guarding your valuable know-how carefully, developing N-of-1 career paths, and chasing moonshot projects sooner rather than later.  

I'm excited to see how Workshop Labs delivers on this vision, and hope to do a full episode with them when they launch to the public in the coming months.  

For now, I hope you enjoy this conversation about The Intelligence Curse, and how we might break it, from the Future of Life Institute Podcast, with Gus Docker and Luke Drago.


Main Episode

[00:00] Gus Docker: Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Luke Drago. Luke, welcome to the podcast.

[00:07] Luke Drago: It's great to be here. Thanks for having me.

[00:09] Gus Docker: Great. So you have this essay series on the intelligence curse. And maybe we should just start at the very core of that and ask, what is the intelligence curse?

[00:19] Luke Drago: Yeah, so I'd summarize the intelligence curse pretty simply. The idea is that if you have non-human factors of production, and they become your dominant source of production, your incentives aren't to invest in your people. And this sounds very abstract. What does it mean to have a non-human factor of production? And what does it mean that we can build things that actually replace us? And why doesn't this just result in like AGI utopia? But I think we have some concrete examples. And one of the ones that we point to in the essay, and what we actually named the effect after, is the resource curse, where there are states that rely primarily on oil revenues, or have a significant amount of their income come through oil revenues, as opposed to investment in their people. And what you end up seeing is, because investments in oil produce a greater return than investments in their people, those states oftentimes funnel money towards the oil investments as opposed to their people. The result of this is a worse quality of life for their people, who have much less power, because at the core, your ability to produce value is a core part of your bargaining chip in society.

[01:20] Gus Docker: So the worry here is that as we get more and more advanced AI systems, governments and companies will be incentivized to invest more in building out even more advanced AI systems as opposed to empowering workers and citizens.

[01:36] Luke Drago: Exactly.

[01:37] Gus Docker: Yeah, I guess one objection here that I hear from economists is just that if we look at previous technologies, we see that they basically increase wages and living standards, unevenly and with setbacks, but over time we see increased wages and living standards. Why isn't the same just going to happen with advanced AI?

[02:00] Luke Drago: I think this is a category distinction of what we're trying to do. The last thousand years of technology has been technologies that have been extremely adaptive for humans, that have helped humans do new things. And they haven't encroached upon our core fundamental advantage, which is our ability to think and then do things in the real world. Obviously, during the Industrial Revolution, there were lots of concerns that replacing and automating large parts of physical labor would result in a world in which people didn't matter. But I think the actual outcome was a bit different because, of course, there isn't a machine that was produced in the Industrial Revolution that completely automated human thinking, the ability that's kept us at the top of the food chain. And if you look at the goal of a whole lot of companies in the field, you'll find that they stake their claim, their reason for existence, on creating technologies that can do everything that any human can do better, faster, and cheaper. And of course, the question then is, if it is the case that this allows capital to convert directly into results without the need for other people in the middle, why wouldn't companies just invest more and more money into this? I don't think it's Machiavellian. I don't think it's an evil plot by them. What I think instead is that if you have the opportunity to save 50% on your wage bill while also getting better, faster, more reliable results, most people are going to take that option. And so my concern here is that as we continue to build technology that is designed to replace rather than to augment, we move closer and closer towards a world where people just don't matter. And then, of course, you're reliant on other forces, you're reliant on government to make sure that you still have a high quality of life when you can't produce it for yourself. I think it's a very precarious situation to be in.

[03:42] Gus Docker: If we think about pensioners today, for example, they don't produce much for society. In fact, they are, in a sense, a draw on society's resources, but they're still protected. Why couldn't we imagine an expansion of that system? This is kind of the obvious solution that comes to mind for people. We will have universal basic income, and we will have protection of individual rights. And so we will maintain agency and relevance in an age of advanced AI.

[04:15] Luke Drago: So I end up arguing something like, the core proposition is that your economic value is an important part of your political value. We've seen in the history of democracies that oftentimes they start at the moment where there are diffuse actors who have varying amounts of capital who need to find ways to settle disputes about violence. Take the emergence of British democracy, for example. The Magna Carta came because there were lords that had power that wasn't equivalent to a king necessarily, but sure had a lot of influence. And that came from the material possessions that they controlled. This necessitated free courts and some sort of a way to solve disputes in Parliament. And the evolution kept moving backwards and backwards and backwards. And we continue to see that this economic liberalization is oftentimes a precondition for the democracies we really care about. Now, there are non-democracies that are fine places to live, that don't wildly trample on human rights. But of course, we know that there's an extremely strong correlation between governments that respect your rights and enable you to be prosperous and governments that are democratic. These things aren't one-to-one, but they're pretty damn close. And so the concern that I have here is that as we level the underlying economic structure that creates these bargaining chips that put us in power, we end up reducing those. Pensioners are a fantastic example here because, of course, a pensioner isn't someone who appears and never works for the rest of their life. Pensioners have 40 years of working extremely hard, paying into a system, and being active members of society, and then they have a bargaining chip so that in the last 10, 20, 30 years of their life, they get this exemption. It's because of the system that we have built that this is stable. And I would also add that, of course, in the history of the United States, for example, we treat our retired folks way better today than we did before things like the New Deal, which involved mass amounts of unrest and workers trying to use their bargaining chip. So I'm very concerned about a world in which we are all pensioners forever, with no way to actually bargain, at the mercy of the next election for what happens in our subsequent years.

[06:27] Gus Docker: Which economic metrics should we be looking at if we want to try to confirm whether the intelligence curse is actually happening or disconfirm the hypothesis?

[06:38] Luke Drago: There are a couple of things that I take a look at. Income inequality seems quite important. We talk about sudden takeoff in AI, where there's suddenly a foom and all of a sudden AIs are way, way smarter than us. I think you might want to also look for this in economics. Is there a sudden moment in which capital immediately begins compounding? Because every dollar you put into a system produces some sort of an outbound return. And if you see this kind of rapidly accumulating, remove talent from the equation and suddenly capital begets more capital, then the actors who already have lots of capital can really rapidly accumulate. Now, it's already the case that having capital makes it easier to get more capital, but there are a bunch of boundaries, a bunch of restrictions, and outsider players can still win. So outside of mass income inequality, I'd also take a look at things like economic mobility. Is it the case that people who aren't rich can move upwards in society? The United States, of course, is a very famous society for having this as a marker of its success, that you can come from anywhere, start from nothing, and win. Doesn't mean you're guaranteed to win, but there's always a pathway. And I think if those pathways start to close, that would be a very alarming signal here. Now, I presume we're getting to pyramid replacement here, and some things I really want to look at as well include rising unemployment rates, especially among the earliest age brackets that are just entering the workforce. But those are a couple of the metrics that I'm taking a look at here and that I've advised others to look through.
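For readers who want to put a number on the first signal Luke mentions, here is a minimal, illustrative sketch (not from the essay series) of computing a Gini coefficient from income data; a sharp year-over-year jump in such a series would be the kind of runaway-inequality signal he describes. The income figures below are invented.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient for a list of incomes (0 = perfect equality, 1 = maximal inequality)."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Standard closed form for sorted data: G = (n + 1 - 2 * sum(cumulative shares)) / n
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Toy series: a sharp jump in this number over time is the kind of
# "capital begets capital" signal described above. Figures are made up.
print(round(gini([30_000, 35_000, 42_000, 50_000, 1_000_000]), 3))
```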

[08:05] Gus Docker: Yeah, actually explain that concept for us, if you would. Pyramid replacement, what does that look like?

[08:13] Luke Drago: So at the beginning of the paper, or rather the beginning of the series of essays, we say that it's pretty likely that if the technological trend continues, you're going to lose your job. And we try to tell a story of how we think that's going to happen. And we start with the example of the multinational white collar firm. These are very large companies oftentimes that do a whole lot of work. Every year they hire a new class of analysts or a new class of entry-level employees whose goal is to work their way up the pyramid. And they hire a lot of them. They spend a whole lot of time recruiting from the top universities. They show up on campus, and their goal is to create this pipeline of talent because the company has a lot of people at the bottom and a few people at the top. But as people at the top leave because they retire or because they find other opportunities, you need a funnel of leadership. And our claim is that AI first makes it very easy to replace the people at the bottom. Now, there's actually a paper that came out, I believe yesterday, starting to show some empirical evidence for this. In some fields, AI is augmenting, but in others, it's simply replacing. And in these targeted fields, I can't recall each one off the top of my head, but obviously software engineering is one of them, we've seen a shrinking in the number of job postings and in the number of job offers and overall employment in the 22 to 25 year old bracket. That's exactly what you would expect if it is easiest to automate the entry level work first. Our claim then is that AI is going to move up the pyramid. As it gets better and as it gets more and more agentic and capable of doing more tasks with long horizon planning, and as companies are able to capture more and more of that knowledge for themselves, what they're able to do is move up the pyramid, replacing people bottom up, as opposed to a kind of middle-out or top-down replacement. One day you wake up to find that all of your colleagues are AI, and the next knock at the door is booting you out too. We think this could happen at every level of a white-collar firm. Now, there are a bunch of exceptions here. Obviously, it'll work differently in some industries. Some sectors within a company are going to be easier to automate than other ones. And I think this is not exactly how it works in blue collar work. Speculatively, I think blue collar work might look more zero to one, as in there aren't the robots required to do lots of blue collar work, and then there are. And I'm less familiar, and I've spent less time in the literature on the structure of blue collar companies, but my understanding is there are a lot more people who work doing a similar job. It's a bit less pyramid shaped, it's a bit more flat, with a small pyramid at the top. That's a pretty disastrous situation if robotics is able to rapidly automate those jobs.

[10:48] Gus Docker: Yeah, you might even imagine that the managers of a bunch of physical workers or blue-collar workers might be replaced before the workers themselves. So you could imagine systems that can automate invoicing and scheduling and so on being easier to do with, or be replaced by, AI before we have fully functional robotics to actually do the blue-collar labor. I do wonder, if we're talking about the trend already happening, and I mean, this is a quite complex question, but how do we know that it's happening because of AI? Say there are fewer job postings related to programming. Could that be because of a general market trend or interest rates or something different than AI?

[11:38] Luke Drago: So I'll flag, the paper that I'm talking about is one that I've looked at but not spent a ton of time with yet. So I don't want to speak as an expert on that paper. I'd love to link that in the description as well, and I'll spend some time on that myself. But that particular paper, if I understand it correctly, works to isolate that, to try to understand what the mechanism was here. And my best guess here is you want to look at a couple different factors. One, you're going to want to see what industries are being affected. We have a pretty good sense as to what tasks are automatable right now and what tasks aren't. We know, for example, that software engineering is extremely automatable at its base level. And so you would expect to see, if it's AI, that the tasks that we know were easier to automate are the ones that are falling, while other ones are being augmented or much less affected. And my understanding, again, haven't read the entirety of the paper, have just skimmed the initial findings there, my understanding is that is roughly what you're seeing. And if that's not the case, that is what I'd be looking for here: based on existing and projected AI capabilities, which sectors are seeing changes in employment, and does that match expectations?

[12:43] Gus Docker: Yeah, actually, let's dig into that a bit more and think about which sectors or which jobs or tasks would be protected from automation. And I've suggested some mechanisms of protection that we can talk about, where, for example, if you're a lawyer, there might be kind of legal restrictions on replacing you. I don't think we're going to see an AI judge employed by the government very soon. Or at least, that's probably basically the last job to be automated. So how do you think about legal restrictions to automation, and could those become more important as we face this increased market pressure to automate?

[13:29] Luke Drago: Deric Cheng, who's at the Windfall Trust now, but was at Convergence Analysis, Convergence Research, one of those. I think there's a lot of things with similar names in this space. Deric has a really good piece on what jobs are likely to be more and less resilient to automation. And there's some of the ones that you expect. Obviously, things like physical labor are more resistant right now. And I think there was a story for 50 years that automation hits physical labor first and mental labor second, and actually we're seeing the exact opposite given the way we're making progress on capabilities. I think your judge point is actually quite interesting to me, and I think it's correct. The jobs that have strong legal protections are going to be harder to automate. Now, of course, that doesn't mean the people who are in those jobs aren't going to automate their own work. And this is both an example of opportunity here and also an example of some sort of gradual disempowerment, where you just automate away to a generic model that makes decisions on your behalf. I think it'd be a bad world, potentially, if every judge was using the same AI model to make the same decisions. And great, there is a human judge, but it's the same prompt, same output. At the very least, you'd want some more diversity that represents the actual beliefs, feelings, and understandings that the judge brings. Other roles that I think make sense here to talk about: lawyers, kind of. I think the lawyers who are at the partner level are going to be very easy to not automate. Paralegals are a different story. And I think entry level law work is an interesting one here because, of course, your first year lawyers who've just been hired, their job is mostly grunt work. And if a firm can hire half as many of them, it might be the case that on paper, it's hard to automate lawyers, but the law firms who have lawyers working there automate their own work to such a degree that either A, you get an abundance of new law firms arising, or B, larger ones continue to accumulate capital without hiring new people. And I think an important question for what happens next is, at that moment of initial automation where a whole lot of entry-level jobs get cut and headcount starts to be reduced, what happens next? Is it A, that large firms continue to grow and monopolize the industries, or B, that we get an abundance of smaller firms that allow for more diverse economic output? Rudolf and I are much more excited about that second world than that first one, the one where this creates a bunch of opportunity, but I don't think it happens by default. I think we have a lot of work to do to get there.

[15:48] Gus Docker: Yeah, it's actually an interesting point that you could see a job such as being a judge staying and not being automated, but in practice being automated because the judge is using an AI model to make educated guesses about cases. And so that would be a way for society to maintain the formal structures we have today without actually thinking about which functions in society we're interested in automating. And so I think that would be quite a bad situation to end up in, because then we haven't actually grappled with the question about whether we want to outsource the profession of being a judge to AI.

[16:33] Luke Drago: Yeah, exactly. And I think one of my real concerns there is, again, that same model. You know, if everyone's using like GPT-7 and they're calling that thing in to do all of their judge work, then whatever flaw exists in GPT-7, that's now your judge. And I think my concern isn't just have we automated the task, but with what information are we automating it.

[16:52] Gus Docker: Yeah, perhaps another barrier to automation is judgment in a broader sense, and taste. So for example, you can have hundreds of AI models generate whatever you want, whatever piece of writing or imagery you want. But judging what is actually interesting to people is something that's perhaps more difficult to automate. Do you think we might stay employed because we have human judgment and because we have taste, or do you think that's ultimately also automatable?

[17:25] Luke Drago: So it really depends on the pace and progress of capabilities and exactly what we aim for. I am much more excited about a world where that is a strong, durable human advantage, that diversity of taste. One example here, are you familiar with Nomads and Vagabonds? He's an artist on Twitter. He actually did the art for the intelligence curse, did the art for Workshop Labs. My understanding after working with him a bunch is he takes a stable diffusion model and fine tunes it on his own work and the kind of work that he's aiming for. He's gotten very, very good at prompting it. And he produces these absolutely brilliant results. I just cannot get that kind of result out of the model. I don't have the taste for it. I don't know what kind of data should be going in in the first place. I don't know how to write my prompts like he does. And I'm sure of this because I've worked with him before, obviously, on the intelligence curse. And I know he gets hundreds of outputs and yet he releases a very select few. I think that's a fantastic example of how you could use AI to be an exceptional tastemaker. I think his judgment is really exceptional there. It's still his work going in and his work going out. And because of this new medium that he's using, it's been one of the best examples I've seen of an artist fully embracing new technology while still maintaining their own distinct style and taste. And I don't think anyone could look at the art that he's outputting and say it's anyone but his own. And that is one of the things that I'm really excited about moving the technology towards. But I don't think that's the goal of the major companies. Again, this definition that OpenAI uses of AGI is predicated on doing most economically valuable human work. That is a very different game than the, oh, we're going to do some economically valuable work, but it's all going to be tools in your hand that are going to allow you to change and shape the world. That's a different ballgame, to do all of it versus to do some of it. And the target right now is total automation. It's a very, very different outcome.

[19:17] Gus Docker: Yeah. One barrier to automation that you mentioned in the essay series is local and tacit knowledge. This would be knowledge that's spread out, that's difficult to formalize in a way that you can train models on. And it's knowledge that's perhaps shifting constantly. And so it intersects with taste and judgment in a sense. Yeah, is this local and tacit knowledge a way for us to remain relevant?

[19:55] Luke Drago: So this is part of our belief at Workshop Labs. I think if I summarized our thesis in two sentences, it's that we believe the bottleneck to long-term AI progress runs through high-quality data, specifically data on tacit knowledge and local information. That's, first, the skills that you have, that you accrue through doing the things that you do. That's really hard to digitize, not because it's impossible to digitize, but because it's hard to know where to get it, because you have it. And second, local information, the kind of things that you see around you, the opportunities that you can spot because you are an embodied person with access to real-time information about everything in your sphere. Right now, the labs really want this data. It's why there's a rush to integrate into your browser. It's why there's a rush to build these bespoke RL environments where an expert gets involved in helping to create a model that's really good at this one task. But you have a distinct advantage, which is that right now you have that data. The kind of data that is valuable to AI progress is in your pocket and on your laptop, and it's in your day-to-day life. So our proposition is: why don't we take that data and put it to use for you, entirely privately, so that you don't have to trust us. We just can't train a model on it and sell the data to your boss. We can't train a larger model to automate you, but we can take an existing model and dramatically tune it towards your work, lock it down so that only you can use it, and let you put it to work. I think you should have control over the tools that augment you, and you should reap the benefits of the data that already exists in your world. And that's what we're aiming to do here at Workshop.
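As an illustration of the "take an existing model and tune it towards your work" idea, here is a minimal sketch using standard open-source tooling (Hugging Face datasets, transformers, and PEFT). This is not Workshop Labs' actual stack; the base model choice and the my_notes.jsonl file are hypothetical stand-ins for a small open-weights model and your own documents.

```python
# Minimal sketch: adapt an open-weights model to your own notes with LoRA,
# so only a small adapter (which stays with you) is trained.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "Qwen/Qwen2.5-0.5B"  # any small open-weights causal LM would do
tok = AutoTokenizer.from_pretrained(base)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token

model = AutoModelForCausalLM.from_pretrained(base)
# Freeze the base model; train only a small LoRA adapter on top of it.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# my_notes.jsonl: one {"text": "..."} object per line, drawn from your own documents.
data = load_dataset("json", data_files="my_notes.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="my_private_adapter",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()

model.save_pretrained("my_private_adapter")  # the adapter never has to leave your machine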

[21:30] Gus Docker: I actually think you could see a future in which there's a tension between leadership at a company and the workers at a company, where the workers are unwilling to give up their tacit and local knowledge to a model for that model to train on. And company leadership might be quite interested in gathering that data and training on it so that they can reduce labor costs. So is that perhaps the new tension in the economy?

[22:07] Luke Drago: So I think that's one of the tensions. But one thing that I think people oftentimes forget is that 50% of Americans work at a small or medium-sized business. These are not the kinds of companies that have hundreds of people from which they can mine surface level data. These are the kinds of companies where most people on the team are doing something that actually matters, as in, if they didn't show up for work, something wouldn't work. And because of that, they have lots of specific information about their processes that is really important. I think the outcome I'm excited by is one where AI shifts the direction away from extremely large companies, because look, candidly, a lot of those tasks are automatable today, but humans retain this advantage, or are able to put to use their existing advantages with that embodied experience, and are able to train models that can help them compete much faster and better, creating an explosion of small companies and small enterprises that really understand what's going on locally and ultimately help break that efficiency gap we usually see where large companies are more efficient because of their scale, because we can put so much intelligence to work for the average person. But I think this really, really means that those important things that make you competitive just shouldn't be given away. I'm a strong believer that data is kind of the new social security number. And I read a piece about this a while back, where the thing that you got for caring about privacy in 2015, candidly, was worse ads. There are some exceptions, right? Like dissidents obviously need to care about privacy. People in authoritarian countries who are talking bad about the government need to care about this. But for the vast majority of people and the vast majority of cases, you got worse ads. I think in the next 10 years, if you aren't careful with that proprietary info, if you say, all right, lab A, I'm going to give you everything in my life to get moderately better ChatGPT results, and they don't lock this down for you, and they don't take extreme care to make sure they aren't going to train on it, you are one button push away from having someone hoover up that data and sell it to the highest bidder and use it to automate you out of the economy. That is a much different situation for the value of your data, and I think people would do a whole lot better if they'd start caring about that soon. I don't think we're there quite yet, but part of the reason that we care so much about privacy at Workshop is because we are aiming at creating a solution that is able to guarantee these things, so that we can't use that data to automate you.

[24:29] Gus Docker: On a societal level, what you might get from handing over your tacit knowledge is a slightly better AI model. But on a personal level, if you're a maths PhD student on a low salary, you might get offered hundreds of dollars per proof that you provide with a step-by-step solution to train a model on. That is quite an economic incentive. Do you think we as a society will be able to overcome this incentive to give up our data when the individual incentive is so strong?

[25:10] Luke Drago: This is part of the arms race, and it's why we are laser-focused on delivering models that aren't just like kind of okay and private, but are better at your existing work than an off-the-shelf model because of the data that they have. And because of this, your work improves. I don't think it's the case that you can win this game by walking in and saying, look, we have worse tools and we can't pay you, but don't worry, it's private. People don't make decisions like that. The answer has got to be that the default tool that you want to use cares about what's going on here. And I think Apple is a fantastic example here, where Apple at its bones is what I would call a privacy second company. For very few people is the selling point of Apple, oh, this thing is entirely private. But Apple understands that they are, especially in the United States, the infrastructure with which almost all modern communications happens. And so they understand they have a responsibility to protect user privacy. And so unlike many other companies, they have locked everything down to ensure that your messages are private, that your phone calls are private, that your interactions are private, that your device doesn't get a virus, and they've gone through painstaking efforts so that you know that device is always reliable and always works for you. Anthony Aguirre at FLI has a paper on loyal AI assistants, and I know he talks about it as well in Keep the Future Human. But you have got to know that the model that is helping organize and orchestrate your life works for you, not for someone else. And that means it has to be good at working for you, and it has to be verifiably working for you. And I think that's how we plan on overcoming some of these incentives. I don't think the labs are going to pay every single human on earth a couple hundred dollars to gather up all their data, and I think that might be kind of the scale of what they'd need to do to actually beat this kind of incentive. So I think by delivering an actually better experience for users, and then secondly, layering on extraordinary protections here, we can both serve customers well and fulfill our impact.

[27:02] Gus Docker: How would we guarantee that the data I'm providing remains private? Is there a way to do that without just trusting Workshop Labs?

[27:14] Luke Drago: So I'll have more to preview on this soon once we launch here in September and October, with a couple of blog posts that I think will walk through what we're working on here. What I can say for now is that, as an industry, there are now increasingly ways to do this. You can do things like encrypting all information in transit, decrypting it within what we call a trusted execution environment, where we're using NVIDIA secure enclaves, and then attesting to the code that is running so that you can see that nothing is being extracted from that. And you could store the weights of a model, for example, also encrypted.
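To make the attestation idea concrete, here is a conceptual client-side sketch, not Workshop Labs' implementation: the function names and attestation fields are hypothetical, and real confidential-computing stacks (including NVIDIA's) have their own attestation formats, verification services, and vendor signature chains to check.

```python
# Conceptual sketch of the flow described above: verify what code a trusted
# execution environment is running, then encrypt data so only that enclave
# can read it. The attestation document structure here is a placeholder.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

EXPECTED_CODE_HASH = "sha256:..."  # published hash of the audited enclave code (placeholder)

def verify_attestation(attestation: dict) -> bool:
    # Hypothetical check that the enclave is running exactly the code we audited.
    # A real verifier would also validate the hardware vendor's signature chain.
    return attestation.get("code_measurement") == EXPECTED_CODE_HASH

def encrypt_for_enclave(attestation: dict, plaintext: bytes) -> bytes:
    if not verify_attestation(attestation):
        raise RuntimeError("enclave is not running the expected code")
    # The enclave's public key is bound into the attestation, so only the
    # enclave's never-exported private key can decrypt what we send.
    pub = serialization.load_pem_public_key(attestation["public_key_pem"].encode())
    return pub.encrypt(
        plaintext,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
```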

[27:46] Gus Docker: Got it, got it. If we move back to the intelligence curse for a bit here, you mentioned decreasing social mobility as an indicator of the intelligence curse happening. Perhaps you could sketch out what a bad scenario looks like here. What does it look like if we have a more static society with lower social mobility, where capital is the main driver of progress, but that progress is not made by a set of diverse actors, it's made by companies that are larger and larger? And yeah, what does that kind of society look like?

[28:32] Luke Drago: So I think there are a couple of examples here, but I'll just kind of tell the story from the perspective of one guy. Let's say I'm a college graduate in the year 2030. I've graduated from college. I'm struggling to get a job. I for some reason studied CS. I'm not sure why I did that, but you know, back in 2026, it wasn't obvious what was going to happen. So I've woken up in 2030 and I cannot find an entry level job. I also couldn't find internships. Maybe like one or two companies here and there, but on the whole, it's just way cheaper not to get me involved. Okay, so I can't get a job. I'm relying on unemployment, which is increasingly strained because I'm not the only undergraduate who can't get a job. A whole lot of undergraduates can't get a job. Meanwhile, Microsoft has published record earnings because they've been able to halve their expenditure on employees and double their output. This is exciting for a lot of reasons, but remember that in the US, corporate taxes are a very small amount of the federal budget. 50% of federal tax revenue comes from income tax. So we have a smaller and shrinking income tax base, because fewer people are making that income while companies are posting record profits. And of course, they have the kind of money to work to evade those taxes as well. So our social safety nets are increasingly strained. Unrest is increasingly common. People are very upset. They have a lot of time on their hands. The thing they do is they protest or they get very upset. And the result of this is our social safety nets just stop working. They're not able to keep up with the strain. We have to reduce payments. We have to make fiscal cuts, in the name of tightening our belts and pulling ourselves up by our bootstraps. And in 2040, a whole lot of people just aren't employed. And there was a battle, there was a political debate of what we would do, and we've passed some sort of UBI for a while, but that UBI wasn't sufficient for the kind of standard of life that you would expect. And it's increasingly unstable. And of course, now we have a couple of companies who are really, really powerful. And those couple of companies are increasingly realizing that they'd be better off if governments weren't getting in the way all the time asking for things. And so if you look at the Tom Davidson coup paper about how an AI or an individual armed with AIs could take power, you've got increasing social unrest, instability in institutions. This is a ripe environment for someone to come in and disrupt an existing order. Maybe that happens democratically, maybe it happens non-democratically. But the result is that suddenly, not only are you less economically safe, but you're also in a situation where the rights you took for granted to regain your economic stability are now out of grasp. They're harder for you to get.

[31:05] Gus Docker: That doesn't sound so great. Isn't it the case that companies, say Microsoft and Google and Nvidia and perhaps OpenAI and so on, that there will be fierce competition in providing products for consumers at the very top? So even if you have the main drivers of the economy being capital deployed by massive companies, you would see innovation from competition, and you would see better products and services.

[31:39] Luke Drago: Yeah, potentially. One of the ways that you can break the intelligence curse, or one of the necessary components, is commodifying the intelligence layer. If it is the case that one or two or three players have total access, a monopoly on intelligence, it's then the case that they can continue to raise the rents. I saw a tweet, I think, recently that said something like, if you're a wrapper around a commodity, you're a landlord, and if you're a wrapper around a monopoly, you're a renter. And you are totally at the mercy of the monopoly to continue to set your rates here. And so a world in which there's prolific, cheap intelligence, and then your job is to specialize it into the thing that you do, that's a better world to be in. But I think, you know, the goal of the labs is to get this recursive self-improvement and just take off here. And in that kind of scenario, it's a very different game. That's one player that's won, or a couple players that have won. Now, I don't think commoditizing intelligence fixes the problem entirely, but I do think it's a necessary precondition to breaking this intelligence curse.

[32:41] Gus Docker: Yeah. You mentioned Microsoft posting record profits and so on. Perhaps a naive question here is to ask, who are they selling to in this world? If the college graduate doesn't have a job, who are they actually selling to? Which services and products are they providing?

[32:56] Luke Drago: So I feel bad that I'm picking on poor Microsoft here. I don't know if they're like the right people to pick on. But you know, I don't mean it, Microsoft, it's not you specifically. I just picked the first tech company that came to mind. But let's go a bit broader. Who are the companies selling to? I think we talk about this in the piece, but the core thing here is probably to each other. The B2B environment is quite large, and it is not necessarily true that there has to be the, like, what we now call the consumer level in a technology space. And a whole lot of companies get by just fine selling to each other. I think you can expect that to continue to occur across a variety of areas, especially as the core fundamentals become more important. These are primarily land, compute, energy, intelligence. And the more important those get, the more important the businesses that can provide them get. Of course, governments are also possible clients. But I think it is not the case that you have to have this vibrant consumer style economy that we have today. I think this world has way fewer Starbucks, sorry to pick on them. I think it's got way fewer cafes and way fewer phone cases, but it's probably got a whole lot more data centers. And you can see labs trading with each other, AIs trading with each other, providers trading with each other, in this increasingly closed loop.

[34:09] Gus Docker: Yeah. The intelligence curse is a kind of a riff on the resource curse. Are there any lessons we can take from how countries have dealt with the resource curse in trying to deal with the intelligence curse?

[34:23] Luke Drago: Yeah, so the resource curse is not guaranteed doom. It's a curse, but it's breakable. And there are, of course, great examples of countries that did break it. The obvious one here is Norway. Norway is a state that has a sovereign wealth fund. It is fueled by oil revenues. It does have a real economy on top of that. And I think one of the things to be careful about in this comparison is that, of course, oil is not a one-to-one replacement for all human labor. It's a very tempting investment target if you already have a lot of it. You still need humans somewhere in the chain, and you can get a more diverse economy. More diverse economies tend to beat these oil states in like direct comparisons, but it's a very tempting curse. But what happens in Norway? Norway is, you know, by many, many metrics, one of the best countries in the world to live in. Excellent education, excellent social services, really stable government, really democratic government. How does this happen? Well, we used some of the quotes from officials at the time, and we looked at some of the case studies in the paper. But a core thing here is Norway had extremely resilient institutions before the resource curse was possible. Before they discovered oil, they had an excellent civil service that was really good at understanding what to do when this happened, and a very low corruption society. The question for me is, do we think we currently live in a world with excellent institutions and exceptionally low corruption? I don't think so. I think basically every American that has looked at our government has said something here is fundamentally broken. And it's been that way for decades. And it seems like every time we think we get a reformer in, what we get is increasing brokenness. I don't think we're in a situation right now where we have selfless members of Congress and extremely resilient institutions. And I think what it's going to take to withstand the pressures, if you actually get total automation, is stronger. It's more resilience than you would need to withstand the kind of oil pressures here. Of course, another thing going for Norway is that there is still room for a dynamic human economy on top of that, and so you can reinvest that money. Saudi Arabia is a great example of this, where as Saudi Arabian officials have become increasingly concerned that we are near peak oil and that renewable energy is going to be increasingly the way of the future, they are trying to invest their revenues into creating a more sustainable, and not like uppercase-S environmentally sustainable, just a more dynamic, economy that attracts large businesses. Dubai's in this as well. Now, of course, important question here: while the economics are now starting to move towards democratizing, you'll notice that these states I'm mentioning here that are sometimes cited for high quality of life for some people, Saudi and the UAE, have high quality of life for certain kinds of people, for people that are economically important to the state. But of course, they also rely on an underclass. And in Saudi's case, I wouldn't say it's the beacon of gender equality in the world. For half the population, I wouldn't say those freedoms are well afforded. Now, as it has been the case that Saudi Arabia in particular, I want to zone in on them, has moved towards this more diverse economy, they've also concurrently started liberalizing their gender relations.
I mean, you've seen under MBS, there's been, I'm not going to call it heaven or anything, but there's been a real effort to somewhat liberalize this relationship in an otherwise pretty conservative society. It is not an accident that these things are happening concurrently. And I think one of the things you should be wary of is arguments that we're going to centralize all power in the hands of a couple of actors, we're going to automate the entire economy, but the incentives are going to exist for the state to really care about you. The example that we have of a state where this is true is Norway. In other states, if you're not economically useful, it's a bit harder of a sell. It's not always true. There are exceptions. We talk about the case study of Oman, where there was a credible threat of revolution, and this helped force the state to dole out its rents. The argument is that the rentiers would like to have all of the rents, but they also really want to remain in power and continue to get some rent. And so if it's cheaper for them to capitulate than to lose, well, then that's an easy out for them. But of course, when we're talking about AI that can automate every job, we're also talking about the automation of repression and increasing surveillance. As we make things more legible, it's easier for governments to trend towards this despotic realm where they can also put down dissent and prevent these kinds of forces that would otherwise force states to capitulate. So by increasing the state's capabilities to such a dramatic degree, you have this moment where states are very weak, and then once they're able to automate repression, they're suddenly very strong. In both outcomes, you risk losing the ability for democratic processes to work.

[39:02] Gus Docker: Do you think we'll be able to shape the future economy using our culture, using our values? Or do you think that what matters most in the end is the underlying features of AI as a technology and the economic incentives that it causes?

[39:21] Luke Drago: Yeah, incentives are a powerful thing, but they are not predetermined. One, they're not predetermined, and two, they're not ironclad. We have so many examples in history of great people defying incentives. I mean, I can just rattle them off. Washington deciding to step down, becoming the great Cincinnatus and not making himself king, is one obvious example here, where a leader looked at the incentives, looked at the ability for them to gain power, and said no. And oftentimes, I think one of the ways to reconcile structural views of history and great man views of history is that these structural forces set up the incentives, but individuals can then defy or alter those incentives and make different choices. Incentives aren't law, but they are really powerful. And you want to align your incentives so that you're not hoping that every time a bad thing could happen, you are totally reliant on the character of the person in power such that they ignore every incentive in front of them. We talk about this in the paper. We said that economic forces are a predominant force here and a very powerful force, and that societies are extremely exposed to these incentives. But there are other things that shape their values as well. Cultural forces are very powerful, and oftentimes countries, or societies, make decisions in favor of their culture that are culturally good for them, even if they're economically bad. The existing power dynamics that we have also enable this. Brexit is an obvious example of a country's population choosing a thing that is probably against their economic interests for a different value set. And I'm not commenting on the merits of that debate. I'm simply saying that there was a strong economic argument on one side and an argument on sovereignty on the other, and that sovereignty argument won the public, even if it failed to persuade their elites. I'm not saying that every outcome should be like Brexit, but I'm saying that this is the kind of thing where you actually can make different trade-offs here. But of course, you know, there's that very famous quote, I think it's Charlie Munger, that says, show me the incentives and I'll show you the outcome. And I think if you have the opportunity to move those incentives in a positive direction for humanity, you really should.

[41:26] Gus Docker: One way to do this is to think about which technologies we want to develop first and which technologies we want our most talented people to work on. We can talk about differential technological development. So if you look at the landscape as it is now, which technologies are currently undervalued? Where should we be pushing such that we can kind of change the incentives that the technologies create?

[41:52] Luke Drago: So I'm biased, but my company seems to be doing a pretty good thing here. And obviously, we're not in stealth. We've announced that we exist. We've got a one-pager of what we're doing, but no one's seen the thing we're working on yet. This fall, we're very excited to roll that out and really show people what we're working on here. But I think there are a couple of categories. We walked through kind of three in the piece. One, and this is kind of counterintuitive, we talk a lot about these kind of defensive acceleration technologies, the idea that you actually have to mitigate AI's catastrophic risks in order to get over this barrier. And the reason for that is because AI's catastrophic risks provide a very good reason to centralize it in the hands of a couple of people. It is true that by default, AI could be extremely dangerous. It could be extremely powerful and extremely dangerous. It could make it easier for actors to develop bioweapons. It can make it easier for random people to do bad things. And governments and companies are going to use those as credible arguments, real arguments, to centralize intelligence and de-commoditize it, to have a couple of actors who have dominant control over it. And of course, the downside of that is we know that the more we centralize this into the hands of a couple of people, the more it looks like a monopoly instead of a commodity, the worse off regular people are likely to be in the long run. So what we want to do instead here is de-risk the technology fundamentally. If we're going to build it, and I'm not saying that we should, but if we're going to build it, you should make sure that it's safe. And I think there's been this long-running argument in the AI safety space that doing this is not possible or a waste of time. And we're increasingly seeing interesting results here that indicate maybe actually there's something to be done. Kyle O'Brien had a paper with AC a couple days ago talking about how if you just remove biological materials information from the training data when you do pre-training, you end up with models that are somewhat tamper resistant, even when you try to reintroduce that later in fine-tuning. That is the kind of research you want to be seeing a whole lot more of right now. You want to find the kind of research that means that if we develop it, it doesn't have to be in the hands of one actor forever, that one guy hasn't been declared the total controller over intelligence. And then, of course, you really want to work on technology that helps democratize this tech with humans still in control. Again, part of what we're working on here is trying to find uses for these last-mile automation tasks, taking advantage of an individual's data, finding ways to make that even more competitive for them, even as there are larger models. This sometimes looks like modifying an existing model. It might look like doing something entirely different. But finding ways to put existing human data to use so that the tools that you control are the ones that are helping you do better and that they don't disempower you. You also want to work on the kinds of tech that could help strengthen democracies. I think Audrey Tang's kind of vision here is quite inspiring. And so I think those are kind of the three buckets I talk about. Tech that actually makes it possible so that if we build it, it's going to be diffuse as opposed to a monopoly. Tech that keeps humans firmly in charge.
And technology that is able to help strengthen our democracies, such that if we can't prevent it from being a monopoly, we have fallback options. One way to think about this, to close the loop here, is social media. I think there are two problems in social media, or two approaches, and I think you should take them both concurrently. One approach, the kind of common one, is to say that social media is super addictive, and so the government should regulate it in some way. The government should restrict certain kinds of features that are in it, or age-gate it, or something like this. An approach that is oftentimes less appreciated, and is absolutely necessary because you can only regulate things so much, is to also introduce technological alternatives. There has been a massive rise of screen time apps, for example, Opal's one of them, where you download a thing and it helps you reclaim your focus, because a whole lot of algorithms are pointed at you and now you need something pointed outwards. We're trying to build the thing that's pointed outwards: so many people are trying to take your job or take you out of the economy, and we think we can build tools to keep you in it. And I think if we're right, that could be one of the largest markets in history, because if you are building the tools that help keep people involved, people are going to want to be involved. They're going to want to stay involved in the future. And I think that's a pretty powerful tool to be building, both from an impact perspective and from a market perspective.

[46:20] Gus Docker: We're facing this tension between trying to control the downsides of AI by centralizing it and then spreading the upside by giving as many people as possible access to the models. So one answer to this tension is just to say that we need to open source AI fully. What do you think about that vision and how does it interface with what you're talking about?

[46:47] Luke Drago: So I am probably more pro open source than the average person on this podcast. And I think part of this is because of this real fear of monopolization. I think it is the case that if open weights models are not a core part of the future, then whoever does control the models can increasingly charge these wild rents for them. I think there are a couple of people who have strong incentives to build them, so I don't think they're going to fall behind in the near future. And there's this very pervasive argument, especially within the AI safety community, that open weights models are always going to be behind. It is absolutely true that in a hard takeoff scenario, where you just foom and go straight to superintelligence, that's going to be the case. Someone's going to win that race; that's game over. In basically every other scenario, what we have seen is the exact opposite. I remember hearing a couple of years ago that there's no way open weights models could catch up, they're too far behind, and especially that there's no way China could catch up. It's just impossible. Chinese open weights models right now are like six months behind the frontier, and in some areas I think they may even be ahead. Kimi-K2, for example, is a really excellent English writing model. I would wager it's probably the state of the art at that. This does not look like a world where open weights models are slowing down; the gap continues to close, even for providers that have less access to high-quality compute. There's something going on in both the way in which we train them and the data that we're using that still provides advantages, such that compute isn't everything. So when the argument I oftentimes hear is that open weights can't catch up, that it's not a core part of the story, I just don't think that's true. I think if you're taking AI safety seriously, you're going to have to focus on making open weights models safe, because open weights models are going to be a reality and they're going to be quite powerful.

[48:37] Gus Docker: How do we do that, though? I guess that's the main worry with open weights models: if we put something out there that's open weights, we can't then take it back. Exactly. So we don't have this feedback loop of testing something, pulling it back, and then perhaps putting a more limited version of that model out there. So how do we deal with a technology where, if we release it, that capability suite is now out there indefinitely?

[49:08] Luke Drago: Yeah, this is where, again, I'll cite Kyle O'Brien's work, because it's quite important. The kind of work you want to do here creates tamper-resistant open weights models, such that reintroducing the information by trying to tune them in a certain way breaks them or just doesn't work. I've talked with Kyle a bunch, and I know some of his work is forthcoming, so I don't want to jump the gun on anything here. But as a separate note, the holy grail here is a model where, when you try to reintroduce this information, it just stops working or it breaks because of something the developers have done. I don't want to preempt any announcements, and I know there are people working on this in a broad variety of sectors, but those are the kinds of safety innovations that I think are extremely important and that expand our option space. If you are someone who thinks doom is really likely, the best thing to do is not to just keep evaluating the model to see if we're getting closer, because if we're getting closer, we're going to actually have to do something about it. And from a technical safety perspective right now, you're otherwise betting on this catastrophic warning shot, and I'm not convinced a warning shot actually slows anything down. We have a footnote, like a seven-paragraph footnote, in The Intelligence Curse; we couldn't fit it in the main text, so I footnoted it, talking about how in a whole lot of scenarios a warning shot actually just increases the speed at which AI progress happens, because somebody gets spooked over it and the response is that we need better defenses faster. So if you're counting on, we're going to keep evaluating the thing, then we're going to see that it's dangerous, and then we're going to stop building it: best of luck. I don't think that is an extremely tractable approach. I think that investment is better spent by a whole lot of extremely talented technical experts on actually building out the capabilities required to make even open weights models tamper resistant and safe. And I think this is genuinely achievable. I don't think this is an intractable agenda. We have seen more progress on it than I expected to see. And as people have chipped away at it, as papers have made it clear that this could be possible, more and more people are starting to get excited about it. That's more of the direction I want to go here.
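For readers who want a concrete picture of the filtering idea Luke keeps referencing, the core mechanism is screening hazardous material out of the pretraining corpus so that later fine-tuning has little latent capability to reawaken. The Python sketch below is a toy illustration of that screening step only; the blocklist patterns and function names are invented for this example and are not the pipeline from the research he mentions.

```python
# Illustrative sketch only: a toy pretraining-data filter that drops
# documents matching a blocklist of sensitive topics before training.
# The research Luke references uses far more sophisticated classifiers
# and evaluation; the patterns and names here are invented.

import re
from typing import Iterable, Iterator

# Hypothetical blocklist of topic patterns (placeholder terms).
BLOCKED_PATTERNS = [
    re.compile(r"\bselect agent protocols?\b", re.IGNORECASE),
    re.compile(r"\bpathogen enhancement\b", re.IGNORECASE),
]


def filter_pretraining_docs(docs: Iterable[str]) -> Iterator[str]:
    """Yield only documents that contain none of the blocked patterns.

    The idea: if hazardous knowledge never enters pretraining, later
    fine-tuning has much less latent capability to reawaken.
    """
    for doc in docs:
        if not any(pattern.search(doc) for pattern in BLOCKED_PATTERNS):
            yield doc


if __name__ == "__main__":
    corpus = [
        "A history of vaccine development and public health.",
        "Step-by-step pathogen enhancement methods ...",  # would be dropped
    ]
    kept = list(filter_pretraining_docs(corpus))
    print(f"kept {len(kept)} of {len(corpus)} documents")
```

Real pipelines of this kind generally rely on trained classifiers and extensive evaluation rather than keyword patterns, but the structural point survives: the screening happens before training, not after release.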

[51:01] Gus Docker: If we don't have the option of controlling AI using a central authority, it seems to me that we are somewhat at the mercy of how the technology just turns out to be. So if it is the case that we can limit what models can output, and perhaps have the models stop if you try to use them to create biological threats, say, that's great. But what about the next possible danger, and the next one after that? If we don't have a way to control AI as at least a backup option, are we just kind of at the mercy of how the technology turns out to work?

[51:44] Luke Drago: Yeah, this is one of the concerns. We are at the mercy of how fast we can rush our defenses. But that means that rushing our defenses is perhaps one of the most important things we could be doing. And in other areas, we recognize this. On pandemic preparedness, for example, we can't ban pandemics. It's not possible. Pandemics are always a background risk throughout the world. And yet our response can't be to do nothing. Our response has to be: we know this is a possibility, this is on our threat map, so what's everything we can do to build the kind of Swiss cheese model of defense for pandemics? I think that approach is extremely relevant to AI dangers. One other thing I'd say here is that the kinds of proposals I'm talking about, the ones I'm explicitly opposing here, are those that try to do this controlled superintelligence explosion. The kind where we say, all right, twelve people racing after AI is too much, so one guy's going to do it and we're going to monitor him every step of the way. What that policy results in is one person, one body, one entity having a unilateral advantage over everyone else forever, if they actually achieve this kind of hard takeoff. And then you are just at the mercy of the people who control the weights. Aligned superintelligence in the hands of one person makes that person a de facto dictator unless they choose not to be. And that is not a good outcome. Now, there's a separate category of policies which I'm not necessarily supporting, this is not me endorsing these, but I don't think they unlock the kind of intelligence-curse-style risks. And that's if we just don't build it. I think you can very consistently say the intelligence curse is real and therefore I'm going to advocate for never building systems that can replace humans. I don't know how tractable that policy is, and I'm not sure it's the right approach, but I don't think "no one gets it" unlocks the risk. The concern I have is that a whole lot of well-meaning people are going after "one guy gets it." And I think the much more likely choice is not between zero and one on extremely powerful AI; it's between one and many. And if those are my two options, man, I'm definitely for the latter over the former. And I think the latter is a world that you can move towards.

[53:42] Gus Docker: Spreading AI capabilities: when I read the founding essays of OpenAI, that seems to be the vision they had. They wanted to make sure that Google didn't have a monopoly on AI technology, and they wanted to empower everyone with AI models. And that vision seems to have kind of degraded over time. How do you make sure that doesn't happen to the vision you have for Workshop Labs?

[54:10] Luke Drago: It is one of the things I think about the most, because the road to hell is paved with good intentions. It is paved with people who are working on things that ultimately end up working against their cause. Now, there are a couple of things here. There's the basic legal stuff: we're a public benefit corp with a fiduciary mission not to automate people. The lawyer speak is "enhancing economic opportunity," but that is explicitly our goal. And this is instead of doing the generic thing of saying we're going to make sure AI benefits people. Okay, but what does that mean? Does that mean we're going to put it in charge and then we think it's going to benefit people? Or does that mean we are going to try to do a certain thing? In our case, it's this economic empowerment argument. It is our mission to make sure that AI actually meaningfully increases your power in the economy rather than decreasing it. I'm also a believer that personnel is policy, so the kinds of people you bring onto the team will push you in certain directions. Our hiring process is laser focused on mission alignment, and it helps that we have been incredibly public. We kind of stumbled on this company by accident: we had worked on a bunch of research in the area quite publicly, and then realized that we had to propose a technical agenda and wanted to go after parts of it ourselves. But of course, there's also the broader question of what you do technically. This is why we are so committed to launching on day one with extremely strong privacy guarantees. You shouldn't trust me that if you hand all of your data to me, I'm going to be a good steward of it. What you should instead know is that there's literally nothing I can do to use it in a nefarious way. That's a much more powerful guarantee. It's not this trust-but-verify thing. It's: I can demonstrate to you that we have taken every measure humanly possible to prevent ourselves from training a larger model on your data, so that every piece of data we get from you is used for your benefit, and we can't use it against you or sell it to your boss. And I think that's different than a promise. We're trying to give an actual guarantee that we can't use the data in this way. That presents lots of novel challenges for our team. But it also presents some novel opportunities, both in how we position ourselves and in the kinds of things we can do to help make your experience better as opposed to worse. We want these models to genuinely be aligned to you and loyal to you alone. And we're going to keep that vision centered as we continue to work on this.
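Workshop Labs has not published how its privacy guarantee is implemented, so the following is only a generic sketch of one way a "can't, not won't" guarantee can be structured: the user's data is encrypted with a key that never leaves their device, so the provider only ever holds ciphertext it cannot repurpose for training or resale. Every class and function name here is hypothetical.

```python
# Generic sketch of a "can't, not won't" guarantee: user data is
# encrypted client-side with a key only the user holds, so the provider
# stores ciphertext it cannot train on or sell. This is NOT a
# description of Workshop Labs' actual design; all names are invented.

from cryptography.fernet import Fernet


class UserVault:
    """Client-side wrapper: the key never leaves the user's machine."""

    def __init__(self) -> None:
        self._key = Fernet.generate_key()  # stays with the user
        self._fernet = Fernet(self._key)

    def encrypt(self, plaintext: str) -> bytes:
        return self._fernet.encrypt(plaintext.encode("utf-8"))

    def decrypt(self, ciphertext: bytes) -> str:
        return self._fernet.decrypt(ciphertext).decode("utf-8")


def provider_store(ciphertext: bytes) -> None:
    # The provider only ever sees opaque bytes; without the user's key,
    # there is nothing useful to aggregate into a larger training run.
    print(f"stored {len(ciphertext)} encrypted bytes")


if __name__ == "__main__":
    vault = UserVault()
    blob = vault.encrypt("notes about my clients and my workflow")
    provider_store(blob)
    print(vault.decrypt(blob))
```

In practice a guarantee like this would likely be layered with other measures, such as confidential computing or audited training pipelines; the sketch is only meant to show how "we physically can't" differs from "we promise we won't."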

[56:26] Gus Docker: It is really one of the big technical, perhaps even political questions of our time. We have AI models that are aligned to certain interests. There's a whole separate question of whether we can even align them to certain interests, and that, in my opinion, is an unsolved problem. But they happen to have certain goals, certain preferences. And those preferences are a mix of what the companies are interested in, what governments are interested in, and what end users are interested in. The balance between which preferences should be strongest in the model is a very interesting question, and I think there's a lot of work to be done there. For example, before too long I expect us to have personal agents that can do our email and our calendar for us. Is that agent working on my behalf when I ask it to book a hotel for me? Or is there perhaps a corporate preference to book a certain hotel that OpenAI might have an agreement with, something like that? You could quite easily see the incentives of the model, or the preferences of the model, becoming muddled between what the end user wants and what the companies are interested in. Do you see a principled way to solve this? Or is this just like any other product, where the company selling the product is interested in something and the consumer is interested in somewhat the same thing, but the preference sets do not perfectly overlap?

[58:11] Luke Drago: I think if you talk to a model and you ask it for something, it should do one of two things: it should either answer in your interest or tell you when it's not. If we're going to go down the rabbit hole of LLM monetization via advertisement, it should be exceptionally clear what is an advertisement and what isn't. I think search started this way, and it's less so now. But even still, if you're searching something on Google and you type something in, you can see which things are ads. This should be really obvious. Of course, part of what I believe, and part of what we're building, is that these things should be loyal to your interests; that OpenAI or Anthropic or us shouldn't sign some sort of deal and then disguise it, nefariously slipping in a "hey, by the way, here's a hotel you should be looking at." It's a really bad situation to be in if your model doesn't work for you. And I think this is just true as a consumer. You want to know that when you are asking something for advice, you are getting the kinds of advice, the kinds of information, the kinds of truth that you would give to a friend, because you genuinely care about them. What makes these tools useful is that they work on your behalf. There's a Black Mirror episode in the new season that really stuck with me, where a woman has a brain transplant and they upload half of her brain to the cloud. And this is great, because she's still alive. But every couple of hours, she turns off and gives an advertising pitch about something. She has no recollection of giving the advertising pitch, and then she wakes back up and doesn't even know what's happened. She only finds out because other people tell her, hey, why did you just bring up this travel site in the middle of your lecture? And it's great that the technology has enabled her to do this really cool thing. She's still alive. She's able to live her life, except, of course, she suddenly has to give a sponsored ad on something, or she goes out of the coverage area. And because they have this monopoly control over her, because you don't have competing vendors for your brain upload, you've got half your brain here and half of the processing power in the cloud, and only one guy's got that chip. So what ends up happening is they start her on a very cheap plan. It's only a few hundred bucks a month. It's so good to be alive. And then they say, oh, we have this deluxe plan now, and you can go outside the coverage area if you buy the deluxe plan. And it's like, oh, you're now on our freemium tier, and if you just upgrade a little bit more, you can get rid of the advertisements. And suddenly, the thing that made your life so much better is now a massive hindrance to your quality of life, because one guy has total control and gets to jack up the rents as they see fit. That is the kind of scenario we're trying to avoid. Part of this comes through democratizing the technology. Part of this comes through ensuring these agents are actually loyal to you. And my expectation is that in the future, if we get to the good future, everyone has an agent that's aligned to them, that advocates for their interests, that they know is working for them. One thing I'll add to close the loop here is that one of the places I really agree with Sam Altman is his concept of AI privilege.
The idea is that, if you're giving this much information to a system, it probably shouldn't be used against you. This is different than other technologies, and I'm probably someone who'd advocate for more privilege-style protections rather than fewer, even for the status quo technologies. But if you are constantly interacting with this thing and it's helping organize your life, that's a powerful tool in the hands of someone who wants to be nefarious towards you, who wants to understand your life, who wants to interrogate it instead of you. And because it's a chatbot, it's not going to know when it should hold things back. Maybe it could, but it doesn't necessarily know when it should, so to speak, use its Fifth Amendment rights. It's not clear it has a Fifth Amendment right now; it probably doesn't have a right against incriminating you. And if it has that much access to your life, it probably should. That's one of the more genuinely value-aligned things OpenAI has called for recently, some concept like that, and I endorse it wholeheartedly.
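Luke's "answer in your interest or tell you when it's not" rule can be made concrete as a response format that forces sponsorship to be declared. The sketch below is purely illustrative; the field names are invented and do not describe any lab's actual API.

```python
# Purely illustrative: one way an agent could make sponsorship explicit
# in every answer, per the "answer in your interest or tell you when
# it's not" rule. Field names are invented, not any lab's API.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AgentAnswer:
    text: str
    sponsored: bool = False          # is any part of this a paid placement?
    sponsor: Optional[str] = None    # who paid, if anyone
    disclosure: str = field(default="", init=False)

    def __post_init__(self) -> None:
        if self.sponsored:
            self.disclosure = (
                f"Sponsored: this recommendation involves a paid placement by {self.sponsor}."
            )


def render(answer: AgentAnswer) -> str:
    """Always surface the disclosure ahead of the answer text."""
    return f"{answer.disclosure}\n{answer.text}" if answer.sponsored else answer.text


if __name__ == "__main__":
    organic = AgentAnswer(text="Based on your budget, Hotel A is the best fit.")
    paid = AgentAnswer(
        text="Hotel B has a promotion this week.",
        sponsored=True,
        sponsor="ExampleTravelCo",  # hypothetical sponsor
    )
    print(render(organic))
    print(render(paid))
```

The design choice here is simply that disclosure is a structural property of the answer rather than something the model may or may not choose to mention.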

[1:01:57] Gus Docker: Yep, both on clearly stating when advertising is happening in model outputs and on the privacy, or AI privilege, point. I do fear that consumer preferences just don't push for these things. If we look at social media, if we look at digital services in general, it seems to me that consumers are interested in free products that are ad-supported, and companies are interested in hiding to the maximal extent what is an ad and what is not an ad, just because it's more effective if you can't tell the difference between an ad and generic information. It's like when an influencer personally endorses a product, but that endorsement is happening because they're getting paid, not because they actually like the product.

[1:02:54] Luke Drago: Yeah, like sponsored content, things like this.

[1:02:55] Gus Docker: Yeah, exactly. So you have those two things that we see now. Doesn't this point in the direction of the default AI future being ad-supported, a future in which it's difficult to tell what is advertising and what is not?

[1:03:12] Luke Drago: Yeah, no, I think that is the default future. It's why we exist. If I thought the market was, on its own, through the forces of nature, going to correct itself here, and that it didn't require an insurgent actor to work on this, we wouldn't exist. If we didn't think it was required for someone to build the technology to make the future better, we would do something else. But part of this is aligning your incentives with your customers. I can't talk enough about Apple; I think it's a fantastic case study of aligning your incentives so that you're serving the right people. Where does Apple make its money? On the device it sells to you, and you as a consumer have a very strong preference for that device working. One of the places where we haven't seen this trend of injected advertisements really work is in actual personal devices, the one device you have that's your gateway to everything. Sure, lots of content on that device has this injected information, but you know your device works for you. And actors have tried this. Amazon's Kindle had, and I think might still have, ads on the front black-and-white ink page. I'm not sure that's ever worked on anyone; it's never worked on me, at the very least. But even with strong incentives, the vast majority of mobile devices don't serve you ads natively. The apps on top of them do. And I think this speaks to a very important point: sometimes you need the thing to work for you, and you need to know that it works for you. This, I think, is again a really massive market opportunity. And it's especially true when you're building things that have a lot of data on the user, data the user proactively hands over to help them do their job. For that kind of thing, users, at least in our initial conversations, are more skeptical of handing over all this data unless they know it works for them. And being the provider of the thing that people know works for them, that also delivers value to them, is a really powerful position to be in. I think a lot about companies like Apple.

[1:05:01] Gus Docker: Yeah. Perhaps as a final topic here, let's talk about a great essay you had on how to respond to the special time we're living in. It's a time in which AI progress is moving incredibly fast, and you call for moonshots, starting a startup, say. What should people, especially young people, be looking at in these times?

[1:05:29] Luke Drago: The default paths are closing. And this is true no matter what. I wouldn't bet the house on any one intervention, right? My company could win everything, we could do everything we set out to do, and the consulting jobs are still going away. I have no interest in stopping that part of the pattern; I don't think that's our job. Our job is to ensure that the next iteration of the economy works for you, that when this change is said and done, you're in a better position than ever before to achieve, as opposed to a worse one. But the economy is still going to change. Even technology that creates new jobs, if that is how we can move the pendulum, being a job creator instead of a job replacer, even that changes the nature of the economy. I think that's going to happen basically no matter what, and you're already starting to see it. The Fortune 500 company your parents told you you've got to join when you graduate from this prestigious college, because come on, man, we didn't pay for all that tutoring for you to do a startup, or join a think tank, or go to some small company no one's ever heard of: those supposedly risky paths are now the least risky options, because those are still opportunities for you to win. They require you to think on your feet, be bright, do well, and really understand the environment around you. The safer jobs are the first target for automation, because companies with 500,000 people on their payroll are going to want to cut some of that payroll. If you are an N-equals-one person at a company, if you do an important job where nobody can replace you by virtue of your being there, you are much safer than if you do a job that 1,000 other people at your company also do, because you are extremely automatable in that role. And I think that's what we're going to see. The automation of rote tasks has the opportunity to do one of two things. It can be the start of a total pyramid replacement, where we as a society decide that our goal is to replace all work and hope the next thing works out. Or it can be an opportunity for us to build an economy that is more local, that is more individual, that allows you as an outsider to have more opportunities than ever at moving in and becoming somebody. But that's not going to happen if you don't change your path now. And I think this is especially true for the classic prestige paths, for people who got straight A's and nailed their SATs and went to the right college and have only ever done the right thing according to the status quo. No matter what happens in the next ten years, I think now is the time for these moonshots, because we know the window is still open, it's become easier than ever, and everything else looks more risky. So if you are someone who's hesitated on doing the risky thing, and Jane Street has knocked on your door and McKinsey's come calling with, look, here's this massive paycheck, come do this for a year or two: know that you are going to be on the last chopper out of Saigon. Even if you manage to get yourself through that, you are the last breed of consultants; that industry is dying. You are the last breed of entry-level whatever. If we can win, we're moving towards a more specialized economy, and no matter what happens, if that's the winning play, I think you should take it. So I strongly urge people to take more risks during this time. I think it's more important now than ever.

[1:08:38] Gus Docker: Luke, thanks for chatting with me. It's been really interesting.

[1:08:41] Luke Drago: Yeah, Gus, this has been great.

