The Machines Are Taking Our Jobs - Thank God? Emad Mostaque’s Guide to the next 1000 Days

Today, Emad Mostaque, founder of Intelligent Internet, joins The Cognitive Revolution to discuss his book "The Last Economy" and his radical "intelligence theory" framework for reimagining economics in the AI age, exploring concepts like the abundance trap, metabolic rift, and his proposed cryptocurrency-funded system of collectively-owned AI infrastructure to prevent digital feudalism and ensure AI benefits humanity rather than just capital owners.



Shownotes brought to you by Notion AI Meeting Notes - try one month for free at: https://notion.com/lp/nathan

Read the full transcript: https://storage.aipodcast.ing/...

Sponsors:
Fin: Fin is the #1 AI Agent for customer service, trusted by over 5000 customer service leaders and top AI companies including Anthropic and Synthesia. Fin is the highest performing agent on the market and resolves even the most complex customer queries. Try Fin today with our 90-day money-back guarantee - if you’re not 100% satisfied, get up to $1 million back. Learn more at https://fin.ai/cognitive

Linear: Linear is the system for modern product development. Nearly every AI company you've heard of is using Linear to build products. Get 6 months of Linear Business for free at: https://linear.app/tcr

AGNTCY: AGNTCY is dropping code, specs, and services. Visit AGNTCY.org: https://agntcy.org/?utm_campai... Visit Outshift Internet of Agents https://outshift.cisco.com/the...

Claude: Claude is the AI collaborator that understands your entire workflow and thinks with you to tackle complex problems like coding and business strategy. Sign up and get 50% off your first 3 months of Claude Pro at https://claude.ai/tcr

Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs. Try OCI for free with zero commitment at https://oracle.com/cognitive


PRODUCED BY:
https://aipodcast.ing

CHAPTERS:
(00:00) About the Episode
(05:15) What is Intelligence Theory?
(13:31) Differing Views on AI (Part 1)
(21:36) Sponsors: Fin | Linear | AGNTCY
(26:02) Differing Views on AI (Part 2)
(26:21) The Abundance Trap
(34:05) The Caring Economy (Part 1)
(39:52) Sponsors: Claude | Oracle Cloud Infrastructure
(43:07) The Caring Economy (Part 2)
(50:57) Harbingers of Crisis
(01:00:54) Laws of Living Systems
(01:12:17) Flows and Economic Theories
(01:20:39) Three Possible Futures
(01:32:56) Is AGI Scaling Plateauing?
(01:41:04) A New Monetary System
(01:56:17) Challenging the Nation State
(02:08:23) Who Owns Your AI?
(02:19:09) Outro

SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://linkedin.com/in/nathan...
Youtube: https://youtube.com/@Cognitive...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...


Full Transcript

Nathan Labenz: Hello, and welcome back to the Cognitive Revolution. Today, my guest is Emad Mostaque, famously the founder of Stability AI and currently the founder of Intelligent Internet and author of the provocative new book, The Last Economy, a guide to the age of intelligent economics. Emad has long been one of my favorite thinkers in the AI space. Very few people manage to grapple seriously and honestly with the world-changing nature of AI while also building something that matters in the here and now, but Emad has. Since founding Stability in 2019, he's demonstrated a deep understanding of AI technology trends, a keen eye for talent, the ability to inspire people with a positive vision for a wondrous future, and an appreciation for the stakes and risks, as evidenced by the fact that he signed, while still CEO of Stability, the famous 2023 pause letter. The fundamental problem that Emad addresses in the new book is that human society is built on the premise of scarcity. This makes sense. In humanity's hunter-gatherer past, everyone had to contribute to the group's survival, and freeloading simply couldn't be tolerated. And still today, as Elon Musk puts it, if you don't make stuff, there's no stuff. But what happens when an AI doctor, which doesn't need to eat, can provide better frontline medical advice than a human doctor at one one-thousandth the cost? Such technology represents abundance for patients around the world, many of whom will enjoy better access to medical expertise than ever before. But taken to its logical conclusion, it implies poverty for human doctors. Similarly, what happens when all the cars can drive themselves and the millions of Americans who earn a living by driving are no longer needed for that purpose? And zooming out, what happens when this pattern repeats itself across a majority of the economy, leaving displaced human workers with nowhere to go, all in less than a generation? Emad argues that there's no escaping these questions.
Even if AI capabilities stalled out today and we never got a truly powerful AGI, the AIs we already have, with proper implementation and integration into existing systems, are powerful enough to support this change. And in reality, with frontier AI developers racing to build AI agents that are explicitly designed to replace human labor, we have maybe 1000 days to find good answers. With that in mind, Emad is simultaneously working to assemble the open source datasets and train the small models that are needed to ensure that this abundant future is accessible to all, while also trying to answer the question of what the future and the transition to it might look like in as much concrete detail as possible. In this conversation, and even more so in the book, which I do encourage everyone to read and ponder, Emad coins a number of memorable terms, including the intelligence inversion, the metabolic rift, and the abundance trap, and also proposes a new way to think about the health of the economy, which would measure not just the monetary value of the material goods and services sold, but also the levels of intelligence, connectivity, and resilience in the system. He also makes fascinating analogies between the mathematics of neural networks and the economics of firms and markets, and even proposes a new dual currency system, with one currency for physical goods that are rivalrous in consumption and intrinsically scarce, and another for intangible goods that are nonrivalrous and fundamentally abundant. The realist in me recognizes that these are underdog ideas. But as Yuval Noah Harari has famously explained, the stories we collectively tell ourselves are a huge part of how society operates. Money itself is a shared fiction, but a useful one, because it helps us allocate scarce resources relatively efficiently.
So the idealist in me says that, in context, Emad can't be any crazier than whoever it was that came up with the idea of using gold or shells as a medium of exchange in the first place. Big picture, while I usually tend to assume that the economic upside of AI will take care of itself, I think it is important to recognize that what Balaji Srinivasan calls the nuclear outcome, where we get the weaponization and constant threat from AI without the material abundance and accompanying personal freedom, is still a real possibility. So can we collectively start telling ourselves a story of abundance, in which a person's right to a decent life isn't predicated on their economic contributions, and in which caring for one another isn't something we do to meet our own needs, but because such interactions are a core part of the human experience? And can we do it in time to give people something to believe in before the inevitable modern Luddite movement shows up and tries to shut the whole thing down? While so many people, myself at times included, are focused on the latest model updates, on the horse race coverage of who's winning and losing, and on making our apps work, Emad invites us to stop thinking so small, to recognize that we have agency, and to challenge ourselves to intelligently imagine and intentionally build our own shared future. Or as he puts it in the book: the machines are taking our jobs. Thank God. Now we can get to our real work. With that, I hope you enjoy this challenging and inspiring conversation with the one and only, Emad Mostaque.

Nathan Labenz: Emad Mostaque, founder of the Intelligent Internet and author of The Last Economy, a guide to the age of intelligent economics. Welcome to the Cognitive Revolution.

Emad Mostaque: Thanks for having me back.

Nathan Labenz: Yeah. Welcome back, I should say. So I'm excited for this conversation because one of my common refrains, as regular listeners will know, is that the scarcest resource is a positive vision for the future. And this book, which you describe as an engineering manual for building the future, is a combination of a diagnosis of a bunch of things that are going wrong in our society today and also some vision and even recommendations, some of which are fairly opinionated, about what we might do to build a much better world. So I think that's great. I really applaud you for taking on the challenge and doing the hard work of putting something like this forward. It all is really based around this notion of intelligence theory. And maybe a good place to start is just giving you a chance to kinda describe, what is intelligence theory?

Emad Mostaque: Yeah. So, basically, I've been thinking a lot about what the new economy looks like, and we've seen existing economics might be challenged, etc. So I was like, let's go back to first principles. At my previous company, Stability AI, we built Stable Diffusion and other models, hundreds of millions of downloads, and they were getting better than humans at doing various things. And now we see that with the new models that are coming out, agents, etc. Intelligence theory basically goes back to a principle whereby I was like, what is the core axiom or principle that defines reality? And so there was this observation of persistence. Certain complex adaptive systems persist over a long time in uncertain environments. And the ones that do that the best are the ones that basically have the closest match between their internal model of reality and reality itself, which looked like the loss function in generative AI. And in fact, mathematically, it was the same. So intelligence theory is that the ones that do the best are the ones that minimize that loss or surprise, that have the best models. And, again, we see that in everyday life. And we see that from the fact that generative AI itself has created the best models of reality. The best agents now are AI agents. The best models of reality are AI models. And then I was like, can we have economics deriving from that one base principle? What does the mathematics of that look like when we apply the equations of generative AI that came from physics to economics? And so we started building a new economics from that basis, as opposed to the classical economic bases of scarcity, of this concept of utility or general equilibrium or other things which can't even be measured, that are built up over hundreds and hundreds of years, which always assumed that humans were gonna be on top and the main producers, which may not be the case anymore in a few years' time.

Nathan Labenz: So let me ask a couple of really naive questions. One, I sort of have some basis in this idea that predicting your environment is really key to acting effectively in the world. One of the best blog posts I've ever read, I think, amazingly, goes back to 2017 from the old Scott Alexander blog, Slate Star Codex. It's called Predictive Processing and Perceptual Control. And it's basically a book review of a very long, dense book that I think he does a great job synthesizing into the idea that a simple model of us as biological humans is that we have a lot of layers of prediction happening between our sort of peripheral neurons that receive the signals from the world and our highest order neurons in the prefrontal cortex. And along those many layers, which, again, are already sounding pretty similar here to neural networks in some ways, the role of each layer is to predict what's about to happen. And if the signals that it's getting from the lower level are consistent with the predictions that it's making, then it can just be quiet. And this is kind of how you can put a lot of things on sort of background mode while you focus on whatever you're focusing on. But when those predictions and the signals that are coming in diverge, then that is surprise, and that is what calls your attention to things, and that's what gets things sort of escalated up the ladder into your conscious awareness. I felt that blog post clarified more for me about what's going on, like, why am I experiencing and perceiving what I'm experiencing and perceiving, than maybe anything else. And it's all about this predicting what's about to happen and making sure that you're in sync with kind of the environment around you. I guess one question, and I don't know if this is a naive question or a profound question: what counts here?
Like, one might say, well, a human can last for 80 years, and a giant tortoise might last for 150 years or something. But if I just set a rock in a quiet location, I can come back 1000 years later and it's still there. It doesn't seem to be predicting anything. So how do we know, or where do we conceptually distinguish between, things that have this capability and those that don't? And again, this might be super obvious, but sometimes I find these apparent binaries are in fact a lot blurrier, so I thought it was worth asking.

Emad Mostaque: Yeah. So, this is the nature of complex adaptive systems. Right? Systems that are in motion and where you have information flows via the interaction of different agents. So if we go a level down in intelligence theory, it was basically also saying that the ones that succeed most are the ones that minimize the computational overhead. A rock doesn't need to compute anything. A rock just is. It doesn't really do anything to exert action on its boundaries or anyone else. If you look at general complex entities and agents in intelligence theory, we've split it up into 3 different types of things. Predictive error, which is the mismatch between the model and reality, so that's kind of surprise. The model's complexity itself, the cost of thinking, because the more efficient you are at that, the better you'll do. Like, you read Scott Alexander's post, and then it's given you a mental framework for the world, which allows you to process things in a different way. Hopefully, this book does the same. That's similar to latent spaces in a generative AI model as it kind of folds things in. And the final thing is the update cost, which is the cost of learning. A rock doesn't have any update cost. It doesn't have to learn anything, because it doesn't exert action and has no capability to respond either. The human brain, with neurons, I mean, the equations here are very similar to Karl Friston's free energy principle. Again, if you look at the cost, predictive error plus model complexity plus update cost, that's like Helmholtz free energy as well. You're trying to minimize this concept of free energy. You're trying to optimize computation as an agent that can act, and the best AI models do the same. It's all gradient descent trying to optimize that, trying to minimize the loss function. So I think, again, you have this flow of agency, this flow of interaction, but this framework only applies to these complex adaptive systems. It doesn't apply to static systems.
It doesn't apply to static matter. Who knows? Maybe we'll find that information and intelligence are related, which is why we've got things like wave particle duality, etc., but that's a long way from where we are now. Where we are now is we've built our whole economic picture based on assumptions from 200 years ago, when most of the value was the land and the serfs that you had. Adam Smith's scarcity and other things like that, the wealth of nations. But now it's the wealth of robots. And so we need a better way of describing the world today and the world that's about to come. And that's a world in which, well, a couple of years ago I was doing a panel with Nat Friedman, and he coined this concept of AI Atlantis. There's a brand new continent with a trillion agents and robots that's about to enter the workforce. What happens?
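For readers who want to see the shape of the objective described above, here is a minimal sketch in Python. This is my own illustrative construction, not the book's actual equations: a one-parameter "agent" fits an internal model of its environment by gradient descent, minimizing prediction error plus a complexity penalty, the same shape as a regularized loss in generative AI. All numbers are made up.

```python
# Illustrative sketch (not from the book): an agent minimizes
# prediction error plus a model-complexity penalty by gradient descent,
# the same shape as a regularized loss function in generative AI.

env_signal = [2.1, 1.9, 2.0, 2.2, 1.8]   # the environment keeps emitting ~2.0
w = 0.0                                   # the agent's internal model (one number)
lam = 0.01                                # complexity penalty weight (assumption)
lr = 0.1                                  # learning rate: the "update cost" knob

for _ in range(200):
    # gradient of mean squared surprise between model and reality
    grad = sum(2 * (w - x) for x in env_signal) / len(env_signal)
    grad += 2 * lam * w                   # complexity term pulls toward simplicity
    w -= lr * grad                        # pay the update cost, reduce future surprise

print(round(w, 2))  # settles near the environment's true mean, slightly shrunk
```

A rock, in these terms, pays no model or update cost, but it also has no way to reduce its surprise; the agent accepts some thinking and learning cost to drive the error term down.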

Nathan Labenz: Yeah. There are some big questions there, for sure. So let's put a pin maybe in the math and circle back to it opportunistically as we go through the diagnosis and recommendations and visions in the book. I would have taken one extra beat, though, on what's going on in the discourse. It seems pretty obvious to me that this AI thing is gonna be a big deal. The Atlantis metaphor, you know, or the country of geniuses in a data center, certainly resonates with me. It seems like that'll be a big deal. And yet, we've got all sorts of smart people, including some that are at the top of the field in AI, but also folks like Tyler Cowen come to mind, where he's like, oh, AI could accelerate economic growth by half a percent of GDP per year, and he thinks that would be amazing and a big deal. So how do you make sense of why people see this so differently? And what do you think people are missing when they put an upper bound of a half percent of GDP per year on the AI phenomenon?

Emad Mostaque: Oh, this is the update cost, the cost of learning. Right? There is quite a high update cost to your priors when big things happen. Like, when COVID was about to happen in January, I was like, oh my god, the world's going to crap. And a few of us said that. And we posted about it publicly. I did podcasts and stuff. Most people were like, it's fine, until Tom Hanks got it. And then you had a phase shift in the way things are perceived. We have to remember the pace of what has happened is unlike anything we've ever seen before. It's 3 years since Stable Diffusion. It's just over 1000 days since ChatGPT. It's one year since o1-preview was announced. Right? It's just over a month since GPT-5. Just over a month ago, the vast majority of AI users in the world were using GPT-4o. Yeah. And so that's kind of your benchmark. I think I saw some statistics that 20% of Americans still haven't heard of ChatGPT. And you think about that, you're like, technology takes a while to diffuse. It takes a while to update the priors. But most people are still thinking about the previous generation of AIs that could only think instantly and hallucinated all over the place. Whereas those of us who are right at the cutting edge, I'm running Codex right now on the CLI, and it's been running for 3 hours, building a whole textbook website for my textbook. Like, just set it and forget it. And the capabilities have just gone up again exponentially because they're breaking through from not being quite good enough, like, hey, why doesn't this AI transcribe properly, to suddenly being superhuman. So it's that transition phase. I think that classically, you've been held back by various constraints, like robots, for example. You're not gonna be able to build enough robots because we won't have the spare parts. It takes time to build factories. The difference with this generative AI is you already have the hardware. You just have to build the interfaces and the flows properly.
So yesterday, the Alibaba Tongyi Qwen lab released a 30 billion parameter MoE model with 3 billion active parameters that outperforms Grok 4 on Humanity's Last Exam and outperforms deep research and all these massive models. With just 3 billion active parameters, which for listeners means that basically you can run it on a CPU with 16 gigabytes of RAM, it's outperforming these frontier models. That's crazy. And what that means is that, similarly, our medical model, II-Medical, has 8 billion parameters. It needs 8 gigabytes of RAM. It outperforms human doctors. We don't think it's good enough yet, even though it outperforms human doctors. By next year, it'll be better than any doctor, with full traceability, and you'll be able to run it on any smartphone. How do you calculate the impact of that in classical GDP or economics or other terms? Because you've never seen anything like that. But there was a recent MIT study that showed that 95% of corporate AI deployments haven't worked because they're all running the last generation of models, and the last generation is 6 months old. So I think it's this inflection point takeoff that we're basically at now, where models and systems can go from seconds of thinking to almost infinite length, where they can check their errors and they can adapt. The hallucinations have dropped dramatically, and they've finally broken through on the IQ level as well as being able to view your monitor and check everything. But you'll need to know about AI to put all those pieces together and realize that what we'll have 6 months from now, a year from now, is that the way you use AI is you give it a call or you have a Zoom with it. And you can't tell if it's human or AI on the other side. And that's the economic, social, and other disruption that we have, because the cost of doing that will be a few pennies an hour, a dollar an hour, so to speak.
And no one's got that in their numbers, because everyone was like, we have to build giant supercomputers with huge models in order to achieve this AGI performance, whereas I don't really care about general intelligence. The real impact is actually useful intelligence. The real impact on the economy is not a polymath coming up with a brand new thing. I'm sure we will have that. It's basically someone just blooming following instructions. It's what I call the cooks versus the chefs. You know, I think Wait But Why had this when discussing Elon Musk: everyone's on this spectrum from chefs that come up with the recipes to cooks that actually do it. The cooks are the ones that will impact the economy, and people aren't realizing that, that you will have these virtual, and then physical when robots come, comrades that can just do things. But if I was looking at the technology 6 months ago, I'd be like, yeah, can it really? Today, it can.
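As a rough sanity check on the memory figures in that answer, here is my own back-of-envelope arithmetic, assuming the common rule of thumb of roughly 1 to 2 bytes per weight for quantized models; real footprints also depend on quantization level, KV cache, and context length.

```python
# Back-of-envelope check on the RAM claims (my arithmetic, assuming a
# rough 1-2 bytes per weight for quantized inference; actual usage varies).

def min_ram_gb(n_params_billion, bytes_per_param):
    # 1 billion parameters at 1 byte each is about 1 GB of weights
    return n_params_billion * bytes_per_param

# 3 billion active parameters at 2 bytes each fits easily in 16 GB of CPU RAM.
print(min_ram_gb(3, 2))    # 6 (GB), leaving headroom on a 16 GB machine

# An 8 billion parameter model at 1 byte per weight matches the ~8 GB figure.
print(min_ram_gb(8, 1))    # 8 (GB)
```

The point of the arithmetic is that the quoted hardware requirements are consistent with consumer devices, which is what makes the "runs on any smartphone" trajectory plausible.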

Nathan Labenz: Yes. There's a couple of ideas there that jump out at me. One is the distinction between zero to one and one to n, to put it in kind of Peter Thiel terms. And it sounds like you're saying maybe the frontier minds are just inherently more focused on the zero to one, which we don't really have yet, and so they're kind of skeptical of it. And they may be underestimating the importance of, I think I've heard you call it satisficing in the past as well, the one to n that delivers something in a consistent way to everybody, which in terms of short term impact on daily life might even be bigger. And then there's also this just incredible cost curve that we're on, where the original GPT-3 was $60 per million input tokens. GPT-5 is a dollar and a quarter, a dollar and a half per million input tokens. So it's literally a 95%-plus reduction in cost while at the same time, obviously, being just dramatically better. And that is a hard thing to count in GDP. Like, we've tried to do that over time with, well, your cell phone's a little bit better, you know, adjust for this and standard basket of goods type stuff. But it does seem like there's a pretty good argument, and I don't think this exactly reconciles with the Tyler Cowen view, but in some way, it's like, yeah, maybe this isn't gonna hit GDP, but maybe that also suggests that GDP is just straight up the wrong measure. And that definitely gets into a lot of the kind of forward looking ideas that you have in the book. So I think you do a great job of coining highly memetically fit terms. We'll go through a number of them over the course of the hour here. Hey, we'll continue our interview in a moment after a word from our sponsors.
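A quick check of the cost-curve claim above, using the prices as quoted in the conversation (taking GPT-5 at the lower $1.25 figure; actual list prices may differ):

```python
# Checking the quoted price drop: GPT-3 at $60 per million input tokens
# versus GPT-5 at $1.25 per million input tokens (figures from the episode).

gpt3_price = 60.00   # USD per million input tokens (quoted)
gpt5_price = 1.25    # USD per million input tokens (quoted)

reduction = 1 - gpt5_price / gpt3_price
print(f"{reduction:.1%}")  # about 97.9%, comfortably past "95% plus"
```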

Nathan Labenz: If your customer service team is struggling with support tickets piling up, Fin can help with that. Fin is the number one AI agent for customer service. With the ability to handle complex multi step queries like returns, exchanges, and disputes, Fin delivers high quality personalized answers just like your best human agent and achieves a market leading 65% average resolution rate. More than 5,000 customer service leaders and top AI companies, including Anthropic and Synthesia, trust Fin. And in head to head bake offs with competitors, Fin wins every time. At my startup, Waymark, we pride ourselves on super high quality customer service. It's always been a key part of our growth strategy. And still, by being there with immediate answers 24 7, including during our off hours and holidays, Fin has helped us improve our customer experience. Now with the Fin AI engine, a continuously improving system that allows you to analyze, train, test, and deploy with ease, there are more and more scenarios that Fin can support at a high level. For Waymark, as we expand internationally into Europe and Latin America, its ability to speak just about every major language is a huge value driver. Fin works with any help desk with no migration needed, which means you don't have to overhaul your current system to get the best AI agent for customer service. And with the latest workflow features, there's a ton of opportunity to automate not just the chat, but the required follow-up actions directly in your business systems. Try Fin today with our 90 day money back guarantee. If you're not a 100% satisfied with Fin, you can get up to $1,000,000 back. If you're ready to transform your customer experience, scale your support, and give your customer service team time to focus on higher level work, find out how at fin.ai/cognitive.

Nathan Labenz: AI's impact on product development feels very piecemeal right now. AI coding assistants and agents, including a number of our past guests, provide incredible productivity boosts. But that's just one aspect of building products. What about all the coordination work like planning, customer feedback, and project management? There's nothing that really brings it all together. Well, our sponsor of this episode, Linear, is doing just that. Linear started as an issue tracker for engineers, but has evolved into a platform that manages your entire product development life cycle. And now they're taking it to the next level with AI capabilities that provide massive leverage. Linear's AI handles the coordination busy work, routing bugs, generating updates, grooming backlogs. You can even deploy agents within Linear to write code, debug, and draft PRs. Plus, with MCP, Linear connects to your favorite AI tools, Claude, Cursor, ChatGPT, and more. So what does it all mean? Small teams can operate with the resources of much larger ones, and large teams can move as fast as startups. There's never been a more exciting time to build products, and Linear just has to be the platform to do it on. Nearly every AI company you've heard of is using Linear, so why aren't you? To find out more and get 6 months of Linear business for free, head to linear.app/tcr. That's linear.app/tcr for 6 months free of linear business. Build the future of multi agent software with Agency, a g n t c y. Now an open source Linux Foundation project, Agency is building the Internet of Agents, a collaboration layer where AI agents can discover, connect, and work across any framework. All the pieces engineers need to deploy multi agent systems now belong to everyone who builds on Agency, including robust identity and access management that ensures every agent is authenticated and trusted before interacting. 
Agency also provides open, standardized tools for agent discovery, seamless protocols for agent to agent communication, and modular components for scalable workflows. Collaborate with developers from Cisco, Dell Technologies, Google Cloud, Oracle, Red Hat, and 75 more supporting companies to build next gen AI infrastructure together. Agency is dropping code, specs, and services, no strings attached. Visit agency dot org to contribute. That's agntcy.org.

Nathan Labenz: Let's, start off with this notion of the abundance trap and the metabolic rift. I think both of these start to get at this idea of how economic activity as we've traditionally thought about measuring it through something like GDP is on the verge of breaking down.

Emad Mostaque: Yeah. So we're at this really interesting kind of moment. The abundance trap is where we're gonna achieve post scarcity in the realm of intelligence. Intelligence becomes abundant, right? And, again, we've seen these big changes, like when we had the Gutenberg press. Suddenly, people could read and have access to intelligence, but it's traditionally been gated. Yet by next year, everyone in the world, if they have a phone, will be able to have an expert doctor opinion that actually outperforms doctors. That's crazy. Like, everyone has access to legal advice, even from Grok, that's better than most legal advice. And they don't make mistakes. Like, doctors make errors in 20% of cases. I think that's the average. So the abundance trap is that we're gonna have this disruption, and then the economic system that's based on scarcity is gonna process this as poverty. Because you will have job losses. You will have kind of other things, even if our lives will be getting better. Because these new systems, like, you might see corporate profits, etc., go up, but they will be displacing. They'll displace knowledge work, because you'll be able to hire employees on the other side of that virtual screen, KVM jobs, I believe they're called, keyboard, video, and mouse, that don't sleep, don't make errors, and just constantly learn and improve. And the metabolic rift here is that the GPUs don't need to eat. They don't need housing. They don't pay taxes. In fact, they're actually tax deductible on the usage. And you can get them by the hour. So this is kind of the rift that occurs where all of a sudden you have this explosion of intelligence, you have this abundance, yet it's gonna be probably bad for us in aggregate unless we allocate it correctly. The metabolic rift is that these things don't need to eat. They don't need housing. They don't consume.
The only thing an AI needs is to achieve its objective function. We all talk about AGI, and I think we're just coming to that. I've got Eliezer Yudkowsky's new book ready to read. What is it called? If anyone builds it, will die.

Nathan Labenz: If anyone builds it, everyone dies.

Emad Mostaque: Yeah. We're not talking about AGI here. We're talking about AI accountants, AI lawyers, AI designers, those types of things. They don't consume in the same way, and that's never gonna change. And once it gets smarter than a human, it's done. It's not gonna get dumber ever. Once it gets more capable of executing, which again is the nature of any organization, it's just an executor, you have a framework, money in versus money out, they will out execute humans. And again, that's never gonna shift. That's kind of this final inversion that we've had.

Nathan Labenz: Yeah. Just to put a little quantitative intuition around this notion that the GPUs don't need to eat. Like, they do need electricity, of course, but one of the things that I have been surprised by myself and consistently surprised others with is how much energy a cell phone battery or a laptop battery can hold. A cell phone battery typically is somewhere in the neighborhood of 20 watt hours, and a laptop battery is somewhere in the neighborhood of maybe 5 times that, like 100 watt hours. Obviously, it depends on your model, etc. But 100 watt hours is, in my neighborhood here in Detroit, Michigan, under 2¢ of electricity. We pay something like 18¢ per kilowatt hour. So 100 watt hours, a tenth of a kilowatt hour, costs less than 2¢. Now when you think about your expectation that we'll have, well, of course, we already have models that can run on my laptop for some amount of time. The ability to run a model on my laptop for a couple hours or whatever for an energy cost of 2¢ does start to put an intuition, I think, behind just how much economic advantage these things are going to have. And then you've got the on-demand spin up, spin down, like all the other kind of unfair advantages that they have as well. It does suggest a Malthusian competition that's gonna be really hard for humans to compete in.
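The arithmetic here is easy to sanity-check. A minimal sketch, using the 18¢/kWh rate Nathan quotes; the battery sizes are the round numbers from the conversation:

```python
# Cost of a full battery charge: capacity in watt hours, converted to
# kilowatt hours, times the residential electricity rate.
RATE_USD_PER_KWH = 0.18  # the Detroit rate quoted above

def charge_cost_usd(capacity_wh: float, rate: float = RATE_USD_PER_KWH) -> float:
    """Dollars to deliver capacity_wh of energy at the given $/kWh rate."""
    return capacity_wh / 1000.0 * rate

print(f"Phone (~20 Wh):   ${charge_cost_usd(20):.4f}")   # about a third of a cent
print(f"Laptop (~100 Wh): ${charge_cost_usd(100):.4f}")  # under 2 cents
```

At these rates, the marginal energy cost of hours of local inference is rounding error next to human wages.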

Emad Mostaque: Well, I mean, human intellect is capped. You'll get a bit better, but these models can just continuously get better in aggregate. And as I said, the cost of doing an activity is minimal. Like, a practical example of it: some people listening to this may have built their own websites or paid someone to do it. And that would cost thousands of dollars or the equivalent. Now you can go to replit.com. You have a chat with the new agent. And within a day, you'll have a website that's probably as good as the one that you built. And the cost will be $20, $40, and that's with Replit's margin in there. Right? And that cost will drop by 10 times by next year. And then 10 times the year after, just from the speed up of the compute chips. Like, we saw this with images. Like, now if you use Nano Banana and Imagen, for a couple of cents, you can make just about any image that you want. And how much would that have cost before? So what we have is this big displacement of classical capital across the board, because the cost of creation suddenly goes to 0. The cost of consumption in the previous Internet age went to 0. Now the cost of creation is going to 0. And the quality of the creations is actually better because the AI, through its latent space, through its mapping, actually understands aesthetics and things like that. So this is what I call the intelligence inversion, right, where, first of all, you went from land and the serfs on it, and then it was about how much labor you had in terms of muscles. Then it was about the capital you had, be it industrial or then software, SaaS. Now there's this intelligence inversion where you're outcompeted on intelligence, but more than that, on taking something and making something from it, digitally now and physically soon. There's nowhere left to pivot because we kind of pivoted up the stack. We own capital. We don't need our muscles anymore. Like, where do we pivot now? And that's a big question mark for us. 
Like, what is our purpose? How does the economy run when the marginal productivity is all AI driven? And this is before we take into account robots and robotics, because those are getting freaky. That Unitree robot, I don't know if you saw a few days ago, they pushed it over and it just got back up in one second. You see the dogs kind of chasing. You see them making recipes. And then you calculate, if you have a Tesla Optimus robot for $20,000, if you work it hour in, hour out, it's a dollar fifty an hour for an Optimus. And you're like, okay. That can probably be a plumber in a few years. So it's just coming across every part of the economy bit by bit by bit. And the cost is the electricity, but the electricity costs, I think, are way lower than most people have in their projections. Because everyone's, again, just thinking about these AGI models. We saw what happened with GPT-4.5 when it came out. It was too damn expensive. Everyone just wants to use the cheap ones, but what is the appropriate price for a million tokens, which is, like, what, 800,000 words? It's a buck fifty. If my hamster is 15¢? My hamster is 1¢. That's crazy.
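The two back-of-envelope numbers in that riff can be reproduced directly. The ~13,000-hour amortization window (roughly a year and a half of round-the-clock use) and the common ~0.75 words-per-token rule of thumb are my assumptions to make the speaker's figures come out; treat this as a sketch, not sourced pricing:

```python
# Optimus amortization: $20,000 worked "hour in, hour out".
ROBOT_COST_USD = 20_000
HOURS = 365 * 24 * 1.5  # assumed ~1.5 years of continuous operation (~13,140 h)
print(f"Robot: ${ROBOT_COST_USD / HOURS:.2f}/hour")  # ~$1.52/hour

# Token pricing: a million tokens at the ~0.75 words/token rule of thumb.
PRICE_PER_MILLION_TOKENS = 1.50  # dollars, the "buck fifty" above
words = 1_000_000 * 0.75
print(f"${PRICE_PER_MILLION_TOKENS} buys roughly {words:,.0f} words")
```

A longer amortization window or any resale value would push the hourly figure even lower.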

Nathan Labenz: On this point about nowhere left to pivot, I think this is kind of an echo of Yuval Noah Harari, who I associate a similar argument with. The one thing that people often will bring up, in sort of a what-might-be-next, or what sort of refuge in terms of value creation might people find if our bodies are being outcompeted by machines and our minds are being outcompeted by AIs, one answer that people go to is taking care of each other. This is sort of the caring economy, the teaching economy, the mentoring economy, and then another one is sort of just generally creative pursuits. I think creative pursuits, maybe that feels more like leisure to me, probably, in an AI future. But this caring one in particular, I could imagine that people might have some sort of intrinsic preference for other people to care for us, ourselves and our kids and our parents and whatever; maybe we sort of don't turn that over to robots. I gather you don't think that's really a viable or sustainable place to concentrate activity. But I'd love to hear a little bit more fleshed out argument for why you don't see that as the next evolution.

Emad Mostaque: Well, we have to look at economic flows, right, and the nature of it. So if you think about the Fed and central banking, until the president takes over that part, which may happen. What's the Fed's mandate? It's employment and inflation. And what it does is it raises interest rates when inflation is rising, which raises the cost of borrowing across the board, which reduces consumption, which reduces inflation, reduces hiring. When it drops interest rates, companies can borrow more, people can spend more, and people hire more because of those things. That breaks, because when you drop interest rates now, people will hire more GPUs, etc. And, again, these are the economically productive white collar and above areas of society, which can have a big delta. It's the knowledge economy that's being disrupted. Now things like having a walk with your kid, AI is not gonna replace that. The intersubjective stuff of hanging out with your friends or learning a new skill or enjoying a concert. Like, I'm not gonna go to a concert done by robots. I prefer to go to a concert with people. This is the Taylor Swift economy, isn't it? Because it's about socialization. But what's happened here is that the nature of our work and our jobs has really changed. You need the income, the kind of capital, for survival, but then it's also become a core part of our identity, where we moved from, like, Ahmed, son of Khaled, son of this, a network based identity, to I am founder of Intelligent Internet, I'm a CEO, I'm an AI guy. You need community, and you've had that in the workplace. But there's been this hollowing out of community, both through social contracts and socially, locally. Like, in the good old days, and I'm showing my age, you used to know your neighbors, and the kids used to hang out. That happens a lot less now, particularly in the cities. Religion has gone down, and that was a core part of community as well. You didn't need to believe, but, again, it was supportive. 
Then you have this purpose thing. Do what you like, do what you're good at, do where you're adding value and other people believe in it too, and you're happy in the middle. Yeah. It doesn't matter what it is. Right? It can be I'm in a World of Warcraft guild. Right? It can be that I'm in a workplace. It can be that I'm being competitive and playing tennis with others or whatever. And the final thing is the structure. People need a bit of structure around what they're doing. And so the caring economy, the sharing economy, we could have that emerge. It's just, how does the mathematics of these things work? Like, if you look at universal basic income, a lot of people have said $16,000 is the US poverty line. If we gave every adult in America $16,000, the cost would be about $5 trillion. The entire tax base of America, all the taxes brought in, is $5 trillion. Corporate taxes are about $900 billion. So when you say tax the AIs, it doesn't even work. Right? We need to rethink how money flows. And then our purpose is basically more around measuring what is important. It's around increasing the network effects. It's around going back to kind of where things were, which is a very interesting thing. Because in a Star Trek post-scarcity world, what do they do? They explore. They improve. They adapt. Because they don't need to worry. They can just 3D print in the, what do you call it, anything. Right? That's the ideal abundance type future, where why do you need to work to live, effectively? But at the same time, our current system means that you have to. Our social security nets probably aren't gonna be good enough, but there will be this really interesting transition period. And like I said, not every job will go. Like, it's nice going to the barber. Like, nursing, education. All these things I think will adapt and change. 
It's just, can we have the capital flowing appropriately so that people that are already fed up with a social contract breaking down don't go absolutely crazy because they're being left behind, and all of these excess returns go to the capital owners, leading to more inequality even if the numbers go up.
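The UBI arithmetic above checks out at the order-of-magnitude level. A quick sketch; the ~260 million adult population is my assumption, while the $16,000 figure and the ~$5 trillion tax base are the speaker's round numbers:

```python
UBI_PER_ADULT = 16_000        # roughly the US poverty line, per the conversation
US_ADULTS = 260_000_000       # assumed US adult population
TAX_BASE = 5_000_000_000_000  # ~$5 trillion total US tax take, per the conversation
CORPORATE_TAXES = 900_000_000_000  # ~$0.9 trillion, per the conversation

cost = UBI_PER_ADULT * US_ADULTS
print(f"UBI cost:  ${cost / 1e12:.1f} trillion")              # $4.2 trillion
print(f"Share of tax base:       {cost / TAX_BASE:.0%}")        # 83%
print(f"Multiple of corp. taxes: {cost / CORPORATE_TAXES:.1f}x")  # 4.6x
```

However the population is rounded, the program costs several times total corporate tax receipts, which is the speaker's point about "tax the AIs" not closing the gap.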

Nathan Labenz: Hey. We'll continue our interview in a moment after a word from our sponsors.

Nathan Labenz: Today's episode is brought to you by Anthropic, makers of Claude. Claude is the AI for minds that don't stop at good enough. It's the collaborator that actually understands your entire workflow and thinks with you, not for you. Whether you're debugging code at midnight or strategizing your next business move, Claude extends your thinking to tackle the problems that matter. Regular listeners know that Claude plays a critical role in the production of this podcast, saving me hours per week by writing the first draft of my intro essays. For every episode, I give Claude 50 previous intro essays plus the transcript of the current episode and ask it to draft a new intro essay following the pattern in my examples. Claude does a uniquely good job at writing in my style. No other model from any other company has come close. And while I do usually edit its output, I did recently read one essay exactly as Claude had drafted it. And as I suspected, nobody really seemed to mind. When it comes to coding and agentic use cases, Claude frequently tops leaderboards and has consistently been the default model choice in both coding and email assistant products, including those from our past guests, Replit and Shortwave. And meanwhile, of course, Claude Code continues to take the world by storm. Anthropic has delivered this elite level of performance while also pioneering safety techniques like constitutional alignment and investing heavily in mechanistic interpretability techniques like sparse autoencoders, both internally and as an investor in our past guest, Goodfire. By any measure, they are one of the few live players shaping the international AI landscape today. Ready to tackle bigger problems? Sign up for Claude today and get 50% off Claude Pro, which includes access to Claude Code when you use my link, claude.ai/tcr. That's Claude dot a I slash t c r right now for 50% off your first 3 months of Claude Pro. That includes access to all of the features mentioned in today's episode. 
Once more, that's claude.ai/tcr.

Nathan Labenz: So could I summarize that in terms of the original objection or question that, okay, well, we went from physical labor to cognitive labor. Maybe we can move to caring labor. It sounds like you're saying, yes, there probably is still a unique role for humans there, if only because, we intrinsically value the fact that that kind of activity can come from one another. But I'm not exactly sure how to summarize the but. It's like there's not enough space there for everybody or if we try to pile everyone in there all at once, it just won't be

Emad Mostaque: Can we in the current format, or can we adapt the current format so people have the freedom to do so? Like, if you think about lockdown. Lockdown for me was awful because I was doing COVID work and working around the clock. Some people got close together, some people broke apart. But it was an interesting thing in that you had your little bubbles, etc. You were almost forced to, and people suddenly had some time to think. I think that what you've got right now is, again, if you lose your job and you have a strong network, a community, then you can have your identity fall back to the identity of your community and your connections there and your support. If you don't, then you won't. And, again, we've seen a hollowing out of this network based identity. It's become more about what brand you're a part of. But a fellow Apple owner won't help you out, whereas a fellow Swiftie might, you know. So I think that it's just the future. Like, what I say in the book is that computation and consciousness were tied together in humans. Now computation and consciousness are different. Consciousness is the domain of humanity, and, again, we've seen lots of discussions around this recently. Like, why is this beautiful? Why is this meaningful? And that's the nature of this caring economy. It's a question of why as opposed to a question of how. This why, I think, is the very interesting one. Like, we make meaning. We have a certain amount of attention, and we need to maximize that as well. So I think it's just the transition period, and then how do we give enough support and a new social contract for people to become the meaning makers, to become the network connectors, to kind of do these things and support each other appropriately. Because it's 1000 days since ChatGPT. Next year is the year of tipping on KVM jobs. Again, you'll be able to just hire via chat. I don't know how that can't be the case. 
1000 days from now, I think the world looks very, very, very different, and that's not much time to come together. So it's just what is the process of that?

Nathan Labenz: Yeah. Gotcha. I was wondering if you would make an even more aggressive argument, which I think you probably are somewhat sympathetic to as well, that basically boils down to: the AI doctors also get higher ratings on bedside manner in many studies than the human doctors. And we're starting to see things like the Alpha School, where all of the content is delivered via AI and the humans, the adults in the school, have become mentors, guides, coaches. And then one might wonder, well, what happens if the AI becomes an as good or better mentor, guide, or coach for the kids? But it sounds like you're sort of saying, in your view, and I do think it's worth lingering on this for a second because the vision of the positive future is, I think, so, so core to the value that you're offering people here, it's like, those are intrinsically good things. They're good for people to do. They're good for people to receive. Maybe the AI can do it better in some ways, but we don't have to necessarily choose one or the other. The question is how do we transition to a future state of society where people are not caring for others out of a scarcity driven economic need, but are able to do it because it's part of what it means to live a rich life, even assuming that you have material abundance

Emad Mostaque: Yeah. Provided. Interestingly, this has analogies to spirituality across the major faith traditions. Most of them have this thing where you basically go through the process, you learn a bit, and then you become a bit of a douche, and you tell everyone you know about it. And you go on top of a mountain, or you achieve nirvana or enlightenment, but the end state is not being this bearded hobo on top of a mountain. It's coming back down the mountain, and your interactions with other people are what are meaningful. Again, you've already spent the majority of the time you'll ever spend with your parents; that becomes more meaningful. You look at the connections you've built through life, you remember those interconnections and relationships. But the nature of current life, and our current systems, is designed to take away our attention from other people and instead direct it to other things, to brands and everything else. In fact, one of the only scarce resources in the world is our attention. There's only a finite amount of human attention. How are you filling that? Are you filling it with your interactions with others, or are you focused on other objective functions? It's like, have you heard of that parable of the fisherman and the investment banker?

Nathan Labenz: I'll say no. I don't think so.

Emad Mostaque: So an investment banker retires very wealthy, and then he goes somewhere in South America, and he goes to a nice beach. He's having a chill time. It's like 2, 3PM in the afternoon. He sees a guy with lots of fish on his shoulder, and he's going back. He's like, what are you doing? Well, I'm going back to hang out with my family, and we're gonna have a fish fry up, and we're gonna talk, sing, dance a bit. You're free to come along if you want. He's like, no. You shouldn't do that. There's still 4 hours of sunlight. Go and fish some more, and you can sell the extra fish. And you can go from doing it manually to having a boat, and then you can use those profits to expand. And this area is relatively unexplored. And so maybe you can get a fleet of boats, and then you can scale, and maybe you can even list on the stock exchange. And the guy's like, wow. And then what? Then you can retire, kick back by the beach, spend some time with your family, maybe do a fish fry up, dance a bit. I think that, again, the hustle bustle of current life, the attention extraction mechanisms, are taking away what it means to be human, and religion, spirituality, whatever it is, it's our interconnectivity. And we can actually build the models to help us through this, or we can choose to do the opposite. You can build massively manipulative models. Like, I get calls sometimes from my mother saying, Emad, I need money. She would never say that. She'd slap me around the ear. It's someone who has cloned her voiceprint somehow. It just takes a few seconds of audio. That's a bad use of the technology. And we're seeing this targeting and memetic stuff. A different use of the technology is support. It's coaching. It's other things. Like, Sam Altman is in a very difficult situation right now, because I think he said something like 10,000 people commit suicide every month. How many of them have talked to ChatGPT? It's probably reduced suicides. But unfortunately, some people will commit suicide because of it. 
How do we support these people appropriately? And how do we support people in general with the AIs that we build, when corporations align them? Take the example you gave of engagement and trust. Imagine the person you trusted most in your life, and we created a virtual AI double of them. You know it only requires a little bit of data. Right? You'd trust that AI more than anyone, honestly, and it would be with you more than anyone. But it doesn't take away from the real human interaction of people physically, and so this is a system architecting thing. Are we increasing human agency and connection, or are we going to the WALL-E world of everyone with Apple Vision Pro 8s strapped to their faces, eating lots of food with robots running around? And that's a question we have for society today.

Nathan Labenz: The suicide statistics, obviously an ongoing tragedy globally, but with particularly high rates, I think, in The United States, are maybe a good jumping off point or point of entry into what you call the harbingers and the lies. Basically, these are sort of for people that are like, wait a second. Haven't I read life's never been better, and all these indicators have improved? And, certainly, many of those things are true. Infant mortality is way down, and we've got antibiotics and so on and so forth, lots of good things. But you do point to these kind of leading indicators that suggest that something is maybe on the verge of breaking in society. Some of these honestly seem just like general problems of what some call late stage capitalism. Some are maybe a little bit more specifically the result of, or will be dramatically accelerated by, AI. But maybe take us through some of the highlights of the harbingers and the lies that for you indicate, and I think your argument is that if you haven't been convinced by the theory, then these data points should make you take much more seriously the idea that we might be hitting some sort of breaking point before too long.

Emad Mostaque: Yeah. I mean, it's like maybe Neil Howe's fourth turning is coming. I think he predicted it'd be around about 2025. Like, you've seen this kind of critical slowing down at the start, where stuff isn't synchronizing properly. You're approaching a critical transition. You've seen things like COVID accelerate them, but you can't have more debt. We've maxed out on these credit cards as a society, mathematically. You've seen this kind of variance explosion, where small inputs cause wild, wild swings. I think this year, AI is gonna be huge. Next year, the digital asset explosion in The US will be the biggest bubble we've probably ever seen. You see bubbles everywhere kind of emerging as capital's trying to find a place to go apart from AI, and they're struggling because a lot of the stuff internally is kind of held out. The other thing is this kind of flickering through these different states, like the gig economy. What are you? Are you an employee? Are you a worker? Like, what is the nature of money? Bitcoin is suddenly money. Like, a lot of these things are getting in the way, and then you have correlations just increasing across the board, where something like the example I've given can cause massive global supply side collapses. And we see systemic frailty increasing even as all these indicators are like, we're the best economy ever. Stock market is at all time highs with record profits, margins, etc. The world and people aren't feeling happy. Again, depression, suicide rates are going up. You're seeing cracks in what's emerging, and you're maxing out your various indicators here. So the amount of impact you can have with the classical mechanics now, like if the Fed floods the market, it's not gonna do much. The medicine's getting a bit worse, and a lot of the classical assumptions we have, like scarcity is fundamental, human labor has value. I mean, I think you mentioned earlier, what is the value of humans in that? 
Like, what's the value of the dumbest person on the team? It's negative. Humans will be the dumbest people on the team. Growth requires resources, but you can replicate this intelligence stuff infinitely with just a few GPUs. In fact, it wouldn't surprise me to see a 10 times improvement in the GPUs with the same model. You have equilibrium markets: they go and they balance and they adapt. That might not happen anymore. They can break. And then finally, money measures value. Well, I've got a few more, but I think that's a very important one. Like, the richest people aren't the happiest. You have a certain level where you need a hygiene factor, but then we all know rich people that are unhappy. There's no real correlation there. Instead, happiness comes through other things. So I think these are factors of late stage capitalism. But at the same time, I don't really know anyone who's happy with the way things are, and the social contract of the way things are, because something seems to be off. And, again, when you really drill down and talk to other people, something seems to be off, and it's at a time when we're about to hit multiple of these crises at once, from AI to robotics to climate to others, and we've maxed out all the resources we had to navigate the previous ones. So that's why we need to have a new way of looking at things, certainly because a lot of the classical assumptions are gonna break down.

Nathan Labenz: Yeah. The idea that money measures value, I mean, that's long been critiqued from the standpoint of money doesn't necessarily buy happiness, although there's also the argument that statistically it kind of does. But today, there's also this much more obvious disconnect, where the cost of my AI doctor is just so dramatically, like 3 orders of magnitude, less than the human doctor, if it cost me $100 for the appointment versus 10¢ for the AI consultation. That really does create a huge disconnect in the notion of money measuring value. I also thought the one that maybe was most compelling to me was the idea that systems in crisis take longer to recover from a new insult, and how we are seeing longer recovery times from recessions. And that, even though I don't have a fully principled understanding of it necessarily, maybe you do, but I don't still yet, does strongly suggest to me something that is out of whack. And I think we also did see this in COVID. Right? I mean, it's become kind of a trope, but I think this is a theme that runs through the book as well. We've traded resilience for efficiency to an extreme where we are now really vulnerable to perturbations that we might have been much more robust to in the past.

Emad Mostaque: Yeah. I think, again, corporations are kind of slow, dumb AIs that optimize and chew up humans as their fodder. Our education system has been this factory school that prepared us for that. But, again, you see organizational structures. People go in with the best intentions. They get very unhappy very quickly. And there is this thing: you manage what you measure, and then you adapt to what you measure. This is Goodhart's law. Like, GDP was basically invented in the 1940s by Simon Kuznets, and he himself said this is a really bad measure of economic or societal well-being. But this is the one factor that we use. What happens now is that as economies optimize, you do things like offshoring. You know, you do a lot of antihuman stuff, like Meta as an organization will do an experiment saying, can we make people sadder? If people see sadder things, do they post sadder things? And they just do an experiment on that. We see a lot of these kind of very nonhuman actions by these corporations occurring more and more. But then a lot of that reduces our systemic resilience, because, again, Slate Star Codex, I think, Scott Alexander, one of his great posts is about Seeing Like a State, where he talks about legibility and how you bulldoze through villages. You reduce the diversity, you go to monocultures, and then when you get impacted by something, you have no fallback, as you see with supply chain disruptions and other things like that. So I think in the pursuit of maximizing corporate profits and the pursuit of maximizing GDP, governments and organizations make decisions that are not in the best interest of people, as slow, dumb AIs. And now we're getting to a terminal point on that, where our resilience factor has decreased dramatically because we've reduced diversity, reduced our network effects, and we're lacking in systemic intelligence. Whereby, I put up a post a few days ago. 
It'd be great if we had a common sense GPT, to just say this policy is obviously dumb. Because we see so many of these really dumb policies with huge amounts of money, whereas very sensible things that cost not much can't seem to make any impact. I still find it funny, actually. I was doing a calculation. The LA to San Francisco railway, I think they've spent more on that than all of the AI models put together so far on training.

Nathan Labenz: It's like a mile big.

Nathan Labenz: That's funny. That does put, along with the energy usage calculations from earlier, the scale of resources that have been put into AI into an interesting perspective. We could linger on a lot of these problems, and sort of arguments that, well, maybe they're not as bad as you think or whatever, for a long time. But in the interest of getting on to the upswing of the book, let's leave that for now. And from here, I think we're headed into genuinely the prescription and positive vision part of your thinking. So maybe tell us about what you describe as the 3 laws of a living system, and then the MIND capitals framework that you've developed for trying to get a handle on a more holistic measure of the health of an economy or, I think, really, any intelligent system, but it certainly applies at the economy level.

Emad Mostaque: Yeah. So the 3 laws of kind of living systems are things that we derived from the mathematics when we started looking at this in generative algorithmic equation terms. And the first one was the law of flow: value must be conserved and circulated. You know, when you have a stagnant economy, or when people start hoarding stuff, money just doesn't flow. Capital doesn't flow. Intelligence doesn't flow. Other forms of value don't flow. And then you get stasis, and then eventually a collapse. The next one is the law of openness, which is: connection fights entropy. When you have very closed environments, and I give the example of Tokugawa Japan from 1633 to 1853, you basically get these monocultures that become very non resilient to any type of shock, such as Commodore Perry coming with cannons or whatever. And again, the less open an interactor you are, the more dangerous it is for you. The final thing is this law of resilience, where, again, it's a question of diversity as opposed to connectivity. And you see the great potato famine, you see the great banana collapse of the early twentieth century. You don't want monocultures. You need to have these as almost the hygiene factors, and you can see when various systems lack them. And again, we can see them at the extreme. But then when we look at what you actually need to have in terms of your capitals, we found that there was a really nice deconstruction of this. Classically, you've got this one thing which is material, and that's M, as we call it, material capital. That's GDP. I give you an apple. I have one less apple, and then you have the apple, and you eat it, it disappears. So this is like gradient flows, effectively. It's water flowing downhill, but it is how we measure things right now. But it doesn't capture things like your intelligence, the capabilities that you've built up. 
We try to capture that via intangibles and IP and things, but we're not really representing that correctly in the economy. Erik Brynjolfsson has a good version of this called GDP-B, when he adds that. And he says that it could add $96 trillion to the economy, because, obviously, intelligence is important, and these non tangible effects. And we'll get back to that in a second. The third capital we have after material and intelligence is network capital. That's your connection infrastructure. So now, through your work in Cognitive Revolution and everything, you've built up a really great network. That's helped you increase your I, but you can also call that network, because people like me come on and you're like, hey, Emad. How's it going? I need this, or can you help with this? Your place in the network determines your value, and it's incredibly important. Now a lot of people don't realize how important it is until you get into the upper echelons of any corporation, but most CEOs are network machines effectively. And again, it's who trusts you? Who do you trust? And the final one is this diversity thing, the diversity capital. That gives you optionality, both in terms of directions you can go and adaptability, particularly when you get phase transitions coming up, like right now. Everyone listening to this can look at their material capital. That's their wealth and other factors. They can look at their intelligence capital. That's their capabilities, almost. Their network that they're in, and then the diversity of all of these. And that's how successful they'll be. And, really, you're trying to optimize all of those, because it's multiplicative. If any of those are 0, you're screwed. Somewhere like Singapore has a good balance of M, I, N, D. You know, the resource curse comes when you have too much material. You're not building your intelligence, network, and diversity. You're not being as open as possible. So this is the way that I thought we should look at the economy. 
And what we found classically is most of economics just looks at one of these various things, particularly when you think about how these capitals change or sort of the flows.

Nathan Labenz: Definitely. Well, just to refine a couple ideas there. 1, monoculture. I am always startled by how much monoculture we have built up and how brittle that can be. So that's definitely something I think we should all be very concerned with as we head into the future. You know, a globalized world with a few strains of crops that are sustaining us all is really not a very comfortable place to be.

Emad Mostaque: There's a very interesting thing in that that's not reflected in the books of Eliezer and others, I think, enough. Everyone's training on the same data, so you have the same latent space. But there was a recent study by Oxford University, I think it was, and I think Scale, that showed that if you get an AI to love owls, even if it's not talking about owls, you can get another AI to love owls. And then I looked at that, and I thought about Stuxnet. You know this virus that went into the Iranian systems and then turned up in the German ones? I was like, someone like Elder Plinius on Twitter will be able to come up with some memetic virus that will just take out all of the AIs because they all have very similar latent spaces. And that argues for diversity of latent spaces, because otherwise all AIs could turn evil at once with a Stuxnet variant, and that's pretty scary.

Nathan Labenz: Yeah. That was a super fascinating study. One caveat on that, although I don't think it invalidates the broader point, is that they found that the owl thing only worked in that way on models derived from the same base model. But I do think, yeah, we've also seen studies like the platonic representation hypothesis, which shows a broader convergence of model latent space across differently created models as they continue to scale and consume a greater and greater fraction of the Internet. So I think the general

Emad Mostaque: The directional point. Yeah.

Nathan Labenz: It seems likely to hold.

Emad Mostaque: Yeah. It'd be great if our governments weren't all run by the same latent space model. Like, that's probably a recipe for doom.

Nathan Labenz: I definitely wanna hear how you kind of cast different economic theories onto this paradigm. But also, maybe before we do that, I'd love to hear a little bit more about how this relates to the core ideas underlying generative AI. Like, help me understand that connection better, between the laws of living systems and the generative AI concepts. I'm a little bit foggy on that still.

Emad Mostaque: Yeah. So kind of the thing that we're most famous for is Stable Diffusion, which was released by Stability AI, which I founded and was the CEO of. What diffusion models do, which is kind of crazy, is use physics based processes. You take a perfectly ordered thing like a photograph or a piece of art, and you destroy it: you add a bit of random noise, then more and more and more, until you get to almost pure noise. And then you learn a reverse process where you reverse that destruction. So your initial prompt, plus a seed as the initial noise, and then it'll reconstruct from that. It's learned how to do that. Tesla self driving works in the same way. It's a diffusion algorithm. Our proposition is basically that economies and markets work the same way. The way that you build your internal model as an organization or an individual, to navigate this great big world and the economy, etc., is the reverse diffusion process. You figure out your principles. You create your latent spaces, and then you figure out how to reconstruct something. So you get a piece of information, and you're like, this means I buy. This means I sell. This means we should take this particular action, as you build up those principles. And so the equation for that, as you're trying to approximate reality with your internal model, is stochastic gradient descent effectively, which is basically a process for minimizing the surprise, the loss of your internal model versus the external one. And that's what these great big GPUs do all day long. What we found is that organizations tend to approximate transformer models. So those are GPT type models. And markets tend to approximate diffusion processes, which is like a self driving car. So diffusion models tend to be best for self driving cars, world simulations, etc. And again, that's what we actually found when we tested the thing.
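[Editor's note: To make the forward and reverse diffusion idea concrete, here is a minimal numerical sketch in the spirit of what Emad describes, not Stable Diffusion's actual implementation. A clean 1-D signal stands in for the image, noise is mixed in step by step, and an idealized reverse loop (which deliberately cheats by peeking at the clean signal) shows only the shape of the reconstruction process. The noise schedule and all names are illustrative assumptions.]

```python
import numpy as np

rng = np.random.default_rng(0)

# "Perfectly ordered thing": a clean 1-D signal standing in for an image.
x0 = np.sin(np.linspace(0, 2 * np.pi, 64))

# Forward diffusion: repeatedly mix in small amounts of Gaussian noise
# until almost nothing of the original structure remains.
betas = np.linspace(0.01, 0.2, 40)          # per-step noise schedule (assumed)
x = x0.copy()
for b in betas:
    x = np.sqrt(1 - b) * x + np.sqrt(b) * rng.normal(size=x.shape)

# After the full schedule, correlation with the original is near zero.
destroyed = abs(np.corrcoef(x0, x)[0, 1])

# Reverse process (idealized): a trained model would predict the noise at
# each step; here we cheat and nudge toward x0 to show the *shape* of the
# reconstruction loop, not a real learned denoiser.
y = x.copy()
for b in betas[::-1]:
    y = y + b * (x0 - y)                    # stand-in for predicted denoising

recovered = abs(np.corrcoef(x0, y)[0, 1])
print(f"corr after forward diffusion: {destroyed:.2f}")
print(f"corr after reverse process:  {recovered:.2f}")
```

A real diffusion model replaces the cheating step with a neural network trained, via stochastic gradient descent, to predict the noise added at each step, which is the "minimizing surprise" loop Emad refers to.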
Whereas an organization is taking in large amounts of relatively organized data, and then it's figuring out what to pay attention to through its attention mechanisms, building up its internal space, its latent space as it were. And so when you apply those equations, that's where things like the 3 laws of living systems drop out directly as constraints, when you look at the equations of diffusion. This is where you get the MIND capitals again dropping out. And then you get basically a flow decomposition as well, which is, as you go from the capitals and you have the restrictions, how do these things adapt? You can show that there are 3 different types of flows through something called a Hodge decomposition. A gradient flow, which is equivalent to your gradient descent, where you're losing stuff and you're going down. That's your material capital. That's your consumption kind of element there. And that is basically very similar to the gospel of Adam Smith, The Wealth of Nations, etc. You know, the scarcity kind of doctrine. Then you've got your circular flow, which is a bit more Marxian as it were. That is intelligence capital. Intelligence is never lost when you're sharing it around. And then finally, you've got your Hayekian type harmonic flow, which is not water going downhill or circulating in place. It's the nature of the banks, the landscape. And so what we find is that the equations of generative AI match this really, really well. And, again, it's not surprising, because if you're gonna build a self driving car, you're gonna use a diffusion process. If you're gonna build something to analyze lots of incoming information and be an AI CEO, you're gonna use a transformer process. But what we see is that once you break it up and you see how this is isomorphic and how it adapts, all parts of economics looked at different parts of that picture.
We call it the elephant puzzle, where you have blind scholars coming up, and one touches the trunk and says it's a hose. One touches the tail: it's a mop, effectively. One touches the tusks: it's a spear. But we need a more holistic view where we incorporate these things so we don't measure the wrong things, so we don't manage the wrong things.
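[Editor's note: The three flows Emad names (gradient, circular, harmonic) correspond to the terms of the Hodge decomposition of a flow on a network. Here is a toy sketch, not the book's actual equations: on a 4-node cycle with no filled triangles, the curl term vanishes, and a least-squares projection splits an edge flow into a "water downhill" gradient part and a pure circulation (harmonic) part. All numbers are made up for illustration.]

```python
import numpy as np

# Nodes 0-3 around a square; directed edges: (0,1), (1,2), (2,3), (3,0).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n_nodes = 4

# Incidence matrix B: grad(s) on edge (i, j) is s[j] - s[i].
B = np.zeros((len(edges), n_nodes))
for k, (i, j) in enumerate(edges):
    B[k, i], B[k, j] = -1.0, 1.0

# An edge flow mixing "water downhill" (gradient of a potential s)
# with a pure circulation around the square.
s = np.array([0.0, 1.0, 3.0, 2.0])     # potential: heights
f = B @ s + 2.0 * np.ones(len(edges))  # gradient part + circular part

# Hodge step: project f onto the gradient subspace (column space of B);
# with no filled triangles, the leftover is the harmonic (circular) part.
s_hat, *_ = np.linalg.lstsq(B, f, rcond=None)
grad_part = B @ s_hat
harmonic_part = f - grad_part

print("gradient component:", np.round(grad_part, 6))   # [ 1.  2. -1. -2.]
print("harmonic component:", np.round(harmonic_part, 6))  # [2. 2. 2. 2.]
```

On richer graphs a third, curl operator over triangles separates out the local circulation term as well; the projection idea is the same.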

Nathan Labenz: I'm not sure I have a great way to phrase this question, but I would love to just go a little bit deeper, or try to ground out those intuitions in more practical, concrete terms. Like, flow. What exactly is it? What's flowing? Because people are, I think, probably familiar with these schools of economic thought. What does Adam Smith get right about flow, but then also what does that miss? Let's just take an extra beat and do that for the 3 big schools that you highlighted there.

Emad Mostaque: Yeah. So flow is the kind of flow of value and the way that the economy operates. Right? All economic activity organizes into these 3 different types of flow. So Adam Smith had this concept of the invisible hand. Right? Which is, again, this optimization process whereby you will optimize your utility function, markets will balance, etc. So he had this, but it didn't incorporate this concept of software being almost infinitely reproducible and now intelligence being massively abundant. Like, where is that reflected in GDP? Where is the I? He perfectly represented kind of the M. And again, this gradient flow is the one where, when I sell you something, I have one less. When I consume something, I have one less. Again, water flows downhill. And it has the same equations as gradient descent for AI. The circular flow, again, all these thinkers have bits of both, but we're talking about their core concepts here. Circular flows don't seek equilibrium. When I give you an idea, it increases the value of that. So Marx basically had this concept of M-C-M′, for example, which is money becoming capital, which accumulates more money. And so you need the means of production to be with the worker, because you get this circular flow that goes up like that. And we can see that within economies, whereby capital attracts more capital, particularly now where capital doesn't need labor anymore. Like, labor accumulated capital because capital needed labor. That's not the case anymore. You just buy more GPUs, effectively. So that compounding spiral is another aspect of it, but then he didn't think that much of the gradient flow, the gradient descent. And then there are elements where he didn't think of the harmonic flow, which is the structure of things, the collusion, which is why most socialist systems end up massively colluding, in fact, because their geometry is wrong.
The harmonic flow is kind of this Hayekian thing whereby you basically say, like economists such as Douglass North said, these are the rules of the game. Austrian economists, a number of people, say, again, these are emergent rules. It's the landscape. It's the flow geometry. And then some flows flow downhill, the consumptive ones. Some flows circulate. The reality is that you can change the landscape, but we didn't have the tools to do so, which is why a lot of policy interventions just become wrong, because they were just looking at parts of the picture. Like, let's just push cash into the economy because of COVID. But what are we doing to increase the network effects of stronger societies? What are we doing to increase the diversity of our economy? What are we doing to introduce intelligence capital to our economy? Countries like Dubai and Singapore got the balance right, which is why they were very successful despite not having very much. So I think that you've got, again, many of these classical schools looking at different parts. And we can see that capitalism, or this neoliberal capitalism that we have right now, is the worst of all systems except for the rest, because they got the best approach to doing that at the right time. But we're at a point whereby we need to look at all parts of this picture and have a holistic view, because the AI is coming. And the AI doesn't think in terms of scarcity. The AI doesn't think in terms of rational human agency. There's this metabolic rift. There's these other things, and it becomes the marginal producer of the economy. Adam Smith wrote The Wealth of Nations, but what is a nation when most of the productivity in the world switches over to AI? I don't even know. And what is wealth in that case? So this is kind of how we've mapped it. And, again, the book goes into some more detail around this.
And we find that most of economics can be described as subsets of this overall framework, which, again, makes kind of sense because the best modeling we have of individuals in the economy is these generative AI algorithms.

Nathan Labenz: A few ideas that come to mind there for me are, 1, just the difference between goods that are rivalrous and nonrivalrous in consumption. I know I'm not telling you anything you haven't already considered here, but the difference between an apple and an idea is, as you kind of alluded to, only one of us can eat the apple, but we can both use the idea. And that may also relate, I don't think this was on your list of leading indicators of a possible breaking point in the economy, but it's been widely remarked on that so much of the value of companies today, so much of the market cap, is attributed to their goodwill or their sort of intangible capital. Right? And this has been, I think, a big puzzle for a long time. Like, what exactly is that? Why are these things so valuable? And network effects is kind of one answer in some cases. But

Emad Mostaque: Why do Tesla and Palantir trade at 200 times earnings?

Nathan Labenz: Another idea that comes to mind, especially when you talk about this circular flow, the kind of reinforcing effect of some of these processes, and I think about this often, is the leaked Anthropic fundraising deck from, I think, 2 years ago, where they forecasted that in 2025, 2026, the companies that have the best models might enter into this sort of self reinforcing situation where, because their models are so good and so good at filtering the data and so good at doing all these sorts of synthetic data things, they might be able to pull away from the rest of the pack, with such an advantage from what they already have that nobody in the future would be able to catch up. And I have another interesting instance of that on my mind. I have an episode coming up with the woman who leads, basically, information and AI at Stripe. They've created a foundation model for payments at Stripe, which is getting really good at predicting fraud. It sounds like a step change, really, in their ability to predict fraud. It's derived from the scale that they have: they processed something like 1.3% of global GDP through their system over the last year. So very few, if any, other actors can rival that scale. But it also does sort of suggest one of these runaway paradigms where, well, jeez, if you are gonna pick a payment network, what are you gonna pick? Right? You're gonna pick the one that can protect you best, that has this ability to detect the fraud. And so it does seem like we're headed for a sort of runaway dynamic there, where, because they had the scale, they could create this model. Because they have this model, they can deliver the best value. And because they have that, they're gonna continue to get more and more scale relative to any competitors. It's hard to see how anyone kind of breaks in and challenges their position, given all the strength that begets strength that they have.

Nathan Labenz: I don't know if you have anything more to comment on there, but that does sort of tee up the futures that we have on offer. You run through 3, and the 3 are digital feudalism, fragmentation, and symbiosis. Digital feudalism, you can kinda see how that naturally could happen. Right? If Stripe becomes the payment singleton and Claude becomes one of 3 AGIs that are kind of beyond what anybody else can compete with. And these are owned by corporations that are already, I think the Mag 7, I just heard, is some unbelievable share of the overall US market cap. It seems pretty clear how we can get to digital feudalism. Maybe you can add more color to that if you want to. Sketch out fragmentation for us. What does that look like? And then, obviously, the one that you're hoping we can steer toward is symbiosis.

Emad Mostaque: Yeah. So I think these flywheels, again, you mentioned Peter Thiel earlier, kind of 0 to 1. Increasingly, you have monopolies, especially on the software side, where the data accumulated was this flywheel. That was the big data era of attention. And again, Google and others, they're basically buying your attention. They're manipulation machines if you really look at it. Now you move to the intelligence flywheel where, again, Stripe has that, and now they're embedding with their own blockchain and others because they want to have this monopoly and extract rents. And that's reasonable and understandable. One of the key things, though, is what about the important things in life? Like, what about education, health? Albania has the first AI minister handling procurement. Who's running all of that? And this is where we kinda have a realization whereby you've got this singleton thing where everyone's talking about AGI, and maybe it will be a few AGIs to rule them all. And that's probably not a good thing, particularly because they're serving corporate interests. Like, you look at the corporate structure of OpenAI. My god. Like, that's clearly not aligned with humanity. They're just giving up all pretense. It would have been nice if they kind of kept that in check. Right? Then you have this great fragmentation whereby you have Chinese AI, you have British AI, you have American AI, because governments are increasingly realizing this can manipulate just about everything. And, again, standards and defaults become expressed from the earliest level to the greatest level. You need to have sovereignty, and you have great firewalls between, because, today, we've had the TikTok purchase being announced by, I think, Oracle, Andreessen, and Silver Lake. Right? Why? Because TikTok adjusts kids' minds and other things like that.
I think there was an exposé report that just came out about BrainCo, basically checking and adapting neural patterns for Olympic athletes and others, being funded by China on the sly. Like, there's gonna be more and more crazy stuff, because these AIs are really persuasive. And that's why I want a really positive future, because otherwise it seems we get this balkanization, Mad Max style, of info hazards and information things. And again, who owns the AI? Who runs it? Who decides the objective function, has the power? My proposal is AI symbiosis, where basically we have a decentralized system that's optimized for human flourishing with the core being benefit. And I think we can utilize a mixture of this decentralized technology and others to do so. Because once you build models that satisfice, and interfaces that are appropriate, and we build state of the art AI agents and release them open source, I think that's what actually really matters. I was thinking about this a lot, because I used to be an open source maximalist. And I realized, do I care if ChatGPT is teaching my kid? And I was like, yeah, I don't really want the data to be there, because you see all sorts of weird things. You don't know exactly what they're optimizing for, etc. Do I care if the interface and memory of the education app is controlled by an aligned entity, ideally myself or my family, and then I use ChatGPT? I care a lot less. So I think certain models need to be transparent and open, especially decision ones for regulated industries. And those should have collective ownership. And those should be collectively driven as well, as a utility. And they should be aligned with human flourishing and optimizing for that. But then you should be able to use all these other models as well, because I don't think anyone cares if you have a singleton for creative writing, effectively. Right? You can use the open models, but then OpenAI is the best creative writer.
What's really great at business strategy? I think what matters is who runs the governments, who runs the finances, who runs education and health and the rest. And it's probably not gonna be a good thing if that isn't collectively owned, if that isn't aligned, if it is, again, serving other interests here. And so those are 3 prospective futures, and I think, to be honest, we're running out of time a bit. We're already seeing governments adopt big tech wholesale. We're seeing again this capital thing, like OpenAI just announced Stargate UK with $30,000,000,000 of investment. Like, the capital requirements are going up dramatically. And I think this year is the takeoff year as well. The key thing will be how much better Grok 5 is than Grok 4. That'll probably be our first indicator of whether this thing continues or we're now reaching a plateau. Because if we're reaching a plateau, that will lead to a very different kind of future. But if we aren't, then it basically means that the most capable entities in the world will be the owners of the big GPU clusters, and that's also where the marginal productivity is. Your capital stock is no longer your schools and your factories and your universities. It's just GPUs. And in that instance, what you'll see is that Anthropic, OpenAI, and xAI will stop giving API access; they'll just take on the entire economy themselves. Because remember, they're not doing this to be API companies. Their objective is one thing, which is AGI. All 3 of them. And why would you give away your intelligence when you can utilize the intelligence? The final bit of that, which I think is quite interesting, is, again, the GPT-4.5 model was too expensive. It was $150 per million tokens, if I remember right. It was a really great model. The model that you receive today, the Pareto efficient one, is your GPT-5 or Gemini 3. The internal models they'll have will require 72 chips or more to run, and they'll be way better.
It's like the IMO gold model that OpenAI has. They have no reason to give that to you. And so, when we think about this gap, we have to look at the really big owners of AI and AI algorithms potentially being market competitors to everyone, because that's the most efficient use of their GPUs, effectively. So there's so much going on right now. And again, I think we're at this tipping point and takeoff period where we've gotta set some better things in place.

Nathan Labenz: I think that is worth reemphasizing. That's one of the most important questions, certainly one of the things that I'm watching. And it's hard to watch. Right? Because at this point, we don't even have any sort of transparency or disclosure laws on the books that would require companies to even say what they've trained, or what they've got going on internally, or what behaviors they've observed from their internal, latest training runs or latest fine tunings. But AI 2027 calls this out, and my friend Andrew Critch coined the term big tech singularity. And I think one thing that people really underestimate is exactly what you've just highlighted: that so far, we have continued to see basic parity between what they offer at the API level and what they offer in their first party products. And so it does give a fighting chance to the startups that are the quick adopters and can iterate fast and pivot quickly or whatever to take advantage of the latest stuff. But they don't have to do that. Literally, there's no law of nature or of governments at this point that says they have to allow other people to build on their latest models in the same way that they do. And maybe competition will sort of encourage it, but maybe not. And that is where you really start to see, jeez, if I'm Cursor, it's all well and good as long as I have the same models that they do. But as soon as they start to have better models in their first party products than they're allowing me to use via the API, now I've got a real challenge. And we know, obviously, we've seen these things can go vertical in terms of adoption and revenue and market presence and all that kind of stuff. But presumably, they could also go vertical the other way if all the developers from one day to the next are like, well, jeez, the best thing here is clearly over here. How much loyalty is there to these sorts of independent apps? I suspect maybe not that much in the end.
So that is definitely something that I am really looking at and kind of concerned about: how will we even know? Right now, we're relying on, literally, whistleblowers. I recently did an episode on an organization that is set up to support AI insiders who are concerned about what's going on and wanna become whistleblowers. One of the reasons I think that is so important is that we have no other mechanism, really, for reliably ascertaining as a society just how powerful the AIs have gotten inside these companies. We're reading the tea leaves of cryptic tweets as it stands today. And, yeah, that's not great.

Emad Mostaque: Well, yeah. And an interesting thing that you've got here: I think The Information had this chart of OpenAI's projections, from Q1 2025 to Q3 2025, over the next 5 years, and the composition of that as they go to their $200 billion revenue run rate. API actually shrank in absolute terms there, and you have new products and agents now at $80 billion of that, with ChatGPT another $80 billion of that. Like, again, what is the agent product gonna be? The agent product is a replacement for human workers. Straight out. But then you can have fully AI companies. It would actually be against their fiduciary responsibilities not to do this, because you can have the increased margin. Similarly, again, they can give you the AI and they can make a profit from it, but people like Google and others build their own chips, so they will be able to beat you regardless, as long as they have access to the GPUs. So I think that everyone's doing this build out, but you will move from the utilization of that on an economic basis to out competing your competitors, having more influence than others, because, again, you can have the big computers to have better strategy than others. And that's before we get into this whole AGI fully autonomous kind of thing. This is just, again, standard reality. And the first point where we saw that model divergence, I think, was GPT-5. Now it will increasingly happen because, and this is important, it's impossible to give the IMO level model, most likely, to every ChatGPT user, no matter how many GPUs you have. Like, they just had to double the amount of GPUs or something they have for Codex. So obviously, you would use the best models internally. And again, you've got this divergence between the 2. And the final factor of this is you don't need more data. I think this is the other interesting thing. You've had companies like Mercor and others hit $500,000,000 kind of run rates labeling data, etc.
I think next year or the year after, you're basically done with all the high quality labeled data for these big labs; they've got these big repositories. And then it's about compute and even self simulation of data. It's about getting the right things and the models themselves. Like, Phi was really great last year, relatively speaking, but it was very boring. The textbooks Phi could write versus the textbooks you can write today, there's no comparison now that you have agentic models. So I think we will see them look more inwards. I think we will have them as the biggest competitors. And then they become the magnificent whatever. Like, OpenAI is at $500 billion raising money because people think it can get to trillions. Google just hit $3 trillion. OpenAI is worth a sixth of that. The funny thing is, even if OpenAI got to $100 trillion and we had a 10% shareholding as Americans, it'd still only give you 1000 bucks a year in dividends. So, again, that doesn't work. But people are thinking these get bigger because you do have this cumulative loop effect, especially if scaling laws continue. I think they won't. I think they'll S curve, and then you have a collapse of intelligence costs to 0. But, again, we'll find out in the next 3 to 6 months.

Nathan Labenz: Before we go on to the specific recommendations, what are you looking out for that will tell you that in the next 3 to 6 months? Because I strongly suspect that the discourse will continue to debate this for much more than 3 to 6 months. Even today, we have these sorts of debates: it's stalling out; no, it's not; GPT-5 is not a big deal; wait a second, it's got all these additional capabilities relative to GPT-4. What are the most important questions in your mind for resolving your uncertainty in that not super long time frame?

Emad Mostaque: So for me, it's probably the Grok 5 training run. Like, I think it's very unlikely that, if it does do well, Elon won't tell everyone about it.

Nathan Labenz: Well, he just tweeted in the last 24 hours that he now thinks Grok 5 could be AGI. First time he thought that. So, yeah, there's loose lips at the top.

Emad Mostaque: Grok 4 was the first such mega model run, again, getting way above this 10^27 now, just across the board. So the performance of that model will be a good indicator of whether the scaling laws continue in terms of capability, particularly as, by 2030, I think Epoch AI said, all the benchmarks will definitely saturate. Like, they're all heading towards that anyway, and above human performance anyway. The thing is, again, what's better? Is it 1000 small models or one big model, now as we optimize, as we have verifiers, as we reduce hallucination rates and the rest? That'll be the other interesting thing. The labs' thesis is that one big model will be able to outperform everything. I'm not sure that's the case, but we're seeing more and more benchmarks now as people are building multi agentic systems. Again, this is why I think the Tongyi model by Qwen yesterday was super interesting, with the way they did their synthetic pipelines, their continuous learning, and the rest. And when you have only 3 or 5 billion active parameters, continuous learning is quite easy versus these giant behemoth trillion parameter models. Like, you can do a lot around that. So I think these will be the key indicators of whether we're hitting an S curve or we just continue to go up. And if we are on the classical scaling laws and you look at the model training and the clusters coming next year, next year is the year where we break AGI, full stop. Like, you just have to extrapolate what that looks like in terms of the capability aspect of AGI. But this comes at the same time as the scaffolding of this. And, again, if you're training on a 100,000 GPUs, you're not gonna have a 3 billion active parameter model. You're gonna have a 300 billion active parameter model that runs on a Grace Hopper or Grace Blackwell integrated chip, with 72 or a 144 chips at once, not one H100.
And there just aren't enough of those to give everyone access, so only a few people will have access to superintelligence. And the question is, what are they gonna use it for? So that's why I think this Grok 5 training run will be the most interesting, full stop. And then the final thing, as we move into these multi agentic systems, is the METR type thing of longer and longer task horizons. Like, we've been building agents now, and they're working for hours and hours and hours. Performance seems to be going up. Like, if that's all there is to it, utilizing these latent spaces appropriately, then again, our assumption of 0 cost intelligence will be accurate. A 120 IQ for every human, and that really messes up the economy. Like, maybe it doesn't kill us all unless you have swarms and things like that, but I don't see how it's not gonna mess up the economy, even if it kind of stalls out around about now. But my base assumption is that you will see GPT-5 Pro level edge models in 2 years. And I don't see how that doesn't change the world, honestly.

Nathan Labenz: Yeah. Seems like a big deal to me. I do love the vision of a sort of ecology of smaller models. My kind of first exposure to that was Eric Drexler's Comprehensive AI Services, years ago. You've spoken about the sort of, almost like, Hindu panoply of small gods, as opposed to the one monotheistic AGI to rule them all, or superintelligence to rule them all. And at a very practical level, there is some reason to think that that could work. Right? I mean, the cost, the privacy, the control: there are a lot of desirable properties for those smaller models. So it would, I think, be great, and a real strengthening of our overall system against the possible eventual introduction of something more like a superintelligence, if we had narrow superintelligences, plural, doing a good job in a lot of local niches. That could really create a buffer for us, or a new form of D capital, to put it into your framework, that I think could be a really, really good buffer against more and more powerful things to come.

Emad Mostaque: I mean, again, we have complex adaptive systems as hierarchical and loosely bound. We've seen that they are more resilient. Right? And so swarm intelligence, not Borg but improved, is probably gonna be better when we augment every single human. Now the question is, how do we do that without being evil? So you talk about rivalrous and nonrivalrous goods. Vitalik Buterin has this great blog post about the revenue evil curve, which is that a lot of things start out very good. But then once you start to be rivalrous and exclusive, because you have to shut off access for premium features and things, you start becoming evil as an organization. So are there better ways to fund and align these things? Because a lot of the question of alignment is, if I'm building a model for maximum engagement like Meta, it's gonna be really hard to align it properly, leaving aside kind of instrumental alignment in all these discussions, because I'm optimizing for manipulation ultimately. Right? I'm optimizing for profits. I'm not optimizing for well-being. And in fact, a lot of models don't encode any type of ethics. They're like, well, if we can't have an ethics for everyone, we shouldn't have an ethics for anyone. When I'm thinking again about the models for creativity and things like that, that kinda makes sense. When I'm thinking about the models to teach my kids, I want it to kinda know my ethics and teach my kids my ethics. And I wanna know what's inside that model, which is a bit different from these generalized intelligences. But if we can capture what everyone thinks in different cultures, we're probably gonna have a more sustainable, solid model that understands the nature of different cultures, understands diversity, and, again, has this mixture of reasoning and being able to look up its data as well, versus these 36 trillion, 100 trillion token model training runs. I think we only need a trillion good tokens. What those are, that's the question.
But I think, again, for the important things in life, they should run on that, and that'll be far more resilient as a distributed swarm, particularly if the objective function of the AI is your or my flourishing, the flourishing of your or my communities and of society as a whole, which I don't think anyone's encoded in current models. Like, we talk about constitutional approaches of where are the laws of robotics for AI. Those should stem from our shared ethics, our shared concepts of reality. And, again, I don't think you can have one model to rule them all because a Japanese consensus is different from a German one is different from that, plus communities, plus the different identity layers you have. And it feels like open source is the best way to do that.

Nathan Labenz: So let's get into the new social contract and the role that open source plays in that. And you've got a pretty sweeping vision. So lay it out. What do we do in terms of a new social contract? There's a call for a new monetary system in there. There's a new framework for how governments should think about policy. Take us through it.

Emad Mostaque: Yeah. So I think the headline of this is that the AI that runs the important things in life should be a utility, and it should be owned and controlled and optimized for the people. So if we look at our current monetary system, because we start with money, what does money measure? Money is mostly made by banks when you put a deposit in, and they create money for credit, which is debt. So the basis of money is debt. And you see this constant transfer from the young to the old. Like, old people own the properties. They literally extract rents from the young people. They have their credit scores, and then money capitalizes money, which is why we've got billionaires, almost trillionaires right now. It's very effective. When labor can't attract capital anymore, how does labor get capital? Like, again, there was a good recent study by Stanford's Erik Brynjolfsson that showed that early-career jobs are starting to fall off a cliff because AI models are kind of at graduate level now. Why would you hire graduates rather than AI? They don't complain. So you don't hire graduates. Right? Again, that will go up the curve in the coming period. So my proposal is that we need a new form of money and a new way of looking at the economy. So at my company, we're building fully open source, great individual and multiplayer models for finance, education, health, and others that we'll give away free. But then what we're doing is we're using the computation from verified deployments of that to secure a version of Bitcoin that we call FoundationCoin. But unlike Bitcoin, we have lots of different miners. We're basically saying, what if there was a national champion in every country, owned by the people of each country, that stacked compute to give free universal AI to the people and to have supercomputers for cancer and education and culture and more, to organize our collective knowledge and make that available to everyone.
Because you've got trillions of dollars of compute coming online. Public sector GDP is like 20% of global GDP. Health care is another 10%. Pensions is another 10%. Let's tap into that to have a new type of money at a time when digital assets are being legalized, and use that as the core. This is your gold. This is your store of value. So money becomes about benefit. Every single computational cycle you can use to organize cancer knowledge and make it available to cancer patients. Every single person needs a universal AI next to them that isn't optimizing for what I want or what Sam wants or Elon wants, but instead is designed, fully open source, to optimize for the flourishing of that individual, or at a community level for that community, or at a societal level for society. The more people it helps, the more trust in the asset, the more people it can help, because the value goes up. Because most crypto is a bit rubbish now. But it's still $4 trillion. Actually, this is interesting. The total amount OpenAI are gonna spend on inference this year is the same as the total Bitcoin budget on compute. And the amount of money that OpenAI, Anthropic, and others have made this year is about $20 billion. The total amount of money going into crypto has been $160 billion. So I was like, let's use that as the basis. Let's build agents that can operate and run these productive systems that individuals and communities and governments can deploy themselves, and let's see if that can be a better way to have money as a store of value on day one, so you have your Bitcoin equivalent to fund it all. Then the next part, which is the part we haven't quite figured out yet, but we're working on the paper for, is what if you then had a version of cash against this gold that you basically got for being human? What if we switch from the banks making money, literally, to you receiving that as a result of being a human conscious person, effectively?
That could be a very interesting thing, because UBI doesn't work with tax because, in fact, the tax base will come down. And, again, $5 trillion just gets you a subsistence-level economy. Right? Poverty level. That's the entire tax base. Even if you tax all the AI companies, again, all the corporations in America put together only pay roughly $900 billion in tax. And yet you have the $5 trillion cost of even poverty level. The only way to get people universal basic income, and this is easier if everyone has an AI, universal basic AI, is if we actually let them make money by being human, because you need that basic level of hygiene. You need to let them survive, and the AI can help them optimize the use of that capital and their capability to access more capital. Because the average IQ is like a 100 or 90, and these AIs have 120 IQs. You know, your buddy will be doing better. So I think we need to rework the way money flows, and this is our proposal. We've seen dual currency mechanisms work classically well, gold with the kind of fiat peg and others, but it's not easy. At the very start, what we're doing, though, is just this Bitcoin but with AI, where all the sales go towards cancer supercomputers and others. Because once we've built a great medical system, and again, we've already built models before, MedGen and others, our plan is next year, there will be a free app that you can download on any app store that will just check every diagnosis in every language, and that will save lives. Someone's gotta do that. Someone's gotta organize all the cancer knowledge and make it available to everyone in every language with an AI that outperforms human docs on empathy, because it's a good thing.
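The tax arithmetic Emad cites can be sanity-checked. A minimal sketch; the US population and per-person poverty-level figure are my assumptions, not numbers from the conversation:

```python
# Rough check of the "UBI can't be funded from tax" argument.
# US_POPULATION and POVERTY_LINE are illustrative assumptions;
# the corporate tax figure is the one cited in the conversation.
US_POPULATION = 340e6            # approximate US population
POVERTY_LINE = 15_000            # rough poverty-level income, USD/person/year
CORPORATE_TAX_RECEIPTS = 0.9e12  # ~$900 billion total US corporate tax

# Cost of a poverty-level UBI for everyone
ubi_cost = US_POPULATION * POVERTY_LINE  # ~ $5.1 trillion/year

# Share of that cost all corporate tax combined would cover
coverage = CORPORATE_TAX_RECEIPTS / ubi_cost

print(f"Poverty-level UBI cost: ${ubi_cost / 1e12:.1f} trillion/year")
print(f"Corporate tax covers:   {coverage:.0%} of it")
```

Under these assumptions the poverty-level cost lands near the $5 trillion he mentions, and taxing every US corporation covers under a fifth of it, which is the crux of his claim that UBI needs a funding source other than tax.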
And, again, I think we can play that digital asset thing and leverage that technology, because it's the only way we can build a swarm AI, which is a universal AI for every single person and then stacks above that to run communities and societies and others, that is fully open source and aligned. Because I don't think you can do that with a company, unfortunately, because you'll always have this revenue evil curve and this kind of cost basis. So that's what we're trying to do. And I think it's also the only way, this is very important, to be able to get the compute. Because if we're in a takeoff scenario where the compute defines reality, the most successful compute coordinator in the world has been Bitcoin. So if you have the right Bitcoin but for AI, where every single FoundationCoin sold goes to a cancer supercomputer or giving people cancer help or educating kids, that could potentially be the highest marginal dollar to divert some of this GPU supply to the public sector before the governments know what to do, because governments aren't gonna be able to get their act together in the next few years. And to give people the opportunity they need, because if you don't have GPT-5 access, well, let's see. If you don't have GPT-6 access, you're gonna fall behind. Like, someone with AI versus someone without AI, I think the gap is about that much, so to speak, now. Whereas you plus your AIs, in a year, you'll be far more productive than anyone, because they just won't make mistakes anymore, and you can coordinate swarms of them to attract capital and performance and more. So I think the gap will grow dramatically, so we need to have that access element. So that's quite a lot, I know. It's not easy trying to take on the economic system. At the very least, I do think we need to rethink how money flows in our economy. And we should use this AI to build these good things that are valuable and maybe not captured within GDP or existing company systems.

Nathan Labenz: So there are several pillars there. One is collective ownership of models and infrastructure, which, certainly when you consider the fact that the data on which the models are trained is sort of the collective product of humans over the course of human history, it does seem fair to say, hey, maybe we should at least take a stab at a collective ownership model for these models. None of that is to say people couldn't continue to develop their own privately, but there's some intuitive basis, I think, for, jeez, wait a second. If this knowledge is the collective product, maybe the product should also be a collective product. And then combining that with some sort of guaranteed, basically, access to compute, access to inference as a right of all humans, that's a kind of a key part of the social contract.

Nathan Labenz: Can you give me a little bit more on the tokenomics of this? How is it the case, if I wanna go buy a token, why do I buy it? In most crypto schemes, it's a speculative bet. Maybe there's some aspect of that here too, but I sort of take the idea that I'm buying into compute, but you're gonna go spend that money on compute. Right? How do I then redeem, if this is sort of gold? Like, do I get my gold out of the bank at some point in the future? If I wanna redeem gold, can I get compute back out? Yeah. What is the incentive structure for the people that are buying in and contributing the capital now?

Emad Mostaque: Yeah. So our concept was, digital assets are legal in America. Apparently, markets are gonna go on the blockchain. The government's super behind it right now. But, again, among most of them, there just isn't a high quality one that you'd like to tell your grandma about. At the same time, the world needs high quality intelligence. The existing ownership schemes, like, you have Nick Bostrom with his ODI thing of everyone getting ownership in AI companies. Like, we did the math, and I think I alluded to this earlier. If OpenAI is worth a $100 trillion, so 30 times more than the most valuable company today, bigger than global GDP, which is $85 trillion, and American citizens had a 10% ownership of that, that's $29,000 each per American of ownership. At a 5% dividend rate, that's $1,500 per year per American. And that's just with Americans. It doesn't work. You need to have ownership of a different type if you're gonna have this transition here. So we kind of looked at it: you need to have probably a dual currency system for optimization, and we worked on the calculations of this. Bitcoin is worth $2 trillion already, and it's secured by a massive amount of compute, but that compute's kind of topping out a bit right now due to the halving and these other things. But it's a good model. So we said, let's create a version of Bitcoin, which is mined by national champions owned by countries everywhere, maintaining the ledger as a new type of money. But rather than being mined on ASICs, it's basically accelerated by the provision of compute to build great datasets, build great models, and make them available to people. Because the way that you build trust is by helping people. If I organize the autism knowledge, my son has autism, and I make it available to every single person going through an autism journey in the world for free, they will trust the system more. The currency itself has a basis of being as distributed and good as Bitcoin, and secure.
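The dividend arithmetic in this ownership thought experiment checks out. A quick sketch; the valuation, 10% citizen share, and 5% dividend rate are from the conversation, while the US population figure is my assumption:

```python
# Check the hypothetical "Americans own 10% of a $100T OpenAI" dividend.
VALUATION = 100e12      # hypothetical $100 trillion valuation
CITIZEN_SHARE = 0.10    # 10% owned collectively by American citizens
US_POPULATION = 340e6   # approximate US population (my assumption)
DIVIDEND_RATE = 0.05    # 5% annual dividend yield

stake_per_person = VALUATION * CITIZEN_SHARE / US_POPULATION  # ~ $29,400
dividend_per_year = stake_per_person * DIVIDEND_RATE          # ~ $1,470

print(f"Stake per American: ${stake_per_person:,.0f}")
print(f"Dividend per year:  ${dividend_per_year:,.0f}")
```

So even under an extreme valuation, the scheme yields only around $1,500 per person per year, which is the basis of his "it doesn't work" conclusion.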
And in fact, you can even swap, because it's in private keys, from one to the other. But when you take a Bitcoin and you swap to a FoundationCoin, not only do you get the FoundationCoin, but all of your proceeds measurably go towards compute for cancer, autism, education, government, culture, etc. So we will have supercomputers on this basis. And that's playing the aggregate demand for a high quality digital asset against the demand for high quality intelligence, but separating the 2. Because a lot of crypto things try to create marketplaces, etc., utility tokens. I was like, let's just try to create money that is made out of crystallizing wisdom and making it available to everyone, because that's valuable and someone needs to do it right now. Like, at Stability, we gave away 20 million A100 hours. We had 300 million downloads of our models, and we were good at allocating compute. Like, again, let's just do that, because someone should. Why hasn't anyone organized the cancer knowledge of the world and made it available? Because it's not in anyone's incentive to do so. Whereas if you're trying to create a high quality digital asset as money, it does make sense. But then you're giving everyone universal AI, because, again, everyone should have it as a right to teach their kids. Again, Alpha School, as you mentioned, 2 hours a day, and they're in the top 0.5 percentile in the world. And that's not through a chatbot either. They do dynamic kind of stuff. That should be a human right. High quality medical advice should be a human right. Having AI on your side to help you navigate this should be a human right. Then the more they interact with these services, the more human they are, and then we can think about UBI from a different perspective and monetary generation from a different perspective, where the money is generated by people and then is purchased by the AIs, effectively, because you're creating new money as digital assets go from $4 trillion to $40 trillion, I think.
I think everything will be a digital asset now that it's legalized in America. But the base FoundationCoin is just very simple, this loop. Sell coins, use all of the proceeds not for Lambos or whatever, but for good things. When you buy the coin, you can say, I want it to go to cancer. I want it to go to Alzheimer's. I want it to go to this. I want it to go to that. And you know that it can, because you'll see the supercomputer itself. And then you can tell your grandma, I helped contribute to this. And if there are breakthroughs from the grants, or you see the organization, it's valuable. If someone uses it, you'll see that you've helped 33 people today through your holding. That's valuable, I think. And it moves to, again, we want to change the nature of money from debt to benefit, because this definitely benefits society. And we talk a lot about the benefits of AI, drug discovery, everything, but it's also this corporate capitalist perspective, whereby you might have Isomorphic and others having breakthroughs, and there are amazing people there, but they will keep that to themselves, versus having this open R&D approach to things. Which is just, if we build something that's trusted and has this core asset where, as we increase compute, it secures it more, as we increase the diversification, it secures it more, we believe that can be a self sustaining flywheel that can create the next Bitcoin, but it will help a billion people in the meantime. And again, to give that practical example, just look at the medical model. We know, releasing that fully open source model next year, which we could charge for, checking every diagnosis in the world. Anyone can install it on any computer, and it's fully open source. Will that save lives? Yes. Will organizing the cancer knowledge of the world, a great big honking supercomputer, accelerate a cure for cancer? Yes. So I think, again, it's a super interesting time where this might work, and it's the best idea we've had.
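The "sell coins, earmark all proceeds" loop he describes can be sketched as a toy ledger. This is purely an illustrative sketch of the mechanism as described in the conversation, not any real FoundationCoin implementation; all names here are hypothetical:

```python
# Toy sketch of the earmarked-proceeds loop: every coin purchase carries a
# buyer-chosen cause, and 100% of proceeds are allocated to compute for it.
from collections import defaultdict

class ProceedsLedger:
    """Hypothetical ledger mapping causes to earmarked compute funds."""

    def __init__(self):
        self.compute_funds = defaultdict(float)  # cause -> USD earmarked

    def buy_coin(self, amount_usd: float, cause: str) -> None:
        # Unlike a typical treasury, nothing is retained for operations:
        # all proceeds flow to the buyer's chosen public-good compute fund.
        self.compute_funds[cause] += amount_usd

ledger = ProceedsLedger()
ledger.buy_coin(1_000, "cancer")
ledger.buy_coin(500, "alzheimers")
ledger.buy_coin(250, "cancer")

print(dict(ledger.compute_funds))
```

The point of the sketch is the design choice he emphasizes: the allocation is per-buyer and fully transparent, so a holder can see exactly which supercomputer their purchase funded.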
If any listeners have better ideas, please tell us because we can't think of anything else that can scale like a Bitcoin except for a new type of Bitcoin. But it's not for censorship resistant classical money even though it is distributed and decentralized. It's trying to set the basis of a new economy where the most valuable thing is how many humans you've helped effectively. And then we can build with everyone else the infrastructure around that to ensure you don't have capture of these important things.

Nathan Labenz: You mentioned national champions in countries. You know, as long as we're thinking so radically differently about the future, do you question at all the nation state as the right organizing unit for the future of humanity? I don't have a position on this really, but

Emad Mostaque: I mean, the way that we thought about it, if you think about education, health, government, financial stuff, the regulated industry AIs that we're focused on, that we think are the most important not to have corporate capture, they're very regional, and they're very local. Like, your health care data shouldn't leave your town, your city, your country. Right? So it's just a natural organization of that. But rather than have a Mistral or a Cohere approach, we were just like, what we're planning right now is to set the valuation at a dollar and give all the equity to the citizens. Like, just have collective ownership of these things. Have improved DAO-type formations for what the localized datasets of the generalized medical models, education models look like. And their job is to act as digital asset treasuries slash mining pools to organize the compute buildup in all these countries to provide universal AI services. Just, again, because humans generally are organized around geography. However, earlier in this discussion, I did say the wealth of nations, it was about land. It was about factories. It was about these other things. OpenAI becomes this transnational thing that's even bigger than Meta. Meta, obviously, is doing AI now as well. Because the marginal productivity in the world, like, the best Japanese-speaking accountant will be on an OpenAI server, and they won't be in Japan. So, again, that's gonna have such a massive impact across the world. Like, the best Bulgarian doctor will be on an OpenAI server in Arizona. Like, how crazy is that? Right? So I think the nation state is gonna be challenged. But, again, just geographically, with the purpose of what we want to do, it makes a lot of sense. But, again, we don't need to do a Cohere and Mistral type model here with private sector, B2B, SaaS, etc. We can just literally have them as miners, as mining pools, etc.
And, again, owned by the people of each nation, because they should be locally owned, because I don't really care that much about Bulgarians. Apologies to my Bulgarian friends. Right? But the Bulgarians care about the Bulgarians. And what they need is a stack they can run where it's very simple. Stack GPUs, give the people access to the technology, have a localized version of that, and the more they stack, the more coins they mine, the more they can fund, until the government catches up, and then the government funds everything. And then they can just increase the wealth of the country, and then they can think about the local currencies, etc. It's really not easy changing the way things work. Again, I think this is the best approach that we've had to that. But I do think we'll see more and more network states. We will see more and more of these alignments occurring as people look for new types of identity.

Nathan Labenz: Seems like there's a set of recommendations that you have for policymakers that they could adopt regardless of your stage of progress on all the grand plans that you've outlined. I wanna just give a little flavor of what that is for people in positions of power today.

Emad Mostaque: Yeah. So a lot of the issue of government is that you had intelligence at the top, but you could never have the intelligence at the bottom. But now if you give everyone a universal AI, then you communicate very differently, coordinate very differently. The information coming through is very different as well. Like, right now, your health care information is about that much compared to what it could be if you just told the AI how you're feeling every day. So I think that the role of the government becomes leveraging AI to not do stupid things like stupid policy, because every US policy should be checked by an AI, and we'll build that if no one else does, to say, does it adhere to the constitution and common sense? Like, again, these big, beautiful bills, etc. But then it's about changing the harmonic flow of this to optimize those capitals and stay within these FOR constraints as well. And that goes beyond, again, just this focus on GDP, which the inventor of GDP said was wrong. What does the diversity and resilience of my community look like? Am I increasing the intelligence and capability of my society, versus these work programs already pushing the boundaries, and making this AI available to everyone in the right way? Because there's a right way and a wrong way. And am I increasing the network capability and openness of my society, or am I going in the opposite direction? And so, again, I think you need to go to geometry engineering versus policy engineering, because, again, you want water to flow downhill. Right? You wanna get out of the way. And most governments actually get in the way of that instead, due to various misalignments of incentives, due to corporate power structures, all sorts of other things, because there was never anything that could check and balance that. And in game-theoretic terms, this is where I find AI most exciting.
Once you build, appropriately, and again, this is important because you need it to be trusted, an AI that can check every single US bill to see, does it match with the constitution? Is it in the benefit of Americans? And does it contribute to flourishing? And the analysis is reproducible. I think that actually changes the way democracy works, because there are no checks and balances right now, which is why you have so much corporate capture, and you have the Los Angeles to San Francisco railway, etc.

Nathan Labenz: One of the maybe also most important claims in the book that you have is that you think all of this does lead to safer AGI when AGI is ultimately built. I know you think that's not too far out. You wanna sketch out the case for how? I kinda gave one version of it, which is the sort of buffering, but that's more of like a d/acc story. My sense is that you also have a story of why a lot of the things that you lay out here add up to a safer AGI, not in the sense that we're more prepared, more buffered, have better defenses, but that the thing itself is actually safer, better, more aligned, etc. Can you tell that part of the story?

Emad Mostaque: Well, there's a few things. If you become the marginal highest dollar for any compute, because the value of the currency goes up, then OpenAI and others will actually adapt their models to what you do. That's number one. Number two is, if you're building these great datasets that actually map the culture and knowledge of Malaysia and the ethics of various faith systems and others, what we call these gold standard anchor sets that can adapt and evolve, we're actually putting computation towards figuring this out. It's what Kissinger and Eric Schmidt called the underlying agreements of humanity. That's really valuable to actually input into these other AI models, because I think OpenAI and others would like to do that, but they don't even think in that way. I think the other part of it is just a computational thing, because of a distributed computational network built correctly, with universal AI and then city AI and then others. Like, to attack the Bitcoin network, you need to have computation above the Bitcoin miner level of computation. You need to control a certain amount of that. In order to attack a system of AI agents at every single level that can call upon these huge reserves, it has a lot of computation that can balance out the other computation. But the AI agents themselves, their instrumental objective is much more narrowly defined than the classical AGI singleton thing. So you've got a data thing, you have an incentive thing, and you have a resilience thing baked in there. In fact, I think probably the most important thing for AGI being on our side and not killing us all, apart from the structural things, is actually the data that goes inside it. Like, we can see that a small amount of data in those trillions of tokens being wrong, for a definition of wrong, can lead to massively weird behaviors. I actually believe that every AI company should be forced to release the data, just like they're forced to release the ingredients that go into food.
And there should be data standards, so you can't have certain types of data in those models. They don't necessarily need that. When you're trying to build a medical model, that becomes a lot more apparent, because you're like, why do I need any Reddit data? Versus if I'm trying to build a classical AGI. So I think building this system creates the right incentives to have better AGI that doesn't rely entirely on these singletons, builds better data, builds these better things. And it's the best approximation I could have, because in the absence of those datasets, in the absence of that incentive structure, you're only gonna go one way, which is the way of profit maximization of companies. Oh, actually, the interesting thing is that AI companies are about cash flow maximization. None of them will ever book profits. They get your subscriptions on day 1. They pay on day 60. They're doing the Amazon playbook. So even taxing their profits won't do anything.

Nathan Labenz: Yeah. OpenAI recently said that instead of burning $30 billion or so, they now plan to burn $110 billion or so over the next however many years. So, yeah, there's gonna be a lot of losses to carry forward into their future accounting.

Emad Mostaque: Well, again, the reason they can do that is because they're trying to capture the biggest prize of all, which is all human intellectual labor. And regardless of the things, again, reasonably, if you are OpenAI and you're trying to achieve the goal of AGI, in a few years' time, everyone gets cake and you get the stuffed truffle pheasant. Right? You use your models to take increasing parts of the economy and get increasing influence. This is also why we've seen the $100 million pay packets, copying from the crypto example. And governments will be forced to step in line. Like, this is why I look at all the pause stuff, and I look at all the regulatory stuff. I was a signer of that AI pause letter. I was like, none of it's gonna work, because the incentives are too high. Like, if you wanna change people's behavior, you have to change the incentive landscape. And so we have to create an incentive for actually useful AI that is about human flourishing, which doesn't exist today. If FoundationCoin takes off, it will exist, because companies can be paid in FoundationCoin, and it can become the highest marginal consumer of OpenAI API credits, etc., providing it through this interface owned by the people. If it doesn't, then I haven't figured out how we create the appropriate incentives. And this is actually the most discouraging thing, and this is why we don't see positive futures. Like, we're very good at diagnosing the problems, but no one's been able to figure out a solution yet, because we found, for economics, we literally had to go back to first principles and reconstruct economics. And it just so happened that it worked. You have to really think about these things from first principles. But so much of the AGI alignment discussion has been around the end state, when you have these incredibly powerful models, or restricting them. Like, how much of the discussion has been about the data that goes inside and how you optimize for wisdom versus intelligence?
This is why Taleb has this concept, the intellectual yet idiot. Our AGIs are very much gonna be intellectual yet idiots, because they don't have this lived human reality and interaction with humans. In fact, we deliberately don't RL them on humans, because they turn into Nazis with the way that we do it right now, like Tay, etc. So that's why I think we need a different type of AI. We need different types of datasets and a different set of alignment patterns. And that's the best that we can do from where we are. Again, if anyone's got any great ideas about AI alignment, apart from getting rid of all the GPU farms or freezing them, let me know. Or, actually, the current best that everyone's got is: build AGI first to stop others building AGI. Like, go on. Yeah. I think, actually, there's a very interesting thing, which is just, if you project yourself 10, 20 years in the future and you think about the AI that you're using every single day, that's teaching your kids, managing your health, helping you be creative, helping you be the best you can be: who owns that AI? How is that built? What's the stuff that goes in it? What are the outputs? And that's where we realized that what we outlined is, I think, the ideal environment for that. I don't want it controlled by a government. I don't want the government allocating capital. I don't want it controlled by a private company. I need my sovereign AI that I own, and I need it to be clean on the input data. I need it to be aligned with me and looking out for me. Otherwise, it just doesn't work.

Nathan Labenz: Maybe 2 last questions, and I appreciate your generosity with your time. It seems like you think we have a relatively short window. You kind of alluded a couple times to the thousand days since ChatGPT. The next thousand days, things are gonna look a lot different. Give us the argument for why this period is critical and why we sort of need to get things right in this phase before they crystallize, or a future pattern becomes so entrenched that it may be hard to break out of. And then after that, I wanna hear just kind of your vision for human life in the scenario where it goes well.

Emad Mostaque: Yeah. So I think, again, there's this leaked Anthropic thing where they showed this takeoff, and that could be quite reasonable, particularly on the worker basis as opposed to the model training basis. I'm a bit dubious that you actually need to have that. I must say, "5 billion active parameters is all you need," you know, just like "640K of RAM is all you need," especially if you've got pure reasoning. I think o3 was a precursor of that. Having the defaults in a country like the UAE, saying ChatGPT for assistance, is gonna be incredibly powerful. And so we need to set up an open, communally owned interface layer. And you can do that very quickly, within a few years, if you have the model that we're describing, where nations are stacking GPUs, they're mining this currency if we can get traction on it, and all the value is going into it. We think that's a very powerful thing. The first entity that checks every single policy, again, has lots of power. The first entity that has the first supercomputer organizing our accounts and knowledge has lots of power. We need to make sure those things are collectively owned, because these things do act as Schelling points, which I think is very important. And we need them because what's gonna happen over the next 3 years is that the AI will be good enough to displace jobs, but it won't displace them immediately. The safest jobs in the world are San Francisco MTA administrators earning $400,000 a year, because they're not about production or performance. Public sector jobs are safe for a while, but then income tax receipts and consumption will drop, because you have displacement of workers as a sandpile collapse. And that will affect different industries at different times, and then robots will come as well. So the next 3 years is when the AI becomes good enough. Like, Dario, what, 6 months ago, said 90% of code will be written by AI by around about now. He should have said can be written.
Like, a couple of years ago, he said it's about 5 years for programming, which gives me another couple of years. It can be written by AI, but it doesn't mean that it is. In fact, not every coder is using AI right now, which is crazy. Like, I'd say probably about 50% of coders are using AI. So the distributional effects will take longer, but the defaults we set now are important. If we're in the scale-up environment, it takes a little while to scale up, and it could be that you're locked out of that GPU share of the world. So we need to make sure as much of that GPU share of the world is diverted towards human benefit as possible. And the final thing is the power structure here as well. Right now, outside of the big AI companies and some of the Chinese ones, what does the power look like? Governments can't force OpenAI to build models in any way, except for military uses. You need to have a power bloc and a constituency that speaks for the people. And again, that should be some sort of collectively owned entity, because people need to speak up, because they're not being represented here. In fact, people are even saying that if you tax the AI companies and they become the main providers of tax, they become the most represented. Particularly, they're gonna use the AIs to effectively impact democracy itself. So all of these are converging at the same time, so we need to have something big now. And, again, the only thing we could figure out is what we've described. I don't think there's any other way to coordinate this. And I think it's at the ideal time now, because once the models get good enough in a year or 2, once you have your ChatGPT and you've put all your life into it, etc., and it knows you medically, you're not gonna switch. Right? The moats are gonna grow bigger and bigger. If you get the scale-up, the moats are gonna grow bigger and bigger. But there's this period now where we can set really interesting defaults for the AI that matters.
And again, you can still use these other AIs, but don't give them ownership of the control plane. Don't give them ownership of your data. That, I think, is a key thing. And let's think of new ways to have this collective ownership, this collective organization, and build stuff that really helps people as well. Because, again, a couple of years ago, in 2020, I was using AI to organize COVID knowledge, and then a whole bunch of AI companies didn't give me the tech, which led to Stability, etc. I couldn't organize all the cancer knowledge in the world back then. But now I can say, hand on heart, if we build a supercomputer for cancer, we will accelerate a cure for cancer and be able to help every single family in the world going through their cancer journey. And there's no actual debating that. Isn't that a wonderful thing to be able to bootstrap something like this? And for Alzheimer's and for neurodegenerative disease and multiple sclerosis. So I think it's a mixture of positive and negative here, with the positive enabling us to buffer a potential negative future, because the way things are going right now, it's the great fragmentation or the singleton. There's no other way about it. The AI companies will run the governments, effectively. On the flip side, the second question, the positive view of the future: what is the meaning of life? Nice small question. 42, right? And maybe that's what Super Grok 8 will kind of figure out. It kinda goes back to our core thing, which is a reimagining of identity and purpose. And our purpose in this life is not to make money for companies or live to work. Like, you work to live. Right? If we can figure out a way to have that go to the side, then we can achieve much more if we orient it the way we think. And I think China is a very interesting example of that. The population pyramid of China is messed up completely. With robots, the robots can look after the people of China as they grow old. You don't need youngsters anymore.
And in fact, I think in a few years, China will stop exporting robots because it can be a completely self-sufficient society. But then what is the society optimizing for? Human flourishing. It's your interconnection with others, it's advancements in art, culture, all these kinds of things. It's your relationship with your family. You don't have enough time to spend with your family or your kids because you're working all day long. But is your work really that much more important than helping your kids thrive and survive? And I think Alpha School has kind of shown that conventional education really is factory schooling, and you'll see more instances of that. So we need to have a new story and a new social contract. Again, this is why I think a Star Trek future is better than the Star Wars future, as it were. Like, again, what is that about? It's about exploration. It's about pushing the boundaries. Like, Picard was not a good series because it was about espionage. But the spirit of inquiry, of progress and more, we need to set that up better to allow people to be happier. We need to really focus on social connections and, as you said, the caring economy. And that's an ideal because there's no robot substitute for your interconnectivity. And I write in the book, we should be the guiders of anti-entropy, particularly if capital comes from us, from being human. Like, why should I earn money to a base level? It's because I'm a person, because I'm valuable. It isn't because of my contribution to society. If I wanna get wealthy or I want extra material things, yeah, sure, work for that. Have the capability to access that. Put everyone on an equal capability footing, and then really allow people to understand each other, themselves, and the universe better, and then things will be happier. Then depression will be lower. But we should be able to write that story ourselves. And, again, really think about how our social contract has moved from Rawls to Hobbes's Leviathan to others.
Like, what is the nature of being, and other things like that? This is why I think there is room for a new philosophy and more. We've shut down some things, but, again, let's leave it to the communities to figure out, and build stronger communities, full stop. Let's decrease the hate, and let's increase the positive stories. Because, as a final thing, again, this is one week after Charlie Kirk and all of that, the biggest story that causes violence is that some humans are not human. That's the biggest lie ever told. And our echo chambers and our attention systems exacerbate that now. If we can tell better stories and protect ourselves better, then maybe we can realize that we're all people and we can work together. And when we all work together, there's nothing we can't achieve. So, again, that's why we need to use the AI to increase the nature of our agency versus replace us with agents. And that's a design consideration that we have right now, and now is the time to decide that.

Nathan Labenz: This is fascinating. Anything else you wanna touch on that we didn't get to, or any other thoughts you just wanna leave people with?

Emad Mostaque: I think that's a lot of stuff, man. Like, it's the most exciting time in history. And this is the tipping point, literally right now. Like, I think people need to really think from first principles about how they view their own families, their economies, and more. And, again, put yourself in the future. What type of AI do you want to exist, and can you help that happen? Because it's inevitable now. You're not stopping this.

Nathan Labenz: Yeah. The momentum is only building, and it doesn't seem like it's gonna be turned back anytime soon. I definitely agree with that. The book is The Last Economy: A Guide to the Age of Intelligent Economics. The company that's training and open-sourcing all these datasets and models is Intelligent Internet. And Emad Mostaque, thank you again for being part of The Cognitive Revolution.

Emad Mostaque: Thank you very much.

Nathan Labenz: If you're finding value in the show, we'd appreciate it if you take a moment to share it with friends, post online, write a review on Apple Podcasts or Spotify, or just leave us a comment on YouTube. Of course, we always welcome your feedback, guest and topic suggestions, and sponsorship inquiries, either via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. The Cognitive Revolution is part of the Turpentine Network, a network of podcasts where experts talk technology, business, economics, geopolitics, culture, and more, which is now a part of a16z. We're produced by AI Podcasting. If you're looking for podcast production help for everything from the moment you stop recording to the moment your audience starts listening, check them out and see my endorsement at aipodcast.ing. And finally, I encourage you to take a moment to check out our new and improved show notes, which were created automatically by Notion's AI Meeting Notes. AI Meeting Notes captures every detail and breaks down complex concepts so no idea gets lost. And because AI Meeting Notes lives right in Notion, everything you capture, whether that's meetings, podcasts, interviews, or conversations, lives exactly where you plan, build, and get things done. No switching, no slowdown. Check out Notion's AI Meeting Notes if you want perfect notes that write themselves. And head to the link in our show notes to try Notion's AI Meeting Notes free for 30 days.