Stability AI’s Emad Mostaque and Slow Ventures’ Sam Lessin Debate Investing in AI

Emad Mostaque and Sam Lessin discuss AI's impact on society and its potential as a venture capital investment in the latest Cognitive Revolution episode.


Watch Episode Here

Video Description

In this episode, Nathan Labenz and Erik Torenberg host a discussion between Emad Mostaque, founder of Stability AI, and Sam Lessin, GP of Slow Ventures and former VP Product at Facebook. These influential technology thinkers tackle topics pertinent to evaluating whether AI is a good investment for venture capital. They talk about the attention and information economy, SaaS markets, how AI-created entertainment will impact society and our relationships, and AI disruption of highly-regulated industries.

The Cognitive Revolution is a part of the Turpentine podcast network. Learn more at www.turpentine.co

OPPORTUNITY: JOIN NATHAN'S LIVE EVENT IN SAN FRANCISCO ON FRI JUNE 16th:
Why AI with Athena: https://v4p9mjmurpv.typeform.com/to/llRcJSF9

TIMESTAMPS:
(00:00) Episode Preview
(01:17) Nathan explains why we hosted a conversation with Emad and Sam
(08:25) Discussion kicks off: AI as an extender rather than a deck shuffler
(12:01) Is AI a disruptive or sustaining innovation?
(14:22) What is intelligence?
(16:00) The attention and information economy
(21:56) Storytelling, the stock market, and the economy
(23:23) Sponsor: Omneky
(23:50) AI in the media industry, Hollywood, and distribution
(29:10) Startups vs incumbents
(30:30) Disrupting highly-regulated industries
(32:30) AI in education: an AI tutor for every child
(35:15) AI as StackOverflow 2.0 for engineers
(37:23) Regulatory Arbitrage
(40:06) The oracle problem in AI
(42:00) Apple as the weakest BigTech player in AI
(44:30) AI in communities and the future of social media
(46:55) Replika on Valentine’s Day and AI-generated porn
(53:50) How does the market for inference shape up? Inference vs. cloud
(55:08) Price of primary care in US in the next decade
(55:38) What are the most likely incumbents to be disrupted?
(56:55) Will BigTech continue to open-source?
(58:56) Marriage rate in US in the future given AI
(01:00:24) Can any AI leader sustain high gross margins in the future?
(01:02:04) Stability’s business model

TWITTER:
@EMostaque (Emad)
@Lessin (Sam)
@CogRev_Podcast
@labenz (Nathan)
@eriktorenberg (Erik)

SPONSOR:
Thank you Omneky for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.


Music Credit: MusicLM


Full Transcript

Transcript

Nathan Labenz: (0:00) I think a key question to this is, people say hallucinations. I was like, what does that mean? Well, I mean, it doesn't get every single fact completely right. ChatGPT is probably like a hundred gigabytes down from like 10 trillion words. The fact you can get anything right is an absolute technical marvel that no one's really sure exactly how that happens. What if you had an AI tutor for every child? What does that look like? What if you had a hundred AI tutors for every child?

Emad Mostaque: (0:25) For the first time, every single person can have hundreds of characters that like and support them all the time. Basically, you log into social media or whatever, and you're like, hey, I'm Sam. And it's like, cool. What type of people, instead of who do you want to follow, it's like, who do you want to follow you?

Nathan Labenz: (0:38) For example, there aren't enough therapists in the world. And it is a regulated industry, but at the same time, there is a gap for therapists, just like the meditation apps kind of stepped in. And they created Calm, they created these other things that were huge.

Nathan Labenz: (0:54) Hello and welcome to the Cognitive Revolution.

Nathan Labenz: (0:57) Where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week, we'll explore their revolutionary ideas, and together we'll build a picture of how AI technology will transform work, life, and society in the coming years. I'm Nathan Labenz, joined by my cohost Erik Torenberg.

Hello and welcome back to the Cognitive Revolution. Today's episode is a super interesting one on a number of levels as we're hosting a discussion between two super influential technology thinkers: Sam Lessin, former VP of product at Facebook, now early stage technology investor and writer, and Emad Mostaque, founder and CEO of Stability AI, whose work at Stability, highlighted of course by Stable Diffusion, has already been incredibly influential, but has also come under intense scrutiny in the months since he raised $100 million at a $1 billion valuation.

This conversation started on Twitter with a short essay that Sam wrote arguing that AI is mostly a bad investment for VC. Emad responded and suggested a podcast on the topic, and Erik and I were naturally happy to volunteer to host. Both Sam and Emad talked fast. This is a 1.5x speed episode for me, down from the usual 2x, and both had a lot to say. So we mostly let them speak directly to each other before I jumped in at the end to ask some concrete questions.

I think regular listeners to the show will know that I definitely share Sam's point of view around investing in AI. AI may well disrupt society at large, but it doesn't seem likely to disrupt many existing SaaS markets between now and then. There will, as Sam says, always be exceptions, but for someone whose focus is on looking for those early stage companies with 100x potential investment returns, I think he's quite right that they'll be few and far between, at least at the model and application layers.

For what it's worth, though, I do think Sam is quite wrong to limit his thinking about LLM function to the "no real intelligence, just association between words" paradigm. There is now ample mechanistic interpretability work that shows quite conclusively that AI models are indeed grokking much more than statistical correlation. But that's a topic for another episode.

For today, the subtext of the conversation seems to be this question: will Stability AI prove to be one of those exceptional, highly successful startups deserving of its unicorn status and valuation? From my standpoint, the answer may depend on your definition of success. Stability is as much a movement as a company and has already left an indelible mark on AI open source culture. Their impact goes beyond the groundbreaking Stable Diffusion, including major dataset releases such as the LAION 5 billion image dataset, various language models and accompanying open source RLHF libraries enabling further downstream training and customization, and many, many other projects across a wide range of modalities.

They've also established themselves as tremendous identifiers of and supporters of talent, including another upcoming guest, 19-year-old PhD Tanishq Matthew Abraham, who just published a literal mind-reading paper that converts fMRI data into reconstructed images of what the person saw. Truly mind-blowing work.

But perhaps more important than any of that has been Emad's unique ability to articulate an inspiring vision for the future of AI. While the positive vision of AI that we tend to hear, when we hear one at all, often centers around the possibility of large and powerful AGIs, of which there might only be a few, presumably built and owned by leading technology firms, Emad has not only signed the AGI pause letter and the extinction risk statement, but has articulated a very different positive vision for a panoply of smaller AI models, mostly presumably derived from the open source standards that he and the team at Stability are creating, but all highly specialized for specific purposes and localized to specific contexts and cultures.

This is an extremely appealing notion to billions of people around the world who don't want to be beholden to American or, for that matter, Chinese corporations for their access to AI. Emad has been criticized recently for allegedly exaggerating certain claims and affiliations and for some operational problems at Stability that resulted in people sometimes being paid late. And while some of that may well have happened, I will say that I've followed him quite closely now for at least a year and have generally found him to be very reasonable. He has, for example, always recognized the reality of OpenAI and Google's moats and has projected that open source models will continue to lag leading closed source models by a year or more. All of this seems quite right and reasonable to me.

Given Emad's comments about the centrality of stories, I think it's safe to say he understands the task of developing a positive vision for AI, a vision that others can really buy into, as a core part of his role and strategy. This is quite different from other AI CEOs who often seem to be sharing their plans more for your information than for your input, and it really does seem to be working. I've joined the Discords of many Stability-affiliated projects and have been very impressed with the quality of people and conversations that they contain.

So whether Stability will ultimately deliver a great return for the investors who bought in at that $1 billion valuation is, for me, not the most interesting question about the company. I'd be very surprised if they failed outright given the quality of talent that they have. And so the question that matters more to me is simply: what impact will they have? Will their push toward decentralization prove democratizing, destabilizing, or both?

If you fear centralization of power and you want to see a rich ecology of AIs develop around the world, you might expect their contribution to be extremely positive. If, on the other hand, you fear chaos and see AIs as invasive species colonizing niche after niche and ultimately perhaps competing with humans, you might feel quite the opposite indeed. For my part, as you can probably guess, I expect the outcome will ultimately be a bit of both.

Throughout this conversation, you'll hear just how much change both Sam and Emad take for granted as they think about the future. Culture, entertainment, and relationships, they agree, are in for a shock. The global South may well have leapfrog moments in education and even medicine. Online communities may come to contain AI characters that we can't even identify as non-human.

Given the magnitude of all these changes and the resources and talent that Emad has amassed, the inspiration he's provided, and the tremendous global need that AI seems so well suited to fill, I think Stability has a real chance not only to become a great company, but to help shape a global universal basic intelligence standard, a potentially historic development. How humans ultimately wield the new power that Emad and others unlock, and whether we can control AI long term at all, is much harder to predict, but can ultimately only go one way or the other. Now I hope you enjoy this fast-paced conversation with Emad Mostaque and Sam Lessin.

Sam Lessin: (8:25) I think that large language models and a lot of the AI stuff that we're seeing kind of start to get consumerized right now and become real, it's super cool. There's no question about that. And there are absolutely going to be great product experiences improved by it, right, and opportunities to create more efficiency, create better interfaces. I am not negative on how some of this stuff will find its way into consumer product experiences and make things better.

I mean, you know, my wife's company, the publication The Information, we've already deployed a bunch of AI stuff that makes search for The Information go from absolutely terrible to pretty good. And there's a bunch more stuff coming that will get better. I'm not against that.

I do think the things that I keep in mind, one, as an investor, is I think the case about why a bunch of this technology is going to make Meta and Amazon and Google and a bunch of big players an assload of money is clear. Right? I think the idea that it is a wedge or an angle that's going to allow a bunch of companies from zero to come out of nowhere and then become wildly profitable or compete with those guys, those types of big players, I think is much more sus, as they would say.

And it's because, I mean, to really take advantage of this stuff, you need a ton of distribution. You need a ton of data. And I really see a lot of what I've seen as opportunities to extend innovation, right, that already exists versus kind of completely reshuffle the deck. Right? And so that's like my big thing. I am very bullish on crypto long term. Crypto is undeniably, whatever you think of it, a deck reshuffler. Right? AI and what we're seeing is, from my perspective, not a deck reshuffler; it's an extender.

So, you know, people come pitch me, like, we're going to be the Adobe of AI. I'm like, Adobe is going to be the Adobe of AI, right, from a deployment perspective. So I think it's a very tough one to see. Will there be exceptions? Of course there'll be exceptions. There are always exceptions. Right? But I think as a thematic thing, I think it's hard.

I'd also say as an investor, a seed investor, which is, you know, how I earn my daily bread, right, I'd say that the opportunities to deploy a few million dollars, turn over a card, and have an experience where, oh my God, there's something here, now let's have a Series A investor put a ton more money in and see it scale up, I think they're few and far between. And because everyone's so excited, everything's way mispriced. Right? And so for me as an investor, I think it's an extremely hard market to get excited about.

What else can I say? I mean, look. I do think the elephant in the room, which I'm sure we can discuss or not, right, is, you know, for the companies that have gone out so far, you talk about ChatGPT, I think there's huge regulatory problems which are becoming clearer. And, you know, it's not about like the machine is going to eat us all. I think that's a load of crap. Right? And I've been on the record for quite some time being very, very negative and cynical about kind of a lot of those narratives.

I mean, at the end of the day, token guessing, guess the next token, is not a fundamentally dangerous piece of technology. I do think that the copyright issues are deeply real and complicated, and there's a bunch of other challenges that these guys are going to face that, you know, again, because the world has a general viewpoint of like, fool me once, shame on you, fool me twice, shame on me. The era from social media to Uber to whatever, like, I think people are going to be way more quick and reactive to what's going on from a regulatory environment here, I hope, than historically. But I don't know. That's a ton of ground, and I don't know. Where do you want to go?

Emad Mostaque: (11:56) Yeah. No. It is a ton of ground. I think, you know, there's this question of, is this a disruptive or sustaining innovation? And there's a question of what this is. You have the classical big data and then you extrapolate it to sell you ads. That was good old internet. And it created these kind of behemoths in Meta and Google in particular. Then you have the application of computer vision and these other things, largely to the incumbents. So value was captured there.

Sam Lessin: (12:19) I would say in mobile, and mobile's a great example of, like, just double down. Right?

Emad Mostaque: (12:23) Yeah. And that's why kind of Facebook's first shift to mobile was good. Next shift to Meta, I don't know. Maybe they'll rename themselves Spatial or something. But, you know, this becomes very interesting because these models are something a bit different. So with Stable Diffusion, we took 100,000 gigabytes of images, and the output was a 2 gigabyte file. And it powered four of the top ten apps on the App Store in December, with that 2 gigabyte file as the entire backend. You put words in and images pop out, and it makes pretty pictures of your face. Right? But then they all dropped off and they disappeared because they were features more than apps. They were cool features, but they weren't really product experiences.
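
To make the "words in, images out" interface Emad describes concrete, here is a minimal sketch using the open source Hugging Face diffusers library to run a Stable Diffusion checkpoint locally. The specific model ID, dtype, and step count are illustrative assumptions, not details from the conversation.

```python
# Minimal text-to-image sketch with a Stable Diffusion checkpoint.
# Assumes the `diffusers` and `torch` packages are installed; the model ID
# below is illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # the ~2 GB class of weights Emad refers to
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # or "cpu", which is much slower

# Words in, images out.
prompt = "a watercolor portrait of an astronaut reading a book"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("astronaut.png")
```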

Sam Lessin: (12:59) That is exactly what happened when the App Store launched. Right? You had fart apps as number one for May. There's a brief moment where it's cool and you're experimenting with it, and you have these kind of boob apps. Right? But they're not real.

Emad Mostaque: (13:10) Yeah. I think we're like the poop apps. Yeah. You're right. Exactly. Literally. But it's not real because you have to have the user experience and build products like normal. But where I feel we are right now is at the primitive stage, with very boring interactions. One-to-one interaction is very boring, I think. It is, again, very surface level without any memory, and it's ephemeral and fleeting. My thing is that we're probably at the iPhone 2G, iPhone 3G bit. We're just getting copy-paste.

Because what's happened is you've got technology that's gone from research and is now starting to go into engineering. What are the design patterns for this? How is it implemented? What's it good for? I think a key question of this is, like, you know, people say hallucinations. I was like, what does that mean? Well, I mean, it doesn't get every single fact completely right. ChatGPT is probably like a hundred gigabytes down from like 10 trillion words. The fact you can get anything right is an absolute technical marvel that no one's really sure exactly how that happens. You know? It's like, you know, Pied Piper from Silicon Valley. That Weissman Score would be even more intense if you can compress all that knowledge.

Because what these really are, they're reasoning machines. They're not fact machines. Because we've got two parts to our brain.
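
To put the compression claim above in rough perspective, here is a back-of-envelope calculation using Emad's own figures; the bytes-per-word number is an assumption added purely for illustration.

```python
# Back-of-envelope for the "100 gigabytes down from 10 trillion words" claim.
# The bytes-per-word figure is an assumption for illustration only.
words_in_training_data = 10e12   # 10 trillion words (Emad's figure)
bytes_per_word = 5               # assumed average, including spaces
model_size_bytes = 100e9         # ~100 GB of weights (Emad's figure)

training_text_bytes = words_in_training_data * bytes_per_word
compression_ratio = training_text_bytes / model_size_bytes
print(f"Implied compression: roughly {compression_ratio:.0f}x")  # ~500x
```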

Sam Lessin: (14:16) Are they reasoning machines? Aren't they "guess the next token" machines? Like, that's the, I think that's a really fundamental thing. Like, I think the, my model and the easiest way for most consumers to think about this, I think it's basically accurate, right, is there's no actual intelligence to these systems. Right? All they're doing is saying, okay, based on all the words I've seen in the graph of language that I've been able to observe, here's the most likely next token. And that's really cool, to be clear. That's super useful, but calling that intelligence is a real stretch in my mind.
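
Sam's "guess the next token" framing boils down to picking from a probability distribution over a vocabulary at each step. A toy sketch of that single step follows; the vocabulary and logit values are invented for illustration and do not come from any real model.

```python
# Toy illustration of next-token prediction: a language model outputs a score
# (logit) per vocabulary item; softmax turns scores into probabilities, and we
# pick (or sample) the most likely continuation. Values here are made up.
import math

# Hypothetical logits for the context "the cat sat on the"
logits = {"dog": 1.2, "mat": 3.1, "moon": 0.3, "sat": 2.0}

def softmax(scores):
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

probs = softmax(logits)
next_token = max(probs, key=probs.get)      # greedy decoding
print(probs)
print("predicted next token:", next_token)  # "mat"
```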

Emad Mostaque: (14:45) I think it depends on your definition of intelligence. Like, are you applying the free energy principle of Karl Friston, where everything is just intelligence from energy kind of dropping to its lowest state, or a different definition of intelligence? I think what I look at is like this: one-to-one is guessing the next token for language models. For image models, they're diffusion-based, and now there are all sorts of other architectures. But it's about the output and what it can do.

So one-on-one, it's a bit dumb. It doesn't have memory. You have the Cicero paper from Meta, where they had eight language models interacting with each other, and it outperformed humans in the game of Diplomacy. Just like all that good old AlphaGo type stuff, which used reinforcement learning. Is that intelligence? Probably still not, but it can augment intelligence. That's something that we've been focusing on a lot because you can use it for actual intelligence-augmenting things. You can use it for reasoning things. Give it a PDF and say, what on earth is this PDF talking about? You can do that right now, and that's a useful thing.

Sam Lessin: (15:42) That reduces frustration.

Emad Mostaque: (15:42) I used to invest in video games. I used to look at time to fun, flow, and frustration. I looked at things like, you know, this podcast we're doing. In a year, it'll be automatically transcribed and edited and added to our launch base through next token prediction. Does that require AGI? No.

Sam Lessin: (15:59) Yeah. Although, interesting, let's talk about this podcast. Think it's a really interesting case. You know, in the early days of Clubhouse, when Clubhouse was ripping, I used to go after Paul all the time, and I wrote about this being like, you are so stupid for not recording this stuff. I was like, look. Here's the reality. These conversations in Clubhouse are drivel. Right? Like, 99% of them is crap, and I don't want to listen to it.

However, if you've created a magical pump that says the internet is full of SEO shit and Wikipedia, we have a magic pump of people wanting to talk to each other live. Here's the thing: people want to talk. No one wants to listen. But if you transcribe and record it all, and you can create an index out of it, and then all of a sudden have this next generation search engine, that's fucking interesting. Right?

Here's the problem. What Paul said at the time, which I think turns out to be totally wrong given where AI is going, he's like, yes, Sam, but there's no way to index it and blah. I'm like, there will be. Like, there's clearly going to be. Right? And it turns out, because I like saying I told you so: I told you so. There definitely is a way to do that now. Right? And that would have been super sweet. But here's the problem with a lot of these visions of, like, oh, well, we'll just take all the recorded podcasts, put a front end on top of them, compress them down, and be done: there's no economic model to that.

And maybe we can get into business models for a second, whether it's going to make sense for anyone to publicly share anything. Right? The reason that people put things on the web was because they were getting paid for it in one form or another, because the whole ecosystem Google created was a trade. It was, okay, you get to index this shit, but you're going to send me traffic, and I can monetize it. And, you know, the publishers got snowed by that for a while. Right? And almost went away until they figured out paywalls. Right?

We're doing this now because it's kind of fun and bullshit and we'll learn, right, from each other. But we're also kind of doing it, at least I'll do it, because, like, I'll post it. Maybe someone will follow me out of it. It's a fun hour to spend with interesting people. Right? But there's an economics to it in some form, social or financial capital. This model, I actually think that the interesting thing about AI, if you take that view, what you would think is interesting is it's already going to crush the information economy of the web. Right?

I think that if you roll it forward, this conversation will not be in the public domain, right, going forward because there'll be no social economics to it. It'll just be a compression on top of it. And if anything, AI, again, if you take the model of, oh, it'll take a bunch of podcasts and compress them down into tweets, right, will end up kind of collapsing on itself if you need people, which you do, right, to ultimately be the source of truth and information about the world.

Emad Mostaque: (18:36) I'm super with that point of view. I'm not sure I entirely agree, because, you know, sometimes it is fun to shoot the shit. We do have health podcasts, and they've got their ads and things like that. But I think the attention economy is a very interesting element to this, particularly because these models are based on attention. So the differential with these models versus previous is that you have the "attention is all you need" paper, where it's like, from an information theory perspective, information is valuable inasmuch as it changes state. So you take this whole podcast and compress it down to a few tweets, that's all you need to see. But sometimes people want to see the full thing.

Sam Lessin: (19:05) No one really wants to see the whole thing.

Emad Mostaque: (19:07) Oh, no. They do. They do. Sometimes it's quite fun to kind of do it because I mean, let's say the Christensen thing of a job to be done. Right? You have a functional component, a social component, and an emotional component. You know? Why does everyone want to go to a concert? You know? Why do people want to have collectors' items and things? Products have different aspects and different elements to it. People still read full books. They don't kind of read the summaries of books. They don't read the simulacrums of it.

Sam Lessin: (19:33) I mean, look. To me, there's two different, again, this gets into some old Facebook stuff. But I think we can talk about, let's take financial economy out of it and just talk about informational and social economies. There's the entertainment economy. Right? For sure, AI is going to crush in the entertainment economy. Right? There's no question about that. Right? You start with porn and go on through, and the reality is we went from, you know, People Magazine to your friends, and your friends are more interesting than People Magazine. And guess what's more interesting than your friends? Professional friends who are hotter and funnier. And guess who's more interesting than hot, funny, professional friends? It turns out algorithmically finding the best person from the universe. You'll find some niche that's better. What's better than that? Synthetic. Right?

We will get to the point where, hey, there will be a hotter, funnier, more interesting, more personalized AI thing, which is derived. I totally buy that. Right? And I think that's why actually some, it's been funny to watch some pretty interesting influencers who are smart and be like, oh my God, this is the end of the world for us. Right? I agree with that.

Information is a very, very different beast, though. Right? Because the value is not engagement. In the broad sense, "attention is everything" is totally right if you're trying to optimize for entertainment. It is not true, right, if you actually need to know what's going on in the world, or if you're dealing with the real world. And that interface between the real world and the digital world, where the systems have no knowledge of what actually is truth, is where I think that argument probably falls down the most.

Emad Mostaque: (21:17) Well, I mean, maybe this is why, you know, if you say that kind of hallucinations are kind of core and it's a creativity machine, media is where it's more impactful, where the truth isn't the element there.

Sam Lessin: (21:27) Right? What's happened a little bit to date is a few of the AI companies wanted to talk about themselves as information machines, and they realized they can't. Right? And so they'll be like, we're not. Instead we're creative. Don't trust us for facts. That's fine. And I agree there'll be useful entertainment machines, but I think that goes into the whole, like, what are we actually talking about here? What are the actual values? And how scoped it is, which is not, again, it's not zero. It's just not everything.

Emad Mostaque: (21:53) Most of our societies are based on stories. My view on finance, pretty much all of finance is securitization and leverage, telling stories, then how good you tell them. And we can see the power of stories as they move around. So Silicon Valley Bank was a story that was true and led to an $18 billion outflow like that. All of us are kind of familiar with that, probably listening to this podcast.

Sam Lessin: (22:15) I think it's pretty cynical to say it's all in the stories. I mean, that's like a, I think there are, there's reality in the world. Like, the economy is not based just on storytelling.

Emad Mostaque: (22:23) No. I mean, the dollar is a story. The economy is based on the dollar. And so you have the Fed, confidence. You have confidence in the stock markets. It's kind of layers of these things, and then you have this technology.

Sam Lessin: (22:35) You need trust. That's true. And trust, I mean, ultimately goes all the way down to, is there a military behind it, which is somewhat of a story. And that I agree with. But I think that's a pretty abstract view. Right? Companies earn cash flows. They're real or not real. They release products. They do work that's real or not real. It's not just storytelling.

Emad Mostaque: (22:52) What is the multiple now? Maybe it's because I'm a former hedge fund manager, so I always looked at what was the incremental story for a stock that adjusted the multiples and other things.

Sam Lessin: (23:00) Sure. I agree that if you look at the world of multiples, you say, why do you get multiple expansion or compression? Right? And that's based on people's feelings about the world and future cash flows, right, in theory. That is a lot of storytelling. I don't think that's actually the vast majority of the economy. Right? That's the stock market. So I think that separating out what is the stock market from what's the economy is pretty important.

Nathan Labenz: (23:23) Hey, we'll continue our interview in a moment after a word from our sponsors. Omneky uses generative AI to enable you to launch hundreds of thousands of ad iterations that actually work, customized across all platforms with a click of a button. I believe in Omneky so much that I invested in it, and I recommend you use it too. Use CogRev to get a 10% discount.

Emad Mostaque: (23:44) I think this is the important thing. We separate it out and we see where does this technology affect, and when does it go to the incumbents versus startups? Are these things fundable? We have one area of media. We can discuss that very concretely. I think it will have a massive impact on media. At Stability, we have a leading media team. We have agreement there, but we can dig into that. The other area is language models. Right now they're chatbots, and it's nice, but Bing is not the top search engine. It's not even top 20 on the App Store because it's still a terrible experience, relatively speaking. Even though some people are like, "Well, I use it for all the things." You don't really. ChatGPT grows really fast and it's useful for things like doing your own work. But do you really use it that much? What I find interesting is really looking at where companies are trying to go beyond the basic search patterns and have the classical feedback loops with engaging content and see how that grows. I think Midjourney is a good example of that, where David Holz built a community, took it to like 14 million people, and is making money hand over fist because he built a good experience on existing infrastructure, even though Discord is weird, a Facebook app style. But how many of those have you seen looking across the entire AI space? Most of the stuff right now is terrible.

Sam Lessin: (25:05) Again, I think the question is who gets the value. Let's talk about entertainment, because we actually agree on the entertainment thing. In a world of closed loop, it's all about what's the most engaging thing, and attention is everything. These systems are quite capable, assuming that you don't end up in legal hell, which I do think is a really big problem around human creativity and copyright and a bunch of other points of legal leverage on these things. I agree that you can make really compelling content, and it's going to hurt a lot of the human entertainment industry. But the question is who's going to win it? Is it going to be the Hollywood studios? Is it going to be the existing publishers who just start adding incrementally more of this stuff? Or is it going to be new startups or new people? Look, there's always exceptions to the rule, but I think almost the entire pie is going to be the people who have the distribution. They have the IP. They have all the pieces they need to just plug this in.

Emad Mostaque: (26:01) Maybe we can look at it in terms of consumption. The cost of consuming content went to zero with streaming and all these things, and that led to some winners coming through, because you have Netflix, you have Spotify. The cost of creating content basically goes to zero with this technology as well. We may see, I believe, full feature films made with this in a few years.

Sam Lessin: (26:21) Yeah, but I guess you have to own and distribute those. The reality is, I think it'll be the Hollywood studios as they have the distribution.

Emad Mostaque: (26:29) If you believe that's the distribution mechanism. But there's a whole ecosystem that can build around that. Things like DNEG, things like Industrial Light and Magic, do you need that when you have rendering at scale?

Sam Lessin: (26:40) To be clear, the thing I think you could totally see changing or evolving is going to be the factory inputs. Meaning, yes, are there capital investments that people have made that will become less relevant because of AI? Absolutely. There's no question. Will you almost certainly still have human writers' rooms for the foreseeable future? For sure. There's going to be hybrids. My basic point is that IP matters, distribution matters. There are things that matter. I agree with you that the factory plumbing in some of these places gets a lot less valuable if you have better AI tooling in certain places. I just don't think it matters.

Emad Mostaque: (27:20) I think it's a bit of a disruptive innovation for that side of things, increasing the pace of output. So Pixar can do six movies a year rather than two. Then there's the question around the industry. A few weeks ago, I was at Cannes and gave a talk. I used to be a video game investor and player. The video game industry over the last ten years has gone from $70 billion to $170 billion. The average score has gone from 69% to 74%. Movies are $40 billion to $50 billion, and the average score is 6.49 on IMDB. Are you going to be able to make better movies and have a bigger market, in which case there's more room for people to make money? Or is it going to be a case where it cannibalizes itself? There are some key questions around media and media consumption.

Sam Lessin: (27:59) At the end of the day, the media consumption thing, though, again, depending on how you want to factor and look at it, it really just comes back to there's 24 human hours in a day. The reality is where time is spent, it shifts as it comes to this stuff. Time spent shifted dramatically into social off of other things when that happened. Will social get more compelling with AI? Absolutely. So will more attention shift into Instagram because of it? Absolutely. Do I believe there's going to be another platform that comes out of nowhere and swipes Instagram because the cost of production goes down? Nah. Do I believe that some new studio is going to come out and take out Pixar? Nah. Because they'll just make a few more films. That's cool. People will make money on that in some places. The level, the cost of production, and therefore the war of content gets more intense for sure. You'll get to a point where if you don't use this stuff, you're going to get screwed. But just because the competition level rises doesn't necessarily change the scorecard very much about how these things go.

Emad Mostaque: (29:08) Then there's this question of do you use legacy systems, or do you use new systems such as Runway ML, such as Wonder Dynamics, and some of these other ones that are engineered differently? I think there's a lot of legacy stuff where you're used to Photoshop and you continue to use Photoshop, and now they're introducing features like infill. But is there room for a ground-up interface? And we see that sometimes with a character.

Sam Lessin: (29:31) My assertion is broadly no, but there will be exceptions. And the broadly no is it's not, to your point about innovation, is it sustaining or is it disruptive? It's like, Photoshop will get it 95% right. They already have everyone's payment on file. They already have the infrastructure. This is not like the internet. In the internet, there was a bunch of companies that were fundamentally unprepared for this. I do not think that most of the incumbents are fundamentally unprepared for this.

Emad Mostaque: (29:58) There's a question of do you create brand new markets? I was an early investor in Huya, the Chinese Twitch, and usage was two hours a day on average per user. Now on Character AI, I think it's still number two on the App Store. We're seeing two hours a day on average of usage, which is an insane engagement metric. It's quite nice that you have a chat with it. But there's a question, can that become a product or a network? I think that we may be looking at some of the wrong areas here, because what you have is the consumer experience, the media experience, and the enterprise experience. I think one of the things that's most interesting for me in terms of where money could potentially be made is actually the regulated experience. At Stability, we make open models, open source, but actually what we do is open auditable models for enterprise, private data, governments. We've got a whole bunch of stuff that doesn't have any web crawls, deployed via Bedrock and others. I think that's valuable. One of the things we do is education. That's where I look at some of these areas, and they've been the main contributors to US inflation and CPI, education and health care. I'm like, you can do something different there. And maybe that's where a significant amount of value will be.

Sam Lessin: (31:07) I think it's sad from a Silicon Valley story if the answer is like, well, the money's all going to be made from regulation. I don't disagree with you for what it's worth.

Emad Mostaque: (31:16) Disrupting regulated industries, which is different.

Sam Lessin: (31:19) I do believe that someone's going to make a lot of money on AI regulatory compliance. There's no question.

Emad Mostaque: (31:24) AI insurance. There we go. Easy one.

Sam Lessin: (31:27) A hundred percent. There's a bunch of things that are really sad things that you have to do. People will make money on it. There's no question that people will find niche markets. They're super boring and not the type of thing I want to be involved in, but yes. Some enterprise investors will make bank on whatever Europe comes up with, certifying your models are compliant with GDPR 8.2 to deal with data removal requests. That will happen as this stuff happens. I'm pretty uninspired by that. I think it'll be pretty sad if the net outcome of new opportunities in AI is just going to be opportunities to interface with government and rein it in.

Emad Mostaque: (32:13) It's a regulated industry. The example that I have there is education and health care. One of the things we work with a range of charities and multinationals on is deploying tablets into entire countries in Africa with AI that teaches and learns. You give every kid a tablet, the Young Lady's Illustrated Primer. What does that do to an entire nation? The only thing that's been provably shown to work in education is the Bloom effect, the two sigma effect. Right now, a sister charity, Imagine Worldwide, has been deploying the Global Learning X Prize, adaptive learning, and we're teaching 76% of kids literacy and numeracy in 13 months at one hour a day, with older kids teaching younger kids. I look at this technology and I'm like, there are certain areas where there's a gap that nothing could fill before. What if you had an AI tutor for every child? What does that look like? What if you had a hundred AI tutors for every child?

Sam Lessin: (33:00) I get it. I do think that we can always go back to the industries that tech has been trying to disrupt for a million years and for lots of structural reasons has not and say, "Ah, but now with this new tech, we'll disrupt it." I look forward to the years of debate in the US between the teachers' unions and people trying to deploy tablets for AI. We can say, "Oh, no, we're going to do it in Africa. Skip the regulator, the teachers." But I'm just saying it's like, yes, there's always hope that the next wave of technology will somehow unstick a bunch of problems technologists hate because of the regulatory or the structural issues with them. But I have no confidence that this one is meaningfully different.

Emad Mostaque: (33:39) This is the question, structural issues. Regulation is one thing. You look at Byju's, some of the other Indian education companies, you look at the Chinese ones across emerging markets. Maybe it will be the case here, and this is what I believe, that much of the productivity enhancements, aside from maybe coding and things like that, which we can get onto, and the biggest leaps will happen in the global South. Because they leapt to mobile, and there's a whole mobile economy and massive companies created from that. What if they make a leap to intelligence augmentation with this technology? Because right now, they can't service that. Now they could potentially service it, given the decreased cost of creativity, of engagement, and other things, from education to health care to other things.

Sam Lessin: (34:24) I think if your argument is that there's a bunch of countries outside of the US that have lagged in a bunch of infrastructure effectively or ability to execute certain things in education, that we'll be able to, a la the cell phone, have a leapfrog moment and move forward. Yeah, I don't object to that. I think that's basically true. Again, it goes back to the thing where I'm excited about the US. I think the US lives in the future, relatively speaking, to most other people and countries. I think the thing most people are excited about is how AI changes the top of the top. I agree with that. So I think if your argument is it doesn't change the top of the top, but it does kind of catch up a bunch of the third world, I do think that there are places that'll be true.

Emad Mostaque: (35:08) Well, so let's look at the top of the top then. I think Microsoft put out that 50% of all code is AI generated on GitHub now from Copilot, and there's 40% improvement in efficiency. My top coders really enjoy it because they train their own models because we have code models too, and they are shipping more and better code. What do you think about it with respect to that industry? Because I don't see a large industry, which is technology, disrupted.

Sam Lessin: (35:34) The only thing that I actually think is awesome for ChatGPT effectively is, I'll call it, Stack Overflow 2.0. It's great for that. If you think about it, why is it great for that? I think it is the perfect problem for the existing technology we have. You have a ton of open source code that these models can look at. Plus, you scrape all of Stack Overflow, which bye bye Stack Overflow. That goes back to the whole copyright issue as well as the issue of where some of the inputs come from. Plus, the nice part about computer code is that it's test driven in a lot of cases. It either passes the test or it doesn't pass the test. You have the perfect dataset of digital only self-contained reality, which I totally agree ChatGPT is great at. Frankly, I'm the type of person who codes, but I would never consider myself an engineer. It makes coding for me so much more fun because all this stuff I don't want to deal with, like what is this random error? What package do I have to install to manage this? It's all great at. Now it does lie, and it does make up wrong answers, and it's not perfect. But I fully agree that the Copilot-esque thing is very powerful and a really great specific use case. And I do agree that, talking about business models or what happens, Stack Overflow is the poster child. Stack Overflow is the Yelp of this generation. You know how Yelp had this huge lawsuit with Google that's gone on forever because Google basically just stole all their results? Stack Overflow is going to be that of this generation because they are screwed. It is a great example of a place where the tech is better because it was basically lifted.
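
Sam's point that code is "test driven" is exactly what makes it such a clean target for generation: candidates can be filtered mechanically. Here is a minimal sketch of that generate-and-test loop, where generate_candidates is a purely hypothetical stand-in for any Copilot-style code model, not a real API.

```python
# Generate-and-test filtering: keep only model-written functions that pass the
# unit tests. `generate_candidates` is a hypothetical stand-in for a code model.
def generate_candidates(prompt: str) -> list[str]:
    # Placeholder: imagine these strings came back from a code model.
    return [
        "def add(a, b):\n    return a - b",   # buggy candidate
        "def add(a, b):\n    return a + b",   # correct candidate
    ]

def passes_tests(source: str) -> bool:
    namespace: dict = {}
    try:
        exec(source, namespace)        # compile and define the candidate
        add = namespace["add"]
        assert add(2, 3) == 5          # the "test suite"
        assert add(-1, 1) == 0
        return True
    except Exception:
        return False

survivors = [c for c in generate_candidates("write add(a, b)") if passes_tests(c)]
print(f"{len(survivors)} of 2 candidates pass the tests")
```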

Emad Mostaque: (37:21) Yeah. It becomes very interesting as well because now what you have is regulatory arbitrage, like the good old double Irish with the Dutch sandwich on taxation, where Israel and Japan have said, you can scrape anything for any reason, which is kind of crazy, commercial or otherwise. Maybe you scrape in one area, you train in another, and you serve it up in a different country. I think this technology is kind of inevitable. But then what is the implication of that? My take is that as we move through the next five years or something like that, the nature of coding will change. I started coding, what, 22 years ago. We had assembler and Subversion and stuff like that. Kids these days have it so easy with GitHub and all these libraries. What does it look like in a few years when you've got these technologies where you can describe something and start building apps? What does the whole ecosystem look like, again, when the cost of creating these things goes to zero?

Sam Lessin: (38:11) It'll just make them much less valuable. Right? That's what it basically comes down to. What is remaining valuable is distribution and data. Right now, you can be a great engineer, solve a problem, whatever, and there's value in that. You can create a product that's actually worth something. If everyone can make products, theoretically, that cost nothing or really easily, then there's just no leverage in that anymore. And, again, this goes back to who wins. Who wins are the people with distribution and data. That's the answer from existing players. Now to your point about regulatory arbitrage in data, I think this is really the sad part about a lot of this AI stuff. Everything is going private. That's what the net of this is going to be. Anything that has historically been an open dataset or people are able to say, "Okay, I'll share this, but in return, I get traffic or notoriety," and that's a fair economic trade, it's over. So what's going to end up happening is walls are going to go up everywhere. Everything's going to go private, and that's going to be the interesting question about where you end up from all of this stuff from an economics perspective in the next few years. For what it's worth, this has happened many times before. This is not the first time in human history this happens. If you look at the news industry, people are always like, "Oh, the news industry used to be so great and then whatever." It's bullshit. The number of times in the history of news, basically, you had growth in distribution, things get super spammy, the elites retreat to private newsletters, it cycles. It's happened like six or seven times. I think this is just going to be a hard pivot. In some ways, I think the biggest thing that I'm very confident of is that AI will be the death of the public web and will be the death of a lot of open information, specifically because of what you said, which is that it makes data too valuable and too important.

Emad Mostaque: (39:55) The reality is AI doesn't need any more information because you've already scraped everything.

Sam Lessin: (39:59) It doesn't for entertainment, and that's why I think entertainment is screwed. The oracle problem in crypto, how do you keep a digital system in sync with reality and keep it meaningful, is exactly the same problem that AI has, which is that it can go in any direction it wants as long as the data is self-contained. The second it's not, and it's trying to be synced to reality or the real world, it does need more data. It does need to be continuously updated or drifted in whatever direction current events go.

Emad Mostaque: (40:31) But then you have public broadcasting data. You have some of these other things as well, where the oracle problem becomes a lot easier when you can do retrieval augmented models and other things like that. I mean, there are sources of verifiable data for licensing. Maybe it then comes down to the use case.
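
The "retrieval augmented" pattern Emad mentions is simple in outline: look up the most relevant source documents first, then let the model answer only from what was retrieved. A schematic sketch follows; embed and ask_llm are hypothetical stand-ins for whatever embedding and language model you plug in, not a specific library API.

```python
# Schematic retrieval-augmented generation: retrieve the documents closest to a
# query, then prompt the model with them as grounding context.
# `embed` and `ask_llm` are hypothetical callables supplied by the user.
from typing import Callable

def retrieve(query: str, docs: list[str], embed: Callable, top_k: int = 3) -> list[str]:
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = lambda v: sum(x * x for x in v) ** 0.5
        return dot / (norm(a) * norm(b) + 1e-9)
    q = embed(query)
    # Rank documents by similarity to the query embedding.
    ranked = sorted(docs, key=lambda d: cosine(embed(d), q), reverse=True)
    return ranked[:top_k]

def answer(query: str, docs: list[str], embed: Callable, ask_llm: Callable) -> str:
    context = "\n\n".join(retrieve(query, docs, embed))
    prompt = (
        "Answer using only the sources below; say 'not found' otherwise.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    return ask_llm(prompt)
```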

Sam Lessin: (40:46) My biggest point is that they're going to be increasingly cut off if there's no economic model for supporting them and they're all getting abstracted and scraped by models.

Emad Mostaque: (40:54) I would disagree with this. I made it deliberately open so that we could highlight how bad scrapes are. I think they're unsafe as well. And we're the only company to offer opt out, but we work with multiple governments on national datasets and national models using broadcaster data and other things like that, that are continuously updated as national infrastructure. Because I think these models are a form of infrastructure. They're a weird type of primitive. They're like a mega codec type thing, where stuff goes in, stuff comes out. But people do want to have relevance and updates. So I think you will have an open version that is updated continuously. But then maybe, again, that's where value is. Which parts of information go private and are served up through models and who is providing them? Is this financial data? Is it this? Is it that? And what is the quality of these fine-tuned models? Because what you've just described as well is a bit of Armageddon for consumer apps in a way, because it goes down to zero. So then what becomes useful? Is it then that Apple takes a massive leap forward because they've got this identity infrastructure, and they have all the data there. They can do apps quicker than anything else.

Sam Lessin: (41:57) Yeah, except for the fact that Apple's entire shtick about encryption and privacy is going to make it literally impossible to play in this. I actually... I think Apple's role in the future of this stuff is going to be one of the most interesting big tech questions because they have positioned themselves so hardcore against all the things you would need to get leverage from AI. It's going to be very interesting to see how they navigate. Google, fine. Meta, fine. But I am very skeptical of what Apple's AI approach is going to be. Or I will say on the flip side, they're incredible at government relations and PR. So if they have to figure out a way to totally recant on all their encryption and their approaches to this type of stuff and have a new model where they somehow are the privacy heroes but are also doing AI, I'm very curious how that's going to work.

Emad Mostaque: (42:45) They can keep encryption, and they can keep a customized model. Because, again, you don't need to take everyone's data to train it. You have a generalized model.

Sam Lessin: (42:53) Local model, many models on your local device with a general model behind it.

Emad Mostaque: (42:58) Exactly.

Sam Lessin: (42:59) In practice, we'll see how it plays.

Emad Mostaque: (43:03) With an embedding layer potentially, but it is very interesting because, again, the technology doesn't matter. It's the use that matters. What use can you get out of it? Yesterday, they had the thing where they said it learns automatically with a little ML model in there. It learns through a small embedding layer. But they don't talk about the technology that much because Apple always just talks about what the use actually is. I think the question is, what is disruptive? What can engage more? What can attract more? You've got apps coming down there, which is why the bar generally rises. I think we see this with technology as it goes. The bar generally rises, and so attention becomes even more difficult where it does come down to distribution, things like that. But, I mean, what's your take on the nature of virality in this type of age? Because these things are good at optimizing for virality potentially. Again, you can build better content. You can build better engagement once you get the funnels down. And that is the start of many of these apps.

Sam Lessin: (44:04) Yeah, I just think virality is a war in a lot of ways. Look, I think at the end of the day, will news feeds get more compelling for people? Absolutely. Will ads get more compelling for people individually? Absolutely. There's no question that these things are true and the existing players will get the vast majority of the pie of that type of stuff. I do think you'll tend towards more and more niche interests. Let's talk about porn for a second. Porn is always fascinating as a leading edge thing on this type of stuff. You can go on Reddit and find the weirdest porn in the world of all these sub communities that have filtered into these weird things that they're interested in. AI will make this ten times weirder. Or a hundred times weirder. And people are just going to keep filtering. Now why does this weird filtering happen? I mean, there's a bunch of reasons and different things. I think part of it, moving away from porn for a second, the broader ecosystem is people are desperate for a sense of purpose and place. The reality is the internet makes you feel very small because there's millions of people just like you, and that encourages people to seek out right-sized communities that are smaller and smaller. With AI, I think the interesting thing will be when it comes to attention and things like that is, look, for the first time, every single person can have hundreds of characters that like and support them all the time. The math of it all, you used to be, okay, you're trying to find a community that's the right size, that knows you, that you have a place in, you're valued in. But you're not necessarily the hero. So you go find a smaller niche or a different niche where you're more of the hero or you create a spin of it and try to lead that. I think a future where, basically, you log into social media or whatever, and you're like, "Hey, I'm Sam." And it's like, "Cool. What type of people..." Instead of who do you want to follow, it's like, who do you want to follow you? And you end up...

Sam Lessin: (45:56) With like hundreds of AI characters, or frankly, I think what's more likely is the mix of humans and AIs, and you're not really sure which is which. But they're the ones commenting on your post, you know, "You're fucking great" or "Here's a cool question" or whatever. Like, I think that's the world we're going to end up in, is more and more segmented niches. Right? Where the ultimate end would be the "Her" model where you just have one AI girlfriend. I'm not sure we'll go there. I think that's really hard to pull off. But if you told me that in the future, you know, on Twitter, good example, you know, everyone has 100,000 followers and you're not exactly sure who's a person and who's a robot, and they all fucking love you and it makes it super compelling and you feel great, like that's a very plausible future.

Emad Mostaque: (46:40) Oh man, birth rates are going to go down. Have you seen that chart of young male virginity in the US from The Washington Post? It went from 8 percent in 2008 to 27 percent in 2018. Did you see what happened with Replika on Valentine's Day this year? Replika was originally an app that was designed to be your mental health buddy, until they realized you could charge $300 a year for erotic role play. Until February 13, 2023, when they got a message from Apple saying, "Shut this off." So on Valentine's Day, they shut that off, and then 68,000 people joined the subreddit the day after and said, "Why'd you take my girlfriend?" You know, it was quite a massacre.

Sam Lessin: (47:20) That's where we're going. And like, look, there's a whole history, I mean, we'll go back to porn for a second. It's always fascinating, such an interesting base human thing. But the whole dynamic of how Tinder has affected sexual inequality is fascinating. There's all these really interesting studies on this. Technology has a deep impact on this type of stuff. But if people ultimately care about validation, titillation, whatever it's going to be, there's no question that's one place you and I will agree that AI does dramatically shift the power on these things. They will end up with weirder subcommunities. Here's my question to you, though. We talk about power dynamics. I still think, and I might be wrong about this, I will admit because it's a bit of a niche, weird industry, but my bet is that Pornhub is still the winner. I assume they're the biggest porn company, or maybe it's Reddit. The place where porn lives doesn't shift. The platforms don't shift. It's just going to be weirder and weirder stuff and more and more AI generated.

Emad Mostaque: (48:23) I don't know. I mean, to be honest, Pornhub isn't that big. MindGeek is the company behind it. They were just bought by Ethical Capital Partners because, you know, life is weird. Reddit could be a big winner of this, but I think...

Sam Lessin: (48:36) I mean, Reddit is already just full of porn, right? So it's like, I just assume more full of porn.

Emad Mostaque: (48:41) I'm sure they're going to be very smart about this in generating porn. But really, what you're saying is long AI waifus. This loneliness that they fill, that could be a good investment theme, because, again, there's a whole layer of life stuff that then emerges around engaging these people. I think it's going to happen.

Sam Lessin: (48:58) But I don't think it's a good investment theme. Let me just go back to this: just because it's going to happen doesn't make it a good thing to invest in. And to me, it's really unclear where the leverage is in that. You'd have to believe that somehow you're going to have dramatically more compelling characters than the next company also providing them. I just don't know that there's any lock-in, and I don't think there's any other moat. So it's really unclear. Just because it's going to happen doesn't make it an investment.

Emad Mostaque: (49:28) Well, I think there is. If you look at hook dynamics, there is that trigger-reward dopamine rush, and lots of stuff that you invest into each character. So there's probably going to be a lot of first mover advantage here. On the other side, you have the licenses, you have the IPs that can be brought to this. Not on the porn side. There's a whole gamut from porn to your mental health buddy. I mean, I think ultimately, if you're basically saying, is there a solution to loneliness and making you feel good? There's a whole gamut of different things that can happen here where you've got IP, where you've got these other things. Again, the example I think that comes from that is the Hololive influencers. They're going up like that.

Sam Lessin: (50:07) Not to push back on it, I mean, it sounds like you're agreeing with me, which is the leverage is in IP or the leverage is in distribution for this type of stuff. Because the pure tech stuff to it, it's like, yes, there'll be gazillions of virtual girlfriend or whatever things, but those are not platforms you can invest in, and they're not truly valuable even if there's a lot of them.

Emad Mostaque: (50:30) I think bringing it all together is something that will take time, so I think there will be a lot of first mover advantage. With Stability, again, data and distribution are key. My thing is: take the best of open, which we stimulate and we fund, build a stable series of models with the data, and then get distribution for it. So open data, data we can license, national data, and then we take it through cloud, system integrators, and on-prem, and I take a share of all that revenue. So I agree that's core to a good business. But what I'm saying is that in this particular area, going from porn at one end to mental health buddies at the other, I don't believe there are established distribution networks. I think there'll be a lot of opportunity there for first mover advantage.

Sam Lessin: (51:11) In the history of investing, first mover advantage has generally turned out to be a pretty bad investment thesis.

Emad Mostaque: (51:18) Okay. Maybe not first mover advantage. Let's say the advantage goes to the first proper entity that takes advantage of classical good-company dynamics. There aren't good companies here yet.

Sam Lessin: (51:31) Yeah. Maybe. Again, I think it's a little hard to know exactly. There's a huge spectrum here, so it's hard to react to it exactly. But I would say, look, I think we're agreeing that entertainment's going to get more entertaining and cheaper to produce. I think we're agreeing that IP is very valuable, and maybe it becomes more valuable. So maybe the answer is buy Disney stock, because Elsa is going to be a way cooler character when she... That's kind of obvious, right? And I think we can all agree on that. What is not clear to me is, outside of the IP plays and outside of the existing distribution plays, what AI really unlocks as a new disruptive vector for this type of stuff. Because I do think there are some pure AI type things you can do. Again, we'll talk about the AI girlfriend thing. It's just unclear what the payoff is there, because I don't think there are any moats.

Emad Mostaque: (52:24) Well, I think if you look at it, you can scale a certain type of human endeavor, shall we say. For example, there aren't enough therapists in the world. It is a regulated industry, but at the same time there is a gap for therapists, just like the one the meditation apps kind of stepped into. And that's how you got Calm and some of these other things that were huge. Now this is more engaging. So I think one of the areas to look at, if you're looking for companies that can come to the fore, is: where can you not find enough people to fill some of these roles, and can you build good experiences around that, because there isn't an existing solution. This is why, like I said, I look at the global south and I'm like, there are lots of gaps. I look at here, and there are, again, gaps. Where are the gaps you want to go after? Because you can basically create a market. You need to fulfill a key customer need. And so, again, I look at mental health in particular, and that goes from the porn AI waifus all the way through to proper mental health therapists. There's a huge gap in that particular market, and there's a huge chasm of loneliness, and a lot of products could be built that are generally useful and can go quite fast, enabled by this technology where they were not enabled before.

Nathan Labenz: (53:31) I think this has been fascinating. I have a handful of concrete prediction questions that I want to get you guys on record with, if you're up for it, and see whether your predictions are similar or different, and then we can obviously check back in the future. How does the market for inference shape up? And as a jumping off point, how do you think it might look different from the current cloud infrastructure market?

Emad Mostaque: (53:58) I think inference will be the vast majority, but I think it's like GPUs to ASICs for Bitcoin mining. Because these are big research artifacts built in PyTorch. The output is a tiny little file of binaries, and that's not a complicated thing to run inference on. You see Inferentia 2 on Amazon's cloud. You see the TPU v5s and others. I think there'll be more and more customized solutions as you move from that research to engineering bit. And then the cost competition goes massive in a few years' time. Over the next few years, I think there'll be a shortage because everyone will try to use this technology. There won't be enough. And then eventually it'll move towards the edge, because I think there are just orders of magnitude of optimization that we can do from here.

Sam Lessin: (54:37) Yeah. I mean, this is a little bit beyond my direct wheelhouse, but I think at the end of the day, what I'd say is I highly suspect, because the distribution isn't different and the patterns aren't different in any of this stuff, that what you're going to see is that everything from chipsets all the way through to cloud providers looks basically the same as it does today. Everyone's just making more money.

Emad Mostaque: (54:57) Yeah. And the inference is also interesting because in the cloud, you just move to wherever the cheapest inference is for these models. And so it's quite a mobile thing. So you've got NVIDIA coming hard for that reason.

Nathan Labenz: (55:07) Question 2. What happens to the price of primary care medicine in the United States over the next 10 years?

Sam Lessin: (55:16) Up.

Emad Mostaque: (55:17) Unfortunately, given the issues, I think he's correct. It should go down, but the regulatory capture is far too strong, unless something major happens.

Nathan Labenz: (55:28) Question 3. You guys both have said there's a ton of junk out there. It seems like, broadly, we're not expecting that many major incumbents to be disrupted. What would you guess would be the most likely incumbents to be disrupted if you had to pick some?

Sam Lessin: (55:46) Stack Overflow?

Emad Mostaque: (55:47) I think BPO, the business process outsourcing industry. Right?

Sam Lessin: (55:50) Whoever owns Stack Overflow.

Emad Mostaque: (55:52) I think Prosus is probably fine. You've seen disruption in Chegg and other things. We didn't really get into this, but I do think that some SaaS companies with lower switching costs will be at risk from some of these larger context window models, where you can put 10,000 words of instructions in, because some of them are relatively basic in that way.

Sam Lessin: (56:13) For what it's worth, I think we once again mostly agree. The only things I think are at risk are things like Zapier or some of these kinds of tools, and they're kind of 50/50 because they also could get way more powerful. But I do think there's a bunch of SaaS tools that probably end up looking more like features, where they maybe used to look like companies, because of AI. But real incumbents, public, big, multibillion dollar companies, I don't think any of them are really at risk of disruption. I think they're all just going to get stronger. I think a bunch of startups or Series A companies are going to get wiped out or all of a sudden won't be able to grow, because I think the big guys are just getting better faster.

Nathan Labenz: (56:53) Will the big tech companies that are currently open sourcing, for example, Meta, Salesforce, will they continue to do so, or will they stop?

Emad Mostaque: (57:07) Well, I think Meta has moved to noncommercial licenses for all their open source. I think Salesforce is kind of continuing to do full open source. I think it's just very difficult, because the regulatory environment becomes tougher and tougher, and it's not core to their business to open source.

Sam Lessin: (57:22) I think that it would be 100% driven by business models. So Meta, if you think about it, is incredibly well positioned should the level of AI continue to grow in the world. It's like the way they're going to monetize that is having dramatically better ads and dramatically better content in a bunch of ways. And so I think they have a heavy incentive, if you think about it, to keep open sourcing it. They want the talent. They want... You know, the reason companies also open source is there's a real internal-external interplay in terms of how you build an ecosystem that attracts great talent and things like that. So I think those still keep happening. But I think the list of people who are supporting open source stuff will shrink as people get super competitive about this stuff, and the battle lines are drawn.

Nathan Labenz: (58:12) If you had a billion dollar company of any kind, could you come up with a story? Could you identify a type of company where it wouldn't make sense, or, to frame it even more decisively, where it would be defensible not to be investing, say, at least a million dollars in figuring out generative AI today? In other words, is there anywhere where this is not relevant?

Sam Lessin: (58:38) I mean, I'm sure there is, but nowhere I can think of offhand.

Emad Mostaque: (58:41) I think it's relevant just about everywhere just because you always get a level of productivity increase. But, you know, as Sam said, for a lot of industries, this is a sustaining innovation. It's just the next stage as opposed to massively world changing, shall we say.

Nathan Labenz: (58:56) What happens to the marriage rate and the birth rate in, say, the United States as AI companions of all sorts become available?

Sam Lessin: (59:08) It clearly goes down everywhere.

Emad Mostaque: (59:09) I mean, like, look at South Korea. They're at 0.8 now on their fertility rate, thanks to video games and a few other factors.

Sam Lessin: (59:16) There are negative and positive ways to spin this, actually. I personally have the negative take on this. I think it's bad for the future and a bunch of other things. But here's the reality. It's just a simple economics thing: if the world is more entertaining, that makes doing unentertaining, hard, long things, like having kids and raising them, less appealing. Tinder is going to hurt the birth rate. AI is going to hurt it too. Again, it's a sustaining innovation in that sense: technology generally is going to hurt the birth rate.

Emad Mostaque: (59:45) Yeah. And then you see places like Japan, with declining birth rates, really embracing this because they want the productivity increase, which is the flip side of this. So it'd be more productivity, fewer people.

Sam Lessin: (59:55) Yeah. I mean, that's the irony that you talk about a lot. In The Diamond Age you referenced earlier, the really long-term sci-fi story is pretty simple: at the highest level, technology will drive there to be fewer people, and then because there are fewer people, we need more technology. And it becomes symbiotic. That's the really sad part. It's the sort of thing people think about and go, oh shit, the entire human population is going to fall off a cliff because we're entertaining ourselves to death.

Nathan Labenz: (1:00:23) Do you think any AI leader, you know, OpenAI right now or somebody who takes the leading position from them in terms of having the best model, can sustain super high gross margins for a few years into the future?

Emad Mostaque: (1:00:43) Based purely on the AI? No. It needs to be distribution and data.

Sam Lessin: (1:00:47) I think on the proprietary side, it's yes. Unless it's super data unique, you're going to zero. I think that you have Google and OpenAI as uneconomic actors, and that's incredibly difficult.

Nathan Labenz: (1:01:00) And so just to unpack that, you mean that, basically, they won't allow, they don't intend to make a ton of money on this, they won't allow anyone else to either because they're going to provide it at cost?

Emad Mostaque: (1:01:09) They don't care about... Yeah. They're probably pricing under cost to get the data. You know, again, they have different business models. Google cost-shifts all the time. This is why I went the other way: open models plus private data, and standardizing that.

Sam Lessin: (1:01:21) No one's making money on open models alone.

Emad Mostaque: (1:01:24) Well, I mean, there is a way. There is a way. So what I do with my business model is standardizing it and then providing all the services around it as a blueprint for my partners to take forward.

Sam Lessin: (1:01:36) Yeah. I mean, there's a consulting nexus version of this that you can probably pull off. Consulting models, I think, are obviously what you're pursuing, but they're very difficult.

Emad Mostaque: (1:01:47) I build the models. I give it to my consulting partners, and they take it forward. That's my business.

Nathan Labenz: (1:01:52) My theory of Stability has been, well, a partial theory. There are obviously a lot of facets to the organization, but I kind of view Stability as the provider for the nonaligned countries, if you will. Those that are like, we definitely don't want to buy from corporate America. We want to own our own. We want control. Those folks seem like they have nowhere close to the resources domestically to build their own systems, but they do have a point of pride and also just practicality. If you're an African government and you want to get your own legal system into a language model, who's going to do that for you? That feels like a real sweet spot for Stability. How much of the future do you think is serving that third set of countries?

Emad Mostaque: (1:02:42) Look, we've created subsidiaries in dozens of countries, bringing in all the top family offices for data and distribution, and building national models and national datasets on top of broader data. We take a subset of that and make it open, and we've got the rest for our commercial side. So I think the global south is the focus for us, plus some of these big multinational companies we're building dedicated teams for, because we're the only company in the world that can build you a model of any single modality or type. Is that sustaining? Who knows? But it's a decent business. So my thing was build a decent business, doing decent stuff, doing something different to other people. I'm sure there'll be more competitors, but again, let's see how it goes.

S4: (1:03:18) Omneky uses generative AI to enable you to launch hundreds of thousands of ad iterations that actually work, customized across all platforms with a click of a button. I believe in Omneky so much that I invested in it, and I recommend you use it too. Use CogRev to get a 10% discount.
