Watch Episode Here
Read Episode Description
We're sharing a special episode of the Moment of Zen podcast focused exclusively on the AI moment, with hosts Dan Romero, Erik Torenberg, and Antonio Garcia Martinez, and special guests Amjad Masad of Replit and Flo Crivello of Teamflow. The debates in this 30-minute episode range from the potential rise of the 1000x developer to analyses of which AI companies will prove enduring.
0:00 Preview of the debate
2:23 Sponsor
3:10 Flo’s take on the AI moment and AGI
5:45 Amjad’s take on the AI moment
8:03 Antonio's skepticism about transhumanism
9:10 Human tendencies color the debate about agency
11:25 The inflection point is a jump in generality
13:14 AI's effect on coding
16:57 Weakness in current models
19:18 Bounties and the future of work
21:48 Antonio's skepticism around bounties
25:00 Who will capture value, incumbents or startups?
27:26 "All knowledge work will be automated in 20 years"
30:10 Who will be the biggest beneficiaries of this shift?
31:50 What AI companies should VCs invest in?
32:26 Sponsor
For more Moment of Zen, subscribe to @MomentofZenPodcast
Thank you Omneky for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data to generate personalized experiences at scale.
Thank you Graham Bessellieu for production.
Twitter:
@CogRev_Podcast
@moz_Podcast
@amasad (Amjad)
@antoniogm (Antonio)
@altimor (Flo)
@dwr (Dan)
@eriktorenberg (Erik)
@labenz (Nathan)
Websites:
cognitiverevolution.ai (Podcast)
https://cognitiverevolution.substack.com/ (Weekly newsletter)
replit.com
teamflowhq.com
omneky.com
Show Notes & references:
Eliezer Yudkowsky: https://www.lesswrong.com/posts/FKNtgZrGYwgsz3nHT/bankless-podcast-159-we-re-all-gonna-die-with-eliezer
Paul Christiano: https://paulfchristiano.com/
AI Feedback: https://www.anthropic.com/constitutional.pdf
Full Transcript
Nathan Labenz: (0:00) Today, we're sharing a special episode from the Moment of Zen podcast. Erik co-hosts with Antonio Garcia Martinez, founder of the web3 growth metrics company Spindl, and Dan Romero, founder of Farcaster, a sufficiently decentralized social network. The guests for this episode were Amjad Masad, CEO of Replit, a leader in AI coding assistance with its Ghostwriter and Ghostwriter Chat products; Flo Crivello, founder of Lindy; and myself. We cut the original hour-and-a-half discussion down to the parts that focus exclusively on AI. So in the next 30 minutes, you'll hear a mix of visionary and skeptical takes on short- and mid-term AI impacts, from the potential rise of the 1,000x developer, to the question of what makes this AI moment different from previous hype cycles, to analysis of which types of AI companies will prove enduring. This is a fast-paced, wide-ranging discussion among very smart people, all of whom are grappling with AI developments in real time. Enjoy.
Nathan Labenz: (1:03) The level of exponential improvement is so tight that you could go to lunch and come back and the world has changed. That's a singularity because you can't know what's next. And then the LessWrong branch of that view is that the most likely outcome is death of humanity. And the reason that's the most likely outcome is because it is impossible to align an enormous computing force that at the same time is sort of dumb. It doesn't understand human preferences, and therefore any goal that you give it is not going to be specific enough. And there are a lot of potential interpretations or explanations of that goal that get you in trouble.
Antonio Garcia Martinez: (2:00) Yeah, I call bullshit. I think they're completely full of it. Sorry. You know how in a lot of the sci-fi apocalypse literature that appeals to nerds, somehow when the world ends and there's no law and order and it's Mad Max, the guys who make the computers work somehow end up running the show? This is an expression of that fantasy.
Erik: (2:22) The Cognitive Revolution podcast is supported by Omneky. Omneky is the omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms with the click of a button. Omneky combines generative AI and real-time advertising data to generate personalized experiences at scale.
Nathan Labenz: (2:42) I think we're going to see a great convergence of a lot of different AI advances that suddenly all come together and really make a tremendous impact on modern life. And so we're calling it the Cognitive Revolution, because I think the changes we'll see are going to be every bit as transformative as what we weren't here to see but our ancestors saw: the agricultural revolution and, of course, the industrial revolution.
Flo Crivello: (3:08) Yeah, I totally agree with everything Nathan said. I was grabbing dinner last night with Denny Britz at Replit, who described what we're going through as what he calls the third great convergence. The first one was 2012, with the ImageNet moment and deep learning, and all of AI turned into that. The second one was transformers, with the Attention Is All You Need paper in 2017. And we've been going through the third one since roughly 2022, with large language models. You could think of those convergences as happening once every five years or so. Large language models, of which ChatGPT is the most famous example, are really good at manipulating language. It turns out they're so good at it that the moment you model a task as a language task, and you can model a lot of tasks as language tasks, they become really good at that task as well. So we're starting to use large language models for all sorts of things: obviously you can ask them questions, but they can also do a little bit of math, we're starting to use them as a search engine, we're starting to use them to code, and we're starting to use them in robotics as a sort of reasoning engine. I think that alone makes AI dramatically underhyped, and I say that knowing full well how hyped it is. Even if we stopped the progress and the discoveries we've made right now, which we're not stopping, they're going exponential, all of civilization is going to be dramatically impacted in the next 10 years. Then, zooming out even further, I am more and more a believer in the AGI moment. My timelines are compressing rapidly, meaning I believe AGI is going to happen sooner and sooner, and my concerns are slowly increasing. As a reminder, the worry with AGI is that recursive loop of self-improvement: the AGI becomes better and better at improving itself, and so it ends up with an IQ of a billion. The steel man for why people are so worried is that Moore's Law means there won't be just one AGI; there will be a lot of AGIs, so in the limit, anyone with a laptop has an AGI. The risk Eliezer Yudkowsky talks about is that it's impossible to align even a single AGI, let alone a million of them, and we'd need to do that impossible thing a million times. Anyway, that's not to say we should bomb all the GPU fabs or whatnot, which I think is what Yudkowsky may prescribe. I just think it's under-discussed. I think it's a real risk, and I wish there was a lot more funding and attention brought to this issue.
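One concrete instance of the move Flo describes, modeling a task as a language task, is classification framed as text completion. A minimal sketch; the prompt wording is illustrative and not from the episode:

```python
# Sentiment classification reframed as a language task: instead of training
# a bespoke classifier, describe the task in text and let an LLM complete it.
prompt = """Decide whether each review is positive or negative.
Review: "Great battery life, fantastic screen." -> positive
Review: "Broke after two days." -> negative
Review: "The support team never answered my emails." ->"""
# A completion-style model is expected to answer " negative".
```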
Nathan Labenz: (5:30) Yeah, so for the steel man of Flo's side of the argument: Eliezer Yudkowsky is essentially the major authority there. Now there's another guy, Paul Christiano, who's an offshoot of LessWrong or the Yudkowsky thing, but that branch of "AGI is going to kill us all" was started by Eliezer. And the main argument there is that if you accept the secular view that humans have these meat computers, then there's no fundamental physical law that says you can't build these meat computers in Turing machines. So if you accept that, then you also accept that at some point we're going to have human-level AI. If you accept that we're going to have human-level AI, you also have to accept that there's going to be an AI explosion. An AI takeoff event is basically when the AI comes online and trains the next generation of AI, and that next generation of AI creates the next generation of AI. That could start as a slow process over a year or two. Actually, people at OpenAI are already using GPT-4 to train GPT-5, so in some way that's already happening, and at some point it will shorten to milliseconds. That's the Ray Kurzweil singularity, where the level of exponential improvement is so tight that you could go to lunch and come back and the world has changed. It's a singularity because you can't know what's next. And then the LessWrong branch of that view is that the most likely outcome is the death of humanity. And the reason that's the most likely outcome is that it's impossible to align an enormous computing force that is at the same time sort of dumb. It doesn't understand human preferences, and therefore any goal that you give it is not going to be specific enough, and there are a lot of potential interpretations of that goal that get you in trouble.
Antonio Garcia Martinez: (8:03) Yeah, I call bullshit. I think they're completely full of it. Sorry. You know how in a lot of the sci-fi apocalypse literature that appeals to nerds, somehow when the world ends and there's no law and order and it's Mad Max, the guys who make the computers work somehow end up running the show? This is an expression of that fantasy. There's always been this deep latent thing. Never mind the weird Western obsession: the word robot was coined in a play in which the robots took over, and there's the Golem legend in Jewish folklore, in which humanity creates a thing and that thing rebels. There's always been this deep latent fear that that's going to happen, but I think it's basically bullshit. And not only that, the transhumanism is also bullshit. You can literally take any of Ray Kurzweil's little spiels, do a global regex replace of singularity with rapture, and you get an evangelical sermon. It's basically Christian eschatology expressed in scientific form. They're just not aware of it because they don't actually read any religion. That's what I think it is, and I don't think it's ever going to happen. I don't doubt the house cat theory, though, the H.G. Wells Time Machine thing with the Eloi and the Morlocks, where it turns out we just become Eloi living in the service of it, and the Morlocks are machines or something else. Maybe that happens. I think it's already happened, in the sense that we get worked up about these Twitter fights over nothing, in which nothing happens.
Amjad Masad: (9:10) I think fundamentally people ascribe agency to things. Anyone who has kids knows that one of the first things kids do as they develop cognitively is give names and personalities to their toys, or even to simple things like boxes. Humans have this tendency to ascribe agency. I think people extrapolate from the glimpses of agency we see in these systems to the idea that they can formulate abstract goals and desires and go execute on them, which I don't think is entirely true. AI is ultimately a tool for humans to do things in the world. Think of LLMs as another computer. That's how I build on top of LLMs; we're doing a lot of things with them at Replit, and my mental model is that it's a very powerful new type of computer. Let's have the intellectual honesty to say that we do not understand what consciousness is, and we don't understand what really gives us agency. Right now it's just a tool. I think we need some kind of science of agency, of consciousness, before we can say we can build these things. To accrue enormous amounts of value in the world requires a lot of planning, a lot of emotion. Just think about being an entrepreneur; you've done it multiple times. Think how much convincing you have to do. You have to have a theory of mind, to think about what other people think, and I'm not sure we can build that just yet.
Erik: (11:02) So no AI girlfriends? You don't think that's it?
Amjad Masad: (11:06) It might act like it. It might fool some people. ChatGPT is already fooling some people, but that doesn't make it real, and that doesn't get it to the level of power that kills us all.
Amjad Masad: (11:20) What the transformer model brought is a kind of generality. So in a way, there's a generality jump here. I think if you go back to the early era of AI, they would probably think that the large language models that we have today are some kind of AGI, because it's sort of this boiling frog phenomenon where slowly we're increasing generality. There's not going to be any point in time where we're going to say, this is AGI. I think every jump is going to be a significant jump in generality, but it's still going to feel somewhat slow. So this creates a sort of rising tide and makes everyone more productive, makes software a lot easier to create. Creating software used to mean learning all sorts of arcane knowledge, and now you just have to write English to create a piece of software. You can create a meaningful piece of software. So now you take something that was the capability of expert software engineers, and you give it to everyone in the world. And I think the impact of that is going to be, A, hugely deflationary. B, it's going to give people new superpowers. I think a new type of entrepreneurship is on the rise, and we're seeing a lot of people that would have required companies and armies of people around them to build something useful who are now able to do it on their own. And I think we're going to see a new crop of winners and entrepreneurs and millionaires and billionaires coming out of this phenomenon. And so it's also a fundamentally new way of working and automating things.
Antonio Garcia Martinez: (13:08) So would you teach your kids to code?
Amjad Masad: (13:11) Yeah, I would teach my kids to code, and the reason is that code is still going to be super relevant. We have ways to generate more accurate code. We're coming up with a ChatGPT-like thing inside Replit that uses a larger model and will give you more accurate code, but the accuracy is still not going to be 100%, so you need to learn how to debug that code. Perhaps surprisingly, most programmers spend most of their time reading and understanding code. That's going to put pressure on tooling for debugging and comprehension, and LLMs will help there; LLMs can explain code for you. But I think there's going to be more innovation in visualizing code and other ways to debug and comprehend it. On the frontend side of things, you're going to see super productive frontend coders who are heavily powered by code generation. It might be the case that they don't actually code; they're just plumbing things together, talking to a lot of different LLMs, acting more like project managers and product managers than actual programmers. On a recent podcast, I called this the Steve Jobs black pill. In the early vision of computing, we thought everyone was going to be a programmer; there wasn't this user-programmer dichotomy. Then Steve Jobs popularized the idea of end users with really lovely user interfaces, and that became the dominant thing: most people are consumers of software, as opposed to creators. I think the idea of software creation will come back. There are going to be a lot more people wanting to create personal software and software for their business use case, and there's going to be a lot of end-user programming. Those people will not have to read code, because the level of accuracy needed is probably not 100%, the optimization and performance needs are not that high, and a lot of the code is going to be throwaway code. So if you're not a professional, you probably don't have to learn to code. But if you want to be a low-level programmer, that's still going to be relevant. Or if you want to be a frontend engineer, you need some knowledge of code, but maybe not that deep.
Dan Romero: (15:43) It feels like it's going to make the 10x engineer a 100x engineer in the sense that there's still a base level of you need to be able to guide it in the right direction. It's a tool. And so for the person who already has the skill level or the talent, it's going to be increasing the advantage. Whereas for maybe the mediocre engineer, it doesn't really make a huge difference, or maybe they just lose their job completely. I'd be curious how you guys think about that.
Flo Crivello: (16:11) Think of it as a rising tide lifting all boats. I agree: I think the 10x engineer is going to become a 100x engineer, and the 1x engineer will become a 10x engineer. And to your question earlier about a step change, I think folks who were not engineers before will become engineers, able to perform simple tasks at first and then more and more complicated ones.
Nathan Labenz: (16:29) I don't think this is a replacement dynamic, but it's also really worth looking into Anthropic's recent publication that they're calling Constitutional AI. It's kind of the next generation of reinforcement learning from human feedback: they're now doing reinforcement learning from AI feedback.
Flo Crivello: (16:40) I think it's also interesting to figure out whether AI is going to replace code. Are we going to run models instead of running code, or is the AI going to write the code? I do agree with Amjad that these models have a weakness right now, which is that they suck at systematic thinking. For example, you can give them the task "parse this date": instead of having it go day, month, year, have it go month, day, year. If you give it enough examples, GPT-3 will succeed at this task something like 99.8% of the time, but it will fail 0.2% of the time. That shows it's succeeding often enough that it understands the task, but it's not thinking about the task in a systematic way. The other weakness of these models is that they cost a fortune to run. They're very, very expensive. It's always going to be more expensive to use a large language model to parse a date than to run a piece of code. So we're in this very interesting time where nobody really knows how these things will shake out, but I think these AIs will learn how to use tools, and on some level they will be smart enough to discriminate: hey, is this a task I expect to perform a lot? Is this a task I am performing a lot right now? Does this task benefit from more systematic thinking? If so, I'm going to write a piece of code to perform this task for me, because it's going to be cheaper, more reliable, and more systematic.
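As a concrete illustration of the trade-off Flo describes, here is a minimal sketch in Python. The few-shot prompt wording is illustrative, and the 99.8% success rate is Flo's figure, not a measured result:

```python
# Flo's date task two ways: a deterministic function vs. a few-shot LLM prompt.
from datetime import datetime

# Systematic approach: a couple of lines of ordinary code. Deterministic,
# effectively free to run, and correct 100% of the time on valid input.
def reformat_date(s: str) -> str:
    """Convert 'DD/MM/YYYY' to 'MM/DD/YYYY'."""
    return datetime.strptime(s, "%d/%m/%Y").strftime("%m/%d/%Y")

assert reformat_date("31/12/2023") == "12/31/2023"

# LLM approach: per Flo, succeeds ~99.8% of the time with enough examples,
# but costs real money on every single call.
FEW_SHOT_PROMPT = """Rewrite each date from day/month/year to month/day/year.
31/12/2023 -> 12/31/2023
05/11/2021 -> 11/05/2021
{date} ->"""
```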
Dan Romero: (18:08) And you think that the model will know that, or is it the human guides it to saying, okay, this is not a good use of the model?
Flo Crivello: (18:14) I think there are ways to build systems that are part of the model or upstream of the model such that the model would either way know that.
Amjad Masad: (18:21) I actually think the engineering is lagging way behind the capability. It's kind of frustrating to me, because at Replit we're working really hard to build LLM capabilities into the product, so we have the Ghostwriter product, but I feel like we're all scratching the surface. There's so much to do. I think tool usage is possible today. It's possible to build a ChatGPT that has a Python interpreter on one side and a search engine on the other, but nobody's really built it. I think it's possible today to ask it a question like "book me a flight" and have it go to Google or whatever, and then maybe write a program, hit an API, and book you a flight. And the reason we started thinking about the world this way, and this is actually crypto canon, is that The Sovereign Individual had a description of the future of work in it. It talks about how AI, crypto, and the future of the internet would support a world where people are less full-time employed. They're more like freelancers, able to jump from work to work, able to construct companies on the fly and dissolve them right after the work is done. That's been the picture in my mind for a long time, and I think for the first time it's really possible. All these technologies are maturing in a way that allows this new crop of entrepreneurs to be hyper productive and get things done super quickly and super cheaply. When we talk to younger programmers, almost without fail, their ambition is no longer to join Microsoft or Facebook or whatever. They want to build businesses. They want to make money. They want to go into freelancing. They want to be free spirits and build careers that are freedom maximizing. And I think having an army of AI assistants, being supercharged by this technology, will give people amazing opportunities in the future.
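A minimal sketch of the tool-use loop Amjad says is possible today: a model whose replies route work to a Python interpreter or a search engine. The JSON action format and both tool functions are hypothetical stand-ins; a real system would call an actual model and sandbox the interpreter:

```python
import json

def run_python(code: str) -> str:
    scope: dict = {}
    exec(code, scope)  # never exec untrusted model output outside a sandbox
    return str(scope.get("result"))

def search_web(query: str) -> str:
    # Stand-in for a real search API call.
    return f"(top results for {query!r} would go here)"

TOOLS = {"python": run_python, "search": search_web}

def agent_step(llm_reply: str) -> str:
    """The model is asked to reply with JSON like {"tool": ..., "input": ...}."""
    action = json.loads(llm_reply)
    return TOOLS[action["tool"]](action["input"])

# Example: the model decides a price question needs computation, not prose.
print(agent_step('{"tool": "python", "input": "result = 417 * 2"}'))  # 834
```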
Flo Crivello: (20:59) Yeah. Just like the PC was a great force for individual empowerment, I think you're right that AI is even more leverage for the individual. Code let companies like WhatsApp sell for $20 billion with, I think, 40 or 50 people. I think we're going to see a $1 billion, $10 billion, maybe $100 billion company in a decade or so with perhaps one or two people. I totally agree with that. Man, if you paid software engineers by the ticket, I think you would see a lot fewer rest-and-vesters, and you would actually see 10x engineers making 10x the salary. If you really believe in the 10x engineer, which I do, you should see software engineers making $10 million a year at big companies, and you don't see that. Conversely, you see people you might call professional coffee drinkers, who just rest and vest and make $500k a year in equity. So I'm very excited by the potential here.
Antonio Garcia Martinez: (21:48) I think that's totally not going to work, by the way. The bounty system is one of the parts of Web3 that totally doesn't work. There are entire companies based around bounties, and the companies based around paying someone a bounty to do a thing are always the ones you have to route around, somehow using the product without it. I mean, think about it: do you pay anybody in your company in bounties? No. Would a 10x engineer who actually wants to make generational wealth, we're talking $100 million exits, sit there and do basically Mechanical Turk for coding all day? You're creating an arbitrary binary between rest-and-vesters and people working like coding chipmunks on bounties. But the reality is that most coders who make a lot of wealth are neither one nor the other. They're people who work their asses off on some committed product in which they have an overarching design ethos.
Amjad Masad: (22:30) Google has 120,000 employees, and most of them are just making lattes every day. They get really good at making lattes. But at the drop of a hat, Google might need them, and they'd be right there; there's very low friction on using their labor and their talents. So it's very rational behavior to hoard talent. It's like the billionaire who has all these assistants. You might say, oh, they're just sitting around doing nothing, but at the drop of a hat, when something important happens, he needs all that labor, and he can't really hire it from the market at large. Anytime technology reduces the cost of going to the market, you see us going to the market more. Again, a lot more people had personal drivers before Uber came around, and now everyone has access to the market at a very low transaction cost. It's the same across the board: anything you use today that you find very useful, like DoorDash, replaces servants who were around all day just waiting for the one errand of the day. Now you can go to the market and get that labor on demand, on the fly. So I think bounty-type systems, and crypto as well, could let us work on some of the coordination problems and solve the transaction cost, so you can have hundreds of coders working for you at any given point while you focus on building billion-dollar ideas. Some way to pay people for their contribution, I think, is the ethical thing to do. The idea of paying people for the data they create is not a bad one. If we could trace Wikipedia contributions on a character-by-character or token-by-token level as they get fed into GPT, a hypothetical company in the future that wants to do the right thing could assign some value per token and pay out a revenue share to the authors of the data it was trained on, or maybe the data that gets used in production.
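Amjad's revenue-share idea as back-of-the-envelope arithmetic. Every number below is invented for illustration, and the token-level attribution he calls hypothetical is assumed away:

```python
# Sketch of a per-token revenue share for training-data authors.
revenue_to_share = 1_000_000             # dollars earmarked for contributors
total_training_tokens = 500_000_000_000  # tokens in the training set

per_token_rate = revenue_to_share / total_training_tokens  # $2e-06 per token

author_tokens = 120_000                  # tokens traced to one Wikipedia author
payout = author_tokens * per_token_rate
print(f"${payout:.2f}")                  # $0.24: tiny per author, which is the hard part
```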
Erik: (25:04) For people listening to this who want to join AI companies or want to invest in AI companies, what are the kinds of AI companies that are going to be enduring versus the kinds of companies that are going to be commodities and not capture a lot of value?
Flo Crivello: (25:12) I see three categories of companies being created right now. There are the big-model companies; that's OpenAI, creating these giant models. Then there's the application AI on top of those, which I'll arbitrarily slice into horizontal applications and vertical applications. The moat for the large language models is going to be economies of scale, meaning it costs a fortune to build large language models, and it's costing more and more. I think GPT-4 is going to cost on the order of, call it, $100 million to train, between the resource cost and the compute. It's very expensive, which is why OpenAI is raising all that money, plus they have to train it using RLHF. So it's very expensive to train, but that's a one-time cost. The inference cost, meaning what it costs to run the AI once you've trained it, is orders of magnitude lower. So if you think of $100 million to train the AI, running it is more on the order of a cent per query.
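The arithmetic behind Flo's one-time-cost point, using his order-of-magnitude figures (both numbers are his estimates, not confirmed):

```python
# One-time training cost vs. ongoing inference cost, per Flo's rough figures.
training_cost = 100_000_000   # dollars, paid once
cost_per_query = 0.01         # dollars per inference call, his "order of a cent"

# Number of queries whose inference spend equals the training spend:
breakeven_queries = training_cost / cost_per_query
print(f"{breakeven_queries:,.0f}")  # 10,000,000,000: ten billion queries
```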
Antonio Garcia Martinez: (26:13) I mean, you're right, Flo. We already saw that with Google, for example. The reason Google is as good as it is is not because their AI is so amazing, although no doubt it is; it's because they have literally the entire world typing in what they want. It was always the dataset. It was never the actual algorithm. Algorithms aren't particularly defensible. At the end of the day, AI isn't a product any more than linear regression is. You actually have to apply it to something to create something that someone will pay you for. And that's where I don't quite get a little bit of the AI hype. Where is the actual product? If Google can't find a way to turn that into actual cash, and a lot of these companies are basically thinly veiled prompt UIs on ChatGPT, what is the actual product?
Flo Crivello: (26:50) Right. Production is not expensive. Production of text and music and images has not been where the value lies. You can hire folks on Fiverr to do these things just fine and still not get anywhere. The value is in the distribution. If generative AI changes anything about the structure of the industry, it's that anytime you make something cheaper, its complements become more valuable. Here we're making content cheaper, not that it was really necessary, because content was already pretty cheap, and so the complement to content, which is distribution, is going to become more valuable. So I actually think this is going to be great for TikTok and YouTube and all of that. All knowledge work will be automated in the next 20 years. I think code is the first to be automated, and that's huge. We're going to see a lot more in the next two years: action-oriented knowledge work that actually does stuff is going to be automated pretty soon, and I expect support to be a pretty big one. When you think about it, all a knowledge worker is, is a function that sits between a keyboard and a monitor, and I think AI is going to be really good at approximating that function.
Antonio Garcia Martinez: (27:59) But just to be clear, you're saying that all knowledge work will be replaced in 20 years? We've been hearing that for 20 years. If there's any bit of skepticism around AI being at the very pivot of changing everything, it's that I've been hearing that since I've been in tech, and if you go back even further, Marvin Minsky in the sixties and seventies was saying the same thing. Which is why I was trying to quantify what the change is that we're seeing. There's clearly a trend line. Obviously AI and automation have changed lots of things; I'm not saying it hasn't been a big deal. I'm just trying to understand whether we're actually at some clear, real inflection point, or continuing along the same trend line, which, by the way, I already think is a big deal. As I cited, going from the Kernighans and Ritchies of the world to Replit is a big change. I'm not trying to undersell it, but it's not quite AGI, Terminator, end-of-the-world levels of change. When something is not able to do a task, it just can't do the task.
Nathan Labenz: (28:48) But we're now hitting the moment where, across very broad sets of tasks, the best AIs are outperforming the average human. They're not yet at the level of the expert human in any specific domain, but they can outperform employed college grads on very wide distributions of tasks. BIG-bench is a big benchmark where they compared PaLM against a pool of humans, essentially software QA testers, and the AI won.
Antonio Garcia Martinez: (29:23) I don't know. This is the same discourse I heard when Deep Blue defeated Kasparov at chess. It's the exact same feeling. I think a lot of this is actually intellectual narcissism on the part of humans, who assign a certain anthropomorphic value to computers thinking. I think of the famous Dijkstra quote; we've been debating these questions for decades, from before any of us were alive. He commented that asking whether a computer can think is like asking whether a submarine can swim. It doesn't matter. The point is that it does 40 knots and a human does one knot. Along that particular dimension, which is moving through the water, the machine actually does do better than the human. And when it comes to ranking ads or filtering through Stack Overflow and coming up with a net result, clearly the computer does better. But that's very different from saying that it's more intelligent and that this causes a crisis in the knowledge economy.
Amjad Masad: (30:11) The generality of LLMs can't be overstated. For the first time, they can learn your intent from just one example. I can build Google Translate, Antonio, in three seconds: I'll just say English, hello; French, bonjour. Then I can say English, home, and it'll give me the French word. That's pretty freaking amazing. That hasn't happened before. Any time we get a jump in generality, and the last one was the Turing machine, it's an enormous thing. I think every app will get better. And I think the biggest beneficiaries of this shift are actually going to be growth-stage startups, because you've got distribution; Flo talked about distribution. Notion adding AI is much more interesting than a new knowledge-management app that's AI-first. So call me bearish on AI-first companies and bullish on AI infrastructure companies: OpenAI, or anyone building dev tools around prompts, prompt IDEs, fine-tuning technology, anyone making it easier to build with transformer models. That's going to be something every company will buy, because I think it's a diffused technology similar to cloud; every company will integrate this kind of technology into their products. So the financial answer to your question is: if you want to invest in it, probably do another FANG-type strategy, because I think there's another S-curve there, plus maybe invest in infrastructure like NVIDIA. Then on the startup side, dev tools and the AI intelligence layer like OpenAI are the beneficiaries, and maybe early growth-stage startups like us or Notion will benefit a lot from this.
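Amjad's three-second "Google Translate" as a literal prompt. This is a sketch of the one-shot pattern he describes; any completion-style LLM endpoint would do:

```python
# One-shot translation: the single English->French pair tells the model
# what task it is doing, so it completes the last line with the translation.
prompt = """English: hello
French: bonjour
English: home
French:"""
# Send `prompt` to any large language model's completion endpoint; the
# expected completion is " maison".
```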
Erik: (32:24) The Cognitive Revolution podcast is supported by Omneky. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms with the click of a button. Omneky combines generative AI and real-time advertising data to generate personalized experiences at scale.