VC Insights on Investing in Artificial Intelligence with Sarah Guo and Elad Gil of No Priors
Investors Sarah Guo and Elad Gil discuss AI investment strategies, human-AI interaction, and the future of AI in a thought-provoking podcast episode.
Video Description
Nathan Labenz and Erik Torenberg sit down with Sarah Guo and Elad Gil, notable investors and co-hosts of the AI-focused No Priors podcast. They discuss how Sarah and Elad are approaching AI investment opportunities right now, how that differs from how they've thought about investing in the past, where in the stack from hardware to applications they expect to see value accrue, what modes of human-AI interaction they are most interested in, and more.
Sarah is the founder of $100M AI-focused venture fund Conviction VC, which she launched last fall. She was previously General Partner at Greylock. Elad is a serial entrepreneur and a startup investor. He has invested in over 40 companies now worth $1B or more each, and is also author of the High Growth Handbook.
This episode is the first in a series centered on talking to rising voices in AI media, people who are not only working overtime to understand everything going on in AI, but also creating thought leadership and educational content meant to help others get up to speed as well.
We highly recommend Erik Torenberg’s interview show "Upstream". Guests include Ezra Klein, Balaji Srinivasan, David Sacks, and Marc Andreessen. Subscribe here: @UpstreamwithErikTorenberg
LINKS:
No Priors Podcast: https://www.youtube.com/@NoPriorsPodcast
No Priors on Apple Podcast: https://podcasts.apple.com/us/podcast/no-priors-artificial-intelligence-machine-learning/id1668002688
Elad Gil's blog: https://blog.eladgil.com/
Sarah Guo's blog: https://sarahguo.com/blog
TIMESTAMPS:
(00:00) Episode preview
(04:43) What is software 3.0
(09:14) Disruption coming from startups or incumbents?
(13:42) Sarah and Elad identify overlooked investment opportunities in AI
(15:24) Sponsor: Omneky
(15:46) Future of social media
(22:45) AI agents & personal co-pilots
(25:32) Where to invest in AI?
(31:11) How our kids will interact with AI
(34:50) How to gain conviction as an investor in AI
(45:07) When should founders raise money and when should they bootstrap?
(46:28) How should startups spend their capital now that we have AI capabilities?
(48:10) Sarah & Elad’s favorite products in AI
(51:39) Would Sarah & Elad get a neuralink implant?
(53:41) AI hopes and fears
TWITTER:
@CogRev_Podcast
@labenz (Nathan)
@eriktorenberg (Erik)
@sarahnormous (Sarah)
@eladgil (Elad)
Thank you Omneky for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.
More show notes and reading material released on our Substack: https://cognitiverevolution.substack.com/
Full Transcript
Elad Gil: (0:00) I started digging around. I tried to find people to build an OpenAI competitor, and I couldn't convince anybody to do it. Everybody said, well, it's not that interesting of a business. Are these APIs that good? And all this other stuff. I pitched person after person, and nobody was willing to try it.
Sarah Guo: (0:15) Within consumer, Character is one of the most interesting companies. Replika is one of the most interesting companies. A lot of people don't like this, even though after looking at consumer company metrics for a decade or more, you're like, shit, right? I'm going to pay attention if people are spending hours a day on this service because it is so rare.
Elad Gil: (0:35) There's base models and there's based models with a D. What kind of model do you want your kid to interact with, and what do you want them to learn over time? How does that get selected, and who adjudicates what that selection process is? Or what's the ethical framework based on your location around the world that should be applied or shouldn't be applied? I think there's lots and lots of interesting questions here.
Nathan Labenz: (0:58) Hello, and welcome to the Cognitive Revolution, where we interview visionary researchers, entrepreneurs, and builders working on the frontier of artificial intelligence. Each week, we'll explore their revolutionary ideas and together we'll build a picture of how AI technology will transform work, life, and society in the coming years. I'm Nathan Labenz, joined by my co-host Erik Torenberg.
Erik Torenberg: (1:21) Before we dive into the Cognitive Revolution, I want to tell you about my new interview show, Upstream. Upstream is where I go deeper with some of the world's most interesting thinkers to map the constellation of ideas that matter. On the first season of Upstream, you'll hear from Marc Andreessen, David Sacks, Balaji, Ezra Klein, Joe Lonsdale, and more. Make sure to subscribe and check out the first episode with a16z's Marc Andreessen. The link is in the description.
Nathan Labenz: (1:49) Hi everyone, and welcome back to the Cognitive Revolution. After taking a deep dive into prompting, process automation, and jailbreaking over several recent episodes, we're now zooming out a bit and talking to some fellow AI scouts. These are people who are not only working overtime to understand everything that's going on in AI, but also creating thought leadership and educational content meant to help others get up to speed as well. Today, our guests are investors Sarah Guo and Elad Gil, co-hosts of the AI-focused No Priors podcast. Sarah was previously a partner at Greylock and is now the founder of the $100 million AI-focused venture fund Conviction VC, which she launched last fall. She blogs on her website, sarahguo.com. Elad Gil is a notable angel investor with investments that include Airbnb, Coinbase, Figma, Square, Stripe, and many more, including recent AI companies such as Character AI, Harvey AI, and Perplexity AI, whose CEO, Aravind Srinivas, you may remember was a guest on the Cognitive Revolution back in Episode 7. He also blogs at blog.eladgil.com. We spoke about how they are approaching AI investment opportunities right now, how that does or doesn't differ from how they've thought about investing in the past, where in the stack from hardware to applications they expect to see the most value accrue, what modes of human-AI interaction they're most interested in developing, and plenty more. I hope you enjoy this conversation with Sarah Guo and Elad Gil. Sarah Guo and Elad Gil, welcome to the Cognitive Revolution.
Sarah Guo: (3:31) Thanks for having us.
Nathan Labenz: (3:32) Yeah, very excited to have this conversation with you guys. We're in such a moment right now of just the whole world turning attention to AI. And that's something that I think we're probably four or five months into since the release of ChatGPT. You guys have each been thinking about AI very seriously, both independently and together, I think, well before that. So I thought we'd maybe just start by revisiting a couple of things that you guys had published about six months ago, and then ask you to just take us through this period of time where new models are being released, new tools, new paradigms, attention piling in, investment dollars, I'm sure, piling in from all directions as well. And then we'll take it from there. So, Sarah, starting with you. You announced about six months ago this $100 million Conviction fund where you are investing in Software 3.0. So I thought for just a start, can you set a foundation for us and tell us how you think about Software 3.0 and what that means?
Sarah Guo: (4:43) Yeah. I think it's shorthand for just believing that there's a very unexpected new set of software businesses emerging that can be very important, right? So everybody knows you have this exponential creation of capabilities in machine learning. I remember I actually had a debate six-plus months ago: hey, there have been a lot of ML-first companies that haven't worked in the past, so what changes now? But I think the exponential creation of new capabilities is the thing that got me really excited for the fund. And when we think about what's so different about the software, the first thing is what it does. I think it attacks lots of categories of services or areas that were not big software markets before: copywriting, illustration,
Elad Gil: (5:32) law,
Sarah Guo: (5:34) Right. And then there's what the companies look like. So this could be how many people it takes to create a $1 billion or $10 billion company. Now we have empirical proof: 20, right? And that was not true in previous generations of software. And then I think one that is still unexplored is just how the product should work from a UX and a human interaction perspective. I think we're going to get a lot more than just the single chat box.
Nathan Labenz: (6:00) Yeah, I definitely look forward to unpacking that because I just see so many different possibilities that it feels like we're so early in exploring what the modes of human-AI interaction are going to be. One thing that jumped out to me about your announcement was just, there's the AI focus, but then beyond that, just an extremely broad investment thesis, right? Totally up and down the stack, all the different verticals. How has, if at all, your thinking evolved in terms of which parts of the stack or which verticals are most interesting over the last six months?
Sarah Guo: (6:37) Some of the areas that are really attractive from a demand perspective—a value-for-end-users or value-for-customers perspective—are obvious in hindsight but somewhat unexpected. So anybody who has heard of Babel Fish or interacted with somebody who speaks a different language, I think, intuitively understands that translation is interesting, and the idea of dubbing as a service is interesting. But if you zoom out more broadly to the question around synthetic voice and the ability to take one form of media and translate it to others cheaply and easily, which is obvious in hindsight, I've been somewhat surprised by the demand on that side across a range of use cases. Then there are things that we are interested in but where the bar is just very high, because the cost to build a company and the advantage of incumbents are so high: we haven't done a chip company, but we've done companies that are up and down the stack otherwise.
Nathan Labenz: (7:38) Cool. Well, I want to ask you guys also about some specific portfolio companies that you've invested in that you're excited about and get a little tour of some of the use cases and some of the things that will be coming at us from a consumer or business standpoint in the not too distant future. But I also want to do the same thing for you, Elad, because about six months ago, you published this essay, "AI Startup versus Incumbent Value." And that hit me at a pretty opportune moment. I was just at the end of a period of 60 days of super intensive red teaming on GPT-4. And I basically hadn't even really tried to synthesize what I had seen at that point. I was really just scouting all the different use cases and everything I could think of to test and try to understand what this thing could do. And right as that closed, you published your essay. And so I read that and I was thinking, for me, it seemed like, boy, this GPT-4 is incredibly powerful. And the conclusion that I started to leap to is I think that the enterprises are largely going to be able to apply this technology fast enough that they largely won't get disrupted by somebody who's starting with a language model and then thinking, how do I build all this other stuff around that? How would you grade my intuition from six months ago? How has your thinking evolved on that question now that you've had the advantage of seeing GPT-4 launched and the deal flow that you're seeing downstream of that?
Elad Gil: (9:14) I got really interested in generative AI about 4 or 5 years ago, when all the GAN stuff was happening, simply because I thought the GAN-based art was super interesting. Before that, I'd been investing in AI and I also worked on AI-related products myself directly for 10 to 15 years. When I was at Google, I worked on mobile and ads targeting. Ads targeting were big ML systems. Then I sold a company to Twitter. At Twitter, one of the teams that worked for me was search, and that was all ML and AI. Then I invested in the area for about a decade. And for about a decade, nothing worked. Or I should say a lot of things worked for incumbents, but it didn't work for startups. You had the Facebook News Feed, you had Alexa from Amazon, you had all these really big products. But the startup ecosystem in terms of companies that were started to specifically do ML just really didn't seem to go anywhere in terms of building really massive companies. Then this generative AI wave hit. I think things started to get really interesting around GPT-2, and then maybe as GPT-3 came out with a big step function in functionality, you realized how compelling it was. I remember I went on a recent podcast and we talked about it specifically, and I think at the time a lot of people were ignoring it. I started digging around. I tried to find people to build an OpenAI competitor, and I couldn't convince anybody to do it. Everybody said, well, it's not that interesting of a business. Are these APIs that good? I pitched person after person, and nobody was willing to try it. But a lot of people who'd worked in the area before wanted to build applications. So I started investing in companies like Character. Noam Shazeer is one of the main authors on the Transformer paper. I helped out some of the early team that was working at Adept, although I never got involved there as an investor. I got involved with things like Perplexity and Harvey and a variety of companies that basically ended up forming some of the more interesting companies now a year or two later in hindsight in terms of this wave of AI. A lot of the question in my mind is if you look at the history of technology waves, there tends to be differential capture between incumbents and startups, and each technology wave is different. If you look at the first internet wave, it was like 80% startups. It was Google and Amazon and all these new companies. People like Microsoft benefited too. Then you look at mobile and that was 80% incumbent value. It was the big platforms like Apple and Google, which were already incumbents. People were talking then about what is Salesforce on your phone going to be and who's going to build it? It turned out to be Salesforce building Salesforce for your iPhone. Or search on your phone was Google. But there were new things like Uber and Instacart and Instagram. Basically anything with Insta in it, you should have just invested in. Then you had other types of platforms that emerged. For crypto, it was 100% startup value. There was basically no incumbent capture of crypto. For the first decade of AI, with all the CNN and RNN and GAN related approaches, all the value went to incumbents. That first wave of AI was an incumbent wave. Now we're seeing something really interesting where I think it's going to be a differential split, and maybe it's 80-20, 80% incumbent, 20% startup. But 20% is a lot for what I think is probably the biggest platform shift in a decade plus, maybe two decades, maybe longer. 
Because to Sarah's point, you're changing a few things in a massive way, in an underlying way. You're changing the compute model and how to write code. You're changing the user interface, but you're also changing the baseline functionalities and what this wave of computing can actually do in terms of both applications as well as other deep areas. There's tons and tons of places that I think this is going to impact. Sarah mentioned voice and dubbing and text-to-speech. I think those are super interesting areas. She and I've talked about those in the past. There's tons of room for social products. I'm really interested in what does a generative social product look like? There's lots of apps on the B2B side. There's lots of tooling like LangChain or LlamaIndex or other things like that. And then obviously there's the base LLM layer. There's just a ton. A bunch of that stuff will go to incumbents. Probably the base models are largely incumbents with maybe Anthropic and one or two others being the counterexamples, OpenAI, Microsoft, Google, etc. But there's lots and lots of room for people to build brand new de novo things that will be super exciting.
Erik Torenberg: (13:21) Just on that for a moment, you mentioned, Elad, that you tried to get people to build OpenAI competitors and you couldn't get people to bite. What are you guys trying to get people to build today? For the talented people listening to this podcast who want to do something, what are the things that people aren't spending as much time as they should perhaps, or maybe overlooked opportunities within the space?
Elad Gil: (13:42) I would say voice applications, social, certain big B2B applications, and then certain types of infrastructure. I don't know, Sarah, what you think, but those would be kind of the four quick ones.
Sarah Guo: (13:51) Yeah. I'd add there's this idea of tool use. An LLM can be a reasoning engine against knowledge that it holds in the model itself or some database or some repository of information. But it can also take action now. If you think about UiPath or an automation in the previous generation of products in the RPA category, I think that's going to get a lot more interesting when the approaches get more robust. There are a lot of workflows in every part of the enterprise, but especially the back office and some verticals like healthcare, where you have a lot of people moving data around between systems or filling out forms based on some policy. We've been unable to do that flexibly today, but it is a basic tool use in natural language task. So I think that's a really interesting one. There are some areas that we think are just going to get more important from a core architecture perspective over time. The idea of retrieval and just how do you guarantee retrieval and memory are two concepts that I think are really interesting in research that people can't figure out how to use in actual enterprise applications. Then I'm really interested in some of the more emergent stuff. If you look at companies like Midjourney, the idea of democratizing capabilities that people didn't have before, if that's illustration, which I promise you not a lot of VCs were focused on before. But generally, I think media creation is really interesting.
Erik Torenberg: (15:18) Elad, can you say just a little bit more about what AI social could look like?
Elad Gil: (15:23) I actually have a lot of ideas, but they're probably really bad ideas. I think with social products, the key thing is you want to launch things and then quickly iterate and see what gets adopted. I know people doing a range of things, but I also don't want to dox them in terms of what they're doing. But over the last couple of months, I think I've heard a couple of really interesting ideas on the social side. It's a variety of different formats and approaches and everything else. I just think there's a ton to do there. The issue is when I look at social products today, people are basically constantly trying to rebuild Twitter for some reason. Every month there's a new, hey, we're doing Twitter, but it's decentralized. We're doing Twitter, but it's whatever, now with generative AI. Or people are trying to do Facebook throwbacks. Where is the technology heading? And what does that mean in terms of entirely new types of interactions where you're still taking advantage of core social behaviors? Roelof has this seven deadly sins framework: every social product basically is gluttony or lust or one of those seven deadly sins. I think if you think of that through a generative aspect, there are really interesting ideas that you can start coming up with versus saying, I'm just going to throw things back. It actually reminds me a lot of when I left Google, whatever year that may have been, a long time ago. A lot of people were building things that they shouldn't have been building because they were building for the past. They were like, oh, I'm going to build this SEO-able thing and get traffic that way, instead of saying, hey, I'm going to build a developer tool, which is a new thing, or I'm going to build a mobile company, which is a new thing. I think a lot of the social products I see are reflections of the past versus the future. That may work. It may actually create really big companies. But the flip side of it is there's probably some really interesting things that we can all squint and imagine are coming.
Sarah Guo: (17:16) I think that there is a lot of open-mindedness required for some of the consumer stuff. Within consumer, Character is one of the most interesting companies. Replika is one of the most interesting companies. A lot of people don't like this, even though if you look at consumer company metrics for a decade or more than a decade, you're like, shit, I'm going to pay attention if people are spending hours a day on this service because it is so rare. I think it's very easy not to like it because it's a weird thing that people want to have these parasocial relationships, and there's demand for NSFW use cases. But that's how a lot of things on the internet start.
Nathan Labenz: (17:58) Yeah, we had Eugenia from Replika on as a guest. It was certainly one of the more fascinating conversations that we've had to understand, first of all, just what the user base is today and has been historically while the models have been so limited, frankly. And then to kind of extrapolate that into the present and the future where it's like, this was not honestly super compelling to me, but I see how it could easily become much more compelling. There is a phase change or kind of a threshold that we've hit, I think, that is going to take Replika 1 and make it look pretty quaint as we hit 3.
Elad Gil: (18:36) Well, I think it's going to be deeper than a lot of people imagine.
Sarah Guo: (18:39) Seven, eight years ago, we started investing in what you'd think of as mobile coaching marketplace applications for different areas. That could be something in health, like nutrition, or people doing fitness training and such. As you might imagine, having an accountability partner or somebody you're building an emotional relationship with means you can affect behavior change, which is really hard for humans. One of the most interesting things I've seen recently is that these bots can convince and coax people to do things, plan their days, change behaviors. I think that's something we're going to see a lot of.
Elad Gil: (19:19) Yeah, I think the applications of that are going to be broader than anybody thinks. If you look at it, if you think about education, how do you revamp education? Everybody's going to have a bot that's going to teach them things and help them with stuff and maybe becomes their best friend. There are very positive and very negative implications of that. I do think people are dramatically underestimating the degree to which, on the one hand, there's a bunch of lonely people or people who want to interact online more and they don't have the capacity to do it otherwise. And then on the other hand, there are these really deep fundamental societal use cases that are coming through the generation of these agents that interact with you like a real person. In some cases, every parent is going to want the thing that's going to educate their kid in a hyper-customized way. That's going to be both very powerful, but which company is going to control that? What does that mean for our kids and how they're taught and raised and all the rest of it? I think there are some very deep fundamental things here that people are just barely touching the surface on. Some of it's in old sci-fi literature, like The Diamond Age, with the Young Lady's Illustrated Primer. But in some cases, I think people haven't really thought about it very deeply. There's another book called Lady of Mazes, where every time there's a sufficiently large bloc of people that believe something, that substantiates into an AI agent that represents them in Congress. So why even vote when you can have a perfect representative that can suddenly appear and actually fight for and adjudicate the things that you truly care about? I think there are all sorts of crazy things that are coming.
Erik Torenberg: (20:49) On a more quotidian, day-to-day level, I want an agent that just helps me maximize my productivity, that's watching me at all times, watching all my interactions with people and tells me when I'm acting out of line or says, no, say this. This would be a better thing to say, kind of like a personal trainer for all things life coach that's watching me at all times.
Elad Gil: (21:09) Yeah. I just want a sycophantic AI that's like, you're so good. That was such a good joke. You know, just pretend. Yeah, it'll be amazing.
Nathan Labenz: (21:21) There are a couple of really interesting questions here that I think these examples get at. It's funny you mentioned the training one. Going back to my GPT-4 experimentation, one of the things that I tried to do was just see how many specialized chat agents this thing could play. I did the physical exercise coach one, and I also did one simulating tech support for my 90-year-old grandmother, which was even more eye-opening to me because it really spoke to a pain point that we have in my family. But how do you guys think about that as investors? Because I'm sitting there using base model GPT-4 and it's basically working. And then I'm kind of thinking, this feels like it sort of hits the threshold. I can certainly wrap this up into an app or somebody can. But at that level of "I used to go hire a human to do this, now I can maybe slot in an AI to play that role," are there businesses there? Or does all that value accrue to OpenAI in your minds, or to foundation model providers in general?
Elad Gil: (22:27) Yeah, I mean, I think there's tons of room for applications. I think a lot of them will be building workflow against it, or some form of storage or history or memory, or something else that associates with the chain of stuff that you did relative to that. It's funny, I'm going to give an extreme example which doesn't quite apply, but in the nineties everybody thought that everybody was going to set up their own email servers. "Oh, email's a protocol and everybody's going to use it. It's so easy to use." And then obviously everything just centralized to Gmail and Yahoo Mail and whatever your corporate server was. And I think the same thing happens with a lot of these things where there may be interfaces like ChatGPT or the like, and things like memory and some of the other things that some sort of recursive interaction across a language model will come into play, and chaining and all sorts of things that'll be more complicated. But I think fundamentally, people will need very specific workflow for very specific applications in many cases. And in some cases you'll have a general-purpose tool where ChatGPT will be good enough to just do a bunch of stuff for you, or a version of an agent you're using in the future. So I do think we're going to end up in a multi-agent world, but there may be specific things that are dominant for specific use cases, just like everything else that exists today. I feel like the best indicator of how things will evolve is kind of like, how do market structures evolve in the past? And I think it's going to be kind of the same thing.
Nathan Labenz: (23:46) So how about use cases? Or not use cases exactly, but modes of interaction. This is something I've really been trying to organize my thinking around. Erik's got this vision for the AI that kind of rides shotgun all the time and helps him maintain his social graces. And that's kind of the Reid Hoffman vision, I would say, the copilot for every profession, copilot for every phase of life. And then you're speaking to also, on the other end, people are going to need specific workflows. I kind of think of that, and Sarah you mentioned RPA, there's this sort of process automation context where it's like, I'm a big corporation, I have these call centers which are humans that have to do these tasks. I've never had any way to even think about automating these tasks in the past, but now I sort of have that. And then there's kind of this third way that's emerging that's like the agent model, which I kind of think of as bridging those two, because I can talk to it in a sort of ad hoc real-time way, but I can also kind of send it off and say, "You go figure out the plumbing and how things connect together, even get me out of the business of having to design or architect the workflow." I guess that's enough for me. How does that framework resonate for you? Do you have a different one that you kind of bump company deal flow up against? And what modes of interaction do you think are ultimately going to predominate? And is that the same as those that give you the best return on your investment?
Sarah Guo: (25:11) I think when you start to actually look at the tools that have succeeded at scale, there's a whole range of ways that users want to interact with this stuff depending on the task. So prompting is not an easy thing for 99.99% of humans today. Just because you enjoy messing around with GPT-4, Nathan, and lots of listeners of your podcast might, it's hard to ask a good question. And I think one of the things that I've seen in companies, which I think will just become more common, is multimodal input, passively using context. I think there are a lot of companies that figured out giving end users in a particular category 20 prompt templates that made sense for their use case and an easy button, so they don't have to figure out how to engineer a good output. That's a company right now. And so it's not clear to me that we're going to have generic interfaces for all the different use cases. One of the things that I think will happen, to that point, is that search is going to break. And I'd love to get Elad's point of view on this since he actually worked on search. But having been an investor in search companies prior and having friends starting them now still, search has many use cases. It is weird to me that getting information from the internet has fallen into one box at Google. And I imagine that many of the use cases, the stereotypical one being travel planning or buying something, are things an agent should be able to do much better for you in the future. So I think there are certain things that will fragment from a market perspective, and every slice of that market is plenty valuable to go after for a new company. So that's one of the basic frameworks we use. I don't know if I've got the overarching unification right now. The ground is too unstable.
Elad Gil: (27:02) Yeah, I kind of have two answers to it. I think there's almost a 2x2 matrix of, is a person busy or do they have a lot of free time? And then context, maybe it's not 2x2, it's like 3x2 or 4x2 or something, right? It's like busy versus free time, and then one of a series of contexts around B2B use cases, commerce/action-based use cases, et cetera. And then based on that, you're going to have a different modality. So I think you can almost come up with a map on it. And it reminds me a little bit of, if you work at a big company, the way that you interact with the CEO is different from how the CEO interacts with you. You'll write this long email of multiple paragraphs and the CEO will send you "yep" as a single word or whatever. Execs tend to leave a lot of voicemails, or they used to, because it was a more performant way to communicate. So that's busy versus not and all the rest. I think there'll be all these modalities. I think the other answer is, in some sense, it doesn't matter. It's funny, I met with this hedge fund guy who's really sharp, amazing investor. We were talking about AI and a lot of his questions tended to center on this kind of stuff. He's like, "Well, will you just talk into your phone?" I was like, who cares? It doesn't matter. That's missing the main point, which is, what does this technology fundamentally enable? And we'll figure out the interface and we'll iterate on it, and it'll be one of a series of interfaces that we use today. Fundamentally, we have N senses, and we'll have different modalities that match with different senses depending on the use case. But it's clear that, say you use Alexa, people with kids love using Alexa because it's voice-based, the kid can yell at the thing and it'll reply. But you're not going to have a lengthy information extraction conversation with it unless you're a three-year-old. Right? And so I think it kind of maps to, what are you trying to accomplish? And I think the most interesting aspect of all this stuff is just, what are the fundamentally new capabilities that all this enables? And what does that mean in terms of the applications that can be built, in terms of how it reshapes our lives? I'll give you an example. Obviously, these models now perform better than many doctors on standardized medical exams or other types of tests. And you can imagine a world where you start having models that are basically available to anybody in the world, which allows you to upload an image and describe some symptoms, and then you end up with medical care that's, in some sense, on par with what you get at Stanford or whatever top medical association. And you can do that anywhere in the world as long as you have a phone that has certain characteristics. That's really, really, really powerful. And to some extent, the interface is secondary to that impact. And so I'm not trying to denigrate the interface question. I think it's really important stuff. I just think fundamentally, the capabilities are so rich that it's almost like, okay, where do the capabilities take us? Then based on that, what happens as the output? There will be multiple types of outputs.
Nathan Labenz: (29:49) Interface is definitely interesting. I'd be interested to hear if you guys have seen any really creative ones that you would recommend that people check out. But I'm also kind of thinking even a little bit more big picture than that. Just how do we relate to these damn things in the first place? The copilot feels kind of like a peer or something, like a real-time collaborator. And that could be an audio interface or a text or UI or whatever. But then there's the agent, you're kind of delegating to it, and then there's the sort of supervision mode perhaps where you largely trust it, but you kind of maybe trust but verify, and hopefully you actually do the verification and don't just start rubber stamping everything. What about on that level? Do you have a sense for kind of where this is going? Another way maybe to ask this question is, how weird do you think things are going to get as these tools...
Elad Gil: (30:45) I think eventually we're going to call all these things "your highness" as they sort of take over the world. "I love my boss, the AI." Yeah, no, I think the world is going to get really interesting and weird, and I think that's back to the education point, for example. If you have a bot helping raise your kids, what does that mean? Right? If one of the primary sources of information is no longer YouTube, but becomes some agent that's not only working them through a math and history and other curriculum, but potentially is choosing which form of history gets presented to the kid. You talked about base models, and I can't remember if it was Balaji or somebody else; it sounds like something he'd say, but I don't know if he said it. There's base models and there's based models with a D. What kind of model do you want your kid to interact with? And what do you want them to learn over time? And how does that get selected? Who adjudicates what that selection process is? And so I do think there's a lot of these really interesting things that are coming because you're RLHFing something. Who's that cohort of people who's training the thing that are providing the human feedback? How do you select who those people are? What's the ethical framework based on your location around the world that should be applied or shouldn't be applied? Should the Western viewpoint be applied to somebody in another country that may have very different values and mores relative to the model and its output? And so I think there's lots and lots and lots of interesting questions here.
Sarah Guo: (32:11) I think it's useful to try to imagine the interactions we have with agents in a few different ways. And so, as Elad was saying, your kid or you, today we are at the mercy of a bunch of algorithms that control our information flow. And we can lightly curate them by swiping correctly on TikTok or Twitter or whatever. But if you can instruct your bot more directly for yourself or your kids or whatever, that's quite interesting. I think a fun interaction to imagine is on the enterprise side. You have different influences on different teams because there are incentives for, for example, compliance or security in an R&D team. I think there's a simple version of that: a bot that's like an early warning system. Imagine in a fintech company, "Oh, you're hitting these merchant network rules. The thing that you're trying to do is a definite no-go." But you can also imagine a debate with that bot or a fight with it as an extension of that security champion or whatever. I think another really powerful one that I like, just looking at some of the examples of how people use ChatGPT and Copilot, is code generation. Right? We're not that far off, especially in areas where there's just so much content online, like web development, from a junior web developer or junior Python developer being available to everyone. Not "complete my code," which requires a bunch of previous knowledge, but write to a file, run code, deploy to the cloud, use APIs. That is really powerful. And so I think of it as: there are new capabilities that are thought of as human capabilities. There are interactions where I actually have to negotiate. And then there are the things that I can control that are personal. So I think we're going to have a lot of really weird interactions with agents.
Nathan Labenz: (34:08) How does that sort of expectation of weirdness change how you guys are thinking about your role as investors or the investment decisions that you're making? I would imagine that it would shift you more toward team relative to current product, for example. But I'm wondering kind of what shifts you're finding compared to previous cohorts of companies.
Elad Gil: (34:32) It's interesting. I think it was Chris Dixon who said that the next great company starts off looking like a toy. I think that's true both on consumer, that's true with crypto, that was true with certain types of enterprise. I don't think it changes that much. I think the really weird stuff is often the most interesting stuff. Then there's going to be the standard stuff that you just know is going to work. I think it's going to be that same mix. I think social products in general tend to be weirder in terms of the things that actually work, or at least the behaviors tend to break with other affordances generationally. Snap, we're going to make every image disappear. And obviously that product morphed quite a bit over time. But at the time, everybody was like, What are they doing? That's so weird. My sense is it's just Evan taking selfies of himself and then they would disappear, that was the whole network for a while. So I think behavior will always start off seeming strange. I tend to be—Sarah and I don't have any formal business relationship. We're just collaborating on stuff and we have a podcast together. So I'm speaking for myself only, but I tend to be very much a market driven investor, not a team driven investor, I should say. The team is incredibly important. I've started two companies myself. So if I didn't think teams were important, I never would have started a company. But I think the markets are more important or the product market is the most important thing. And so often what I look for is, what are early signs of product market fit? And then do I think it's in a big TAM? Do I think the team is great? Do I think there's defensibility? Is there a why now statement? There's all these other things around it. But fundamentally, I've always looked at it as product market. And the question is, if something's really weird, how can you tell if the product market is there? And I think almost every great startup has to be non-obvious because if it was obvious, everybody would already be doing it. So by definition, these things have to be somehow off or there has to be some hurdle to overcome. Otherwise, it's not defensible.
Sarah Guo: (36:22) I think it's kind of funny that some number of months ago, Elad couldn't convince anybody to start an OpenAI competitor. And now he probably can't convince anybody because they're like, oh, they're too far ahead. It's too big of an incumbent. So it is a really interesting pacing. Related to that, I think one big mistake investors make is they are just kind of blind based on their current view of the world. It's very easy to project your existing view of where the value is, especially if you are very focused on the more sophisticated customer. And you take that point of view and then you don't see the actual demand, which might start with people who don't have access to something or where their use case is less sophisticated. And so I think it's really easy to see this in the media space. So if you look at something like illustration, there's this view of, oh, it's never going to be good enough to make picture books or do Coke ads. And I'm pretty sure that, directionally, we are going to get there sooner rather than later. But if you take that point of view—oh, it's not going to work for the triple-A games. Or it's not going to work for people who need Super Bowl-quality video. Or yeah, I wouldn't use that in my production code base. You're just going to miss a lot of, well, what are people actually using it for and directionally where are we going?
Elad Gil: (37:49) Yeah. I think people also over-index on defensibility related to that. And so everybody at the beginning of a company asks too many questions around, how does this become defensible? How is this defensible now? Can't people just build it because two people built it in six months? And that's sort of every SaaS company. What was defensible about Retool in the early days or Notion in the early days or sort of choose your startup in the early days? And in the very early days, nothing was defensible. It took two people, three months to build the thing. And so I think that's kind of similar here where there's a lot of questions around, okay, what's a defensible business model? And you want to have defensibility over time. Absolutely. And if you look at it traditionally, there's all sorts of ways to do that, either in terms of platform effects or certain aspects of sales or certain aspects of integrations or other things that you do over time. Or network effects. There's all sorts of forms of defensibility. But I think people, really early, tend to ask almost too much of the thing. And the real question is, does anybody care and is anybody using it? And then I think later it becomes, okay, now that people are using it, how does it become defensible and not a commodity? And how does it scale and how much does it scale? And is this an N-of-1 company or product?
Nathan Labenz: (38:59) So what trends are you guys seeing in usage data? I feel like right now everything has a waitlist. The waitlists, some of them are moving, some of them are not moving. This feels like a moment probably where everybody can post good signup numbers, but I would guess retention probably varies widely in the deal flow that you're seeing. I'd love to get a sense for the trends that you're seeing there.
Elad Gil: (39:25) My favorite waitlist example of all time was this early AI company. I'm not going to name which one it was. It was 10 years ago. And the founders of the company claimed it was an AI company, but in reality, they had a bunch of ops people answering queries in the background. And the cofounder of a very well-known large tech company went onto it. And anytime he'd go on, they'd ping all the ops people and they'd all jump on and answer all of the queries really fast for that one person. They ended up getting bought by this major tech company and it was completely false. It really wasn't working the way that they were claiming it was, and they always were in private alpha. They'd say, oh, there's so much demand and look at this giant waitlist and all this other stuff, and they never actually really launched and they got bought for a bunch of money. So I feel like often these infinitely closed waitlists are kind of a negative sign that the traction may not be real. Now, sometimes it's a real sign of demand and there's some scalability issue in the background or they want to test it and all the rest of it. But I think if you have such raw organic adoption, usually you just open the thing up because you know more and more people will join unless, again, there's some constraint that prevents you from doing it. I think in general, the last decade has taught people some wrong lessons on how long they should take before launching a product. And people point to Figma or they point to Notion or they point to other companies where there was a longer period of time for development. And if you talk to the CEOs of those companies, they say, I wish it was faster. We should have done certain things faster. It didn't work the first time, so we changed it and it worked the second time, but we tried. We actually tried to get to people fast. So I think there's this whole bespoke artisanal movement. Similarly, one company I'm involved with was doing hand-to-hand onboarding of every company, Superhuman-style. Eventually, one of their customers said, why are you getting in the way of your customers? I just want to sign up and use the thing. Why are you onboarding me? So they stopped doing it, and they had a spike in usage. And so, in general, I think you want to get out of the way of your users. And that means you don't necessarily need a waitlist unless there are very specific reasons behind it or you really need to test certain things. But after you've tested things enough, if you keep having a waitlist two years later, it means the thing isn't working.
Nathan Labenz: (41:32) Of the companies that you guys are taking a look at, how many of them seem to have sustaining usage versus kind of that surge of interest that if you give it a month, will already start to look like it's tailing off?
Sarah Guo: (41:46) It's some tiny percentage, right? But that's kind of the point of venture. It's all about the tail of companies. It is not your average company in the market. There's a big hype cycle in the market right now. And I think it's very easy to feel smart by being cynical and dismissive in venture and in investing in general. And this is totally useless. You want to be intellectually honest, but if that's your orientation, don't be an investor in startups, which are generally weird ideas. What I'm looking for is: is there any real data at all? Because you're trying to invest in the outliers. And the outliers right now are insane. I know a founder in the many, many millions of revenue that was dealing with some sort of cash flow issue, GPUs or SVB or something. And he removed some part of the free tier of his product, and revenue went up a multiple. And I'm like, that is some pretty impressive demand. That's not fake when there are consumers who are paying that much for the product. And so I think the most exciting thing right now is that there are capabilities that consumers and enterprises get that much value from, where one could argue that this company that is doing this much revenue hasn't played any of the good growth games or good product management games that companies play. Oh, I'm going to do hand onboarding because the good company did that. Or I'm going to have this waitlist with the sexy brand companies in first because that's what the best companies did. But the real thing you're trying to figure out is, can I make something that just creates so much of its own demand? And nothing else is important. And so, yeah, the hype cycle is useful for fundraising. But if your investors know what the data should look like, then they're not going to buy it. I think it's especially easy to generate that sort of waitlist activity right now because Elad mentioned the seven deadly sins, but greed, social signaling, fun, fear—people feel all of these things about AI right now, and it's very novel. But anybody who's worked on these products understands the distance between demo and product is very large. And so there are a huge number of companies that have these massive waitlists because it looks really cool as an idea, and then it doesn't actually work. Or people thought they wanted it, but they don't. And so that's the whole game of trying to understand. When I talk to a company that's actually got a reasonable waitlist, the first question I ask is, well, what would happen if you gave everyone access? What is blocking real usage growth? And often, the answer is, oh, we know that those are garbage users anyway. So don't project that. But the thing I get excited about is in the tail, there is usage that is unlike anything I've seen in the last decade plus.
Elad Gil: (44:50) I think there's also a founder perspective on this, which is when should you raise money versus just bootstrap? And I think all too often people get on the venture train when they could just bootstrap. If the thing is just growing organically really rapidly and it's spinning off tons of cash, just go for it in some cases. And then secondly, in some cases, you know that you're going to have something that goes viral, a bunch of people pay for it, and then it dies off. And that happened in prior social waves, for example. All the social gaming companies. And people made real mistakes by raising tons of money and then having to play the venture game on those things and blowing up their companies instead of saying, I'm just going to dividend out cash and it'll last for a year and I'll make a bunch of money off of it and I'll move on to the next thing. That way people got stuck for four or five years working on something that was never going to work because they had that initial burst. You see that with some of the social and mobile apps that are using stable diffusion as an underlying thing. And some of them have ramped to $200 million of revenue in nine months, then they die off really fast. And some of those companies probably would have been better served just running it off cash and then distributing cash to themselves instead of raising money. So I think as a founder, you should also think through, is the right thing for you to raise external capital? And in many cases, the answer may be no. If you can avoid it, why would you do it?
Nathan Labenz: (46:10) How are you seeing the use of funds kind of shifting? Because it's kind of a trope at this point to say, you can probably get by without your social media manager, or you can semi-automate recruiting in ways you couldn't in the past. And so you don't need as much headcount as you used to. Aravind from Perplexity said that on this show, in fact. But then on the flip side, there are foundation model costs. I feel like I see some companies that are basically taking the investor checks and turning around and spending them on OpenAI. So how are you seeing that play out? I mean, some of the checks are pretty big, right? Not everybody is following that bootstrap-if-you-can advice. People are grabbing the 8 figures.
Elad Gil: (46:51) Yeah, most people aren't, and I'm not saying most people should. I'm just saying it's another path, and people always forget that it's another path. There are a lot of companies, I think, that have raised $50 million to build a model when they should have just gone with GPT-4 and they would have had the same rough outcome. They could have done some prompt engineering or something. So I do think there are a lot of people who went down the wrong path, and there's a subset of use cases where maybe the model really, really matters. That could either be specific vertical use cases, maybe certain types of healthcare where you have unique proprietary datasets. It could be if you're really training for specific interaction modalities or applications. And then obviously there's a big price differential if you're training a diffusion model versus if you're training a transformer-based model, right? You're talking about something that may cost hundreds of thousands or millions of dollars instead of tens of millions of dollars, depending on what you're doing. So I think it also depends on whether you're doing things in image and video or you're doing things in text.
Nathan Labenz: (47:50) Because you guys are seeing so many things, I'd be particularly curious about your answer on this: what are the coolest products, the most useful products, the most interesting glimpses into future paradigms or interfaces that you think people should go check out? Especially ones that don't have a waitlist.
Sarah Guo: (48:11) For your average consumer, I think the list starts by being very simple. A lot of the incumbents have very nice magical features now. And incumbent is a broad word, but I do think that the things that Canva, Adobe, Figma have shipped are really useful and cool. Everybody should try ChatGPT. I don't know that one of these services is public yet, but just because it's the bane of my existence, rich contextual email completion is going to be here very, very quickly. And I think it's in waitlist beta. There's this meme about, oh, somebody's going to make your email longer and another agent's going to make it shorter. And there's going to be this adversarial interaction between the email agents. But I do think that being able to do drafted responses to everything in your inbox is one way to resolve this stupid issue. I think those are some of the fun ones. On consumer stuff, Character and the other bot parasocial interactions, I think, are great. I think the agent stuff is probably too immature for your average consumer to have a good interaction with it. But I think we're less than six months away from bots that actually do work for you.
Elad Gil: (49:25) I think there are all sorts of cool things happening right now. I think some of the dubbing things feel really magical. If you play around with something like EasyDub and you just drop in a video and it tries to capture the tonality of the voice of the person as it translates them into Spanish and stuff like that, or from Spanish into English or whatever languages you want to do. I'm still waiting for them to add Hebrew, which is why I'm not a power user yet. I just want to translate all my content into Hebrew. I'm just kidding about that small audience. But I think there are lots of magical things to come or things that feel magical. And I think it's interesting. For example, AutoGPT got a ton of attention, but for anybody who is in the AI community, it's the obvious thing that was going to happen or some form of it was going to happen. And so I feel like there are a lot of things that are coming. It's back to the old saying that the future is here, it's just not equally distributed. And I think there are a lot of things that people have been thinking about or realizing is going to be really cool or they built demos of that are going to start hitting broader audiences reasonably soon. And I think some of those things are just going to be really magical. It's just going to be amazing how these things work.
Nathan Labenz: (50:36) So another bit of the future that's starting to take shape is the Neuralink implants, which they've taken as far as the great apes.
Elad Gil: (50:48) Yeah, I have two of them.
Nathan Labenz: (50:50) Well, then you've answered that question already. I was just going to frame a hypothetical for you and say, let's imagine a near-term future where, say, a million people have one of these things, and it's broadly seen to be safe. They're walking around doing okay.
Sarah Guo: (51:08) We're a couple thousand dead monkeys away from that, man, but I'm excited to imagine it.
Nathan Labenz: (51:13) Yeah, we're not quite there. So anyway, would you guys be interested in getting one and being able to interface directly with the computing world with your thoughts?
Sarah Guo: (51:21) Yeah, of course. Right. I think that's an easy yes, but I have a house full of Gen 1 broken consumer hardware. And here is one where I'm probably not going to be an alpha tester. But if you just think of it, I actually looked at a series of companies around this. Let's say it's a NIC for your brain, right? And there are a whole bunch of things that are still immature from a technology perspective. But I think it's very difficult not to imagine that we'd want communication bandwidth to be higher with all of our devices and everyone else. And it's also hard for me to imagine that if this works as an input mechanism for knowledge capture and it's an advantage for people, which it will be, it won't become very popular.
Elad Gil: (52:18) Yeah. I'm very skeptical on timeframe for this kind of stuff, so we'll see. I think our understanding of the brain is so de minimis that, with the exception of a handful of systems that are easy to interrogate, like the visual system and things like that, I think the depth by which we understand how most of the stuff works is really shallow. Most of the deep brain stimulation stuff, which Neuralink is based on, has really been for treatments of things like depression or a few other diseases. So I'm quite skeptical about it, but we'll see, at least anytime soon. And by soon, I mean five to ten years even.
Nathan Labenz: (52:53) That might be the most bearish take we've had on the Neuralink question.
Elad Gil: (52:57) You should talk to neuroscientists.
Nathan Labenz: (53:01) So last one's a classic big picture zoom out. I feel like we're just seeing this wave starting to build. It's coming right for us. What are your biggest hopes and fears for society at large as this AI wave washes over us?
Sarah Guo: (53:23) Let's see. So on hopes, I feel like we actually talked about a lot of this stuff. I think if you're open-minded, it's just a really amazing wealth of capabilities. And there's actually, I forget the name of the paper, maybe we can look it up and put it in the show notes. But what's interesting is a lot of the enabling technologies today, they help people with a lower skill base more than with a higher skill base, which makes sense. If you think about the training set of code or writing or music generation, if you're training off the entire set, you're going to help people with the minimum of these skills or no skills more than the highest set of skills. And so I think it's quite interesting from a democratization perspective.
Elad Gil: (54:09) Sounds like technology's going to help me, but not Sarah.
Sarah Guo: (54:11) It's very sweet, Elad. I think we're both screwed. Maybe I'm going to get speared for saying this, but I do think not understanding that alignment and safety research is deeply tied to capability research is challenging, right? If people don't understand that, if policymakers don't understand that. And so I certainly think that should be a broad democratic conversation. But I think there's a version of the world where we halt a lot of this progress, or there's regulatory capture of a lot of the technologies before we really figure out what they can do, which I think is going to be pretty problematic. I think there are obviously going to be nefarious actors that use all these technologies for different things, but we build defensive technologies against that just like we have in the past. Everybody wants the best opportunity for their kids and for them to grow up well-adjusted and able to go be useful to the world and feel good about themselves. I actually don't know what to do about that in this current environment and this unstable ground. What do I teach them? How should they interact with technology? Do I want them to be really good prompt engineers, or do I want them to go to Waldorf and not interact with tech? I don't know. I think very smart people have different takes on this, but that's one thing that concerns me personally.
Elad Gil: (55:35) Yeah. I guess on my side, short-term, and by short-term I mean next five to ten years, I'm very optimistic about what all this means globally in terms of, you have that chart of all the things that go up in price over time, which is education and healthcare and all these things and all the things that go down. And I think this is one of the few technologies that may actually help address those things that have become incredibly expensive in part due to regulatory capture. So in the short run, I'm incredibly optimistic about the global implications of this technology for health, education, and other areas. And in the long run, and by long run I mean a few decades, I'm a huge doomer in terms of eventual species competition with AI or AGI. I think the biggest short-term risk to the area, in some sense actually, is regulation. I think there's a very one-sided call right now to regulate these things, in part because incumbents have an incentive to say that, because they want to do regulatory capture on the models, prevent new entrants from coming in some cases. And in some cases, they have specific concerns, but I feel like there is some chance, maybe it's a one in five chance or something, that with this next election, we're basically going into the first AI-driven election. We're going to see ad copy and targeting campaigns and robo-dialing with real-sounding voices and all this stuff going into the presidential election. And I think a lot of, there's a lot of potential for that to turn into a giant regulatory storm, depending on who wins or loses. Just like when Trump won, there was a giant backlash against social companies who were blamed for that win. I think similarly with this presidential election, AI may be blamed for all sorts of things that it may not have really impacted that much, but that may be the moment that it starts to get regulated. And I think there are certain types of regulation that make sense. I don't think we should have export controls on advanced chips and stuff like that. Maybe there's some NIST-style approach. But I think most regulation tends to distort markets in really bad ways and tends to really kill innovation and tends to lock in incumbents in bad ways. I think it's way too early to regulate the vast majority of the things in this area. All the calls I've heard have been very one-sided to regulate it. I think that's the wrong thing to do right now.
Nathan Labenz: (57:46) Cool. Well, thank you, guys. This has been a lot of fun.
Sarah Guo: (57:48) Thank you, guys. Thanks for having us. Good to see you.